**MATE 350, Materials Thermodynamics, 3 cr, 3 cl hrs**

**(Crosslisted with ChE 349)**

*Prerequisites*: MATH 231, CHEM 121, PHYS 121. (ES 347 is recommended.)

*Course Description:* The mathematical structure of thermodynamics is developed and elucidated from a transport-process-based
perspective. Basic quantities such as heat and temperature are carefully defined.
The conserved nature of the First Law and the non-conserved nature of the Second Law
are emphasized. The consequences of the ensuing stability conditions are explored
in the area of phase equilibrium in multicomponent mixtures.

*Required Textbooks:* A Short Course in Thermodynamics + Exercises for a Short Course in Thermodynamics by J. McCoy

*Recommended Textbooks:* Stanley Sandler, Chemical and Engineering Thermodynamics (any edition)

*There is, perhaps, no scientific inquiry more full of human interest than the study
of the nature of heat, and the manner in which matter in general is affected by it.
No branch of physical science is so intimately connected with the everyday occupations
of life, and, consequently, none of them interests mankind more closely.*

*The influence of heat is manifestly so universal, and its actions so important and
necessary to the progress of all the operations of nature, that, to those who first
considered it with some attention, it must have at once appeared to be the general
principle of all life and activity on this globe. With its return in springtime the
bud breaks into blossom, and new life animates the vegetable kingdom. By its agency
the incubation of the egg progresses, a living thing is brought into the world, and
heat is still necessary to its support. Finally, to the power which man has acquired
over it is due that supernatural strength which has made him superior to all other
animals, and master of land and sea.*

*It is not surprising, therefore, that an agent at once so powerful and so serviceable,
so beneficent and yet sometimes so terrible, should have become a subject of adoration
and worship among the inhabitants of the earth.*

– Thomas Preston, The Theory of Heat (1894)

Why has statistical mechanics not completely replaced thermodynamics? Statistical mechanics bases explanations on atoms while thermodynamics for any non-ideal gas requires “curve-fit” parameters like those in the van der Waals EOS. Unfortunately, simply adopting a belief in atoms does not tie up all the loose ends. The interactions between the atoms must be described mathematically, and this requires a complicated curve fit procedure at the atomic level. Of course, these interactions can be explained at an even more basic level with quantum mechanics. Each step to a more fundamental understanding entails the introduction of new non-observables and of new mathematical complications. Thermodynamics retains a place of supreme importance because it can be developed without reference to atoms and their interactions. Thermodynamics remains relevant to modern physics because it provides a solid touchstone for statistical mechanics, quantum mechanics, and other modern theories.

### Course Topics

The three categories of ingredients of thermodynamics are introduced: observables like temperature, non-observables like entropy, and analytical operations like multiplication. The necessity of including the class of non-observables is discussed. It is seen that such non-observables are essential in modern scientific theories.

The development of the scientific method is reviewed. Empiricism encouraged the development of large databases of observations but was weak on prediction. Rationalism encouraged the use of high-powered mathematics applied to “simple” non-observables that there were good reasons for believing existed. Although able to make many predictions and explanations, it often agreed poorly with observation. Positivism attempted to get the best of both Empiricism and Rationalism by taking a less rigid view of the non-observables. The Positivists did not think of the non-observables as literally real, but rather thought of them as compact expressions of many experiments and calculations on observables. In the final analysis, the Positivist treatment of non-observables could not cope with developments in the latter half of the 20th Century, and the movement died out. The more modern perspectives that replaced it will be discussed in the next chapter.

The history of steam engines is briefly reviewed. These provided an important industrial driving force for the refinement of thermodynamics. Early steam engines were for pumping water out of mines. Later ones provided the power for factories. Most of our power is still supplied by steam engines. Steam locomotives came later and led to the development of rapid transportation.

Some of the problems with Positivism are discussed. First, and most directly related to thermodynamics, is that theoretical developments in the early 20th century caused non-observables to be taken more literally than warranted by Positivist thought. Second, the verification criterion for scientific truth (i.e., a theory that worked was a true theory) was attacked by Popper, who held that it was too easy to change a theory to fit the facts (as was done in Marxism) and suggested that scientists should focus their efforts on trying to prove theories wrong. Theories that had no possibility of being “falsified” were not considered by Popper to be scientific. Third, Kuhn addressed how science is really done rather than how science should be done. He claimed that major changes in theories came from “paradigm shifts” resulting from a sort of scientific “beauty contest”. Finally, Feyerabend argued that there was no such thing as a scientific method. Instead, science was more like art: a highly individualistic and creative enterprise.

Directly observed thermodynamic quantities are volume, length, force, and mass. Pressure is defined as force divided by area. Work is defined from force and length. Temperature is defined from pressure, volume, and mass. Piston work is considered in the extreme cases of free expansion and reversible work. Heat is seen to have two aspects: one quantitative (how much heat is needed to increase the temperature by a degree?) and one qualitative (what is the true nature of heat?). Put in this manner, it is clear that the first is well defined given the definition of a calorie but that the second is vague. To answer the second as Lavoisier did – to answer that heat is a non-observable, conserved fluid – is to court trouble. The definition of the heat capacity is, in essence, a definition of heat flow, a useful label to stick on temperature change but without insight into the true nature of things. In the next chapter we will see why Lavoisier was wrong; in the chapter following that we will see how he was somewhat right.

In the early 1800s the two aspects of the motive power of heat were well known. First, both work and heat could be used to increase temperature. Second, temperature differences were necessary to generate work. What was not agreed upon was how to develop a theoretical framework to explain them. “Both heat and work are linked to temperature” leads to “work is the same as heat,” and because they are the same, they can be added together, resulting in the First Law. To demonstrate that this was true, the mechanical equivalent of heat needed to be measured. This appears to be the same for all forms of work, which is the primary confirmation of the First Law. Because work involves motion, this position led to the idea that “heat was motion,” but not necessarily on a molecular scale. The “dynamic” in thermo-dynamics refers to the motion of work. On the other hand, the First Law says nothing about the limitations of converting heat to work and the importance of temperature differences.

The “temperature differences are most important” group could account for steam engine efficiency from the caloric fluid language. The bigger the temperature difference, the larger the cascade of heat fluid and the more work that could be extracted. This turned into the Second Law. They explained away the “work causes increase of temperature” fact by 1) citing experimental error and 2) linking work to a “freeing” of bound caloric fluid.

The language and basic definition of terms (including “caloric”) are from Lavoisier. His direct intellectual descendant was Carnot, who built a theory around the caloric. The “Second Law” thinking continued through Clapeyron and Clausius. Those skeptical of the heat fluid perspective were Rumford, Joule, Mayer, and Helmholtz, who formed a “First Law” camp. Although Kelvin started on the side of the Second Law, he realized that instead of one law of the motive power of heat, there were really two laws.

In the current chapter, we look at the First Law, which treats heat and work as the “same” in that they have the same units. It is proposed that both heat and work are types of transfer of a non-observable, conserved fluid: the energy. If you add a nickel and a dime to a box, there is 15 cents more in the box. If you add 10 joules of work and 5 joules of heat to a box, the box contains 15 more joules of energy. Because energy cannot be directly observed, you cannot look in the box and count the joules, but you know there are 15 more of them because you measured them going in.

The basic First Law is rewritten so that mass can be added to the box and the energy carried with the new mass can be accounted for. For convenience, this leads to the definition of enthalpy. It is often useful to write the First Law in differential form. It is also assumed that energy is a state function (temperature measures the “depth” of the energy, but pressure effects must also be included.) In other words, if you fix T, P and mass, you fix the energy of the material. Finally, energy is assumed to be extensive (a candy bar has twice the calories of half a candy bar.)

As a last comment on Carnot and the First Law, the following is from the biographical
sketch of Carnot in Magie’s book which quotes information from Carnot’s research notebooks.
*“It is interesting to note that the doubt of the validity of the substantial theory
of heat, expressed by [Carnot] in his memoir, developed later into complete disbelief,
and that he not only adopted the mechanical theory of heat, but planned experiments
to test it similar to those of Joule, and calculated that the mechanical equivalent
of heat is equal to 370 kg-meters.”* This converts to 1 calorie = 3.63 J, compared to the modern value of 4.18 J – not bad
for a first attempt, only a 13% error. The history of thermodynamics may have been
much different if Carnot had not died at 36.
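The arithmetic behind that comparison (taking g ≈ 9.81 m/s² and the kilogram-calorie as the unit of heat):

```latex
370\ \mathrm{kgf{\cdot}m/kcal} \times 9.81\ \mathrm{J/(kgf{\cdot}m)}
\approx 3630\ \mathrm{J/kcal} = 3.63\ \mathrm{J/cal},
\qquad
\frac{4.18 - 3.63}{4.18} \approx 13\%.
```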

We begin to use the concept of energy to explain experiment. It is both interesting and a little disturbing that a quantity that is a non-observable takes on a “real” nature in thermodynamics. The meaning of the term “real” varies. For the Positivists, reality of energy meant “a collection of observational and procedural facts”. In a groundbreaking paper, Quine argued that the foundations of the Positivists’ view were wrong. He concluded that all science has a mythological aspect, but that this is as it should be. From the Pragmatic perspective, “real” is only understood within the context of the theory, and a concept (such as energy) that greatly enhances the predictive power of a theory should be considered real. From this view, the question that should be asked of a non-observable is not whether it is real, but how efficient the resulting theory is. Moreover, science should be viewed as a whole, and the whole of science should be verified as much as possible.

The use of energy in explanations of the motive power of heat simplifies language.
The constant volume heat capacity is the derivative of the energy with respect to
temperature. The constant pressure heat capacity is the derivative of the enthalpy
with respect to temperature. Different classes of materials have different characteristic
heat capacity behavior. Ideal gases have molar heat capacities that are constant with
respect to temperature and independent of volume or pressure; C_{V} equals 1.5R for a monatomic ideal gas. This can be explained by a molar energy that is linear
in temperature. Dilute gases (such as nitrogen) have molar heat capacities that are
independent of volume or pressure but are dependent upon temperature. Ideal solids
have a constant volume molar heat capacity of 3R. For an adiabatic expansion, the
pressure is related to volume as PV^{gamma}=constant.
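The ideal-gas statements above can be sketched numerically. This is a minimal sketch assuming a monatomic gas (so C_V = 1.5R); the state values (1 mol, 300 K, 10 L, doubling the volume) are illustrative choices, not values from the text.

```python
# Adiabatic (reversible, Q = 0) expansion of a monatomic ideal gas.
R = 8.314  # J/(mol K)

Cv = 1.5 * R        # monatomic ideal gas
Cp = Cv + R         # = 2.5 R
gamma = Cp / Cv     # = 5/3

n, T1, V1 = 1.0, 300.0, 0.010   # mol, K, m^3
V2 = 2 * V1                     # double the volume

# Along a reversible adiabat, T V^(gamma - 1) is constant:
T2 = T1 * (V1 / V2) ** (gamma - 1)

P1 = n * R * T1 / V1
P2 = n * R * T2 / V2
# P V^gamma is then the same at both states, and the gas cools (T2 < T1).
```

The check that P1·V1^γ equals P2·V2^γ is just the PV^{gamma} = constant statement from the text, rewritten through the ideal gas law.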

Entropy and the Second Law are the result of the work of Sadi Carnot. He thought of heat as a caloric fluid. This permitted him to follow his father’s work analyzing the optimum water wheel (i.e., what is the maximum amount of work from a water wheel?) Much of the impact of his work was due to the later workers who built directly on it: Clapeyron, Clausius, Helmholtz, and Kelvin. Carnot’s work relied on the analogy of the water wheel where the height of the millstream is like the temperature; the flow of gravitational potential energy is like the heat flow; and the mass flow of the water is like the entropy flow in the heat engine.

The First Law focuses on the magnitude of the temperature and how it is related to the “depth” of the energy. The Second Law focuses on temperature difference and how it is related to efficiency of a heat engine. The new experimental statements are: 1) heat flow is proportional to temperature difference, and 2) heat flows from hot to cold. Notice how this ties the concept of entropy to that of time.

Entropy is another non-observable state function. The first one was energy and was real in a special way. Although one cannot see energy in the bulk, one can keep track of how much is moved from one place to another. Because energy is conserved, we can always tell how much the energy in a box has increased by keeping track of what was added. Otherwise, one would think the quantity would be fairly metaphysical. If not conserved, you could add a calorie to the box and have there be 100 more calories in it (or 1000 more or 1000 less) – it would be very difficult to say how much was in the box.

Entropy is not conserved and not observable. We can keep track of how much entropy has been added to the box (dS=dQ/T), but because it is not conserved, adding 1 cal./K to the box does NOT mean that there is 1 cal./K more entropy in the box. It turns out that we can meaningfully speak of the amount of entropy in the box, but it is a fine line we walk. The Second Law says that entropy is conserved in the limit of reversible processes. So, by following reversible processes, we can construct a table of S(T,P). Then, for any T and P, we can read the entropy off the chart. The entropy is not conserved, but is well defined. Also, the Second Law says that the true entropy change will always be larger than (or equal to) the entropy change you would expect from keeping track of the entropy added.
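The bookkeeping described above is the Clausius inequality, with equality holding only in the reversible limit:

```latex
dS \;\ge\; \frac{dQ}{T},
\qquad\text{equivalently}\qquad
dS = \frac{dQ}{T} + dS_{gen},
\quad dS_{gen} \ge 0 .
```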

The chapter begins with a follow-up on Fourier. His analysis of heat was the beginning of continuum treatments of materials. Such treatments include Navier–Stokes’ fluid dynamics, Fick’s diffusion equation, Maxwell’s electromagnetism, and, of course, thermodynamics. Fourier was also actively involved in the French Revolution; was briefly Governor of Lower Egypt; and discovered the Greenhouse Effect.

Because we have introduced non-observable entities, energy and entropy, it is necessary to quantify them in order for them to be useful. One of their features is that they are state functions, and, consequently, they can be tabulated as functions of temperature and pressure. This is a materials-specific exercise. The energy function for benzene will be different from that for argon. A useful starting point is with ideal gases. This results in an analytical function for energy and entropy. Changes in molar energy (and enthalpy) for ideal gases are found to be proportional to the temperature change. Changes in the molar entropy are found to depend upon both temperature and pressure changes.
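A minimal sketch of these analytical state-function changes. A monatomic C_V = 1.5R is assumed, and the isobaric-heating numbers are illustrative:

```python
# Ideal-gas changes in U, H, and S between states (T1, P1) and (T2, P2).
import math

R = 8.314  # J/(mol K)

def ideal_gas_changes(n, T1, P1, T2, P2, Cv=1.5 * R):
    Cp = Cv + R
    dU = n * Cv * (T2 - T1)   # energy change: temperature change only
    dH = n * Cp * (T2 - T1)   # enthalpy change: temperature change only
    dS = n * (Cp * math.log(T2 / T1) - R * math.log(P2 / P1))  # T and P
    return dU, dH, dS

# Isobaric heating of one mole from 300 K to 600 K:
dU, dH, dS = ideal_gas_changes(1.0, 300.0, 1e5, 600.0, 1e5)
```

Note how dU and dH depend only on the temperature change, while dS picks up both a temperature term and a pressure term, matching the statements in the text.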

In order to quantify the state functions for the ideal gas, it was found that a combined form of the First and Second Laws was useful. This turns out to be true in general, and we shift to thinking in terms of the combined form of the laws. All quantities in the combined form (except S_{gen}) are state functions that are much easier to “track” than Q. The notion of “natural variables” was introduced. In terms of a machine picture of the equation, differentials such as dV are depicted as knobs. If a knob is not turned (e.g., V is constant), the differential is zero, and the whole term drops out. Different state functions have different natural variables. The energy has natural variables S, V, N. The entropy has natural variables U, V, N.

Even if all the knobs are left undisturbed, the needle may still drift. This is because the generation term remains. At fixed U, V, and N the entropy needle drifts higher and higher. The forward direction of time is the direction in which the needle increases. The Second Law is the only physical law that can tell the future from the past. This is called the “arrow of time.” Two excellent scientific writers were quoted concerning this, Eddington and Feynman.

The combined First and Second Law is in terms of state functions alone (excepting
S_{gen}). This permits the tools of differential calculus to be applied, and, of particular
use, the interrelations between different state functions can be viewed as surfaces.
In the next few chapters, the mathematical tools will be explained. We start with
a few basic ones.

The main point is the comparison of the combined First and Second Law with the total derivative form of U(S,V,N). This permits T, P, and G to be identified with partial derivatives of U. Again, the importance of the concept of natural variables is emphasized.
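Spelled out, the comparison is (keeping the text's notation, in which G is the coefficient of dN, i.e., the molar Gibbs free energy):

```latex
dU = T\,dS - P\,dV + G\,dN
= \left(\frac{\partial U}{\partial S}\right)_{V,N} dS
+ \left(\frac{\partial U}{\partial V}\right)_{S,N} dV
+ \left(\frac{\partial U}{\partial N}\right)_{S,V} dN,
```

so matching term by term gives T = (∂U/∂S)_{V,N}, P = −(∂U/∂V)_{S,N}, and G = (∂U/∂N)_{S,V}.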

It is seen that quantities like dU are, in a sense, zero and, consequently, that the differential form of thermodynamics is really multiplying, dividing and summing zeros. Historically, this has been a confusing concept (particularly the dividing part). Early work by Cavalieri and Newton treated dU as a “little zero,” that is, small but not quite zero (so it would be acceptable to divide by it). Newton made a reasonable shot at the idea of the limit, and Lazare Carnot and others attempted further clarification. The modern concept of a limit was due to Cauchy.

One of the first to treat thermodynamics in terms of multi-dimensional surfaces was Gibbs. Maxwell was particularly impressed with Gibbs’ work and included a chapter on it in his highly influential book, Theory of Heat. He thought this such a powerful approach that he took up making plaster casts of thermodynamic surfaces (one of which he sent Gibbs). His book also includes the first statement of the Maxwell relations in modern form, though only in a footnote.

In the calculus of multi-dimensional spaces, it is important to keep track of what direction the derivative is taken. This is done in the subscript. The Maxwell relations are the result of the equality of the cross-second derivatives. The student should be able to identify a partial derivative as having a Maxwell relation or not. The “circles and squares” methodology of generating Maxwell relations was introduced. We will see in the next chapter that there are many more generating functions for Maxwell relations.
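As a one-line sketch of the cross-derivative argument, applied to U(S, V) with dU = T dS − P dV:

```latex
\frac{\partial^2 U}{\partial V\,\partial S}
= \frac{\partial^2 U}{\partial S\,\partial V}
\;\Longrightarrow\;
\left(\frac{\partial T}{\partial V}\right)_{S}
= -\left(\frac{\partial P}{\partial S}\right)_{V}.
```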

The difference between the “old” thermodynamics of steam engines and the “new” thermodynamics
of material properties was discussed. Gibbs is usually acknowledged as the founder
of the “new” methods. Although Gibbs is certainly worthy, some feel that Massieu should
have more of the credit. Be that as it may, it is essential to realize the importance
of the energy surface in Gibbs’ thermodynamics. Maxwell was excited enough to make
a plaster cast of an energy surface. We will spend some time in the course learning
how to construct such surfaces, and then we will learn how to interpret the features
of the surface in terms of phase transitions. The explanatory role of the energy surface
is a little odd. Recall that the energy is not directly observable; however, here
it is playing a major role in explaining phase behavior. Not only do we believe that
energy (and entropy and enthalpy and…) is “real” but we believe that it is this hidden
reality that “causes” or “explains” what we do observe.

In order to generate these surfaces, we need to do a number of things. First, get
rid of flow terms. This was done in a previous chapter by combining the First and
Second Law. Second, integrate the differential form of the combined laws. This is
done with an Euler integration (e.g., glass-to-bucket integration). Third, construct
alternate state functions (H, A, G, etc.) that have other natural variables (e.g.,
other “control knobs”). This is done with Legendre transforms.
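These steps can be sketched compactly (using the text's G for the coefficient of dN, so the total Gibbs free energy works out to GN):

```latex
U = TS - PV + GN \quad\text{(Euler integration)},

H = U + PV \;\;(S,P,N),\qquad
A = U - TS \;\;(T,V,N),\qquad
U - TS + PV = GN \;\;(T,P,N).
```

Each Legendre transform trades one natural variable for its conjugate, which is exactly the swap of “control knobs” described above.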

Two results are singled out as being particularly important. First is the Gibbs-Duhem relation which results from an overly energetic use of Legendre transforms. Second is finding Maxwell relations from generating functions found by Legendre transforms.

The relation of thermodynamic state functions to fields is discussed. Fields are non-observables and used to explain observation. They form an underlying reality. Early names in field theory are Faraday and Maxwell. At the time, many scientists found fields disturbing.

The last few mathematical “tricks” we will need for the course were covered: Chain Rule; Not-the-Chain-Rule Rule; and the inverse. We also saw how a total derivative can be made into a partial derivative. Heat capacities were related to derivatives of entropy. Several examples of the manipulation of thermodynamic derivatives were considered. In order to permit derivatives to be evaluated from heat capacity and equation of state, the “four steps to thermodynamics excellence” were introduced.
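A representative manipulation of this kind (a standard identity; not claimed to be one of the text's worked examples): starting from dU = T dS − P dV at constant T and using the Maxwell relation (∂S/∂V)_T = (∂P/∂T)_V,

```latex
\left(\frac{\partial U}{\partial V}\right)_{T}
= T\left(\frac{\partial S}{\partial V}\right)_{T} - P
= T\left(\frac{\partial P}{\partial T}\right)_{V} - P,
```

so an energy derivative is evaluated entirely from equation-of-state information.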

Completing this chapter represents a benchmark in the course. First, we used experience to deduce the First and Second Laws. In this form, steam engines can be analyzed. The “new” thermodynamics of Gibbs was then developed by combining the two laws. The point is to construct a thermodynamic surface that will then let us explain phase transitions. Although, in principle, the energy as a function of S and V is all one needs, in practice, it is much better to work in one of the auxiliary state functions found by Euler integration and Legendre transform. This chapter filled in a few last tricks.

In the next part of the course, we will construct the thermodynamic surface from the equation of state and from the heat capacity for single component systems. This will then be used to explain phase transitions. In the last part of the course, the thermodynamic surface will be constructed for binary mixtures, and the phase behavior of these mixtures will be explained in a like manner.

In order to construct a thermodynamics surface, information about the heat capacity
and the equation of state are needed. Because this information sits outside of thermodynamics
proper, we are unconstrained by worries about energy and entropy considerations. Indeed,
we can adopt an atomic picture of materials for the EOS without adding any baggage
to the thermodynamic equations.

Before doing this, we noticed that the pressure dependence of the heat capacity could
be found from a high quality expression of the EOS. The “trick” in this derivation
is in the spirit of a Maxwell relation. It is worth noticing that this involves a
second derivative which makes numerical data particularly difficult to use because
derivatives increase noise.

The equation of state we consider as an example is the van der Waals EOS. This is
a correction of the ideal gas EOS for excluded volume and molecular attractions. The
van der Waals “b” is the specific volume where the pressure goes infinite (an incompressible
liquid.) The van der Waals “a” is the parameter that lowers the pressure because of
molecular attractions. The “a” and “b” parameters are often fit to get the location
of the liquid-gas critical point right.
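A minimal sketch of the van der Waals pressure with “a” and “b” fit to the critical point via the standard relations a = 27R²Tc²/(64Pc) and b = RTc/(8Pc); the argon-like Tc and Pc values are illustrative:

```python
# van der Waals EOS with parameters fit to the critical point.
R = 8.314  # J/(mol K)

def vdw_params(Tc, Pc):
    a = 27 * R**2 * Tc**2 / (64 * Pc)   # attraction parameter
    b = R * Tc / (8 * Pc)               # excluded volume per mole
    return a, b

def vdw_pressure(T, v, a, b):
    # v is the molar volume; P diverges as v -> b (incompressible liquid limit)
    return R * T / (v - b) - a / v**2

a, b = vdw_params(Tc=150.7, Pc=4.86e6)   # roughly argon
P = vdw_pressure(T=300.0, v=1e-3, a=a, b=b)
# The attraction term a/v^2 pulls P below the ideal-gas value R T / v.
```

A quick consistency check on the fitting relations: at the critical point the fitted EOS reproduces Pc exactly at v = 3b.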

Another approach is a Taylor series of the compressibility factor about the ideal
gas limit. These are virial EOS’s and can be expansions in either V or P. The expansion
coefficients are temperature dependent. Naturally, these work best for dilute gases.

The method of corresponding states uses a graphical approach. The claim is that if
the pressures and temperatures of phase coexistence are plotted as a function of reduced
T and P, all single component systems will look the same. Then, a good EOS for, say,
benzene could be used for argon. An example was done to predict the boiling point
of benzene from the boiling point of water.

Pierre Duhem was discussed in some detail. His theory of heat was non-molecular and
thermodynamics-based. His work is important to us both because of the Gibbs-Duhem equation and because
of the Duhem-Quine thesis. The Duhem-Quine thesis claims that all of science is tested
in any given experiment and, consequently, small adjustments across the field of science
can be made in order to account for experiments that initially seem to falsify a hypothesis.

We begin with highlighting van der Waals. Although his equation is viewed as “straightforward”
today, van der Waals was the first to succeed in developing an EOS that gave reasonable
results for the liquid-gas transition. A large number of other scientists (Dupre in
particular) were very close to his EOS. Van der Waals’ route to the 1910 Nobel Prize
was precarious; his lack of a classical education held him back. Important considerations
in his science were the idea of “internal pressure” with repulsive and attractive
forces just balancing and the correct identification of the attractive contribution
to the pressure as being proportional to the density squared.

The rest of the chapter addressed the manner in which the Second Law is used to analyze
the thermodynamic surface. We found that the maximization of the entropy (as a function
of U, V, N) results in two classes of conclusions. By setting the various first derivatives
equal to zero, the equilibrium conditions are found: T, P, and G are uniform in a
system at equilibrium. By requiring that the second derivatives are less than zero,
the stability conditions are found: C_{V} ≥ 0 and (dP/dV)_{T} ≤ 0 are the most important ones for single component systems.

Kamerlingh Onnes was one of the first to build an experimental research program with
the van der Waals EOS at its core. This rewarded his efforts to liquefy helium. Once
he had liquid helium on hand, he used it much as he had liquid nitrogen, to cool things.
A good choice of something to cool was mercury, and he discovered superconductivity
(Type I superconductivity to be precise). An essential aspect of the work was the
calculation of the inversion temperature. A gas above this temperature is heated on
expanding; below it, the gas is cooled. This can be calculated from the van der Waals EOS.
Indeed, the maximum inversion temperature is proportional to the critical temperature.
By fitting liquid-vapor coexistence measurements to the van der Waals EOS, both critical
and inversion temperatures could be found. Onnes employed a Hampson-Linde refrigerator
which cools the gas below its inversion temperature by “pre-cooling” with a previously
condensed liquid (such as nitrogen). A number of quotes showed the spirit of Onnes’
work.
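The inversion-temperature result mentioned above can be sketched with the standard van der Waals expressions:

```latex
T_{inv,\,max} = \frac{2a}{Rb},
\qquad
T_c = \frac{8a}{27Rb}
\;\Longrightarrow\;
T_{inv,\,max} = \tfrac{27}{4}\,T_c \approx 6.75\,T_c ,
```

which is the proportionality between the maximum inversion temperature and the critical temperature.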

As an aside, I find Pieter Rijke an enigmatic figure. He was the PhD adviser of both
van der Waals and Lorentz. Van der Waals decided to take up physics as a profession
after one course with Rijke. Of Rijke’s seven PhD students, two won Nobel Prizes,
and both Jacobus van 't Hoff (1852-1911, Nobel Prize in Chemistry 1901) and Hugo de
Vries (1848-1935, a founder of modern genetics) attended his lectures. Rijke was primarily
a teacher. Lorentz recalled *“We, his students, admired best his scrupulous care. Rijke aimed constantly at developing
the students’ practical skills, and, since he would not tolerate the slightest failure
of his demonstrations, he prepared them with great thoroughness and inexhaustible
patience.”* Beyond this, little is known of him. No traditional obituaries were published. In
the Proceedings of the Academy that he had been a member of for 36 years, there was
only a short notice of his death to the effect that because the deceased wanted no
description of his life and work, the Academy and the country could only gratefully
honor his memory. [Kipnis]

We then turn to the description of phase coexistence. In the T-P plot the phase boundaries
are curves, while in the P-V isotherm plot, the 2-phase region is shown experimentally
by a horizontal line. The behavior in the T-P plot can be treated using the Gibbs-Duhem
equation. This results in the Clapeyron equation. By approximating deltaV by the volume
of the vapor treated as an ideal gas, the Clausius-Clapeyron equation is found. This relates
the vapor pressure curve to the latent heat of vaporization.
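A minimal sketch of the integrated Clausius-Clapeyron equation, ln(P2/P1) = −(ΔHvap/R)(1/T2 − 1/T1), assuming a constant latent heat; the water-like numbers (ΔHvap ≈ 40.7 kJ/mol, boiling at 373.15 K and 101325 Pa) are illustrative:

```python
# Clausius-Clapeyron estimate of vapor pressure from one known point.
import math

R = 8.314  # J/(mol K)

def vapor_pressure(T2, T1, P1, dHvap):
    # Integrated Clausius-Clapeyron with constant latent heat dHvap
    return P1 * math.exp(-(dHvap / R) * (1.0 / T2 - 1.0 / T1))

# Predict the vapor pressure of water at 350 K from its normal boiling point:
P_350 = vapor_pressure(T2=350.0, T1=373.15, P1=101325.0, dHvap=40.7e3)
# Lower temperature -> lower vapor pressure, as the vapor pressure curve requires.
```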

The description of isotherms is more difficult. At low temperature, the van der Waals
EOS predicts regions that violate a stability condition (i.e., violate the Second
Law of thermodynamics.) This is viewed in a positive light. The van der Waals EOS
assumed that the fluid was a single phase, consequently, the violation of the Second
Law is interpreted as a signal that the fluid has phase separated in this region.
In order to systematically excise this region, a Maxwell Construction is performed.
The requirement that the chemical potentials are the same in the liquid and gas is
enforced in deciding where to draw the horizontal line.
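The unstable region can be exhibited numerically. In reduced variables the van der Waals isotherm is Pr = 8Tr/(3Vr − 1) − 3/Vr²; this sketch (the grid size and the sample temperatures are arbitrary choices) flags any span where pressure rises with volume, i.e., where the stability condition is violated:

```python
# Scan a reduced van der Waals isotherm for a region with dP/dV > 0.
def vdw_reduced(Tr, Vr):
    return 8 * Tr / (3 * Vr - 1) - 3 / Vr**2

def has_unstable_region(Tr, n=2000):
    # Sample reduced volumes safely above the excluded volume Vr = 1/3
    vs = [0.4 + i * (5.0 - 0.4) / n for i in range(n + 1)]
    ps = [vdw_reduced(Tr, v) for v in vs]
    # Pressure increasing with volume anywhere means an unstable span
    return any(p2 > p1 for p1, p2 in zip(ps, ps[1:]))

print(has_unstable_region(0.85))  # subcritical isotherm -> True
print(has_unstable_region(1.10))  # supercritical isotherm -> False
```

The True result below Tc is the signal that the single-phase assumption has failed there, and it is exactly the span the Maxwell Construction excises with the horizontal tie line.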

Multi-component thermodynamics is essentially the same as that of single component
systems. However, to construct an energy surface U(S,V,N1,N2) requires equation of
state information for solutions. The springboard for solution thermodynamics is osmotic
pressure, in much the way that the Ideal Gas Law is for the single component case.
The Ideal Gas Law was the starting point for building the van der Waals EOS, and the
ideal solution equation of osmotic pressure will permit us to build more complex behavior.

Why do dilute solutions behave “like” ideal gases? The experimental evidence is contained
in Pfeffer’s work. I tried to convey the extreme care and precision of his work, and
I encourage you to read his writings on osmotic pressure. He built on Traube’s work
but with better membranes and experimental setup. To be fair to Traube, osmotic pressure
was Pfeffer’s main research effort while Traube was primarily interested in fermentation.
Be that as it may, Pfeffer used his results to explain how sap gets to the top of
trees, and few have been able to answer so basic a question as why trees are tall.
The connection to the physics of solutions can be credited to van’t Hoff and rested
squarely on the shoulders of Pfeffer’s work. Van’t Hoff’s central conclusion was pi = cRT
for dilute solutions. Notice that this provides a way to calculate the molecular weight
because c[moles/liter] =[grams/liter]/MW. Temperature and grams per liter are set
by the experiment, R is the gas constant, and pi is measured. Molecular weight is
the only unknown.
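The molecular-weight calculation can be sketched numerically. The 10 g/L and 72.4 kPa figures here are invented, sucrose-like values chosen for illustration, not Pfeffer's data:

```python
# Molecular weight from osmotic pressure via van 't Hoff, pi = c R T.
R = 8.314  # J/(mol K)

def molecular_weight(grams_per_liter, pi_pascal, T):
    c = pi_pascal / (R * T)                # molar concentration, mol/m^3
    mass_conc = grams_per_liter * 1000.0   # mass concentration, g/m^3
    return mass_conc / c                   # g/mol

# A 10 g/L solution showing pi = 72.4 kPa at 298 K:
MW = molecular_weight(10.0, 72.4e3, 298.0)
# MW comes out near 342 g/mol, roughly sucrose's molecular weight.
```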

Before we build the ideal nature of solutions into thermodynamics, the thermodynamic
equations need to be generalized. The Euler integration, the Gibbs-Duhem equation,
and the stability conditions are all straightforward generalizations. Life is made
easier by working at constant T and P. Also, the concept of partial molar quantities
is introduced. For understanding phase separation, the stability condition suggests
that chemical potential be plotted against mole fraction. This is difficult because
chemical potential is not easy to measure. Volume is used as an example of how partial
molar quantities are deduced from experiment.

The ideal gas-like behavior of dilute solutions is seen to have a difficulty in its
details. The osmotic pressure is said to be proportional to the molar concentration
(Avogadro’s Law.) This holds true only for a particular class of substances, for the
most part organic molecules. Salts have a higher osmotic pressure than would
be predicted on the basis of the Ideal Gas Law. The relative osmotic strength per
molar concentration was explored by observing the swelling in plant cells (de Vries)
and in red blood cells (Donders and Hamburger.) The explanation of the deviation from
van’t Hoff behavior was supplied by Arrhenius, who studied the electrical properties
of solutions. Nonconductive solutions obeyed the van’t Hoff equation while conducting
ones deviated, and the larger the conductivity, the larger the deviation. This implied
that ionization played an important role in osmotic pressure.

In order to investigate phase behavior, some form of the Gibbs Free Energy must be
known as a function of mole fraction. The delG_{MIX} is convenient. Because
delG_{MIX} = delH_{MIX} - T delS_{MIX} and delH_{MIX} is known from experiment, the
only difficulty is finding delS_{MIX}. This is where van’t Hoff’s idea is useful,
because we know how to calculate delS_{MIX} for a mixture of ideal gases. The basic
stability condition is still in terms of the derivative of the chemical potential
with respect to mole fraction; however, this can be rewritten in terms of delG_{MIX}.
Where delG_{MIX} is concave up, the mixture is stable; where it is concave down, it
is unstable. Unstable regions are crossed with a double tangent construction, which
is the form the Maxwell Construction takes at the delG_{MIX} level. The ideal solution
case is always stable (that is, single phase.) Phase separation is built in by adding
an excess term to the ideal solution delG_{MIX}. A common excess form is that of
Margules, which leads to a liquid-liquid phase separation.
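A sketch of the concavity test for the one-parameter Margules form, with assumed values of A and T:

```python
# Concavity test on delG_MIX for a one-parameter Margules mixture,
#   delG_MIX = RT[x ln x + (1-x) ln(1-x)] + A x(1-x),
# with assumed values of A and T.
R = 8.314   # J/(mol*K)
T = 300.0   # K
A = 6000.0  # J/mol, assumed Margules parameter

def d2G_dx2(x):  # second derivative of delG_MIX with respect to x
    return R * T * (1.0 / x + 1.0 / (1.0 - x)) - 2.0 * A

print(d2G_dx2(0.05) > 0)  # True: concave up (stable) near the edges
print(d2G_dx2(0.5) < 0)   # True: concave down (unstable) in the middle, since A > 2RT
```

Setting this second derivative to zero locates the spinodal compositions that bound the unstable region.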

Four important contributions to thermodynamics are mentioned but not discussed in
detail. The Zeroth Law was formally stated by Planck and named by Fowler. It simply
says that temperature as measured with a thermometer is a useful measure of thermodynamic
state. The Third Law (Nernst, Richards), on the other hand, is quite complex and usually
misstated. It says that change either in chemical or physical state at absolute zero
is accompanied by an entropy change of zero. Its most practical restatement is that
0 K can never be reached. From that comes the Third Law convention (Planck, Fowler),
which says that the arbitrary zero in entropy can be fixed at 0 K.

There were two advances along the lines of Gibbs in the early 1900s. The work of
Carathéodory treated thermodynamics in a more mathematically formal manner. In effect,
this was a continuation of Gibbs’ view of thermodynamics as a study of energy surfaces
that could be described by differential equations. His statement of the Second Law
is most closely tied to the mathematics of differential equations. Finally, Lewis
and Randall published an extremely popular book following Gibbs’ thermodynamics.

In this chapter, the formalism of the last few chapters is applied to liquid-liquid
phase transitions. The shapes of the ideal solution chemical potentials and delG_{MIX}
are reviewed. The Margules excess term is then added to introduce an unstable region.
The “A” parameter is fit to the liquid-liquid critical temperature. In the example
considered, the experimental results were seen to be skewed. Substituting volume
fraction for mole fraction helps and leads to Flory-Huggins Theory.
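For the symmetric one-parameter Margules form, the fit of “A” to the critical temperature reduces to a single relation; the T_c below is an assumed value, not the skewed example from lecture:

```python
# Fitting the Margules "A" parameter to the liquid-liquid critical
# temperature. For the symmetric one-parameter form, the second
# derivative of delG_MIX at x = 1/2 is 4RT - 2A; it first vanishes
# at T = Tc, giving A = 2 R Tc. Tc here is an assumed value.
R = 8.314    # J/(mol*K)
Tc = 350.0   # assumed liquid-liquid critical temperature, K
A = 2 * R * Tc
print(round(A, 1))  # J/mol
```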

The phase rule is discussed. From the number of components, C, and the number of
phases, P, the degrees of freedom, F = C - P + 2, of a mixture can be found. A fixed
point has F=0 (e.g.,
the triple point of water.) The liquid-vapor coexistence curve of water has F=1 (a
unique P for a T.) High temperature, “dry steam” has F=2 (a given temperature could
correspond to any of a number of pressures.) Because of the restrictions imposed by
the number of components, phase diagrams can be classified. It should be appreciated
that Gibbs’ analysis had considerably more depth. To quote George Morey (1888-1965),
*“It is unfortunate that the subject has developed in this manner, instead of by the
direct application of the equations which were developed by Gibbs. The Phase Rule
itself is but an incidental qualitative deduction from these equations, and the justification
of the geometrical methods is their derivation as projections of the lines and surfaces
“of dissipated energy,” painstakingly exemplified by Gibbs. While in the first part
of “Equilibrium of Heterogeneous Substances” the actions of gravity, electrical influences,
and surface forces are excluded from consideration, these restrictions are later removed,
thus rendering unnecessary the various “extended” Phase Rules which have been proposed
to remedy this supposed defect.”*[Morey, 1936]
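The counting itself can be sketched directly from F = C - P + 2, using the water examples above:

```python
# Gibbs phase rule, F = C - P + 2, applied to the water examples above.
def degrees_of_freedom(components, phases):
    return components - phases + 2

print(degrees_of_freedom(1, 3))  # 0: triple point of water
print(degrees_of_freedom(1, 2))  # 1: liquid-vapor coexistence curve
print(degrees_of_freedom(1, 1))  # 2: high-temperature "dry steam"
```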

The specific problem of phase equilibrium between phases of different symmetries was
addressed. This is a case where an analysis based on Ideal Mixture Theory leads to
phase separation. The delG_{MIX} between phases of the same symmetry is approximated
solely by the ideal mixture expression.
In order to mix two phases of different symmetries (e.g., liquid and fcc crystal),
one or the other must be “transformed” into the other phase (e.g., liquid changed
to fcc.) This change introduces a delG for the transition of the pure state from one
phase to the other (e.g., the Gibbs Free Energy of freezing.) This can be done both
ways to get a free energy curve for the mixture in the liquid state and another for
it in the fcc state. Depending on the composition (and temperature), one or the other
phase will be stable. Of course, phases of other symmetries can also exist, say body
centered cubic (bcc.) In this case, both fcc and liquid need to be “transformed” into
bcc. In order to go beyond simple mixtures, an excess term must be added (just like
in the liquid-liquid case.) A number of examples were considered.
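A minimal sketch of the “transformed pure state” construction, with assumed Gibbs Free Energies of freezing for the two pure components and ideal mixing in each phase:

```python
import math

# Sketch of the "transformed pure state" construction for liquid/fcc
# equilibrium, with ideal mixing in each phase. The pure-component
# Gibbs Free Energies of freezing are assumed numbers at an assumed T.
R, T = 8.314, 900.0   # J/(mol*K), K
dG_freeze_A = -500.0  # J/mol: pure A prefers fcc at this T (assumed)
dG_freeze_B = +800.0  # J/mol: pure B prefers liquid at this T (assumed)

def ideal_mix(x):     # x = mole fraction of A
    return R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def G_liquid(x):      # both components kept in the liquid state
    return ideal_mix(x)

def G_fcc(x):         # transform each pure liquid to fcc, then mix
    return x * dG_freeze_A + (1 - x) * dG_freeze_B + ideal_mix(x)

print(G_fcc(0.95) < G_liquid(0.95))  # True: A-rich mixtures favor fcc
print(G_fcc(0.05) < G_liquid(0.05))  # False: B-rich mixtures stay liquid
```

A third curve for bcc would be built the same way, and the common-tangent construction between any pair of curves gives the coexisting compositions.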

Another piece of evidence that justifies the ideal mixture approximation (at least in dilute solutions) is gas absorption. For a given amount of liquid volume, a given volume of gas is found to be absorbed no matter what the pressure is. This is Henry’s Law in its first form. Because at higher pressure there are more molecules in the volume of gas absorbed, one finds that the number of moles (or number of grams) of the gas increases linearly with pressure. Equivalently, the mole fraction of gas in the liquid increases linearly with pressure. It should be mentioned that the volume absorbed is far larger than could be justified by thinking that there is volume in the liquid that needs to be filled up at the gas density. The gas molecules must be packed more tightly into the liquid “voids.” This packing effect could be accounted for by a potential attraction: in the same way that gravity increases the density of the air near the ground, the density in the “liquid-voids” would be increased.
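A numeric sketch of the first form (the absorbed gas volume per liter of liquid is an assumed number):

```python
# Henry's Law in its first form: a fixed volume of gas dissolves per
# volume of liquid, so the moles absorbed grow linearly with pressure.
# The absorbed volume per liter of liquid is an assumed number.
R, T = 0.08206, 298.15   # L*atm/(mol*K), K
V_absorbed = 0.02        # L of gas taken up per L of liquid (assumed)

def moles_absorbed(p):   # ideal gas: n = p V / (R T)
    return p * V_absorbed / (R * T)

print(round(moles_absorbed(2.0) / moles_absorbed(1.0), 6))  # 2.0: linear in p
```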

An extension of Henry’s Law is Raoult’s Law, which works for volatile liquids. The mole fraction in the vapor phase is proportional to the mole fraction in the liquid. If the total pressure is fixed and the vapor pressures of the pure components are known as functions of temperature, the composition and temperature can be found at the dew point and at the bubble point. The resulting liquid-vapor phase diagrams always look much the same within this approximation. More complex phase diagrams can be explained by adding a correction term to Ideal Mixture Theory, as we will see in the next chapter.
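A bubble-point sketch at fixed temperature, using assumed round-number vapor pressures rather than data for real liquids:

```python
# Raoult's Law bubble-point sketch at a fixed temperature. The pure
# vapor pressures are assumed round numbers, not data for real liquids.
P1_sat, P2_sat = 1.2, 0.4   # atm, pure-component vapor pressures (assumed)
x1 = 0.5                    # liquid mole fraction of component 1

P_bubble = x1 * P1_sat + (1 - x1) * P2_sat  # total pressure at the bubble point
y1 = x1 * P1_sat / P_bubble                 # vapor mole fraction of component 1

print(round(P_bubble, 6), round(y1, 6))  # vapor is enriched in the volatile component
```

Repeating the calculation over a range of temperatures (with the pure vapor pressures updated at each T) traces out the bubble-point curve of the phase diagram.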

Raoult studied the effect of organic solutes on the vapor pressure of a volatile solvent. He found the fractional reduction in vapor pressure to be independent of temperature and to be linear in the mole fraction of solute. This was a behavior cited by van’t Hoff in his theory of Ideal Mixtures. It was also used as a way of analyzing dissociating solutes (salts). For instance, if the experiment was off by a factor of two from Raoult’s Law, one would conclude that the molecules had broken into two parts (Na+ and Cl-).
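A sketch of that dissociation argument with assumed numbers:

```python
# Vapor-pressure lowering: dp/p0 = x_solute for a nondissociating solute.
# The "observed" lowering here is an assumed number for a 1:1 salt.
x_solute = 0.01                         # mole fraction of solute
ideal_lowering = x_solute               # Raoult's Law prediction
observed_lowering = 0.02                # assumed measurement
i = observed_lowering / ideal_lowering  # van't Hoff factor
print(round(i, 6))  # 2.0: the solute has split into two parts
```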

To build a successful thermodynamic description around Raoult’s Law and to find a systematic way to correct it, activity coefficients are introduced. In certain cases, the activity coefficient can be found directly: the two cases we looked at are from the Henry’s Law coefficient and from the azeotrope conditions. These, of course, only give a value for the activity coefficient at a single concentration. To find the full phase diagram, the activity coefficient needs to be known at each mole fraction. This is done by writing the Gibbs free energy of mixing in terms of fugacity, which is a natural way to link the activity coefficient to the excess chemical potential. A simple form of the Gibbs free energy of mixing is selected, and its parameters are fit. This permits the vapor-liquid phase diagram to be calculated.
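The azeotrope route can be sketched in a few lines (the pressures below are assumed, not data for a real system):

```python
# Activity coefficients from an azeotrope. At the azeotrope x_i = y_i,
# so modified Raoult's Law, y_i P = gamma_i x_i Pi_sat, collapses to
# gamma_i = P / Pi_sat. All pressures below are assumed, not data.
P = 1.0                     # total pressure at the azeotrope, atm
P1_sat, P2_sat = 0.9, 0.7   # pure vapor pressures at the azeotrope T (assumed)

gamma1 = P / P1_sat
gamma2 = P / P2_sat
print(round(gamma1, 3), round(gamma2, 3))  # both > 1: positive deviation
```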

Why has statistical mechanics not completely replaced thermodynamics? Statistical mechanics bases explanations on atoms while thermodynamics for any non-ideal gas requires “curve-fit” parameters like those in the van der Waals EOS. Unfortunately, simply adopting a belief in atoms does not tie up all the loose ends. The interactions between the atoms must be described mathematically, and this requires a complicated curve fit procedure at the atomic level. Of course, these interactions can be explained at an even more basic level with quantum mechanics. Each step to a more fundamental understanding entails the introduction of new non-observables and of new mathematical complications. Thermodynamics retains a place of supreme importance because it can be developed without reference to atoms and their interactions. Thermodynamics remains relevant to modern physics because it provides a solid touchstone for statistical mechanics, quantum mechanics, and other modern theories.