19.2: Molecular Kinetic Theory of a Monatomic Ideal Gas
The empirical combined gas law is simply a generalization of observed relationships. Using kinetic theory, it is possible to derive it from the principles of Newtonian mechanics. Previously, we thought of an ideal gas as one that obeys the combined gas law exactly. Within the current model, however, we can give a specific definition. We treat a monatomic ideal gas as a system of an extremely large number of very small particles in random motion that collide elastically with one another and with the walls of their container, and that do not interact with each other in any other way.
Consider some amount ($N$ atoms) of such a gas in a cubical container with side length $L$. Let's trace the path of a single gas atom as it collides with the walls:
The path of a single gas atom as it undergoes collisions with the walls of its container
Further, let's restrict ourselves to considering the motion of the particle along the $x$ axis, and its collisions with the right wall, as shown in the picture. Therefore, we only consider $v_x$, the component of the velocity vector perpendicular to that wall.
If the particle's mass is $m$, in one collision, the particle's momentum in the $x$ direction changes by

$$ \Delta p_x = 2mv_x $$
Also, since it has to travel a distance of $2L$ (back and forth, basically) in the $x$ direction between collisions with the right wall, the time between collisions will be

$$ \Delta t = \frac{2L}{v_x} $$
Illustration of the atom's motion in the $x$ direction, viewed from above.
According to Newton's second law, the force imparted by the single particle on the wall is

$$ F = \frac{\Delta p_x}{\Delta t} = \frac{2mv_x}{2L/v_x} = \frac{mv_x^2}{L} $$
Now, since there are $N$ (a very large number) atoms present, the net force imparted on the wall will be

$$ F_{net} = N\frac{m\overline{v_x^2}}{L} $$

where $\overline{v_x^2}$ is $v_x^2$ averaged over all atoms.
Now let us attempt to relate this to the state variables we considered last chapter. Recall that pressure is defined as force per unit area:

$$ P = \frac{F}{A} $$
Since the area of the wall in question is $L^2$, the pressure exerted by the gas atoms on it will equal:

$$ P = \frac{F_{net}}{L^2} = \frac{Nm\overline{v_x^2}}{L^3} $$
Since, for a cubical box, volume $V = L^3$, the formula above can be reduced to:

$$ P = \frac{Nm\overline{v_x^2}}{V} \qquad [1] $$
By the Pythagorean theorem, any three-dimensional velocity vector has the following property:

$$ v^2 = v_x^2 + v_y^2 + v_z^2 $$
Averaging this over the $N$ particles in the box, we get

$$ \overline{v^2} = \overline{v_x^2} + \overline{v_y^2} + \overline{v_z^2} $$
Since the motions of the particles are completely random (as stated in our assumptions), it follows that the averages of the squares of the velocity components should be equal: there is no reason the gas particles would prefer to travel in the $x$ direction over any other. In other words,

$$ \overline{v_x^2} = \overline{v_y^2} = \overline{v_z^2} $$
Plugging this into the average equation above, we find:

$$ \overline{v^2} = 3\overline{v_x^2} \quad \text{and therefore} \quad \overline{v_x^2} = \frac{1}{3}\overline{v^2} $$
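As a quick numerical sanity check (not part of the derivation), a short Python sketch can confirm that for isotropically distributed velocities, the average of $v_x^2$ is indeed one third of the average of $v^2$:

```python
import random

random.seed(42)
N = 200_000

# Draw isotropic random velocities: each component is an independent
# standard normal sample, so no direction is preferred over any other.
vx = [random.gauss(0, 1) for _ in range(N)]
vy = [random.gauss(0, 1) for _ in range(N)]
vz = [random.gauss(0, 1) for _ in range(N)]

avg_vx2 = sum(v * v for v in vx) / N
avg_v2 = sum(x * x + y * y + z * z for x, y, z in zip(vx, vy, vz)) / N

print(avg_vx2 / avg_v2)  # close to 1/3
```

The larger $N$ is, the closer the ratio gets to exactly $\frac{1}{3}$, mirroring the "extremely large number of particles" assumption in our model.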
Plugging this into equation [1], we get:

$$ PV = \frac{1}{3}Nm\overline{v^2} \qquad [2] $$
The right side of the equation should look familiar; this quantity is proportional to the average kinetic energy of the molecules in the gas, since

$$ \overline{KE} = \frac{1}{2}m\overline{v^2} $$
Therefore, we have:

$$ PV = \frac{2}{3}N\,\overline{KE} \qquad [3] $$
This is a very important result in kinetic theory, since it expresses the product of two state variables, or system parameters, pressure and volume, in terms of an average over the microscopic constituents of the system. Recall the empirical ideal gas law from last chapter:

$$ PV = Nk_BT $$

where $k_B$ is Boltzmann's constant.
The left side of this is identical to the left side of equation [3], whereas the only variable on the right side is temperature. Since the left sides are identical, the right sides must be equal as well, and we find:

$$ \overline{KE} = \frac{3}{2}k_BT $$
Therefore, according to the kinetic theory of a monatomic ideal gas, the quantity we called temperature is --- up to a constant coefficient --- a direct measure of the average kinetic energy of the atoms in the gas. This definition of temperature is much more specific, and it is based essentially on Newtonian mechanics.
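To get a feel for the magnitudes involved, here is a small Python sketch evaluating the average kinetic energy per atom at room temperature and the corresponding root-mean-square speed for helium (the helium atomic mass is a standard value filled in for illustration):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K
m_He = 6.646e-27     # mass of one helium atom, kg

# Average kinetic energy per atom: KE = (3/2) k_B T
KE = 1.5 * k_B * T

# Solving (1/2) m v_rms^2 = KE for the root-mean-square speed
v_rms = math.sqrt(2 * KE / m_He)

print(f"KE per atom: {KE:.2e} J")      # about 6.2e-21 J
print(f"v_rms (He): {v_rms:.0f} m/s")  # on the order of 1.4 km/s
```

Note that the average kinetic energy depends only on temperature, not on the gas; the speed, however, depends on the atomic mass, which is why light atoms like helium move so fast.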
Temperature, Again
Now that we have defined temperature for a monatomic gas, a relevant question is: can we extend this definition to other substances? It turns out that yes, we can, but with a significant caveat. In fact, according to classical kinetic theory, temperature is always proportional to the average kinetic energy of molecules in a substance. The constant of proportionality, however, is not always the same.
Consider: the only way to increase the kinetic energies of the atoms in a monatomic gas is to increase their translational velocities. Accordingly, we assumed above that the kinetic energies of such atoms are stored equally in the three components ($v_x$, $v_y$, $v_z$) of their velocities.
On the other hand, other gases --- called diatomic --- consist of two atoms held together by a bond. This bond can be modeled as a spring, and the two atoms and bond together as a harmonic oscillator. Now, a single molecule's kinetic energy can be increased either by increasing its speed, by making it vibrate in simple harmonic motion, or by making it rotate around its center of mass. This difference is understood in physics through the concept of degrees of freedom: each degree of freedom of a molecule or particle corresponds to an independent way of increasing its kinetic energy.
It might seem to you that monatomic gases should have one degree of freedom: their velocity. In fact, they have three, because their velocity can be altered along one of three mutually perpendicular directions without changing the kinetic energy in the other two --- just as a centripetal force does not change the kinetic energy of an object, since it is always perpendicular to its velocity. These are called translational degrees of freedom.
Diatomic gas molecules, on the other hand, have more: the three translational degrees of freedom explained above still exist, but there are now also vibrational and rotational degrees of freedom. Monatomic and diatomic degrees of freedom can be illustrated like this:
Temperature is an average of kinetic energy over degrees of freedom, not a sum. Let's try to understand why by referring to our monatomic ideal gas. In the derivation above, volume was constant; so, temperature was essentially proportional to pressure, which in turn was proportional to the kinetic energy due to translational motion of the molecules. If the molecules had been able to rotate as well as move around the box, they could have had the same kinetic energy with slower translational velocities, and, therefore, lower temperature. In other words, in that case, our assumption that the kinetic energy of the atoms depends only on their translational velocities, implied between equations [2] and [3], would not have held. Therefore, the number of degrees of freedom in a substance determines the proportionality between molecular kinetic energy and temperature: the more degrees of freedom a substance has, the more difficult it is to raise its temperature with a given energy input.
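This idea can be made concrete with the equipartition result that each accessible degree of freedom contributes $\frac{1}{2}R$ to the molar heat capacity at constant volume --- a standard result consistent with, though not derived in, the discussion above. A minimal sketch:

```python
R = 8.314  # molar gas constant, J/(mol*K)

# Equipartition (stated here, not derived): each degree of freedom
# contributes (1/2) R to the molar heat capacity at constant volume.
def molar_cv(degrees_of_freedom):
    return degrees_of_freedom / 2 * R

# Monatomic gas: 3 translational degrees of freedom
print(molar_cv(3))  # about 12.5 J/(mol*K)

# Diatomic gas near room temperature: 3 translational + 2 rotational
# (vibration is typically frozen out by quantum effects; see below)
print(molar_cv(5))  # about 20.8 J/(mol*K)
```

The diatomic gas needs roughly $\frac{5}{3}$ as much energy per mole for the same temperature rise, exactly because the added energy is shared among more degrees of freedom.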
In solids, degrees of freedom are usually entirely vibrational; in liquids, the situation becomes more complicated. We will not attempt to derive results about these substances, however.
A note about the above discussion: since the objects at the basis of our understanding of thermodynamics are atoms and molecules, quantum effects can make certain degrees of freedom inaccessible in certain temperature ranges. Unlike most cases in your current physics class, where quantum effects can be ignored, here they can make an appreciable difference. For instance, the vibrational degrees of freedom of diatomic gas molecules discussed above are, for many gases, inaccessible under very common conditions, although we do not have the means to explain this within our theory. In fact, this was one of the first major failures of classical physics that ushered in the revolutionary discoveries of the early 20th century.
Thermal Energy
In light of the above derivation, it should not surprise you that the kinetic energy from motion of molecules contributes to what is called the thermal energy of a substance. This type of energy is called sensible energy. In ideal gases, this is the only kind of thermal energy present.
Solids and liquids also have a second type of thermal energy, called latent energy, which is associated with the potential energy of their intermolecular bonds in that specific phase --- for example, the energy it takes to break the bonds between water molecules in melting ice (remember, we assumed molecules do not interact in the ideal gas approximation).
To recap, there are two types of Thermal Energy:
- The kinetic energy from the random motion of the molecules or atoms of the substance, called Sensible Energy
- The intermolecular potential energy associated with changes in the phase of a system (called Latent Energy).
Heat
The term heat is formally defined as a transfer of thermal energy between substances. Note that heat is not the same as thermal energy. Before the concept of thermal energy, physicists sometimes referred to the 'heat energy' of a substance, that is, the energy it received from actual 'heating' (heating here can be understood as it is defined above, though for these early physicists and chemists it was a more 'common sense' idea of heating: think beaker over Bunsen burner). The idea was then to try to explain thermodynamic phenomena through this concept.
The reason this approach fails is that --- as stated in the paragraphs above --- it is in fact thermal energy that is most fundamental to the science, and 'heating' is not the only way to change the thermal energy of a substance. For example, if you rub your palms together, you increase the thermal energy of both palms.
Once heat (a transfer of thermal energy) is absorbed by a substance, it becomes indistinguishable from the thermal energy already present: which methods were used to achieve that level of thermal energy is no longer relevant. In other words, 'to heat' is a well-defined concept as a verb: its use automatically implies some kind of transfer. When using heat as a noun, one needs to realize that it, too, must refer to such a transfer, not to something that can exist independently.
Specific Heat Capacity and Specific Latent Heat
The ideas in the paragraphs above can be understood better through the concept of specific heat capacity (or specific heat for short), which relates an increase in temperature of some mass of a substance to the amount of heat required to achieve it. In other words, for any substance, it relates thermal energy transfers to changes in temperature. It has units of Joules per kilogram Kelvin. Here is how we can define and apply specific heat ($Q$ refers to the heat supplied, $m$ to the mass of the substance and $c$ to its specific heat capacity):

$$ Q = mc\Delta T \qquad [1] $$
Heat capacity is largely determined by the number of degrees of freedom of the molecules in a substance (why?). However, it also depends on other parameters, such as pressure. Therefore, the formula above implicitly assumes that these external parameters are held constant (otherwise we couldn't tell whether a measured change in specific heat is real or due to, say, a change in pressure).
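A minimal sketch applying the formula (the `heat_required` helper is hypothetical, and $c \approx 4186$ J/(kg·K) is the standard value for liquid water):

```python
c_water = 4186.0  # specific heat of liquid water, J/(kg*K)

def heat_required(mass_kg, c, delta_T):
    """Q = m c dT: heat needed to warm mass_kg of a substance with
    specific heat c by delta_T kelvin, assuming constant external
    parameters (e.g. pressure) and no phase change."""
    return mass_kg * c * delta_T

# Warming 0.5 kg of water from 20 C to 100 C (an 80 K increase):
print(heat_required(0.5, c_water, 80.0))  # about 1.7e5 J
```

Note that a temperature *difference* of 80 °C is also a difference of 80 K, so Celsius and Kelvin can be used interchangeably in $\Delta T$.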
When a substance undergoes a phase change, its temperature does not change as it absorbs heat. We referred to this as an increase or decrease in latent energy earlier. In this case, the relevant question is: how much heat does it take to change a unit mass of the substance from one phase to another? This ratio is known as latent heat, and is related to heat by the following equation ($L$ refers to the latent heat):

$$ Q = mL \qquad [2] $$
During a phase change, the number of degrees of freedom changes, and so does the specific heat capacity. Heat capacity can also depend on temperature within a given phase, but many substances, under constant pressure, exhibit a constant specific heat over a wide range of temperatures. For instance, look at the graph of temperature vs heat input for a mole ($6.02 \times 10^{23}$ molecules) of water at http://en.wikipedia.org/wiki/File:Energy_thru_phase_changes.png. Note that the x-axis of the graph is called 'relative heat energy' because it takes a mole of water at 0 degrees Celsius as the reference point.
The sloped segments on the graph represent increases in temperature. These are governed by equation [1]. The flat segments represent phase transitions, governed by equation [2]. Notice that the sloped segments have constant, though different, slopes. According to equation [1], the heat capacity in any particular phase is inversely proportional to the slope of the segment that corresponds to that phase on the graph: a larger heat capacity means more heat is needed per degree of temperature increase, hence a shallower slope. The fact that the slopes are constant means that, within a particular phase, the heat capacity does not change significantly as a function of temperature.
Substance | Specific Heat, $c$ (cal/g$\cdot$°C) |
---|---|
Air | 6.96 |
Water | 1.00 |
Alcohol | 0.580 |
Steam | 0.497 |
Ice | 0.490 |
Aluminum | 0.215 |
Zinc | 0.0925 |
Brass | 0.0907 |
Silver | 0.0558 |
Lead | 0.0306 |
Gold | 0.0301 |
Substance | Heat of Fusion, $L_f$ (cal/g) | Heat of Vaporization, $L_v$ (cal/g) |
---|---|---|
Water | 80.0 | 540 |
Alcohol | 26 | 210 |
Silver | 25 | 556 |
Zinc | 24 | 423 |
Gold | 15 | 407 |
Helium | - | 5.0 |
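As a worked example combining equations [1] and [2] with the table values above, here is a sketch (the `heat_ice_to_steam` helper is hypothetical) computing the heat needed to take one gram of ice at $-10$ °C all the way to steam at $110$ °C:

```python
# Specific heats (cal/g*C) and latent heats (cal/g) from the tables above
c_ice, c_water, c_steam = 0.490, 1.00, 0.497
L_fusion, L_vapor = 80.0, 540.0

def heat_ice_to_steam(m, T_start, T_end):
    """Total heat (cal) to take m grams of ice at T_start (below 0 C)
    to steam at T_end (above 100 C): Q = m c dT on the sloped segments,
    Q = m L at the two phase changes."""
    Q = m * c_ice * (0 - T_start)      # warm the ice to 0 C
    Q += m * L_fusion                  # melt it (temperature constant)
    Q += m * c_water * 100             # warm the water to 100 C
    Q += m * L_vapor                   # boil it (temperature constant)
    Q += m * c_steam * (T_end - 100)   # warm the steam to T_end
    return Q

print(heat_ice_to_steam(1.0, -10.0, 110.0))  # about 730 cal
```

Notice that the two phase changes dominate the total: boiling alone takes 540 cal per gram, far more than any of the temperature increases.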
Entropy
The last major concept we are going to introduce in this chapter is entropy. We noted earlier that temperature is determined not just by how much thermal energy is present in a substance, but also by how it can be distributed. Substances whose molecules have more degrees of freedom will generally require more thermal energy for an equal temperature increase than those whose molecules have fewer degrees of freedom.
Entropy is very much related to this idea: it quantifies how the energy is actually distributed among the available degrees of freedom. In other words, it is a measure of disorder in a system. An example may illustrate this point. Consider a monatomic gas with $N$ atoms (for any appreciable amount of gas, this number will be astronomical). It has $3N$ degrees of freedom. For any given value of thermal energy, there is a plethora of ways to distribute the energy among these. At one extreme, it could all be concentrated in the kinetic energy of a single atom. At the other, it could be distributed evenly among them all. According to the discussion so far, these systems would have the same temperature and thermal energy. Clearly, however, they are not identical. This difference is quantified by entropy: the more evenly distributed the energy, the higher the entropy of the system. Here is an illustration:
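The phrase "ways to distribute the energy" can be made concrete with a toy counting model (an illustrative assumption, not part of the text's formal development): treat the thermal energy as `q` identical quanta, count the distributions among a set of degrees of freedom with the stars-and-bars formula, and apply Boltzmann's relation $S = k_B \ln W$, which assigns higher entropy to situations with more available microstates:

```python
from math import comb, log

def microstates(quanta, slots):
    """Number of ways to distribute `quanta` indistinguishable energy
    units among `slots` degrees of freedom (stars and bars):
    W = C(q + n - 1, n - 1)."""
    return comb(quanta + slots - 1, slots - 1)

q = 20  # total energy, in arbitrary quanta

# All energy locked in a single atom's degree of freedom: exactly 1 way
print(microstates(q, 1))

# Energy free to spread over 100 degrees of freedom: astronomically many
print(microstates(q, 100))

# Boltzmann entropy in units of k_B: S = ln W
print(log(microstates(q, 100)))
```

Even for these tiny numbers, freeing the energy to spread raises the microstate count enormously; for a real gas with $3N \sim 10^{23}$ degrees of freedom, the effect is overwhelming.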