The Gift of Chaos

Many, if not most, important engineering systems are chaotic to some extent, but since most college programs don’t deal with this behavior, or with this type of math, I thought I might write something on it. It was a big deal among my PhD colleagues some 30 years back, as it revolutionized the way we looked at classic problems; it’s fundamental, but it’s now hardly mentioned.

Two of the first freshman engineering homework problems I had turn out to have been chaotic, though I didn’t know it at the time. One of these concerned the cooling of a cup of coffee. As presented, the coffee was in a cup at a uniform temperature of 70°C; the room was at 20°C, and some fanciful data were presented to suggest that the coffee cooled at a rate proportional to the difference between the (changing) coffee temperature and the fixed room temperature. Based on these assumptions, we predicted exponential cooling with time, something that was (more or less) observed, but not quite, in real life. The chaotic part in a real cup of coffee is that the cup develops currents that move faster and slower. These currents accelerate heat loss, but since they are driven by the temperature differences within the cup, they tend to speed up and slow down erratically. They accelerate when the cup is not well stirred, causing new stirring, and slow down when it is stirred, and the temperature at any point is seen to rise and fall in an almost rhythmic fashion; that is, chaotically.
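
For concreteness, here is a minimal sketch of the idealized (non-chaotic) homework model, Newton’s law of cooling; the cooling constant k is an assumed illustration, not a measured value, and a real cup wanders chaotically around this smooth curve.

```python
import numpy as np

# A minimal sketch of the idealized homework model, Newton's law of cooling:
#   dT/dt = -k * (T - T_room)
# The cooling constant k below is an assumed illustration, not a measured value;
# a real cup wanders chaotically around this smooth exponential curve.
T_room = 20.0   # room temperature, deg C
T0 = 70.0       # initial coffee temperature, deg C
k = 0.03        # assumed cooling constant, 1/min

t = np.linspace(0, 120, 13)                   # time, minutes
T = T_room + (T0 - T_room) * np.exp(-k * t)   # exact solution: exponential decay
for ti, Ti in zip(t, T):
    print(f"t = {ti:5.0f} min   T = {Ti:5.1f} deg C")
```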

While it is impossible to predict what will happen over a short time scale, there are some general patterns. Perhaps the most remarkable of these is self-similarity: observed over a short time scale (10 seconds or less), the behavior over 10 seconds will look like the behavior over 1 second, and this will look like the behavior over 0.1 second. The only difference is that, the smaller the time scale, the smaller the up-down variation. You can see the same thing with stock movements, wind speed, cell-phone noise, etc., and the same self-similarity can occur in space, so that the shape of clouds tends to be similar at all reasonably small length scales. The maximum average deviation is smaller over smaller time scales, of course, and larger over larger time scales, but not in any obvious way. There is no simple proportionality, but rather a fractional-power dependence that gives these chaotic phenomena a fractal dependence on measurement scale. Some of this is seen in the global temperature graph below.

Global temperatures measured from the Antarctic ice, showing stable, cyclic chaos and self-similarity.

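To see the fractional-power scaling in a toy setting, here is a sketch that uses a random walk as a crude stand-in for a chaotic signal; the average spread over an observation window grows roughly as the square root of the window length, not in simple proportion to it.

```python
import numpy as np

# Toy illustration of self-similar scaling, using a random walk as a crude
# stand-in for a chaotic signal. The mean peak-to-peak spread grows roughly
# as window**0.5, a fractional power, rather than in proportion to the window.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(100_000))

for window in (100, 1_000, 10_000):
    segments = walk[: (len(walk) // window) * window].reshape(-1, window)
    spread = np.mean(segments.max(axis=1) - segments.min(axis=1))
    print(f"window {window:6d} steps: mean spread {spread:8.1f}")
```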

Chaos can be stable or unstable, by the way; the cooling of a cup of coffee was stable because the temperature could not exceed 70°C or go below 20°C. Stable chaotic phenomena tend to have fixed-period cycles in space or time. The world temperature seems to follow this pattern, though there is no obvious reason it should. That is, there is no obvious maximum and minimum temperature for the earth, nor any obvious reason there should be cycles, or that they should be 120,000 years long. I’ll probably write more about chaos in later posts, but I should mention that unstable chaos can be quite destructive, and quite hard to prevent. Some form of chaotic local heating seems to have caused the battery fires aboard the Dreamliner; similarly, most riots, famines, and financial panics seem to be chaotic. Generally speaking, tight control does not prevent this sort of chaos; it just changes the period and makes the eruptions that much more violent. As two examples, consider what would happen if we tried to cap a volcano, or consider the clamp-downs on riots in Syria, Egypt, or Ancient Rome.

From math, we know some alternative ways to keep unstable chaos from getting out of hand: one is to lay off the controls; another is to control chaotically (hard to believe, but true).

 

Statistics Joke

A classic statistics joke concerns a person who’s afraid to fly; he goes to a statistician who explains that planes are very, very safe, especially if you fly a respectable airline in good weather. In that case, virtually the only problem you’ll have is the possibility of a bomb on board. The fellow thinks it over and decides that flying is still too risky, so the statistician suggests he plant a bomb on the airplane, but rig it to not go off. The statistician explains: while it’s very rare to have a bomb onboard an airplane, it’s really unheard of to have two bombs on the same plane.

It’s funny because… the statistician left out the fact that the independent variable (the number of bombs) has to be truly independent. If it is independent, the likelihood is found using a Poisson distribution, a non-normal distribution where the greatest likelihood is zero bombs, and there is no possibility of a negative number of bombs. Poisson distributions are rarely taught in schools, for some reason.
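
For the curious, here is a minimal sketch of the Poisson calculation; the expected-bombs-per-flight rate lam is a made-up illustration, not a real statistic.

```python
from math import exp, factorial

# A minimal sketch of the Poisson distribution for rare, independent events:
# the probability of n events when the expected count is lam. The lam value
# here is a made-up illustration, not a real bomb-on-a-plane statistic.
def poisson(n: int, lam: float) -> float:
    return lam**n * exp(-lam) / factorial(n)

lam = 1e-6  # assumed expected bombs per flight
for n in range(3):
    print(f"P({n} bombs) = {poisson(n, lam):.3e}")

# The joke's flaw: planting your own bomb does not change the independent
# chance of someone else's bomb, which stays ~lam, not poisson(2, lam).
```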

By Dr. Robert E. Buxbaum, Mar 25, 2013. If you’ve got a problem like this (particularly involving chemical engineering) you could come to my company, REB Research.

Hydrogen versus Battery Power

There are two major green energy choices that people are considering to power small-to-medium size, mobile applications like cars and next-generation drone airplanes: rechargeable lithium-ion batteries and hydrogen/fuel cells. Neither choice is an energy source as such, but rather a clean energy carrier. That is, batteries and fuel cells are ways to store and concentrate energy from other sources, like solar or nuclear plants, for use on the mobile platform.

Of these two, rechargeable batteries are the more familiar: they are used in computers, cell phones, automobiles, and the ill-fated Boeing Dreamliner. Fuel cells are less familiar but not totally new: they are used to power most submarines and spy-planes, and find public use in the occasional ‘educational’ toy. Fuel cells provided electricity for the last 30 years of space missions, and continue to power the international space station when the station is in the dark of night (about half the time). Batteries have low energy density (energy per mass or volume), but charging them is cheap and easy. Home electricity costs about 12¢/kWhr and is available in every home and shop. A cheap transformer and rectifier are all you need to turn the alternating-current electricity into DC to recharge a battery virtually anywhere. If not for the cost and weight of the batteries, and the time to charge them (usually an hour or two), batteries would be the obvious option.

Two obvious problems with batteries are the slow speed of charge and the annoyance of having to change the battery every 500 charges or so. If one runs an EV battery 3/4 of the way down and charges it every week, the battery will last about 8 years. Further, battery charging takes 1-2 hours. These numbers are acceptable if you use the car only occasionally, but they get more annoying the more you use the car. By contrast, the tanks used to hold gasoline or hydrogen fill in a matter of minutes and last for decades, or many thousands of fill-cycles.

Another problem with batteries is range. The weight-energy density of batteries is about 1/20 that of gasoline and about 1/10 that of hydrogen, and this affects range. While gasoline stores about 2.5 kWhr/kg including the weight of the gas tank, current Li-ion batteries store far less than this, about 0.15 kWhr/kg. The energy density of hydrogen gas is nearly that of gasoline when the efficiency effect is included. A 100 kg hydrogen tank at 10,000 psi will hold 8 kg of hydrogen, or enough to travel about 350 miles in a fuel-cell car. This is about as far as a gasoline car goes carrying 60 kg of tank plus gasoline. This seems acceptable for long-range and short-range travel, while the travel range with EVs is more limited, and will likely remain that way; see below.
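
As a rough check on the 350-mile figure, here is the back-of-envelope arithmetic in code form; the miles-per-kWh economy is my assumed round number, not a measured one.

```python
# Back-of-envelope check on the 350-mile figure, using numbers from the text.
# The miles-per-kWh economy is my assumed round number, not a measured one.
H2_KWH_PER_KG = 33.3   # lower heating value of hydrogen, kWh/kg
FUEL_CELL_EFF = 0.5    # fuel-cell efficiency, quoted later in this post
MILES_PER_KWH = 2.6    # assumed economy at the wheels, miles/kWh

h2_kg = 8.0            # hydrogen held by the 100 kg, 10,000 psi tank above
wheel_kwh = h2_kg * H2_KWH_PER_KG * FUEL_CELL_EFF
print(f"usable energy ~{wheel_kwh:.0f} kWh -> range ~{wheel_kwh * MILES_PER_KWH:.0f} miles")
```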

The volumetric energy density of compressed hydrogen/fuel-cell systems is higher than for any battery scenario, and hydrogen tanks are far cheaper than batteries. From Battery University: http://batteryuniversity.com/learn/article/will_the_fuel_cell_have_a_second_life


Cost is perhaps the least understood problem with batteries. While electricity is cheap (cheaper than gasoline), battery power is expensive because of the high cost and limited life of batteries. Lithium-ion batteries cost about $2000/kWhr and give an effective 500 charge/discharge cycles; their physical life can be extended by not fully charging them, but it’s the same 500 cycles. The effective cost of the battery alone is thus $4/kWhr (the Battery University site calculates $24/kWhr, but that seems overly pessimistic). Combined with the cost of electricity and the losses in charging, the net cost of Li-ion battery power is about $4.18/kWhr, several times the price of gasoline, even including the low efficiency of gasoline engines.

Hydrogen prices are much lower than battery prices, and nearly as low as gasoline, when you add in the effect of the high-efficiency fuel-cell engine. Hydrogen can be made on-site and compressed to 10,000 psi for less cost than gasoline, and certainly less cost than battery power. If one makes hydrogen by electrolysis of water, the cost is approximately 24¢/kWhr including the cost of the electrolysis unit. While the hydrogen tank is more expensive than a gasoline tank, it is much cheaper than a battery because the technology is simpler. Fuel cells are expensive, though, and only about 50% efficient. As a result, the as-used cost of electrolysis hydrogen in a fuel-cell car is about 48¢/kWhr. That’s far cheaper than battery power, but still not cheap enough to encourage the sale of FC vehicles with the current technology.
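
Here is the cost arithmetic of the last two paragraphs restated as a sketch; the charging-loss figure is my assumption, which is why the battery total comes out near, rather than exactly at, the $4.18/kWhr quoted above.

```python
# The cost arithmetic of the last two paragraphs, restated; the charging-loss
# figure is my assumption, so the battery total lands near, not exactly at,
# the $4.18/kWhr quoted in the text.
battery_cost_per_kwh = 2000.0   # $ per kWh of Li-ion capacity
battery_cycles = 500            # effective charge/discharge cycles
electricity = 0.12              # $/kWh, home electricity
charge_efficiency = 0.85        # assumed charge/discharge efficiency

battery_power = battery_cost_per_kwh / battery_cycles + electricity / charge_efficiency
print(f"Li-ion battery power: ~${battery_power:.2f}/kWh")          # ~$4.1/kWh

electrolysis_h2 = 0.24          # $/kWh of hydrogen, from the text
fuel_cell_eff = 0.5             # fuel-cell efficiency
print(f"electrolysis H2 via fuel cell: ~${electrolysis_h2 / fuel_cell_eff:.2f}/kWh")  # $0.48/kWh
```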

My company, REB Research, provides another option for hydrogen generation: the use of a membrane reactor to make it from cheap, easy-to-transport liquids like methanol. Our technology can be used to make hydrogen either at the station or on board the car. The cost of hydrogen made this way is far cheaper than from electrolysis because most of the energy comes from the methanol, and this energy is cheaper than electricity.

In our membrane reactors, methanol-water (65-75% methanol) is compressed to 350 psi, heated to 350°C, and reacted to produce hydrogen that is purified as it is made: CH3OH + H2O → 3H2 + CO2, with the hydrogen extracted through a membrane within the reactor.

The hydrogen can be compressed to 10,000 psi and stored in a tank on board an automobile or airplane, or one can choose to run this process on board the vehicle and generate hydrogen from the liquid fuel as needed. On-board generation saves weight and cost, and improves safety, since you can carry methanol-water easily in a cheap tank at low pressure. The energy density of methanol-water is about 1/2 that of gasoline, but the fuel cell is about twice as efficient as a gasoline engine, making the overall volumetric energy density about the same. Not including the fuel cell, the cost of energy made this way is somewhat lower than the cost of gasoline, about 25¢/kWhr, since methanol is cheaper than gasoline on a per-energy basis. Methanol is made from natural gas, coal, or trees — non-imported, low-cost sources. And, best yet, trees are renewable.

Why the Boeing Dreamliner’s batteries burst into flames

Boeing’s Dreamliner is currently grounded due to two of its Li-ion batteries having burst into flames, one in flight and another on the ground. Two accidents of the same type in a small fleet is no small matter, as an airplane fire can be deadly, on the ground or at 50,000 feet.

The fires are particularly bad on the Dreamliner because these lithium batteries control virtually everything that goes on aboard the plane. Even without a fire, when they go out, so does virtually every control and sensor. So why did they burn, and what has Boeing done to take care of it? The simple reason for the fires is that management chose to use Li-cobalt oxide batteries, the same Li-battery design that every laptop computer maker had already rejected ten years earlier, when laptops using them started bursting into flames. This is the battery design that caused Dell and HP to recall every computer with it. Boeing decided to use a massive version to control everything on their flagship airplane because it has the highest energy density (see the graphic below). They figured that operational management would ensure safety, without the need to install any cooling or sufficient shielding.

All lithium batteries have a negative electrode (anode) that is mostly lithium. The usual chemistry is lithium metal in a graphite matrix. Lithium metal is light and readily gives off electrons; the graphite makes it somewhat less reactive. The positive electrode (cathode) is typically an oxide of some sort, and here there are options. Most current cell-phone and laptop batteries use some version of manganese nickel oxide as the cathode. Lithium atoms in the anode give off electrons, become lithium ions, and then travel across to the oxide, making a mixed ion oxide that absorbs the electron. The process provides about 4 volts of energy differential per electron transferred. With cobalt oxide, the cathode reaction is more or less CoO2 + Li+ + e− → LiCoO2. Sorry to say, this chemistry is very unstable; the oxide itself is unstable, more unstable than MnNi or iron oxide, especially when it is fully charged, and especially when it is warm (40°C or warmer): 2CoO2 → Co2O3 + 1/2 O2. Boeing’s safety idea was to control the charge rate in a way that overheating was not supposed to occur.

Despite the controls, it didn’t work for the two Boeing batteries that burst into flames. Perhaps it would have helped to add cooling to reduce the temperature — that’s what’s done in laptops and plug-in automobiles — but even with cooling, the batteries might have self-destructed due to local heating effects. These batteries were massive, and there is plenty of room for one spot to get hotter than the rest; this seems to have happened in both fires, either as a cause or a result. Once the cobalt oxide gets hot and oxygen is released, a lithium-oxygen fire can spread to the whole battery, even if the majority is held at a low temperature. If local heating were the cause, no amount of external cooling would have helped.

Energy densities of various battery materials. From Battery University.

Something that would have helped was a polymer interlayer separator to keep the unstable cobalt oxide from fueling the fire; there was none. Another option is to use a more stable cathode like iron phosphate or lithium manganese nickel. As shown in the graphic above, these stable oxides do not have the high energy density of Li-cobalt oxide. When the unstable cobalt oxide decomposed, there were oxygen, lithium, and heat in one space, and none of the fire extinguishers on the planes could put out the fires.

The solution that Boeing has proposed, and that Washington is reviewing, is to leave the batteries unchanged but to enclose them in a massive titanium shield, with the vapors formed on burning vented outside the airplane. The claim is that this shield will protect the passengers from the fire, if not from the loss of electricity. This does not appear to be the best solution. Airbus had planned to use the same batteries on their newest planes, but has now gone retro and plans to use Ni-Cad batteries. I don’t think that’s the best solution either. Better options, I think, are nickel metal hydride or the very stable lithium iron phosphate batteries that Segway uses. Better yet would be to use fuel cells, an option that appears to be better than even the best batteries. Fuel cells are what the navy uses on submarines and what NASA uses in space. They are both more energy-dense and safer than batteries. As a disclaimer, REB Research makes hydrogen generators and purifiers that are used with fuel-cell power.

More on the chemistry of Boeing’s batteries and their problems can be found on Wikipedia. You can also read an interview with the head of Tesla motors regarding his suggestions and offer of help.

 

Two things are infinite

Einstein is supposed to have commented that there are only two things that are infinite: the size of the universe and human stupidity, and he wasn’t sure about the former.

While Einstein still appears to be correct about the latter infinity, there is now more disagreement about the size of the universe. In Einstein’s day, it was known that the universe appeared to have originated in a big bang, with all mass radiating outward at a ferocious rate. If the mass of the universe were high enough, and the speed slow enough, the universe would be finite and closed in on itself. That is, it would be a large black hole. But in Einstein’s day, the universe didn’t look to have enough mass. It thus looked like the universe was endless, but non-uniform. It appeared to be mostly filled with empty space — something that kept us from frying from the heat of distant stars.

Since Einstein’s day we’ve discovered more mass in the universe, but not quite enough to make us a black hole given the universe’s size. We’ve discovered neutron stars and black holes, dark concentrated masses, but not enough of them. We’ve discovered neutrinos, tiny neutral particles that fill space, and we’ve shown that they have enough rest-mass that neutrinos are now thought to make up most of the mass of the universe. But even with this dark-ish matter, we still have not found enough for the universe to be non-infinite, a black hole. Worse yet, we’ve discovered dark energy, something that keeps the universe expanding at nearly the speed of light when you’d think it should have slowed by now; this fast expansion makes it ever harder to find enough mass to close the universe (why we’d want to close it is an aesthetic issue discussed below).

Still, there is evidence for another, smaller mass item floating in space: the axion. This particle, and its yet-smaller companion, the axino, may be the source of both the missing dark matter and the dark energy; see the figure below. Axions should have masses of about 10^-7 eV, and should interact enough with matter to explain why there is more matter than antimatter, while leaving the properties of matter otherwise unchanged. From normal physics, you’d expect an equal amount of matter and antimatter, as antimatter is just matter moving backwards in time. Further, the light mass and weak interactions could allow axions to provide a halo around galaxies (helpful for galactic stability).

Mass of the universe with and without axions. Here is a plot from a recent SUSY talk (2010): http://susy10.uni-bonn.de/data/KimJEpreSUSY.pdf


The reason you’d want the universe to be closed is aesthetic. The universe is nearly closed, if you think in terms of scientific numbers, and it’s hard to see why it should not then be closed. We appear to have an awful lot of mass, in terms of grams or kg, but only about 20% of the mass required for a black hole. In terms of orders of magnitude, we are so close that you’d think we’d have 100% of the required mass. If axions are found to exist, and the evidence now is about 50-50, they will interact with strong magnetic fields so that they change into photons, and photons change into axions. It is possible that the mass this represents will be the missing dark matter allowing our universe to be closed, and will be the missing dark energy.

As a final thought, I’ve always wondered why religious leaders have been so against mention of “the big bang.” You’d think that the biggest boost to religion would be knowledge that everything appeared from nothing one bright and sunny morning, but they don’t seem to like the idea at all. If anyone can explain that to me, I’d appreciate it. Thanks, Robert E. B.

How is Chemical Engineering?

I’m sometimes asked about chemical engineering by high-schoolers with some science aptitude. Typically, they are trying to decide between a major in chemistry or chemical engineering. They’ve usually figured out that chemical engineering must be some practical version of chemistry, but can’t quite figure out how that could be engineering. My key answer here is: unit operations.

If I were a chemist trying to make an interesting product, beer or whisky say, I might start with sugar, barley, water, and yeast, plus perhaps some hops and tablets of nutrients and antimicrobial. After a few hours of work, I’d have 5 gallons of beer fermenting, and after a month I’d have beer that I could either drink or batch-distill into whisky. If I ran the cost numbers, I’d find that my supplies cost as much as buying the product in a store; the value of my time was thus zero, and it would not be any higher if I were to scale up production: I’m a chemist.

The key to making my time more valuable is unit operations. I need to scale up production and use less costly materials. Corn costs less than sugar but has to be enzyme-processed into a form that can be fermented. Essentially, I have to cook a large batch of corn at the right temperatures (near boiling), then add enzymes from the beer or from sprouted corn, and then hold the temperature for an hour or more. Sounds simple, but it requires good heat control, good heating, and good mixing; otherwise the enzymes will die or won’t work, or the corn will burn and stick to the bottom of the pot. These are all unit operations; you’ll learn more about them in chemical engineering.

Reactor design is a classical unit operation. Do I react in large batches, or in a continuous fermenter? How do I hold on to the catalyst (enzymes)? What is the contact time? These are the issues of reactor engineering, and while different catalysts and reactions have different properties and rates, the analysis is more or less the same.

Another issue is solid-liquid separation, in this case filtration of the dregs. When beer is made in small batches, the bottoms of the barrel, the dregs, are left to settle and then washed down the sink. At larger scales, settling will take too long and will still leave a beer that is cloudy. Further, the dregs are too valuable to waste. At larger scales, you’ll want to filter the beer and do something with the residue. Centrifugal filtration is typically used, and the residue is typically dried and sold as animal feed. Centrifugal filtration is another unit operation.

Distillation is another classical unit operation. An important part here is avoiding hangover-producing higher alcohols and nasty-tasting “fusel oils.” There are tricks here that are more or less worth doing depending on the product you want. Typically, you start with simple processes and equipment and keep tweaking them until the product and costs are what you want. At the end, typically, the process equipment looks more like a refinery than like a kitchen: chemical engineering equipment is fairly different from the small-batch equipment you used as a chemist.

The same approach to making things and scaling them up also applies in management situations, by the way, and many of my chemical engineering friends have become managers.

The Martian sky: why is it yellow?

In a previous post, I detailed my calculations concerning the color of the sky and sun. Basically, the sun gives off light mostly in the yellow to green range, with fairly little red or purple. A lot of the blue and green wavelengths scatter, leaving the sun looking yellow: the yellow light looks yellow, and the remaining red plus blue also looks yellow because of additive color.

If you look at the sky through a spectroscope, it’s pretty blue with some green. Sky blue involves a bit of an eye trick of additive color, so that we see the scattered blue + green as sky blue and not aqua. At sundown, the sun becomes reddish and the majority of the sky becomes greenish-grey, as more green and yellow light gets scattered. The sky near the sun is orange, as the atmosphere is thick enough to scatter orange, while the blue and green scatter out.

Now, to talk about the color of the sky on Mars, both at noon and at sunset. Except for the effect of the red dust, I would expect the sky on Mars to be blue, just like on earth, but a lighter shade of blue, as the atmosphere is thinner. When you add some red from the dust, one would expect the sky to be grey. That is, I would expect to find a simple combination of a base of sky blue (blue plus green), plus some extra red-orange light scattered from the Martian dust. In additive colors, the combination of blue-green and red-orange is grey, so that’s the color I’d expect the Martian sky to be normally. Some photos of the Martian sky match this expectation; see below. My guess is this is on a day when there was not much dust in the air, though NASA provides no details here.

The Martian sky, looking grey.

On some days (high-dust days, I assume), the Martian sky turns a shade of yellow-green. I’d guess that’s because the red dust absorbs the blue and some of the green spectrum, but does not actually add red. We are thus dealing with subtractive color, and in subtractive color, orange plus blue-green = butterscotch, not grey or pink. A toy sketch of the difference follows the photo below.

Martian sky color
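
Here is that toy sketch of the additive/subtractive difference: additive mixing averages light sources, while subtractive mixing multiplies transmitted fractions. The RGB triples are my rough stand-ins for the sky’s blue-green and the dust’s red-orange, for illustration only.

```python
import numpy as np

# Toy sketch of the additive/subtractive difference. Additive mixing averages
# light sources; subtractive mixing multiplies transmitted fractions. The RGB
# triples are rough stand-ins for sky blue-green and dust red-orange.
sky  = np.array([0.35, 0.60, 0.90])   # blue-green light
dust = np.array([0.90, 0.45, 0.15])   # red-orange light / pigment

additive = (sky + dust) / 2   # two light sources together: nearly equal RGB, grey
subtractive = sky * dust      # dust absorbing the sky light: R > G >> B, butterscotch

print("additive   :", np.round(additive, 2))
print("subtractive:", np.round(subtractive, 2))
```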

I now present a photo of the Martian sky at sunset. This is something really peculiar that I would not have expected ahead of time, but think I can explain now that I see it. The sky looks yellow in general, like in the photo above, but blue around the sun. I could explain this picture by saying that the blue and green of the Martian sky is being scattered by the Martian air (CO2, mostly), just like our atmosphere scatters these colors on earth; the sky near the sun looks blue, not red-orange, because the Martian atmosphere is thinner (at noon there is less air to scatter light, but at sundown the atmosphere is the same thickness as ours, more or less). The red of the dust does not show up in the sky color near the sun, since the red color is back-scattered near the sun, not front-scattered. The Martian sky is yellow elsewhere, where there is some front-scatter of the reddish light reflecting off of the dust. This sounds plausible to me; tell me what you think.

Martian sky at sunset


As an aside, while I have long understood there was an experimental difference between subtractive and additive color, I have never quite understood why this should be so. Why is it that subtractive color combinations are different, and uniformly different, from additive color combinations? I’d have thought you’d get more or less the same color if you remove red from one part of a piece of paper and remove blue from another as if you add red, purple, and yellow. A mental model I have (perhaps wrong) is that subtractive color looks the way it does because of the details of the spectral absorption of the particular pigment chemicals that are typically used. Based on this model, I expect to find, someday, some new red and green pigments whose combination looks yellow when mixed on a page. I’ve not found them yet, but that’s my expectation — perhaps you know of a really good explanation for why additive color is so different from subtractive color.

Some people have noticed that I’m wearing a rather dapper suit during the recent visit of the press to my lab. It’s important to dress sharp, I think, and that varies from situation to situation. Fashion is an obligation, not a privilege; you’ve got to be willing to suffer for it, for the greater good of all.


Do you think Lady Gaga finds her stuff comfortable? She does it for the greater good. 

R.E. Buxbaum. You are your own sculpture; Be art.

 

Joke about antimatter and time travel

I’m sorry we don’t serve antimatter men here.

Antimatter man walks into a bar.

It’s funny because… in quantum physics there is no directionality in time. Thus an electron can change direction in time, and it then appears to the observer as a positron: an anti-electron that has the same mass as a normal electron but the opposite charge, an opposite spin, etc. In this physics, the reason electrons and positrons appear to annihilate is that there was only one electron to begin with. That electron started going backwards in time, so it disappeared in our forward-in-time time-frame.

The thing is, time is quite apparent on macroscopic scales. It’s one of the most apparent aspects of macroscopic existence. Perhaps the clearest proof that time is flowing in one direction only is entropy. In normal life, you can drop a glass and watch it break whenever you like, but you cannot drop shards and expect to get a complete glass. Similarly, you know you are moving forward in time if you can drop an ice cube into a hot cup of coffee and make it lukewarm. If you can reach into a cup of lukewarm coffee and extract an ice cube to make it hot, you’re moving backwards in time.

It’s also possible that gravity proves that time is moving forward. If an anti-apple is just a normal apple that is moving backwards in time, then I should expect that, when I drop an anti-apple, I will find it floats upward. On the other hand, if mass is inherently a warpage of space-time, it should fall down. Perhaps when we understand gravity, we will also understand how quantum physics meets the real world of entropy.

Heat conduction in insulating blankets, aerogels, space shuttle tiles, etc.

A lot about heat conduction in insulating blankets can be explained by the ordinary motion of gas molecules. That’s because the thermal conductivity of air (or any likely gas) is much lower than that of glass, alumina, or any likely solid material used for the structure of the blanket. At any temperature, the average kinetic energy of an air molecule is 1/2 kT in any direction, or 3/2 kT altogether, where k is Boltzmann’s constant and T is the absolute temperature in K. Since kinetic energy equals 1/2 mv², you find that the average velocity in the x direction must be v = √(kT/m) = √(RT/M). Here m is the mass of the gas molecule in kg, M is the molecular weight also in kg (0.029 kg/mol for air), R is the gas constant, 8.314 J/mol·K, and v is the molecular velocity in the x direction, in meters/sec. From this equation, you will find that v is quite large under normal circumstances, about 290 m/s (650 mph) for air molecules at an ordinary temperature of 22°C or 295 K. That is, air molecules travel in any fixed direction at roughly the speed of sound, Mach 1 (the average speed including all directions is about √3 as fast, or about 1130 mph).

The distance a molecule will go before hitting another one is a function of the cross-sectional areas of the molecules and their densities in space. Dividing the volume of a mol of gas, 0.0224 m³/mol at “normal conditions,” by the number of molecules in the mol (6.02 x 10^23) gives an effective volume per molecule: 0.0224 m³ / 6.02 x 10^23 = 3.72 x 10^-26 m³/molecule at normal temperatures and pressures. Dividing this volume by the molecular cross-section area for collisions (about 1.6 x 10^-19 m² for air, based on an effective diameter of 4.5 Angstroms) gives a free-motion distance of about 0.23 x 10^-6 m, or 0.23µ, for air molecules at standard conditions. This distance is small, to be sure, but it is roughly 1000 times the molecular diameter, and as a result air behaves nearly as an “ideal gas,” one composed of point masses, under normal conditions (and most conditions you run into). The distance the molecule travels to or from a given surface will be smaller, 1/√3 of this on average, or about 1.35 x 10^-7 m. This distance will be important when we come to estimate heat transfer rates at the end of this post.
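
The two numbers above, the one-directional molecular speed and the mean free path, can be reproduced in a few lines; this is just the text’s arithmetic restated.

```python
import numpy as np

# Restating the arithmetic above: the one-directional molecular speed and the
# mean free path of air at normal conditions.
R = 8.314        # gas constant, J/mol/K
M = 0.029        # molecular weight of air, kg/mol
T = 295.0        # temperature, K
N_A = 6.02e23    # Avogadro's number, 1/mol

v = np.sqrt(R * T / M)                  # speed in one direction, ~290 m/s
vol_per_molecule = 0.0224 / N_A         # m^3 per molecule at normal conditions
sigma = 1.6e-19                         # collision cross-section, m^2 (d ~ 4.5 A)
free_path = vol_per_molecule / sigma    # ~0.23 micron between collisions

print(f"v = {v:.0f} m/s")
print(f"free path = {free_path * 1e6:.2f} micron")
print(f"surface-direction distance = {free_path / np.sqrt(3):.2e} m")  # ~1.35e-7 m
```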

 

Molecular motion of an air molecule (oxygen or nitrogen) as part of heat transfer process; this shows how some of the dimensions work.


The number of molecules hitting per square meter per second is most easily calculated from the transfer of momentum. The pressure at the surface equals the rate of change of momentum of the molecules bouncing off. At atmospheric pressure, 103,000 Pa = 103,000 Newtons/m², the number of molecules bouncing off per second is half this pressure divided by the mass of each molecule times the velocity in the surface direction. The contact rate is thus found to be (1/2) x 103,000 Pa x 6.02 x 10^23 molecules/mol / (290 m/s x 0.029 kg/mol) = 36,900 x 10^23 molecules/m²·sec.

The thermal conductivity is merely this number times the heat capacity transferred per molecule times the distance of the transfer. I will now calculate the heat capacity per molecule from statistical mechanics because I’m used to doing things this way; other people might look up the heat capacity per mol and divide by 6.02 x 10^23. For any gas, the heat capacity that derives from kinetic energy is k/2 per molecule in each direction, as mentioned above. Combining the three directions, that’s 3k/2. Air molecules look like dumbbells, though, so they have two rotations that contribute another k/2 of heat capacity each, and they have a vibration that contributes k. Per mole, I begin with an approximate value for the gas constant R of 2 cal/mol·°C; it’s actually 1.987, but I round up to include some electronic effects. Based on this, we calculate the heat capacity of air to be 7 cal/mol·°C at constant volume, or 1.16 x 10^-23 cal/molecule·°C. The amount of energy that can transfer to the hot (or cold) wall is this heat capacity times the temperature difference that molecules carry between the wall and their first collision with other gases. The temperature difference carried by air molecules at standard conditions is only 1.35 x 10^-7 times the temperature difference per meter, because the molecules only go that far before colliding with another molecule (remember, I said this number would be important). The thermal conductivity of stagnant air is thus calculated by multiplying the number of molecules that hit per m² per second, the distance the molecule travels in meters, and the effective heat capacity per molecule. This is 36,900 x 10^23 molecules/m²·sec x 1.35 x 10^-7 m x 1.16 x 10^-23 cal/molecule·°C = 0.00578 cal/m·sec·°C, or 0.0241 W/m·°C. This value is (pretty exactly) the thermal conductivity of dry air that you find by experiment.
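
And here is the rest of the calculation, the wall collision rate and the resulting conductivity, again just restating the arithmetic above.

```python
# Completing the calculation: the wall collision rate and the thermal
# conductivity of stagnant air, restating the numbers in the text.
P = 103_000.0          # pressure used in the text, Pa
m = 0.029 / 6.02e23    # mass of one air molecule, kg
v = 290.0              # one-directional molecular speed, m/s
travel = 1.35e-7       # surface-direction travel distance, m
cv = 1.16e-23          # heat capacity per molecule, cal/molecule/degC

hits = P / (2 * m * v)          # collision rate, ~3.69e27 molecules/m^2/s
k_air = hits * travel * cv      # thermal conductivity, cal/m/s/degC
print(f"hit rate = {hits:.3e} molecules/m^2/s")
print(f"k = {k_air:.5f} cal/m/s/degC = {k_air * 4.184:.4f} W/m/degC")  # ~0.024
```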

I did all that math, though I already knew the thermal conductivity of air from experiment, for a few reasons: to show off the sort of stuff you can do with simple statistical mechanics; to build up skills in case I ever need to know the thermal conductivity of deuterium or iodine gas, or mixtures; and finally, to be able to understand the effects of pressure, temperature, and (mainly insulator) geometry — something I might need to design a piece of equipment with lower heat losses, for example. I find, from my calculation, that we should not expect much change in thermal conductivity with gas pressure at near-normal conditions; to first order, changes in pressure change the distance the molecule travels to exactly the same extent that they change the number of molecules that hit the surface per second. At very low pressures or very small distances, lower pressures will translate to lower conductivity, but for normal-ish pressures and geometries, changes in gas pressure should not affect thermal conductivity — and they do not.

I’d predict that temperature would have a larger effect on thermal conductivity, but still not an order-of-magnitude effect. Increasing the temperature increases the distance between collisions in proportion to the absolute temperature, but decreases the number of wall collisions by the square root of T: at constant pressure there are proportionately fewer molecules per volume, even though each moves faster. As a result, increasing T has a √T positive effect on thermal conductivity.

Because neither temperature nor pressure has much effect, you might expect the thermal conductivity of all air-filled insulating blankets, at all normal-ish conditions, to be more or less that of standing air (air without circulation). That is what you find, for the most part: the same 0.024 W/m·°C thermal conductivity with standing air, with high-tech NASA fiber blankets on the space shuttle, and with the cheapest styrofoam cups. Wool felt has a thermal conductivity of 0.042 W/m·°C, about twice that of air, a not-surprising result given that wool felt is about 1/2 wool and 1/2 air.

Now we can start to understand the most recent class of insulating blankets, those with very fine fibers, or thin layers of fiber (or aluminum or gold). When these are separated by less than 0.2µ, you finally decrease the thermal conductivity at room temperature below that of air. These layers decrease the distance traveled between gas collisions, but still leave the same number of collisions with the hot or cold wall; as a result, the smaller the gap below 0.2µ, the lower the thermal conductivity. This happens in aerogels and some space blankets that have very small silica fibers, less than 0.1µ apart (<100 nm). Aerogels can have much lower thermal conductivities than 0.024 W/m·°C, even when filled with air at standard conditions.

In outer space you get lower thermal conductivity without high-tech aerogels because the free path is very long. At these pressures, virtually every molecule hits a fiber before it hits another molecule; even for a rough blanket with distant fibers, the fibers break up the path of the molecules significantly. Thus, the fibers of the space shuttle (about 10 µ apart) provide far lower thermal conductivity in outer space than on earth. You can get the same benefit in the lab if you put a high vacuum of, say, 10^-7 atm between glass walls that are 9 mm apart. Without the walls, the air molecules could travel 1.35 x 10^-7 m / 10^-7 = 1.35 m before colliding with each other. Since the walls of a typical Dewar are about 0.009 m apart (9 mm), the heat conduction of the Dewar is thus 1/150 (0.7%) as high as for a normal air layer 9 mm thick; there is no thermal conductivity of Dewar flasks and vacuum bottles as such, since the amount of heat conducted is independent of gap distance. Pretty spiffy. I use this knowledge to help with the thermal insulation of some of our hydrogen generators and hydrogen purifiers.

There is another effect that I should mention: black-body heat transfer. In many cases black-body radiation dominates: it is the reason the tiles are white (or black) and not clear, and it is the reason Dewar flasks are mirrored (a mirrored surface provides less black-body heat transfer). This post is already too long to do black-body radiation justice here, but I treat it in more detail in another post.

R.E. Buxbaum