Category Archives: Science: Physics, Astronomy, etc.

Most Heat Loss Is Black-Body Radiation

In a previous post I used statistical mechanics to show how you’d calculate the thermal conductivity of any gas, and why the insulating power of the best normal insulating materials is usually close to that of ambient air. That analysis considered only the motion of molecules, not of photons (black-body radiation), and thus under-predicted heat transfer in most circumstances. Though black-body radiation is often ignored in chemical engineering calculations, it is often the major heat-transfer mechanism, even at modest temperatures.

One can show from quantum mechanics that the radiative heat transfer between two surfaces at temperatures T and To is proportional to the difference of the fourth powers of the two temperatures on the absolute (Kelvin) scale.

Heat transfer rate = P = A ε σ( T^4 – To^4).

Here, A is the area of the surfaces, σ is the Stefan–Boltzmann constant, and ε is the surface emissivity, a number that is about 1 for most non-metals and about 0.3 for stainless steel. For A measured in m², σ = 5.67×10⁻⁸ W m⁻² K⁻⁴.

Infrared picture of a fellow wearing a black plastic bag on his arm. The bag is nearly transparent to heat radiation, while his eyeglasses are opaque. His hair provides some insulation.

Unlike conduction, radiative heat transfer does not depend on the distance between the surfaces, but only on the temperatures and the infra-red (IR) reflectivity. This is different from normal, visible-light reflectivity, as seen in the infra-red photo below of a lightly dressed person standing in a normal room. The fellow has a black plastic bag on his arm, but you can hardly see it here, as it hardly affects heat loss. His clothes don’t do much either, but his hair and eyeglasses are reasonably effective blocks to radiative heat loss.

As an illustrative example, let’s calculate the radiative and conductive heat-transfer rates of the person in the picture, assuming he has 2 m² of surface area, an emissivity of 1, and a skin/clothes temperature of about 86°F; that is, 30°C, or 303 K absolute. If this person stands in a room at 71.6°F (22°C, 295 K), the radiative heat loss is calculated from the equation above: 2 × 1 × 5.67×10⁻⁸ × (8.43×10⁹ − 7.57×10⁹) = 97.5 W. This is 23.3 cal/second, or 84 Cal/hr, or about 2020 Cal/day: nearly the expected basal calorie use of a person this size.
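For anyone who wants to check the arithmetic, here is the same radiative calculation as a short Python sketch; the inputs are the illustrative numbers from the text, not measurements:

```python
# Radiative heat loss, P = A * eps * sigma * (T^4 - To^4), for the fellow
# in the picture: A = 2 m^2, emissivity 1, skin 303 K, room 295 K.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_loss(area_m2, emissivity, t_hot_k, t_cold_k):
    return area_m2 * emissivity * SIGMA * (t_hot_k**4 - t_cold_k**4)

p_rad = radiative_loss(2.0, 1.0, 303.0, 295.0)   # about 97 W
cal_per_day = p_rad * 86400 / 4184               # 1 food Calorie = 4184 J
print(f"{p_rad:.1f} W, about {cal_per_day:.0f} Cal/day")
```

Small differences from the 97.5 W in the text are rounding in the fourth powers.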

The conductive heat loss is typically much smaller. As discussed previously in my analysis of curtains, the rate is inversely proportional to the heat-transfer distance and proportional to the temperature difference. For the fellow in the picture, standing in relatively stagnant air, the thermal boundary layer will be about 2 cm (0.02 m) thick. Multiplying the thermal conductivity of air, 0.024 W/mK, by the surface area and the temperature difference, and dividing by the boundary-layer thickness, we find a heat loss of 2 × 0.024 × (30−22)/0.02 = 19.2 W. This is 16.5 Cal/hr, or 397 Cal/day: about 20% of the radiative heat loss, suggesting that some 5/6 of a sedentary person’s heat loss is by black-body radiation.
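The conduction estimate can be checked the same way; a minimal sketch using the boundary-layer numbers above:

```python
# Conductive loss through a stagnant-air boundary layer (Fourier's law for
# a slab): P = k * A * dT / L, with the illustrative numbers from the text.
def conductive_loss(k_w_mk, area_m2, dt_k, thickness_m):
    return k_w_mk * area_m2 * dt_k / thickness_m

p_cond = conductive_loss(0.024, 2.0, 30.0 - 22.0, 0.02)
print(f"{p_cond:.1f} W")  # about 19 W, roughly 1/5 of the ~97 W radiative loss
```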

We can expect that black-body radiation dominates conduction when looking at heat-shedding losses from hot chemical equipment because this equipment is typically much warmer than a human body. We’ve found, with our hydrogen purifiers for example, that it is critically important to choose a thermal insulation that is opaque or reflective to black body radiation. We use an infra-red opaque ceramic wrapped with aluminum foil to provide more insulation to a hot pipe than many inches of ceramic could. Aluminum has a far lower emissivity than the nonreflective surfaces of ceramic, and gold has an even lower emissivity at most temperatures.

Many popular insulation materials are not black-body opaque, and most hot surfaces are not reflectively coated. Because of this, you can find that the heat loss rate goes up as you add too much insulation. After a point, the extra insulation increases the surface area for radiation while barely reducing the surface temperature; it starts to act like a heat fin. While the space-shuttle tiles are fairly mediocre in terms of conduction, they are excellent in terms of black-body radiation.
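A related, textbook version of this “more insulation, more loss” effect is the critical radius of insulation for a thin pipe: adding insulation grows the outer surface area faster than it adds conduction resistance, until the outer radius reaches k/h. This is not the radiation-transparency mechanism described above, but it produces the same fin-like outcome. A sketch with assumed, illustrative numbers:

```python
import math

# Heat loss per metre of a thin pipe vs. insulation radius. The numbers
# are illustrative assumptions, not a design calculation; h lumps together
# convection and radiation at the outer surface.
k = 0.2     # insulation conductivity, W/mK (assumed)
h = 10.0    # outer-surface heat-transfer coefficient, W/m2K (assumed)
r0 = 0.005  # bare-pipe radius, m
dT = 200.0  # pipe-to-ambient temperature difference, K

def loss_per_metre(r_outer):
    r_cond = math.log(r_outer / r0) / (2 * math.pi * k)  # conduction resistance
    r_surf = 1.0 / (2 * math.pi * r_outer * h)           # surface-film resistance
    return dT / (r_cond + r_surf)

bare  = loss_per_metre(r0 * 1.000001)
thin  = loss_per_metre(0.015)        # 10 mm of insulation: loss goes UP
crit  = loss_per_metre(k / h)        # loss peaks at r = k/h = 20 mm here
thick = loss_per_metre(0.10)         # only far past k/h does it drop again
print(bare, thin, crit, thick)
```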

There are applications where you want to increase heat transfer without having to resort to direct contact with corrosive chemicals or heat-transfer fluids. Often black body radiation can be used. As an example, heat transfers quite well from a cartridge heater or band heater to a piece of equipment even if they do not fit particularly tightly, especially if the outer surfaces are coated with black oxide. Black body radiation works well with stainless steel and most liquids, but most gases are nearly transparent to black body radiation. For heat transfer to most gases, it’s usually necessary to make use of turbulence or better yet, chaos.

Robert Buxbaum

Metals and nonmetals

Hydrogen is both a metal and a non-metal. It says so on the periodic-table coffee cups produced (and sold) by my company, but not on any other periodic table I’ve seen. That’s a shame, for at least two reasons. First, on a physicochemical level, while hydrogen is a metal in the sense that it combines with non-metals like chlorine and oxygen to form HCl and H2O, it’s not a metal in how it looks (not very shiny, malleable, etc.). Hydrogen also acts like a chemical non-metal in the sense that it reacts with most metals to form metal hydrides like NaH, CaH2, and YH3 (my company sells metal-hydride getters, and metal membranes that use this property), and it looks like a non-metal: it’s a gas, like non-metallic chlorine, fluorine, and oxygen.

REB Research, Periodic table coffee cup

Most middle schoolers and high schoolers learn to differentiate metals and nonmetals by where they sit on the periodic tables they are given, and by general appearance and feel; that is, by entirely non-scientific methods. Most of the elements on the left side of their periodic tables are shiny and conduct electricity reasonably well, so students come to believe that these are fundamental properties of metals, without noting that boron and iodine (on the right side) are both shiny and conduct electricity, while hydrogen (presumably the first metal) does not. Students note that many metals are ductile without being told that calcium and chromium are brittle, while boron and tin (non-metals) are ductile. And what’s with the jagged dividing line? Some borderline cases, like aluminum, look awfully metallic by normal standards.

The actual distinction, and the basis for the line, has nothing to do with the descriptions taught in middle school, but everything to do with water. When an element is oxidized to its most common oxide and that oxide is dissolved in water, the solution will be either acidic or basic. This is the key distinction: we call an element a metal if its oxide solution is basic, and a non-metal if its oxide solution is acidic. To make sulfuric acid or nitric acid, you dissolve the oxides of sulfur or nitrogen, respectively, in water; that’s why nitrogen and sulfur are nonmetals. Similarly, since you make boric acid by dissolving boron oxide in water, boron is a non-metal. Calcium is a metal because calcium oxide is lime, a strong base. Aluminum and antimony are near-borderline cases, because their oxides are nearly neutral.
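The oxide rule can be summarized as a toy lookup. The little table below just encodes the examples from this paragraph; it is an illustration of the rule, not a chemistry database:

```python
# Classify an element as metal / non-metal by whether its common oxide
# gives a basic or acidic solution in water (the rule described above).
OXIDE_IN_WATER = {
    "sulfur":   "acidic",   # SO3 dissolves to sulfuric acid
    "nitrogen": "acidic",   # NO2 dissolves to nitric acid
    "boron":    "acidic",   # B2O3 dissolves to boric acid
    "calcium":  "basic",    # CaO is lime, a strong base
    "hydrogen": "neutral",  # hydrogen oxide is water: pH 7
}

def classify(element):
    behavior = OXIDE_IN_WATER[element]
    return {"acidic": "non-metal", "basic": "metal",
            "neutral": "borderline"}[behavior]

print(classify("hydrogen"))  # borderline, hence both sides of the cup
```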

And now we return to hydrogen and my cup. Hydrogen is the only element listed as both a metal and a non-metal because hydrogen oxide is water, which is entirely neutral: when water dissolves in water, the pH is 7. By this definition, hydrogen is the only real borderline case. It is not generally shown that way, but it is shown as both a metal and a non-metal on a cup produced by my company.

Science is the Opposite of Religion

Some years ago, my daughter came back from religious school and asked for a definition of science. I told her that science was the opposite of religion. I didn’t mean to insult religion or science; the big bang, for one thing, strongly suggests there is a God-creator, and quantum mechanics suggests (to me) that there is a God-maintainer, but religion deals with other things beyond a belief in God, and I meant to point out that every basic aspect of how science looks at things finds its opposite in religion.

Science is based on reproducibility and lack of meaning: if you do the same experiment over and over, you’ll always get the same result as you did before, and the same result as anyone else, when the results are measured to some good statistical norm. The meaning of the observation? That’s a meaningless question. Religion is based on the centrality of drawn meaning, and on the centrality of non-reproducible, one-time events: creation, the exodus from Egypt, the resurrection of Jesus, the birth of Zeus, etc. A religious believer is one who changes his or her life based on the lessons of these; to the believer, a non-believer is one who draws no meaning, or who needs reproducible events.

Science also requires that anyone will get the same result from the same process. Thus, chemistry-class results don’t depend on the age, sex, or election of the students: any student who mixes the prescribed two chemicals and heats them to a certain temperature is expected to get the same result. The same applies to measures of the size of the universe, or of its angular momentum or age. In religion, it is fundamentally important what sex you are, how old you are, who your parents were, and what you are thinking at the time. If the right person says “hoc est corpus” over wine and wafers, they change; if not, they do not. If the right person opens the door to heaven, or closes it, it matters in religion.

A main aspect of all religion is prayer: the idea that what you are thinking or saying changes things on high and here below. In science, we only consider experiments where the words said over the experiment have no effect. Another aspect of religion is tchuvah (regret, repentance); the idea is that thoughts can change the effect of actions, at least retroactively. Science tends to ignore repentance, because it lacks the ability to measure things that work backwards in time, and because our current scientific instruments take no measurements on the soul to see whether the repentance had any effect. Basically, the science-universe is populated only with those things which can be measured or reproducibly affected, and that pretty much excludes the soul. That the soul does not exist in the science universe doesn’t mean it doesn’t exist.

Another main aspect of religion is morality: you’re supposed to do the right thing. Morality varies from one religion to another, and you may think the other fellow’s religion has a warped morality, but at least there is one in every religion. In science, for better or worse, there is no apparent morality, either for man or for the universe. Based on science, the universe will end, either with a bang or a whimper, and in that void of an end it would seem that killing a mouse is about as important as killing a person. No religion I know of sees the universe ending in either cold or hot death. Consistent with this, they all see murder as a sin against God. This difference is a big plus for religion, IMHO. That man sees murder as a true evil is either a sign that religion is true, or that it isn’t, depending on the value you put on life. Another example of the moral divide: scientists, especially academics, tend to be elitists. Their morality, such as it is, values great minds and great projects over the humble and stupid. Classical religion sees the opposite, promoting the elevation of the poor, weak, and humble. There is no fundamental way to tell which one is right, and I tend to think that both are right in their own, mirror-image universes.

It is now worthwhile to consider what each universe sees as wisdom. An explanation in the universe of science has everything to do with utility, and nothing to do with any internal sense of having understood, as such: I understand something only to the extent that I can predict that thing, or do something based on the knowledge. In religion, the motivation for all activity is always just understanding, typically of God, on a bone-deep level. This difference shows up very clearly in dealing with quantum mechanics. To a scientist, the quantum world is fundamentally unlike religion: it is basically non-understandable, but very useful. Religion totally ignores quantum mechanics for the mirror-image reason: it’s non-understandable, however precise and useful. Anything you can’t understand is meaningless to them (literally), and useful is mostly defined in terms of building the particular religion; I think this is a mistake on many levels. I note that looking for disproof is the glory-work of all science development, but the devil’s work of every religion. A religious leader will grab onto statistical findings that suggest his type of prayer cures people, but will always reject disproof, e.g. evidence that someone else’s prayers work better, or that his prayer does nothing at all. Each religion is thus at war with the others, each trying to build belief while not removing it. Science is the opposite. Religion starts with the answer and accepts any support it can get; fundamental change is considered a bad thing in religion. The opposite is so with science: disproof is considered “progress,” and change is good.

These are not minor aspects of science and religion, by the way, but these are the fundamental basics of each, as best I can tell. History, politics, and psychology seem to be border-line areas, somewhere between science and religion. The differences do not reflect a lack in these fields, but just a recognition that each works according to its own logic and universe.

My hope in life is to combine science and religion to the extent possible, but I find that supporting science in any form presents difficulties when I speak to others in the religious community, my daughter’s teachers among them. As an example of the problems that come up: my sense is that the big bang is a fine proof of creation and should be welcomed by most religious people. I think it’s a sign that there is a creator when science says everything came from nothing 14,000,000,000 years ago. Sorry to say, the religious leaders I’ve met reject the big bang, and claim you can’t believe in anything that happened 14,000,000,000 years ago. So long as science shows no evidence of a bearded observer at the center, they are not interested. Scientists, too, have trouble with the bang, I find: it’s a one-time event that they can’t quite explain away (Stephen Hawking keeps trying). The only sane approach I’ve found is to keep blogging, and otherwise leave each to its own area. There seems to be little reason to expect communal agreement.

by Robert E. Buxbaum, Apr. 7, 2013. For some further thoughts, see here.

The Gift of Chaos

Many, if not most important engineering systems are chaotic to some extent, but as most college programs don’t deal with this behavior, or with this type of math, I thought I might write something on it. It was a big deal among my PhD colleagues some 30 years back as it revolutionized the way we looked at classic problems; it’s fundamental, but it’s now hardly mentioned.

Two of my first freshman engineering homework problems turn out to have been chaotic, though I didn’t know it at the time. One of these concerned the cooling of a cup of coffee. As presented, the coffee was in a cup at a uniform temperature of 70°C, the room was at 20°C, and some fanciful data was presented to suggest that the coffee cooled at a rate proportional to the difference between the (changing) coffee temperature and the fixed room temperature. Based on these assumptions, we predicted exponential cooling with time, something that is (more or less) observed, but not quite, in real life. The chaotic part of a real cup of coffee is that the cup develops currents that move faster and slower. These currents accelerate heat loss, but since they are driven by the temperature differences within the cup, they speed up and slow down erratically: they accelerate when the cup is not well stirred, causing new stirring, and slow down when it is stirred, so the temperature at any point rises and falls in an almost rhythmic fashion; that is, chaotically.
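The non-chaotic part of that homework problem is easy to write down. A minimal sketch of the exponential cooling follows; the rate constant is an assumed illustrative value, and the chaotic stirring currents are not modeled:

```python
import math

# Newton's-law cooling from the freshman problem: dT/dt = -k (T - T_room),
# with solution T(t) = T_room + (T0 - T_room) * exp(-k t).
T0, T_room, k = 70.0, 20.0, 0.05  # deg C, deg C, 1/min (k is assumed)

def coffee_temp(t_min):
    return T_room + (T0 - T_room) * math.exp(-k * t_min)

print(round(coffee_temp(0), 1), round(coffee_temp(30), 1))
```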

While it is impossible to predict what will happen over a short time scale, there are some general patterns. Perhaps the most remarkable of these is self-similarity: observed over a short time scale, the behavior over 10 seconds will look like the behavior over 1 second, and this will look like the behavior over 0.1 second, the only difference being that the smaller the time scale, the smaller the up-down variation. You can see the same thing with stock movements, wind speed, cell-phone noise, etc., and the same self-similarity can occur in space, so that the shape of clouds tends to be similar at all reasonably small length scales. The maximum average deviation is smaller over smaller time scales, of course, and larger over larger time scales, but not in any obvious way: there is no simple proportionality, but rather a fractional-power dependence that gives these chaotic phenomena a fractal dependence on measurement scale. Some of this is seen in the global temperature graph below.
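A cheap way to see a fractional-power dependence on time scale is a coin-flip random walk: the typical deviation over a window of n steps grows as n to the 1/2 power. Real chaotic signals give other exponents, but the self-similar scaling is the same idea. A sketch:

```python
import random

# Typical (mean absolute) deviation of a +/-1 random walk over a window of
# n steps; this grows as n**0.5, a fractional power of the window length.
random.seed(0)

def typical_deviation(window, trials=2000):
    total = 0.0
    for _ in range(trials):
        pos = sum(random.choice((-1, 1)) for _ in range(window))
        total += abs(pos)
    return total / trials

d10, d1000 = typical_deviation(10), typical_deviation(1000)
# A 100x longer window gives a deviation about sqrt(100) = 10x larger.
print(d10, d1000, d1000 / d10)
```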

Global temperatures measured from the antarctic ice showing stable, cyclic chaos and self-similarity.

Chaos can be stable or unstable, by the way; the cooling of a cup of coffee is stable chaos because the temperature cannot exceed 70°C or go below 20°C. Stable chaotic phenomena tend to have fixed-period cycles in space or time. The world temperature seems to follow this pattern, though there is no obvious reason it should: there is no obvious maximum and minimum temperature for the earth, nor any obvious reason there should be cycles, or that they should be 120,000 years long. I’ll probably write more about chaos in later posts, but I should mention that unstable chaos can be quite destructive, and quite hard to prevent. Some form of chaotic local heating seems to have caused the battery fires aboard the Dreamliner; similarly, most riots, famines, and financial panics seem to be chaotic. Generally speaking, tight control does not prevent this sort of chaos; it just changes the period and makes the eruptions that much more violent. As two examples, consider what would happen if we tried to cap a volcano, or to clamp down on riots in Syria, Egypt, or ancient Rome.

From math, we know some alternate ways to prevent unstable chaos from getting out of hand; one is to lay off, another is to control chaotically (hard to believe, but true).

 

Why the Boeing Dreamliner’s batteries burst into flames

Boeing’s Dreamliner is currently grounded due to two of its Li-ion batteries having burst into flames, one in flight and another on the ground. Two accidents of the same type in a small fleet is no small matter, as an airplane fire can be deadly, on the ground or at 50,000 feet.

The fires are particularly bad on the Dreamliner because these lithium batteries control virtually everything aboard the plane. Even without a fire, when they go out, so does virtually every control and sensor. So why did they burn, and what has Boeing done about it? The simple reason for the fires is that management chose Li-cobalt-oxide batteries, the same design that every laptop maker had rejected ten years earlier, when laptops using them started bursting into flames. This is the battery design that caused Dell and HP to recall every computer containing it. Boeing decided to use a massive version to control everything on their flagship airplane because it has the highest energy density (see graphic below). They figured that operational management would ensure safety without the need for any cooling or sufficient shielding.

All lithium batteries have a negative electrode (anode) that is mostly lithium; the usual chemistry is lithium metal in a graphite matrix. Lithium metal is light and readily gives off electrons; the graphite makes it somewhat less reactive. The positive electrode (cathode) is typically an oxide of some sort, and here there are options. Most current cell-phone and laptop batteries use some version of manganese-nickel oxide as the cathode. Lithium atoms in the anode give off electrons, become lithium ions, and travel across to the oxide, forming a mixed-ion oxide that absorbs the electron. The process provides about 4 volts of energy differential per electron transferred. With cobalt oxide, the cathode reaction is more or less CoO2 + Li+ + e− → LiCoO2. Sorry to say, this chemistry is very unstable. The oxide itself is unstable, more so than the manganese-nickel or iron oxides, especially when fully charged, and especially when warm (40°C or warmer): 2CoO2 → Co2O3 + ½O2. Boeing’s safety idea was to control the charge rate in such a way that overheating was not supposed to occur.
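As a check on why this chemistry is so attractive despite its instability, the theoretical capacity of the LiCoO2 cathode follows directly from Faraday’s law (one electron per formula unit). The cell voltage used for the energy figure is an assumed round number, not a measured value:

```python
# Theoretical specific capacity of a LiCoO2 cathode from Faraday's law:
# capacity (mAh/g) = F / (3.6 * M), one electron per formula unit.
F = 96485.0        # Faraday constant, C/mol
M_LICOO2 = 97.87   # g/mol (Li 6.94 + Co 58.93 + 2 x O 16.00)

capacity_mah_g = F / (3.6 * M_LICOO2)       # about 274 mAh/g
energy_wh_kg = capacity_mah_g * 3.9         # x assumed ~3.9 V average cell voltage
print(round(capacity_mah_g), "mAh/g,", round(energy_wh_kg),
      "Wh/kg on a cathode-material basis")
```

Practical cells extract only part of this, but the same arithmetic ranks the chemistries in the graphic below.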

Despite the controls, it didn’t work for the two Boeing batteries that burst into flames. Perhaps it would have helped to add cooling to reduce the temperature, as is done in laptops and plug-in automobiles, but even with cooling the batteries might have self-destructed from local heating effects. These batteries were massive, with plenty of room for one spot to get hotter than the rest; this seems to have happened in both fires, either as cause or result. Once the cobalt oxide gets hot and oxygen is released, a lithium-oxygen fire can spread to the whole battery, even if the majority is held at a low temperature. If local heating was the cause, no amount of external cooling would have helped.

Energy densities of battery materials (graphic from Battery University).

Something that would have helped is a polymer interlayer separator to keep the unstable cobalt oxide from fueling the fire; there was none. Another option is a more stable cathode like iron phosphate or lithium manganese-nickel. As shown in the graphic above, these stable oxides do not have the high energy density of Li-cobalt oxide. When the unstable cobalt oxide decomposed, there were oxygen, lithium, and heat together in one space, and none of the fire extinguishers on the planes could put out the fires.

The solution that Boeing has proposed, and that Washington is reviewing, is to leave the batteries unchanged but to encase them in a massive titanium shield, with the vapors formed on burning vented outside the airplane. The claim is that this shield will protect the passengers from the fire, if not from the loss of electricity. This does not appear to be the best solution. Airbus had planned to use the same batteries on their newest planes, but has now gone retro and plans to use Ni-Cad batteries. I don’t think that’s the best solution either. Better options, I think, are nickel-metal-hydride or the very stable lithium iron phosphate batteries that Segway uses. Better yet would be fuel cells, an option that appears to be better than even the best batteries. Fuel cells are what the navy uses on submarines and what NASA uses in space. They are both more energy-dense and safer than batteries. As a disclaimer, REB Research makes hydrogen generators and purifiers that are used with fuel-cell power.

More on the chemistry of Boeing’s batteries and their problems can be found on Wikipedia. You can also read an interview with the head of Tesla motors regarding his suggestions and offer of help.

 

Two things are infinite

Einstein is supposed to have commented that there are only two things that are infinite: the size of the universe and human stupidity, and he wasn’t sure about the former.

While Einstein still appears to be correct about the latter infinity, there is now more disagreement about the size of the universe. In Einstein’s day, it was known that the universe appeared to have originated in a big bang, with all mass radiating outward at a ferocious rate. If the mass of the universe were high enough, and the speed slow enough, the universe would be finite and closed in on itself; that is, it would be a large black hole. But in Einstein’s day, the universe didn’t look to have enough mass. It thus looked as if the universe were endless but non-uniform, mostly filled with empty space, something that kept us from frying from the heat of distant stars.

Since Einstein’s day we’ve discovered more mass in the universe, but not quite enough to make us a black hole, given the universe’s size. We’ve discovered neutron stars and black holes, dark concentrated masses, but not enough of them. We’ve discovered neutrinos, tiny neutral particles that fill space, and we’ve shown that they have enough rest-mass that neutrinos are now thought to make up much of the mass of the universe. But even with this dark-ish matter, we still have not found enough for the universe to be non-infinite, a black hole. Worse yet, we’ve discovered dark energy, something that keeps the universe expanding at nearly the speed of light when you’d think it should have slowed by now; this fast expansion makes it ever harder to find enough mass to close the universe (why we’d want to close it is an aesthetic issue, discussed below).
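For scale, the mass “required” to close the universe is set by the critical density, ρc = 3H²/8πG. A quick calculation, taking the Hubble constant as 70 km/s per megaparsec, a commonly quoted value:

```python
import math

# Critical (closure) density of the universe: rho_c = 3 H^2 / (8 pi G).
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
H = 70e3 / 3.086e22   # Hubble constant in 1/s (70 km/s per Mpc)

rho_c = 3 * H**2 / (8 * math.pi * G)
atoms_per_m3 = rho_c / 1.67e-27  # divided by one hydrogen-atom mass
print(f"{rho_c:.2e} kg/m^3, about {atoms_per_m3:.1f} H atoms per m^3")
```

About 9×10⁻²⁷ kg/m³, or five-ish hydrogen atoms per cubic meter: an astonishingly small number to be arguing over.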

Still, there is evidence for another, smaller mass item floating in space: the axion. This particle, and its yet-smaller companion, the axino, may be the source of both the missing dark matter and the dark energy; see the figure below. Axions should have masses of about 10⁻⁷ eV, and should interact with matter just enough to explain why there is more matter than antimatter, while leaving the properties of matter otherwise unchanged. From normal physics, you’d expect equal amounts of matter and antimatter, as antimatter is just matter moving backwards in time. Further, the light mass and weak interactions could allow axions to form a halo around galaxies (helpful for galactic stability).

Mass of the universe with and without axions. Plot from a SUSY 2010 talk: http://susy10.uni-bonn.de/data/KimJEpreSUSY.pdf

The reason you’d want the universe to be closed is aesthetic. The universe is nearly closed if you think in terms of scientific numbers, and it’s hard to see why it should be almost but not exactly closed. We appear to have an awful lot of mass in terms of grams or kilograms, but only about 20% of the mass required for a black hole. In terms of orders of magnitude, we are so close that you’d think we’d have 100% of the required mass. If axions are found to exist (the evidence now is about 50-50), they will interact with strong magnetic fields so that axions change into photons and photons change into axions. It is possible that the mass this represents is the missing dark matter and the missing dark energy, allowing our universe to be closed.

As a final thought, I’ve always wondered why religious leaders have been so against mention of “the big bang.” You’d think the biggest boost to religion would be knowledge that everything appeared from nothing one bright and sunny morning, but they don’t seem to like the idea at all. If anyone can explain that to me, I’d appreciate it. Thanks, Robert E. B.

The martian sky: why is it yellow?

In a previous post, I detailed my calculations concerning the colors of the sky and sun. Basically, the sun gives off light mostly in the yellow-to-green range, with fairly little red or purple. A lot of the blue and green wavelengths scatter out, leaving the sun looking yellow: the yellow light looks yellow, and the remaining red plus green also looks yellow because of additive color.

If you look at the sky through a spectroscope, it’s pretty blue with some green. Sky blue involves a bit of an eye trick of additive color, so that we see the scattered blue + green as sky blue and not aqua. At sundown, the sun becomes reddish and the majority of the sky becomes greenish-grey, as more green and yellow light gets scattered. The sky near the sun is orange, as the light path through the atmosphere there is long enough that the blue and green scatter out, leaving the orange to scatter toward us.

Now, to the color of the sky on Mars, both at noon and at sunset. Except for the effect of the red dust, I would expect the sky on Mars to be blue, just like on earth, but a lighter shade, as the atmosphere is thinner. When you add some red from the dust, one would expect the sky to be grey: a simple combination of a base of sky blue (blue plus green), plus some extra red-orange light scattered from the Martian dust. In additive colors, the combination of blue-green and red-orange is grey, so that’s the color I’d expect the Martian sky to be normally. Some photos of the Martian sky match this expectation; see below. My guess is this is a day when there was not much dust in the air, though NASA provides no details here.

Martian sky, looking grey.

On some days (high-dust days, I assume), the Martian sky turns a shade of yellow-green. I’d guess that’s because the red dust absorbs the blue and some of the green spectrum, but does not actually add red. We are then dealing with subtractive color, and in subtractive color, orange plus blue-green = butterscotch, not grey or pink.

Martian sky color

I now present a photo of the Martian sky at sunset. This is something really peculiar that I would not have expected ahead of time, but think I can explain now that I see it. The sky looks yellow in general, like in the photo above, but blue around the sun. I could explain this by saying that the blue and green of the Martian sky is scattered by the Martian air (CO2, mostly), just as our atmosphere scatters these colors on earth. The sky near the sun looks blue, not red-orange, because the Martian atmosphere is thinner: at noon there is less air to scatter light, but at sundown the light path through the atmosphere is more or less the same thickness as ours. The red of the dust does not show up in the sky color near the sun because the red light is back-scattered there, not forward-scattered. The sky is yellow elsewhere, where there is some forward scatter of the reddish light reflecting off the dust. This sounds plausible to me; tell me what you think.

Martian sky at sunset

As an aside, while I have long understood there is an experimental difference between subtractive and additive color, I have never quite understood why this should be so. Why is it that subtractive color combinations are different, and uniformly different, from additive color combinations? I’d have thought you’d get more-or-less the same color if you remove red from one part of a piece of paper and remove blue from another as if you add red, purple, and yellow. A mental model I have (perhaps wrong) is that subtractive color looks the way it does because of the details of the spectral absorption of the particular pigment chemicals that are typically used. Based on this model, I expect to find someday some new red and green pigments whose combination looks yellow when mixed on a page. I’ve not found them yet, but that’s my expectation; perhaps you know of a really good explanation for why additive color is so different from subtractive color.
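One way to see why the two mixing rules must differ: additive light roughly sums the channels, while pigments act like stacked filters and roughly multiply them. A crude RGB sketch, with illustrative color picks rather than measured pigments:

```python
# Additive vs. subtractive mixing of the same two colors, in RGB with
# channels 0..1. Additive light sums; pigments act roughly as filters,
# multiplying the light each lets through. A crude model, but it shows
# why the two kinds of mixing give systematically different results.
def add_mix(c1, c2):
    return tuple(min(1.0, a + b) for a, b in zip(c1, c2))

def subtract_mix(c1, c2):
    return tuple(a * b for a, b in zip(c1, c2))

blue_green = (0.0, 0.8, 0.8)
red_orange = (1.0, 0.4, 0.0)

mixed_light = add_mix(blue_green, red_orange)       # near-white / light grey
mixed_paint = subtract_mix(blue_green, red_orange)  # a dark, muddy green
print("additive:   ", mixed_light)
print("subtractive:", mixed_paint)
```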

Joke about antimatter and time travel

I’m sorry we don’t serve antimatter men here.

Antimatter man walks into a bar.

Is funny because … in quantum physics there is no directionality in time. An electron can change direction in time, and then appears to the observer as a positron: an anti-electron with the same mass as a normal electron but the opposite charge and opposite spin, etc. In this picture, the reason electrons and positrons appear to annihilate is that there was only one electron to begin with; that electron started going backwards in time, so it disappeared in our forward-in-time frame.

The thing is, time is quite apparent on macroscopic scales; it's one of the most apparent aspects of macroscopic existence. Perhaps the clearest proof that time flows in one direction only is entropy. In normal life, you can drop a glass and watch it break whenever you like, but you cannot drop shards and expect to get a complete glass. Similarly, you know you are moving forward in time if you can drop an ice cube into a hot cup of coffee and make it lukewarm. If you can reach into a cup of lukewarm coffee and extract an ice cube, leaving the coffee hot, you're moving backwards in time.

It's also possible that gravity proves that time is moving forward. If an anti-apple is just a normal apple moving backwards in time, then I should expect that, when I drop an anti-apple, it will float upward. On the other hand, if mass is inherently a warpage of space-time, it should fall down. Perhaps when we understand gravity we will also understand how quantum physics meets the real world of entropy.

Heat conduction in insulating blankets, aerogels, space shuttle tiles, etc.

A lot about heat conduction in insulating blankets can be explained by the ordinary motion of gas molecules. That's because the thermal conductivity of air (or any likely gas) is much lower than that of glass, alumina, or any likely solid material used for the structure of the blanket. At any temperature, the average kinetic energy of an air molecule is 1/2 kT in any one direction, or 3/2 kT altogether, where k is Boltzmann's constant and T is the absolute temperature in Kelvin. Since kinetic energy equals 1/2 mv², you find that the average velocity in the x direction must be v = √(kT/m) = √(RT/M). Here m is the mass of the gas molecule in kg, M is the molecular weight in kg/mol (0.029 kg/mol for air), R is the gas constant, 8.314 J/mol·K, and v is the molecular velocity in the x direction, in meters/sec. From this equation you will find that v is quite large under normal circumstances: about 290 m/s (650 mph) for air molecules at an ordinary temperature of 22°C, or 295 K. That is, air molecules travel in any fixed direction at roughly the speed of sound, Mach 1 (the average speed including all directions is √3 as fast, about 1130 mph).
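As a quick check, the speed estimate above can be reproduced in a few lines of Python (a minimal sketch using the same numbers as in the text):

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 295.0   # room temperature, K (22 C)
M = 0.029   # molar mass of air, kg/mol

v_x = math.sqrt(R * T / M)   # speed in any one fixed direction, m/s
v_avg = math.sqrt(3) * v_x   # average speed over all three directions

print(round(v_x), "m/s")     # ~291 m/s, roughly 650 mph
```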

The distance a molecule travels before hitting another one is a function of the cross-sectional areas of the molecules and their density in space. Dividing the volume of a mol of gas, 0.0224 m³/mol at "normal conditions," by the number of molecules in the mol (6.02 x 10^23) gives an effective volume per molecule: 0.0224 m³ / 6.02 x 10^23 = 3.72 x 10^-26 m³/molecule at normal temperatures and pressures. Dividing this volume by the molecular cross-section for collisions (about 1.6 x 10^-19 m² for air, based on an effective diameter of 4.5 Ångstroms) gives a free-motion distance of about 0.23 x 10^-6 m, or 0.23 µ, for air molecules at standard conditions. This distance is small, to be sure, but it is roughly 1000 times the molecular diameter; as a result, air behaves nearly as an "ideal gas," one composed of point masses, under normal conditions (and most conditions you run into). The distance the molecule travels to or from a given surface will be smaller, 1/√3 of this on average, or about 1.35 x 10^-7 m. This distance will be important when we estimate heat transfer rates at the end of this post.
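The mean-free-path arithmetic above can be sketched the same way (same volume and cross-section figures as in the text):

```python
import math

N_A   = 6.02e23    # Avogadro's number, molecules/mol
V_mol = 0.0224     # molar volume at normal conditions, m^3/mol
sigma = 1.6e-19    # collision cross-section of an air molecule, m^2

vol_per_molecule = V_mol / N_A                    # ~3.72e-26 m^3/molecule
mean_free_path   = vol_per_molecule / sigma       # ~2.3e-7 m (0.23 micron)
to_wall          = mean_free_path / math.sqrt(3)  # ~1.35e-7 m, average run to a surface
```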

 

Molecular motion of an air molecule (oxygen or nitrogen) as part of heat transfer process; this shows how some of the dimensions work.


The number of molecules hitting per square meter per second is most easily calculated from the transfer of momentum. The pressure at the surface equals the rate of change of momentum of the molecules bouncing off. At atmospheric pressure, 103,000 Pa = 103,000 Newtons/m², the number of molecules bouncing off per second is half this pressure divided by the momentum each molecule carries: its mass times its velocity in the surface direction. The contact rate is thus (1/2) x 103,000 Pa x 6.02 x 10^23 molecules/mol / (290 m/s x 0.029 kg/mol) = 36,900 x 10^23 molecules/m²·sec.
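The contact-rate calculation is just a few multiplications (a sketch using the same pressure and speed as above):

```python
P   = 103000.0   # pressure used in the text, Pa (~1 atm)
N_A = 6.02e23    # Avogadro's number, molecules/mol
M   = 0.029      # molar mass of air, kg/mol
v   = 290.0      # one-direction molecular speed, m/s

m_molecule = M / N_A                      # mass of one molecule, kg
hit_rate   = 0.5 * P / (m_molecule * v)   # wall collisions per m^2 per second
print(f"{hit_rate:.3g}")                  # ~3.69e27, i.e. 36,900 x 10^23
```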

The thermal conductivity is merely this number times the heat transferred per molecule times the distance of the transfer. I will calculate the heat capacity per molecule from statistical mechanics because I'm used to doing things this way; other people might look up the heat capacity per mol and divide by 6.02 x 10^23. For any gas, the heat capacity that derives from kinetic energy is k/2 per molecule in each direction, as mentioned above; combining the three directions, that's 3k/2. Air molecules look like dumbbells, though, so they have two rotations that contribute another k/2 of heat capacity each, and a vibration that contributes k. On a molar basis I use an approximate value of 2 cal/mol·°C for the gas constant R (it's actually 1.987, but I round up to include some electronic effects). On this basis, the heat capacity of air is 7 cal/mol·°C at constant volume, or 1.16 x 10^-23 cal/molecule·°C. The amount of energy a molecule can transfer to the hot (or cold) wall is this heat capacity times the temperature difference it carries between the wall and its first collision with other gas molecules. At standard conditions, that temperature difference is only 1.35 x 10^-7 times the temperature difference per meter, because the molecule only travels that far before colliding with another (remember, I said this number would be important). The thermal conductivity of stagnant air is thus the number of molecules that hit per m² per second, times the distance each travels in meters, times the effective heat capacity per molecule: 36,900 x 10^23 molecules/m²·sec x 1.35 x 10^-7 m x 1.16 x 10^-23 cal/molecule·°C = 0.00578 cal/m·s·°C, or 0.0241 W/m·°C. This value matches (pretty exactly) the thermal conductivity of dry air found by experiment.
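Putting the three numbers together reproduces the final figure (a sketch, with the conversion 1 cal = 4.184 J):

```python
hit_rate   = 3.69e27     # wall collisions per m^2 per second (from above)
to_wall    = 1.35e-7     # m, distance each molecule carries its energy
cv_per_mol = 1.16e-23    # cal/(molecule*C) = 7 cal/(mol*C) / 6.02e23

k_cal  = hit_rate * to_wall * cv_per_mol   # cal/(m*s*C)
k_watt = k_cal * 4.184                     # convert cal/s to W
print(round(k_watt, 4))                    # ~0.0242 W/(m*C), close to experiment
```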

I did all that math, though I already knew the thermal conductivity of air from experiment, for a few reasons: to show off the sort of thing you can do with simple statistical mechanics; to build up skills in case I ever need to know the thermal conductivity of deuterium or iodine gas, or of mixtures; and finally, to be able to understand the effects of pressure, temperature, and (mainly insulator) geometry, something I might need in order to design a piece of equipment with lower thermal losses, for example. I find from my calculation that we should not expect much change in thermal conductivity with gas pressure near normal conditions: to first order, raising the pressure shortens the distance a molecule travels by exactly the factor by which it increases the number of molecules hitting the surface per second, so the two effects cancel. At very low pressures or very small distances, lower pressure translates to lower conductivity, but for normal-ish pressures and geometries, changes in gas pressure should not affect thermal conductivity, and they do not.

I'd predict that temperature would have a larger effect on thermal conductivity, but still not an order-of-magnitude effect. Increasing the temperature increases the distance between collisions in proportion to the absolute temperature (at constant pressure), but decreases the number of wall collisions per second by the square root of T, since the faster molecules each transfer more momentum. As a result, increasing T has a net √T positive effect on thermal conductivity.
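The √T argument can be checked numerically by combining the two scaling factors (a sketch of the reasoning, not a precision formula):

```python
import math

def k_relative(T, T0=295.0):
    """Conductivity of a gas at T relative to its value at T0, constant pressure.
    Mean free path grows as T; the wall-collision rate falls as 1/sqrt(T)."""
    return (T / T0) * math.sqrt(T0 / T)   # net effect: sqrt(T/T0)

# Doubling the absolute temperature raises conductivity only ~41%:
print(round(k_relative(590.0), 2))   # 1.41
```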

Because neither temperature nor pressure has much effect, you might expect the thermal conductivity of all air-filled insulating blankets at all normal-ish conditions to be more-or-less that of standing air (air without circulation). That is what you find, for the most part: the same 0.024 W/m·°C thermal conductivity for standing air, for the high-tech NASA fiber blankets on the space shuttle, and for the cheapest styrofoam cups. Wool felt has a thermal conductivity of 0.042 W/m·°C, about twice that of air, a not-surprising result given that wool felt is about half wool and half air.

Now we can start to understand the most recent class of insulating blankets, those with very fine fibers, or thin layers of fiber (or aluminum or gold). When these are separated by less than 0.2 µ, you finally decrease the thermal conductivity at room temperature below that of air. These layers decrease the distance traveled between gas collisions but leave the same number of collisions with the hot or cold wall; as a result, the smaller the gap below 0.2 µ, the lower the thermal conductivity. This happens in aerogels and in some space blankets whose silica fibers are less than 0.1 µ apart (<100 nm). Aerogels can have thermal conductivities much lower than 0.024 W/m·°C, even when filled with air at standard conditions.
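A rough model of this gap effect (my own sketch, not from the post: once the gap is smaller than the mean free path, the gap, not the mean free path, sets the distance a molecule carries heat, and conductivity falls in proportion):

```python
def k_gap(gap_m, k_air=0.024, mfp=2.3e-7):
    """Rough estimate of conductivity, W/(m*C), when fibers or walls
    sit gap_m apart; below the mean free path the gap limits transport."""
    return k_air * min(1.0, gap_m / mfp)

print(round(k_gap(1e-7), 4))   # a 0.1-micron gap: ~0.0104, less than half of air
```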

In outer space you get lower thermal conductivity without high-tech aerogels because the free path is very long. At these pressures virtually every molecule hits a fiber before it hits another molecule; even for a rough blanket with distant fibers, the fibers break up the path of the molecules significantly. Thus, the fibers of the space shuttle tiles (about 10 µ apart) provide far lower thermal conductivity in outer space than on earth. You can get the same benefit in the lab if you put a high vacuum, say 10^-7 atm, between glass walls that are 9 mm apart. Without the walls, an air molecule could travel 1.35 x 10^-7 m / 10^-7 = 1.35 m before colliding with another. Since the walls of a typical Dewar are about 0.009 m apart (9 mm), the heat conduction of the Dewar is thus 1/150 (0.7%) as high as for a normal air layer 9 mm thick. There is no thermal conductivity of Dewar flasks and vacuum bottles as such, since in this regime the amount of heat conducted is independent of the gap distance. Pretty spiffy. I use this knowledge to help with the thermal insulation of some of our hydrogen generators and hydrogen purifiers.
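The Dewar numbers above work out as follows (a sketch using the same figures):

```python
to_wall_1atm = 1.35e-7   # m, heat-carrying distance at 1 atm (from earlier)
vacuum       = 1e-7      # Dewar vacuum, as a fraction of atmospheric pressure
gap          = 0.009     # m, spacing of the Dewar walls (9 mm)

free_path_vac = to_wall_1atm / vacuum   # 1.35 m, if there were no walls
reduction     = gap / free_path_vac     # conduction relative to a 9 mm air layer
print(round(1 / reduction))             # ~150, i.e. 1/150 the heat flow
```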

There is another effect I should mention: black-body heat transfer. In many cases black-body radiation dominates; it is the reason the shuttle tiles are white (or black) and not clear, and the reason Dewar flasks are mirrored (a mirrored surface provides less black-body heat transfer). This post is already too long to do black-body radiation justice, but I treat it in more detail in another post.

R.E. Buxbaum