Category Archives: Physics

Do antimatter apples fall up?

by Dr. Robert E. Buxbaum,

The normal view of antimatter is that it’s just regular matter moving backwards in time. This view helps explain why antimatter has the same mass as regular matter but the opposite charge, spin, etc. An antiproton has the same mass as a proton because it is a proton, moving backward in time. In our (forward) time-frame, the antiproton appears to be attracted by a positive plate and repelled by a negative one because, when you are going backward in time, attraction looks like repulsion.

In this view, the reason that antimatter particles annihilate when they come into contact with matter (sometimes) is that the annihilation is nothing more than the matter (or antimatter) particle switching direction in time. In our forward time-frame it looks like two particles met and disappeared, leaving nothing but photons (light). But in the time-reversal view, shown in the figure below, there is only one normal matter particle. In the figure, this particle (solid line) comes from the left and meets a photon, a wiggly line whose arrow shows it traveling backwards in time. The normal proton then reverses in time, giving off a photon, another wiggly line. I’d alluded to this in my recent joke about an antimatter person at a bar, but there is also a famous poem.

[Figure: proton-antiproton annihilation drawn in the time-reversal view; a single proton worldline reverses direction, absorbing and emitting photons]

This time-reversal approach is best tested using entropy, the classical “arrow of time.” The best way to tell you are going forward in time is to drop an ice cube into a hot cup of coffee and produce a warm cup of diluted coffee. Strictly, this only shows that you and the cup are moving in the same direction in time, both forward or both backward, something we’ll call forward. If you were moving in the opposite direction in time, e.g. you had a cup of anti-coffee that was moving backward in time relative to you, you could pull an anti-ice cube out of it and produce a steaming cup of stronger anti-coffee.

We cannot do the entropy test of time direction yet because it requires too much antimatter, but we can use another approach to test the time-reversal idea: gravity. You can make a very small drop of antimatter using only a few hundred atoms. If the antimatter drop is really going backwards in time, it should not fall to the floor and splatter, but should fly upward off the floor and coalesce. The laboratory at CERN has just recently started producing enough atoms of anti-hydrogen to allow this test. So far the atoms are too hot, but sometime in 2014 they expect to cool the atoms, some 300 atoms of anti-hydrogen, into a drop or two. They will then see whether the drop falls down or up under gravity. The temperature necessary for this study is about 1/100,000 of a degree K.

The anti-time view of antimatter is still somewhat controversial. For it to work, light must reside outside of time, or must move forward and backward in time with some ease. This makes some sense, since light travels “at the speed of light” and is thus outside of time. In the figure, the backwards-moving photon would look like a forward one moving in the other direction (left). In a future post I hope to give instructions for building a simple, quantum time machine that uses the fact that light can move backwards in time to produce an event eraser, a device that erases light events in the present. It’s a somewhat useful device, if only for a science fair demonstration. Making one to work on matter would be much harder, and may be impossible if the CERN experiments don’t work out.

It becomes a little confusing how to deal with entropy in a completely anti-time world, and it’s somewhat hard to see why, in this view of time, there should be so little antimatter in the universe and so much matter: you’d expect equal amounts of both. As I have strong feelings for entropy, I’d posted a thought-explanation for this some months ago, imagining antimatter as normal, forward-time matter and positing the existence of an undiscovered particle that interacts with its magnetism to make matter more stable than antimatter. To see how it works, recall the brainteaser about a tribe that always speaks lies and another that always speaks truth. (I’m not the first to think of this explanation.)

If the anti-hydrogen drop at CERN is seen to fall upwards, but entropy still works in the positive direction as in my post (i.e., drops still splatter, and anti-coffee cools like normal coffee), it will support a simple explanation for dark energy, the force that prevents the universe from collapsing. Dark energy could be seen to result from the antigravity of antimatter. There would have to be large collections of antimatter somewhere, perhaps anti-galaxies isolated from normal galaxies, that would push away the normal-matter galaxies while moving forward in time and entropy. If the anti-galaxies were close to normal galaxies they would annihilate at the edges, and we’d see lots of photons, as in the poem. Whatever they find at CERN, the future will be interesting. And if time travel turns out to be the norm, the past will be more interesting than it was.

Musical Color and the Well Tempered Scale

by R. E. Buxbaum, (the author of all these posts)

I first heard J. S. Bach’s Well Tempered Clavier some 35 years ago and was struck by the different colors of the different scales. Some were dark and scary, others light and enjoyable. All of them worked, but each was distinct, though I could not figure out why. That Bach was able to write in all the keys without retuning was a key innovation of his. In his day, people tuned in fifths, a process that created gaps (called wolf intervals) that prevented useful composition in the affected keys.

We don’t know exactly how Bach tuned his instruments as he had no scientific way to describe it; we can guess that it was more uniform than the temper produced by tuning in fifths, but it probably was not quite equally spaced. Nowadays electronic keyboards are tuned to 12 equally spaced frequencies per octave through the use of frequency counters. Starting with the A below “middle C”, A4, tuned at 440 cycles/second (the note symphonies tune to), each note is programmed to vibrate at a frequency higher or lower than its neighbor’s by a factor of the twelfth root of two, ¹²√2 ≈ 1.05946. After 12 multiples of this size, the frequency (and wavelength) has doubled or halved and there is an octave. This is called equal tempering.
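As a quick sketch of the equal-tempering arithmetic (only the A4 = 440 Hz reference comes from the text; the function name is my own):

```python
# Equal-tempered tuning: each semitone step scales the frequency by 2**(1/12).
RATIO = 2 ** (1 / 12)  # ~1.05946, the twelfth root of two

def equal_temper_freq(semitones_from_a4: int) -> float:
    """Frequency (Hz) of the note this many semitones above (+) or below (-) A4 = 440 Hz."""
    return 440.0 * RATIO ** semitones_from_a4

print(round(equal_temper_freq(12), 2))   # one octave up: 880.0 (A5)
print(round(equal_temper_freq(-12), 2))  # one octave down: 220.0 (A3)
print(round(equal_temper_freq(3), 2))    # 523.25 (C5)
```

Twelve steps recovers the pure octave exactly; it’s every smaller interval that ends up slightly impure.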

Currently, many non-electric instruments are also tuned this way. Equal tempering avoids all wolf intervals, but makes each note equally ill-tempered. Any key can be transposed to another, but there are no pure harmonies because ¹²√2 is an irrational number (see joke). There is also no color or feel to any given key except that which has carried over historically in the listeners’ memory. It’s sad.

I’m going to speculate that J.S. Bach found, or favored, a way to tune instruments where all of the keys were usable and OK-sounding, but where some harmonies are more perfect than others. Necessarily this means that some harmonies will be less perfect. There should be no wolf gaps so bad that Bach could not compose and transpose in every key, but since there is a difference, each key will retain a distinct color that J.S. Bach explored in his work, or so I’ll assume.

Pythagoras found that notes sound best together when the vibrating lengths are kept in a ratio of small whole numbers. Consider the tuning note, A4, the A below middle C; this note vibrates a column of air 0.784 meters long, about 2.5 feet, or half the length of an oboe. The octave notes for A4 are called A3 and A5; they vibrate columns of air twice as long and half as long as the original. They’re called octaves because they’re eight white keys away from A4. Keyboards add 5 black notes per octave, so octaves are always 12 keys away. Based on Pythagoras, a reasonable presumption is that J.S. Bach tuned every non-octave note so that it vibrates an air column similar to the equal-tuning ratio, ¹²√2 ≈ 1.05946 per step, but with the wavelength adjusted, in some cases, to make ratios of small whole numbers with the wavelength for A4.

Aside from octaves, the most pleasant harmonies are with notes whose wavelength is 3/2 as long as the original, or 2/3 as long. The best harmonies with A4 (0.784 m) will thus be with notes whose wavelengths are (3/2) × 0.784 m and (2/3) × 0.784 m. The first of these is called D3 and the other E4. A4 combines with D3 to make a chord called D major, the so-called “key of glory.” The Hallelujah chorus, Beethoven’s 9th (Ode to Joy), and Mahler’s Titan are in this key. Scriabin believed that D major had a unique color, gold, suggesting that the pure ratios were retained.

A combines with E (plus a black note, C#) to make a chord called A major. Songs in this key sound (to my ear) robust, cheerful, and somewhat pompous. Here, in A major, are Dancing Queen by ABBA, Lady Madonna by the Beatles, and the Prelude and Fugue in A major by J.S. Bach. Scriabin believed that A major was green.

A4 also combines with E and a new white note, C3, to make a chord called A minor. Since E4 and E3 vibrate at 2/3 and 4/3 the wavelength of A4 respectively, I’ll speculate that Bach tuned C3 near 5/3 the wavelength of A4: 5/3 × 0.784 m = 1.307 m. Tuned this way, the ratio of wavelengths in the A-minor chord is 3:4:5. Songs in A minor tend to be edgy and sort-of sad: Stairway to Heaven, Für Elise, Songs in A Minor sung by Alicia Keys, and PDQ Bach’s Fugue in A minor. My guess is that Bach actually tuned this note to 1.312 m (or thereabouts), roughly half-way between the wavelength for the pure ratio and that of equal temper.

The notes D3 and E4 will not sound particularly good together. In both pure ratios and equal tempers their wavelengths are in a ratio of 3/2 to 4/3, that is, a ratio of 9 to 8. This can be a tensional transition, but it does not provide a satisfying resolution to my western ears.

Now for the other white notes. The next white key down from A4 is G3, vibrating a column two half-tones longer than that for A4. For equal tuning, we’d expect this note to vibrate a column of air 1.05946² ≈ 1.1225 times longer than A4’s. The most similar ratio of small whole numbers is 9/8 = 1.1250, a ratio we’d already generated between D and E. As a result, we may expect that Bach tuned G3 to a wavelength of 9/8 × 0.784 m = 0.88 meters.

For equal tuning, the next white note, F3, will vibrate an air column 1.05946⁴ ≈ 1.259 times as long as the A4 column. Tuned this way, the wavelength for F3 is 1.259 × 0.784 = 0.988 m. Alternately, since 1.259 is close to 5/4 = 1.25, it is reasonable to tune F3 as (5/4) × 0.784 = 0.980 m. I’ll speculate that he split the difference: 0.984 m. F, A, and C combine to make a good harmony called the F-major chord. The most popular pieces in F major sound woozy and not-quite settled, in my opinion, perhaps because of the oddness of the F tuning. See, e.g., the Jeopardy theme song, “My Sweet Lord,” Come Together (Beatles), and Beethoven’s Pastoral Symphony (Movement 1, “Awakening of cheerful feelings upon arrival in the country”). Scriabin saw F major as bright blue.

We’ve only one more white note to go in this octave: B4, the other tension note to A4. Since the wavelength for G3 is 9/8 as long as that for A4, we can expect the wavelength for B4 to be 8/9 as long. This will be dissonant to A4, but it will go well with E3 and E4, as these were 2/3 and 4/3 of A4 respectively. Tuned this way, B vibrates a column of about 1.40 m an octave down (16/9 × 0.784 m). When B, in any octave, is combined with E it’s called an E chord (E major or E minor); it’s typically combined with a black key, G-sharp (G#). The notes B and E vibrate at a ratio of 4 to 3. In the German convention, B-natural is written “H” (and B-flat is written “B”), allowing J.S. Bach to spell out his name in his music. When he played the sequence B-A-C-H, the B to A created tension; moving to C created harmony with A, and the final H resolved back toward the opening note. Here’s how it works on cello; it’s not bad, but there is no grand resolution. The Promenade from “Pictures at an Exhibition” is in E.

The black notes go somewhere between the larger gaps of the white notes, and there is a traditional confusion in how to tune them. One can tune the black notes by equal temper (powers of ¹²√2), or set them exactly in the spaces between the white notes, or tune them to any alternate set of ratios. A popular set of ratios is found in “Just temper.” The black note 6 steps from A4 (D#) will have a wavelength of 0.784 × 2^(6/12) = √2 × 0.784 m = 1.109 m. Since √2 = 1.414, and this is about 1.4 = 7/5, the Just-temper method is to tune D# to 1.4 × 0.784 m = 1.098 m. If one takes this route, the other black notes (F#3 and C#3) will be tuned to ratios of 6/5 and 8/5 times 0.784 m respectively. It’s possible that J.S. Bach tuned his notes by Just temper, but I suspect not. I suspect that Bach tuned these notes to fall in between Just temper and Equal temper, as I’ve shown below; his D#3 might vibrate at about 1.104 m, half way between the two. I would not be surprised if jazz musicians tuned their black notes more closely to the fifths of Just temper, 5/5, 6/5, 7/5, 8/5 (and 9/5?), because jazz uses the black notes more, and you generally want your main chords to sound in tune. Then again, maybe not. Jimi Hendrix picked the harmony of D# with A (“Diabolus,” the devil’s interval) for his Purple Haze; it’s also used for European police sirens.
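To see numerically how close equal temper comes to the pure ratios discussed above, here is a short comparison (a sketch only; the interval names and the 0.784 m A4 column follow the post’s usage):

```python
# Wavelength ratios vs. A4 (0.784 m column): equal temper (powers of 2**(1/12))
# against the nearby small-whole-number "pure" ratios.
A4 = 0.784  # meters, per the text

intervals = [  # (name, semitones below A4, pure wavelength ratio)
    ("G, a 2nd below", 2, 9 / 8),
    ("F, a 3rd below", 4, 5 / 4),
    ("E, a 4th below", 5, 4 / 3),
    ("D, a 5th below", 7, 3 / 2),
    ("C, a 6th below", 9, 5 / 3),
]

for name, semis, pure in intervals:
    equal = 2 ** (semis / 12)  # equal-temper wavelength ratio
    print(f"{name}: equal {equal * A4:.3f} m, pure {pure * A4:.3f} m, "
          f"off by {100 * (equal / pure - 1):+.2f}%")
```

The fifth and fourth come out nearly pure (about 0.1% off), while the thirds and sixths miss by nearly 1%, which is why equal temper sounds uniformly, slightly impure.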

To my ear, the modified equal temper is more beautiful and interesting than the equal temperament of today’s electronic keyboards. In either temper, music plays in all keys, but with an unequal temper each key is distinct and beautiful in its own way. Tuning is engineering, I think, rather than math or art. In math things have to be perfect; in art they have to be interesting; and in engineering they have to work. Engineering tends to be beautiful in its way. Generally, though, engineering is not perfect.

Summary of air column wave-lengths, measured in meters, and as a ratio to that for A4. Just Tempering, Equal Tempering, and my best guess of J.S. Bach's Well Tempered scale.


R.E. Buxbaum, May 20 2013 (edited Sept 23, 2013) — I’m not very musical, but my children are.

My steam-operated, high pressure pump

Here’s a miniature version of a duplex pump that we made 2-3 years ago at REB Research as a way to pump fuel into hydrogen generators for use with fuel cells. The design is from the 1800s. It was used on tank locomotives and steamboats to pump water into the boiler using only the pressure in the boiler itself. This seems like magic, but isn’t. There is no rotation; instead, linear motion in a steam piston of larger diameter pushes a liquid-pump piston of smaller diameter. Each piston travels the same distance, but there is more volume in the steam cylinder. The work on each side is W = ∫PdV; since energy is conserved and the steam side sweeps the larger volume, the liquid is pumped to a higher pressure than the driving steam (neat!).
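The pressure amplification follows directly from that work balance: with equal strokes, the outlet pressure scales with the square of the piston-diameter ratio. A minimal sketch (the diameters are made-up illustration values, not the actual pump’s specs):

```python
def pump_outlet_pressure(p_steam: float, d_steam: float, d_liquid: float) -> float:
    """Ideal (frictionless) liquid pressure from a duplex pump.

    Equal strokes mean P_steam * A_steam = P_liquid * A_liquid,
    so P_liquid = P_steam * (d_steam / d_liquid)**2.
    """
    return p_steam * (d_steam / d_liquid) ** 2

# Example: 10 bar boiler steam, 3 cm steam piston driving a 2 cm water piston.
print(round(pump_outlet_pressure(10.0, 0.03, 0.02), 2))  # 22.5 bar
```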

The following is a still photo. Click on the YouTube link to see the steam pump in action. It has over 4000 views!

Mini duplex pump. Provides high-pressure water from steam power. A mini version of a classic of the 1800s. Coffee cup and pen shown for scale.


You can get the bronze casting and the plans for this pump from the Stanley Co. (England). Any talented machinist should be able to do the rest. I hired an Amish craftsman in Ohio; Maurice Perlman did the final fit work in our shop.

Our standard line of hydrogen generators still uses electricity to pump the methanol-water; even our latest generators are meant for non-mobile applications, where electricity is awfully convenient and cheap. This pump was intended for a future customer who would need to generate hydrogen to make electricity for remote and mobile applications. Even non-mobile, hydrogen is a better way to power cars than batteries, but making it mobile has advantages. Another advance would be to heat the reactors by burning the waste gas (I’ve been working on that too, and have filed a patent). Sometimes you have to build things ahead of finding a customer, and this pump was awfully cool.

Most Heat Loss Is Black-Body Radiation

In a previous post I used statistical mechanics to show how you’d calculate the thermal conductivity of any gas, and showed why the insulating power of the best normal insulating materials is usually about the same as that of ambient air. That analysis only considered the motion of molecules, not of photons (black-body radiation), and thus under-predicted heat transfer in most circumstances. Though black-body radiation is often ignored in chemical engineering calculations, it is often the major heat transfer mechanism, even at modest temperatures.

One can show from quantum mechanics that the radiative heat transfer between two surfaces of temperature T and To is proportional to the difference of the fourth power of the two temperatures in absolute (Kelvin) scale.

Heat transfer rate = P = A ε σ( T^4 – To^4).

Here, A is the area of the surfaces, σ is the Stefan–Boltzmann constant, and ε is the surface emissivity, a number that is about 1 for most non-metals and 0.3 for stainless steel. For A measured in m², σ = 5.67×10⁻⁸ W m⁻² K⁻⁴.

Infrared picture of a fellow wearing a black plastic bag on his arm. The bag is nearly transparent to heat radiation, while his eyeglasses are opaque. His hair provides some insulation.

Unlike with conduction, radiative heat transfer does not depend on the distance between the surfaces, but only on the temperatures and the infra-red (IR) reflectivity. This is different from normal, visible-light reflectivity, as seen in the infra-red photo of a lightly dressed person standing in a normal room. The fellow has a black plastic bag on his arm, but you can hardly see it here, as it hardly affects heat loss. His clothes don’t do much either, but his hair and eyeglasses are reasonably effective blocks to radiative heat loss.

As an illustrative example, let’s calculate the radiative and conductive heat transfer rates for the person in the picture, assuming he has 2 m² of surface area, an emissivity of 1, and a body and clothes temperature of about 86°F; that is, his skin/clothes temperature is 30°C, or 303 K absolute. If this person stands in a room at 71.6°F (295 K), the radiative heat loss is calculated from the equation above: 2 × 1 × 5.67×10⁻⁸ × (8.43×10⁹ − 7.57×10⁹) = 97.5 W. This is 23.3 cal/second, or 84.1 Cal/hr, or 2020 Cal/day; nearly the expected basal calorie use of a person this size.

The conductive heat loss is typically much smaller. As discussed previously in my analysis of curtains, the rate is inversely proportional to the heat transfer distance and proportional to the temperature difference. For the fellow in the picture, assuming he’s standing in relatively stagnant air, the thermal boundary layer will be about 2 cm (0.02 m) thick. Multiplying the thermal conductivity of air, 0.024 W/mK, by the surface area and the temperature difference, and dividing by the boundary layer thickness, we find a heat loss of 2 × 0.024 × (30−22)/0.02 = 19.2 W. This is 16.5 Cal/hr, or 397 Cal/day: about 20% of the radiative heat loss, suggesting that some 5/6 of a sedentary person’s heat loss is by black-body radiation.
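The two estimates above are easy to reproduce; here they are as a short Python check, using the same numbers as in the text:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
K_AIR = 0.024     # thermal conductivity of air, W m^-1 K^-1

def radiative_loss(area, emissivity, t_surf, t_room):
    """Net black-body radiation between a surface and its surroundings, in watts."""
    return area * emissivity * SIGMA * (t_surf**4 - t_room**4)

def conductive_loss(area, t_surf, t_room, boundary_layer):
    """Conduction through a stagnant air layer of given thickness, in watts."""
    return K_AIR * area * (t_surf - t_room) / boundary_layer

p_rad = radiative_loss(2.0, 1.0, 303.0, 295.0)     # ~97 W
p_cond = conductive_loss(2.0, 303.0, 295.0, 0.02)  # ~19 W
print(round(p_rad, 1), round(p_cond, 1))
print(round(p_rad / (p_rad + p_cond), 2))  # radiative fraction, ~0.83: about 5/6
```

(The unrounded radiative figure is closer to 97 W than 97.5 W; the small gap comes from rounding T⁴ to three digits in the text.)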

We can expect that black-body radiation dominates conduction when looking at heat-shedding losses from hot chemical equipment because this equipment is typically much warmer than a human body. We’ve found, with our hydrogen purifiers for example, that it is critically important to choose a thermal insulation that is opaque or reflective to black body radiation. We use an infra-red opaque ceramic wrapped with aluminum foil to provide more insulation to a hot pipe than many inches of ceramic could. Aluminum has a far lower emissivity than the nonreflective surfaces of ceramic, and gold has an even lower emissivity at most temperatures.

Many popular insulation materials are not black-body opaque, and most hot surfaces are not reflectively coated. Because of this, you can find that the heat loss rate goes up as you add too much insulation. After a point, the extra insulation increases the surface area for radiation while barely reducing the surface temperature; it starts to act like a heat fin. While the space-shuttle tiles are fairly mediocre in terms of conduction, they are excellent in terms of black-body radiation.

There are applications where you want to increase heat transfer without having to resort to direct contact with corrosive chemicals or heat-transfer fluids. Often black body radiation can be used. As an example, heat transfers quite well from a cartridge heater or band heater to a piece of equipment even if they do not fit particularly tightly, especially if the outer surfaces are coated with black oxide. Black body radiation works well with stainless steel and most liquids, but most gases are nearly transparent to black body radiation. For heat transfer to most gases, it’s usually necessary to make use of turbulence or better yet, chaos.

Robert Buxbaum

The Gift of Chaos

Many, if not most, important engineering systems are chaotic to some extent, but as most college programs don’t deal with this behavior, or with this type of math, I thought I might write something on it. It was a big deal among my PhD colleagues some 30 years back as it revolutionized the way we looked at classic problems; it’s fundamental, but it’s now hardly mentioned.

Two of the first freshman engineering homework problems I had turn out to have been chaotic, though I didn’t know it at the time. One of these concerned the cooling of a cup of coffee. As presented, the coffee was in a cup at a uniform temperature of 70°C; the room was at 20°C, and some fanciful data was presented to suggest that the coffee cooled at a rate proportional to the difference between the (changing) coffee temperature and the fixed room temperature. Based on these assumptions, we predicted exponential cooling with time, something that was (more or less) observed, but not quite in real life. The chaotic part in a real cup of coffee is that the cup develops currents that move faster and slower. These currents accelerate heat loss, but since they are driven by the temperature differences within the cup, they tend to speed up and slow down erratically: they accelerate when the cup is not well stirred, causing new stirring, and slow down when it is stirred. The temperature at any point thus rises and falls in an almost rhythmic fashion; that is, chaotically.
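The freshman (non-chaotic) version of the coffee problem is just Newton’s law of cooling, dT/dt = −k(T − T_room). A sketch of the prediction, with an arbitrary rate constant (k = 0.1/min is my illustration value, not from the homework):

```python
import math

T_ROOM, T0, K = 20.0, 70.0, 0.1  # room temp (C), initial temp (C), rate (1/min)

def temp_exact(t):
    """Exponential cooling: T(t) = T_room + (T0 - T_room) * exp(-k t)."""
    return T_ROOM + (T0 - T_ROOM) * math.exp(-K * t)

def temp_euler(t, dt=0.01):
    """The same equation integrated by small Euler steps."""
    temp, elapsed = T0, 0.0
    while elapsed < t:
        temp += -K * (temp - T_ROOM) * dt
        elapsed += dt
    return temp

print(round(temp_exact(10), 2))  # ~38.4 C after 10 minutes
print(round(temp_euler(10), 2))  # essentially the same
```

Real coffee wobbles around this smooth curve as the convection currents speed up and slow down.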

While it is impossible to predict what will happen over a short time scale, there are some general patterns. Perhaps the most remarkable of these is self-similarity: the behavior over 10 seconds will look like the behavior over 1 second, and this will look like the behavior over 0.1 second, the only difference being that the smaller the time scale, the smaller the up-down variation. You can see the same thing with stock movements, wind speed, cell-phone noise, etc., and the same self-similarity can occur in space, so that the shape of clouds tends to be similar at all reasonably small length scales. The maximum average deviation is smaller over smaller time scales, of course, and larger over large time scales, but not in any obvious way. There is no simple proportionality, but rather a fractional-power dependence that gives these chaotic phenomena a fractal dependence on the measurement scale. Some of this is seen in the global temperature graph below.

Global temperatures measured from the antarctic ice showing stable, cyclic chaos and self-similarity.


Chaos can be stable or unstable, by the way; the cooling of a cup of coffee was stable because the temperature could not exceed 70°C or go below 20°C. Stable chaotic phenomena tend to have fixed-period cycles in space or time. The world temperature seems to follow this pattern, though there is no obvious reason it should: there is no obvious maximum and minimum temperature for the earth, nor any obvious reason there should be cycles, or that they should be 120,000 years long. I’ll probably write more about chaos in later posts, but I should mention that unstable chaos can be quite destructive, and quite hard to prevent. Some form of chaotic local heating seems to have caused the battery fires aboard the Dreamliner; similarly, most riots, famines, and financial panics seem to be chaotic. Generally speaking, tight control does not prevent this sort of chaos; it just changes the period and makes the eruptions that much more violent. As two examples, consider what would happen if we tried to cap a volcano, or to clamp down on riots in Syria, Egypt, or Ancient Rome.
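The examples here are physical, but the standard toy model of stable, bounded chaos is the logistic map, x → r·x·(1−x); it isn’t mentioned in the post, but it shows both the fixed-cycle and the chaotic behavior in a few lines:

```python
def logistic_orbit(r, x0=0.3, warmup=1000, n=6):
    """Iterate x -> r*x*(1-x), discard a warmup, and return the next n values."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(3.2))  # settles into a stable 2-cycle (two alternating values)
print(logistic_orbit(3.9))  # stays bounded in (0, 1) but never repeats: stable chaos
```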

From math, we know some alternate ways to prevent unstable chaos from getting out of hand; one is to lay off, another is to control chaotically (hard to believe, but true).

 

Why the Boeing Dreamliner’s batteries burst into flames

Boeing’s Dreamliner is currently grounded due to two of its Li-ion batteries having burst into flames, one in flight, and another on the ground. Two accidents of the same type in a small fleet are no small matter, as an airplane fire can be deadly on the ground or at 50,000 feet.

The fires are particularly bad on the Dreamliner because these lithium batteries control virtually everything that goes on aboard the plane. Even without a fire, when they go out, so does virtually every control and sensor. So why did they burn, and what has Boeing done to take care of it? The simple reason for the fires is that management chose to use Li-cobalt oxide batteries, the same Li-battery design that every laptop computer maker had already rejected ten years earlier, when laptops using them started bursting into flames. This is the battery design that caused Dell and HP to recall every computer containing it. Boeing decided to use a massive version to control everything on their flagship airplane because it has the highest energy density (see graphic below). They figured that operational management would ensure safety, even without installing any cooling or sufficient shielding.

All lithium batteries have a negative electrode (anode) that is mostly lithium; the usual chemistry is lithium metal in a graphite matrix. Lithium metal is light and readily gives off electrons; the graphite makes it somewhat less reactive. The positive electrode (cathode) is typically an oxide of some sort, and here there are options. Most current cell-phone and laptop batteries use some version of manganese nickel oxide as the cathode. Lithium atoms in the anode give off electrons, become lithium ions, and travel across to the oxide, making a mixed ion oxide that absorbs the electron. The process provides about 4 volts of energy differential per electron transferred. With cobalt oxide, the cathode reaction is more or less CoO₂ + Li⁺ + e⁻ → LiCoO₂. Sorry to say, this chemistry is very unstable; the oxide itself is unstable, more unstable than Mn-Ni or iron oxide, especially when fully charged, and especially when warm (40°C or warmer): 2CoO₂ → Co₂O₃ + ½O₂. Boeing’s safety idea was to control the charge rate in a way that overheating was not supposed to occur.

Despite the controls, it didn’t work for the two Boeing batteries that burst into flames. Perhaps it would have helped to add cooling to reduce the temperature, as is done in laptops and plug-in automobiles, but even with cooling the batteries might have self-destructed due to local heating effects. These batteries were massive, and there is plenty of room for one spot to get hotter than the rest; this seems to have happened in both fires, either as a cause or a result. Once the cobalt oxide gets hot and oxygen is released, a lithium-oxygen fire can spread to the whole battery, even if the majority is held at a low temperature. If local heating were the cause, no amount of external cooling would have helped.

[Graphic: energy densities of common battery electrode materials, via Battery University]

Something that would have helped was a polymer interlayer separator to keep the unstable cobalt oxide from fueling the fire; there was none. Another option is to use a more stable cathode like iron phosphate or lithium manganese nickel. As shown in the graphic above, these stable oxides do not have the high energy density of Li-cobalt oxide. When the unstable cobalt oxide decomposed, there were oxygen, lithium, and heat in one space, and none of the fire extinguishers on the planes could put out the fires.
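As a back-of-envelope comparison (my own numbers, not from the graphic), Faraday’s law gives the theoretical capacity of each cathode at one electron per formula unit. Cell voltage matters too (cobalt-oxide cells run at a higher voltage), so this is only part of the energy-density story:

```python
FARADAY = 96485.0  # coulombs per mole of electrons

molar_mass = {  # g/mol, from standard atomic weights
    "LiCoO2":  6.94 + 58.93 + 2 * 16.00,
    "LiFePO4": 6.94 + 55.85 + 30.97 + 4 * 16.00,
}

for name, m in molar_mass.items():
    capacity = FARADAY / (3.6 * m)  # mAh per gram of cathode material
    print(f"{name}: ~{capacity:.0f} mAh/g theoretical")
```

This gives roughly 274 mAh/g for LiCoO2 against 170 mAh/g for LiFePO4; the stable phosphate gives up capacity (and voltage) in exchange for not releasing oxygen when hot.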

The solution that Boeing has proposed and that Washington is reviewing is to leave the batteries unchanged, but to shield them in a massive titanium shield with the vapors formed on burning vented outside the airplane. The claim is that this shield will protect the passengers from the fire, if not from the loss of electricity. This does not appear to be the best solution. Airbus had planned to use the same batteries on their newest planes, but has now gone retro and plans to use Ni-Cad batteries. I don’t think that’s the best solution either. Better options, I think, are nickel metal hydride or the very stable Lithium Iron Phosphate batteries that Segway uses. Better yet would be to use fuel cells, an option that appears to be better than even the best batteries. Fuel cells are what the navy uses on submarines and what NASA uses in space. They are both more energy dense and safer than batteries. As a disclaimer, REB Research makes hydrogen generators and purifiers that are used with fuel-cell power.

More on the chemistry of Boeing’s batteries and their problems can be found on Wikipedia. You can also read an interview with the head of Tesla motors regarding his suggestions and offer of help.

 

Two things are infinite

Einstein is supposed to have commented that there are only two things that are infinite: the size of the universe and human stupidity, and he wasn’t sure about the former.

While Einstein still appears to be correct about the latter infinite, there is now more disagreement about the size of the universe. In Einstein’s day, it was known that the universe appeared to have originated in a big bang with all mass radiating outward at a ferocious rate. If the mass of the universe were high enough, and the speed were slow enough the universe would be finite and closed in on itself. That is, it would be a large black hole. But in Einstein’s day, the universe didn’t look to have enough mass. It thus looked like the universe was endless, but non-uniform. It appeared to be mostly filled with empty space — something that kept us from frying from the heat of distant stars.

Since Einstein’s day we’ve discovered more mass in the universe, but not quite enough to make us a black hole given the universe’s size. We’ve discovered neutron stars and black holes, dark concentrated masses, but not enough of them. We’ve discovered neutrinos, tiny neutral particles that fill space, and we’ve shown that they have enough rest-mass to contribute a significant part of the mass of the universe. But even with this dark-ish matter, we still have not found enough for the universe to be non-infinite, a black hole. Worse yet, we’ve discovered dark energy, something that keeps the universe expanding rapidly when you’d think the expansion should have slowed by now; this fast expansion makes it ever harder to find enough mass to close the universe (why we’d want to close it is an aesthetic issue discussed below).

Still, there is evidence for another, smaller mass item floating in space: the axion. This particle, and its yet-smaller companion, the axino, may be the source of both the missing dark matter and the dark energy; see the figure below. Axions should have masses of about 10^-7 eV, and should interact with matter just enough to explain why there is more matter than antimatter, while leaving the properties of matter otherwise unchanged. From normal physics, you’d expect an equal amount of matter and antimatter, as antimatter is just matter moving backwards in time. Further, the light mass and weak interactions could allow axions to provide a halo around galaxies (helpful for galactic stability).

Mass of the universe with and without axions. Here is a plot from a recent SUSY talk (2010): http://susy10.uni-bonn.de/data/KimJEpreSUSY.pdf


The reason you’d want the universe to be closed is aesthetic. The universe is nearly closed, if you think in terms of scientific numbers, and it’s hard to see why it should not then be closed. We appear to have an awful lot of mass in terms of grams or kilograms, but only about 20% of the mass required for a black hole. In terms of orders of magnitude we are so close that you’d think we’d have 100% of the required mass. If axions are found to exist (the evidence now is about 50-50), they will interact with strong magnetic fields, with axions changing into photons and photons changing into axions. It is possible that the mass this represents is the missing dark matter and the missing dark energy, allowing our universe to be closed.

As a final thought, I’ve always wondered why religious leaders have been so against mention of “the big bang.” You’d think that the biggest boost to religion would be the knowledge that everything appeared from nothing one bright and sunny morning, but they don’t seem to like the idea at all. If anyone can explain that to me, I’d appreciate it. Thanks, Robert E. B.

The martian sky: why is it yellow?

In a previous post, I detailed my calculations concerning the color of the sky and sun. Basically, the sun gives off light mostly in the yellow to green range, with fairly little red or purple. A lot of the blue and green wavelengths scatter away, leaving the sun looking yellow: the yellow light looks yellow, and the red plus blue also looks yellow because of additive color.

If you look at the sky through a spectroscope, it’s pretty blue with some green. Sky blue involves a bit of an eye trick of additive color so that we see the scattered blue + green as sky blue and not aqua. At sundown, the sun becomes reddish and the majority of the sky becomes greenish-grey as more green and yellow light gets scattered. The sky near the sun is orange as the atmosphere is thick enough to scatter orange, while the blue and green scatters out.

Now, to talk about the color of the sky on Mars, both at noon and at sunset. Except for the effect of the red dust, I would expect the Martian sky to be blue, just like on earth, but a lighter shade of blue as the atmosphere is thinner. When you add some red from the dust, one would expect the sky to be grey. That is, I would expect a simple combination of a base of sky blue (blue plus green), plus some extra red-orange light scattered from the Martian dust. In additive colors, the combination of blue-green and red-orange is grey, so that’s the color I’d expect the Martian sky to be normally. Some photos of the Martian sky match this expectation; see below. My guess is this was a day when there was not much dust in the air, though NASA provides no details here.

martian sky; looks grey

On some days (high dust days, I assume), the Martian sky turns a shade of yellow-green. I’d guess that’s because the red dust absorbs the blue and some of the green spectrum, but does not actually add red. We are thus dealing with subtractive color and, in subtractive color, orange plus blue-green = butterscotch, not grey or pink.

Martian sky color

I now present a photo of the Martian sky at sunset. This is something really peculiar that I would not have expected ahead of time, but think I can explain now that I see it. The sky looks yellow in general, like in the photo above, but blue around the sun. I could explain this picture by saying that the blue and green of the Martian sky is being scattered by the Martian air (CO2, mostly), just as our atmosphere scatters these colors on earth. The sky near the sun looks blue, not red-orange, because the Martian atmosphere is thinner: at noon there is less air to scatter light, but at sundown the light passes through roughly the same thickness of atmosphere as ours. The red of the dust does not show up in the sky color near the sun because the red light is back-scattered there, not front-scattered. Elsewhere the Martian sky is yellow, where there is some front-scatter of the reddish light reflecting off of the dust. This sounds plausible to me; tell me what you think.

Martian sky at sunset


As an aside, while I have long understood there was an experimental difference between subtractive and additive color, I have never quite understood why this should be so. Why is it that subtractive color combinations are different, and uniformly different, from additive color combinations? I’d have thought you’d get more-or-less the same color if you remove red from one part of a piece of paper and remove blue from another as if you add red, purple, and yellow. A mental model I have (perhaps wrong) is that subtractive color looks the way it does because of the details of the spectral absorption of the particular pigment chemicals that are typically used. Based on this model, I expect to find someday some new red and green pigments whose combination looks yellow when mixed on a page. I’ve not found them yet, but that’s my expectation; perhaps you know of a really good explanation for why additive color is so different from subtractive color.
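One common way to see why the two systems must differ is a toy model (my own illustration, not from the post, and a simplification of real pigments): additive color sums the light from two sources, while subtractive color multiplies the reflectances of two pigments, since each pigment absorbs independently.

```python
# Toy additive vs. subtractive mixing on (R, G, B) triples in [0, 1].
# Assumed simplification: pigments absorb independently, so reflectances multiply.

def additive(c1, c2):
    # Two light sources shining together: intensities add (clipped at 1.0).
    return tuple(min(1.0, a + b) for a, b in zip(c1, c2))

def subtractive(c1, c2):
    # Two pigments layered or mixed: each passes only its own reflectance,
    # so the surviving light is the product of the two reflectances.
    return tuple(a * b for a, b in zip(c1, c2))

red = (1.0, 0.0, 0.0)
cyan = (0.0, 1.0, 1.0)  # blue-green

print(additive(red, cyan))     # red light + blue-green light: white
print(subtractive(red, cyan))  # red pigment x blue-green pigment: near black
```

The multiplication is nonlinear, so the result depends on the detailed spectra of the pigments, which fits the mental model above: two pigments can combine to almost any color depending on where their absorption bands sit.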

Joke about antimatter and time travel

I’m sorry we don’t serve antimatter men here.

Antimatter man walks into a bar.

Is funny because … in quantum physics there is no directionality in time. Thus an electron can change direction in time, and it then appears to the observer as a positron: an anti-electron that has the same mass as a normal electron but the opposite charge, opposite spin, etc. In physics, the reason electrons and positrons appear to annihilate is that there was only one electron to begin with. That electron started going backwards in time, so it disappeared in our forward-in-time time-frame.

The thing is, time is quite apparent on macroscopic scales. It’s one of the most apparent aspects of macroscopic existence. Perhaps the clearest proof that time is flowing in one direction only is entropy. In normal life, you can drop a glass and watch it break whenever you like, but you can not drop shards and expect to get a complete glass. Similarly, you know you are moving forward in time if you can drop an ice cube into a hot cup of coffee and make it luke-warm. If you can reach into a cup of luke-warm coffee and extract an ice cube to make the coffee hot, you’re moving backwards in time.

It’s also possible that gravity proves that time is moving forward. If an anti apple is just a normal apple that is moving backwards in time, then I should expect that, when I drop an anti-apple, I will find it floats upward. On the other hand, if mass is inherently a warpage of space-time, it should fall down. Perhaps when we understand gravity we will also understand how quantum physics meets the real world of entropy.

Heat conduction in insulating blankets, aerogels, space shuttle tiles, etc.

A lot about heat conduction in insulating blankets can be explained by the ordinary motion of gas molecules. That’s because the thermal conductivity of air (or any likely gas) is much lower than that of glass, alumina, or any likely solid material used for the structure of the blanket. At any temperature, the average kinetic energy of an air molecule is 1/2 kT in any direction, or 3/2 kT altogether, where k is Boltzmann’s constant and T is the absolute temperature in Kelvin. Since kinetic energy equals 1/2 mv^2, you find that the average velocity in the x direction must be v = √(kT/m) = √(RT/M). Here m is the mass of the gas molecule in kg, M is the molecular weight also in kg (0.029 kg/mol for air), R is the gas constant, 8.314 J/mol·K, and v is the molecular velocity in the x direction, in meters/sec. From this equation, you will find that v is quite large under normal circumstances, about 290 m/s (650 mph) for air molecules at an ordinary temperature of 22°C or 295 K. That is, air molecules travel in any fixed direction at roughly the speed of sound, Mach 1 (the average speed including all directions is √3 as fast, or about 1130 mph).
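The speed is quick to check; here is a minimal sketch in Python using the same values:

```python
import math

# One-directional molecular speed, v = sqrt(RT/M), for air at room temperature.
R = 8.314   # gas constant, J/(mol*K)
M = 0.029   # molecular weight of air, kg/mol
T = 295.0   # 22 C, in Kelvin

v_x = math.sqrt(R * T / M)   # speed in any one fixed direction, m/s
v_avg = math.sqrt(3) * v_x   # average speed over all three directions

print(f"{v_x:.0f} m/s in one direction; {v_avg * 2.237:.0f} mph overall")
```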

The distance a molecule will go before hitting another one is a function of the cross-sectional areas of the molecules and their densities in space. Dividing the volume of a mol of gas, 0.0224 m^3/mol at “normal conditions,” by the number of molecules in the mol (6.02 x10^23) gives an effective volume per molecule: 0.0224 m^3 / 6.02 x10^23 = 3.72 x10^-26 m^3/molecule at normal temperatures and pressures. Dividing this volume by the molecular cross-section area for collisions (about 1.6 x10^-19 m^2 for air, based on an effective diameter of 4.5 Angstroms) gives a free-motion distance of about 0.23 x10^-6 m, or 0.23µ, for air molecules at standard conditions. This distance is small, to be sure, but it is roughly 1000 times the molecular diameter, and as a result air behaves nearly as an “ideal gas,” one composed of point masses, under normal conditions (and most conditions you run into). The distance the molecule travels to or from a given surface will be smaller, 1/√3 of this on average, or about 1.35 x10^-7 m. This distance will be important when we come to estimate heat transfer rates at the end of this post.
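The free-path arithmetic above can be sketched the same way, dividing the volume per molecule by the collision cross-section:

```python
import math

# Free-motion distance of an air molecule at normal conditions.
N_A = 6.02e23       # Avogadro's number, molecules/mol
V_molar = 0.0224    # molar volume at normal conditions, m^3/mol
d = 4.5e-10         # effective molecular diameter, m (4.5 Angstroms)

vol_per_molecule = V_molar / N_A          # ~3.72e-26 m^3/molecule
sigma = math.pi * d**2 / 4                # cross-section, ~1.6e-19 m^2
free_path = vol_per_molecule / sigma      # ~2.3e-7 m, i.e. 0.23 micron

# Average distance traveled to or from a surface is 1/sqrt(3) of this:
wall_distance = free_path / math.sqrt(3)  # ~1.35e-7 m, used later
```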

 

Molecular motion of an air molecule (oxygen or nitrogen) as part of heat transfer process; this shows how some of the dimensions work.


The number of molecules hitting per square meter per second is most easily calculated from the transfer of momentum. The pressure at the surface equals the rate of change of momentum of the molecules bouncing off. At atmospheric pressure, 103,000 Pa = 103,000 Newtons/m^2, the number of molecules bouncing off per second is half this pressure divided by the mass of each molecule times the velocity in the surface direction. The contact rate is thus found to be (1/2) x 103,000 Pa x 6.02 x10^23 molecules/mol / (290 m/s x 0.029 kg/mol) = 36,900 x10^23 molecules/m^2·sec.
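In code, this momentum balance is just the pressure divided by the momentum change per bounce (each molecule reverses its surface-directed momentum, a change of 2mv):

```python
# Wall-collision rate from the momentum balance: P = rate * 2 m v_x,
# so rate = P / (2 m v_x) per unit area.
P = 103_000     # atmospheric pressure used in the text, Pa (N/m^2)
M = 0.029       # molecular weight of air, kg/mol
N_A = 6.02e23   # molecules/mol
v_x = 290.0     # one-directional molecular speed, m/s

m = M / N_A                # mass of one air molecule, ~4.8e-26 kg
hits = P / (2 * m * v_x)   # collisions per m^2 per second, ~3.69e27
```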

The thermal conductivity is merely this number times the heat capacity transferred per molecule times the distance of the transfer. I will now calculate the heat capacity per molecule from statistical mechanics because I’m used to doing things this way; other people might look up the heat capacity per mol and divide by 6.02 x10^23. For any gas, the heat capacity that derives from kinetic energy is k/2 per molecule in each direction, as mentioned above. Combining the three directions, that’s 3k/2. Air molecules look like dumbbells, though, so they have two rotations that contribute another k/2 of heat capacity each, and they have a vibration that contributes k. On a per-mol basis, I begin with an approximate value of R = 2 cal/mol·°C (it’s actually 1.987, but I round up to include some electronic effects). Based on this, we calculate the heat capacity of air to be 7 cal/mol·°C at constant volume, or 1.16 x10^-23 cal/molecule·°C. The amount of energy that can transfer to the hot (or cold) wall is this heat capacity times the temperature difference that molecules carry between the wall and their first collision with other gas molecules. The temperature difference carried by air molecules at standard conditions is only 1.35 x10^-7 times the temperature difference per meter, because the molecules only go that far before colliding with another molecule (remember, I said this number would be important). The thermal conductivity of stagnant air is thus calculated by multiplying the number of molecules that hit per m^2 per second, the distance the molecule travels in meters, and the effective heat capacity per molecule: 36,900 x10^23 molecules/m^2·sec x 1.35 x10^-7 m x 1.16 x10^-23 cal/molecule·°C = 0.00578 cal/m·s·°C, or 0.0241 W/m·°C. This value is (pretty exactly) the thermal conductivity of dry air that you find by experiment.
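Putting the three numbers together, a minimal sketch of the final multiplication:

```python
# Thermal conductivity of stagnant air = collision rate x carry distance
# x heat capacity per molecule, using the values derived above.
hits = 3.69e27    # wall collisions per m^2 per second
dist = 1.35e-7    # distance a molecule carries its energy, m
cp = 1.16e-23     # heat capacity per molecule, cal/(molecule * degC)

k_cal = hits * dist * cp   # ~0.00578 cal/(m*s*degC)
k_W = k_cal * 4.184        # ~0.024 W/(m*degC), matching experiment
```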

I did all that math, though I already knew the thermal conductivity of air from experiment, for a few reasons: to show off the sort of stuff you can do with simple statistical mechanics; to build up skills in case I ever need to know the thermal conductivity of deuterium or iodine gas, or mixtures; and finally, to be able to understand the effects of pressure, temperature, and (mainly insulator) geometry, something I might need when designing a piece of equipment for, say, lower heat losses. I find from my calculation that we should not expect much change in thermal conductivity with gas pressure at near-normal conditions; to first order, changes in pressure change the distance the molecule travels to exactly the same extent that they change the number of molecules that hit the surface per second. At very low pressures or very small distances, lower pressures will translate to lower conductivity, but for normal-ish pressures and geometries, changes in gas pressure should not affect thermal conductivity; and they do not.

I’d predict that temperature would have a larger effect on thermal conductivity, but still not an order-of-magnitude effect. Increasing the temperature increases the distance between collisions in proportion to the absolute temperature, but decreases the number of wall collisions as 1/√T, since at a fixed pressure the molecules move faster and fewer of them are needed to sustain the pressure. As a result, increasing T has a net √T positive effect on thermal conductivity.
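This scaling argument is easy to express as a one-liner (a sketch of the simple model only; real air conductivity rises somewhat faster than √T because the effective collision cross-section also shrinks with temperature):

```python
import math

def k_air(T, k_295=0.0241):
    # Simple kinetic-theory scaling: conductivity grows as sqrt(T),
    # anchored at the 0.0241 W/(m*degC) value derived above for 295 K.
    return k_295 * math.sqrt(T / 295.0)

print(round(k_air(295), 4))  # 0.0241: recovers the room-temperature value
print(round(k_air(590), 4))  # doubling T raises k by sqrt(2), not by 2
```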

Because neither temperature nor pressure has much effect, you might expect the thermal conductivity of all air-filled insulating blankets, at all normal-ish conditions, to be more-or-less that of standing air (air without circulation). That is what you find, for the most part: the same 0.024 W/m·°C thermal conductivity with standing air, with the high-tech NASA fiber blankets on the space shuttle, and with the cheapest styrofoam cups. Wool felt has a thermal conductivity of 0.042 W/m·°C, about twice that of air, a not-surprising result given that wool felt is about 1/2 wool and 1/2 air.

Now we can start to understand the most recent class of insulating blankets, those with very fine fibers, or thin layers of fiber (or aluminum or gold). When these are separated by less than 0.2µ, you finally decrease the thermal conductivity at room temperature below that of air. These layers decrease the distance traveled between gas collisions but leave the number of collisions with the hot or cold wall unchanged; as a result, the smaller the gap below 0.2µ, the lower the thermal conductivity. This happens in aerogels and some space blankets that have very small silica fibers, less than 0.1µ apart (<100 nm). Aerogels can have thermal conductivities much lower than 0.024 W/m·°C, even when filled with air at standard conditions.

In outer space you get lower thermal conductivity without high-tech aerogels because the free path is very long. At these pressures virtually every molecule hits a fiber before it hits another molecule; even for a rough blanket with distant fibers, the fibers break up the paths of the molecules significantly. Thus, the fibers of the space shuttle tiles (about 10 µ apart) provide far lower thermal conductivity in outer space than on earth. You can get the same benefit in the lab if you put a high vacuum, of say 10^-7 atm, between glass walls that are 9 mm apart. Without the walls, the air molecules could travel 1.35 x10^-7 m / 10^-7 = 1.35 m before colliding with each other. Since the walls of a typical Dewar are about 0.009 m apart (9 mm), the heat conduction of the Dewar is thus about 1/150 (0.7%) that of a normal air layer 9 mm thick; and there is no thermal conductivity of Dewar flasks and vacuum bottles as such, since the amount of heat conducted is independent of gap distance. Pretty spiffy. I use this knowledge to help with the thermal insulation of some of our hydrogen generators and hydrogen purifiers.
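The Dewar estimate can be sketched the same way: at 10^-7 atm essentially every molecule crosses the whole gap, so the heat leak scales as the pressure ratio times the gap divided by the normal-pressure travel distance:

```python
# Relative heat leak of a vacuum gap versus a plain 9 mm still-air layer.
p_ratio = 1e-7    # Dewar vacuum as a fraction of atmospheric pressure
gap = 0.009       # wall spacing, m (9 mm)
dist = 1.35e-7    # normal-pressure travel distance to a wall, m

free_path_vac = dist / p_ratio      # ~1.35 m, far larger than the 9 mm gap
leak_ratio = p_ratio * gap / dist   # ~1/150: collisions scale down with
                                    # pressure, carry distance scales up to
                                    # the gap width and no further
```

Note that the gap cancels out of the absolute heat flux (fewer molecules, but each crosses the whole gap), which is why a Dewar has no thermal conductivity as such.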

There is another effect that I should mention: black-body heat transfer. In many cases black-body radiation dominates: it is the reason the shuttle tiles are white (or black) and not clear, and it is the reason Dewar flasks are mirrored (a mirrored surface provides less black-body heat transfer). This post is already too long to do black-body radiation justice, but I treat it in more detail in another post.

RE. Buxbaum