Tag Archives: energy

My latest invention: improved fuel cell reformer

Last week, I submitted a provisional patent application for an improved fuel reformer system to allow a fuel cell to operate on ordinary liquid fuels, e.g. alcohol, gasoline, and JP-8 (diesel). I’m attaching the complete text of the description below, but since it is not particularly user-friendly, I’d like to add a small, explanatory preface. What I’m proposing is shown in the diagram following. I send a hydrogen-rich stream plus ordinary fuel and steam to the fuel cell, perhaps with a pre-reformer. My expectation is that the fuel cell will not completely convert this material to CO2 and water vapor, even with the pre-reformer. Following the fuel cell, I then use a water-gas shift reactor to convert product CO and H2O to H2 and CO2 to increase the hydrogen content of the stream. I then use a semi-permeable membrane to extract the waste CO2 and water. I recirculate the hydrogen and the rest of the water back to the fuel cell to generate extra power, prevent coking, and promote steam reforming. I calculate the design should be able to operate at perhaps 0.9 volts per cell, and should nearly double the energy per gallon of fuel compared to ordinary diesel. Though use of pure hydrogen fuel would give better mileage, this design seems better for some applications. Please find the text following.

Use of a Water-Gas shift reactor and a CO2 extraction membrane to improve fuel utilization in a solid oxide fuel cell system.

Inventor: Dr. Robert E. Buxbaum, REB Research, 12851 Capital St, Oak Park, MI 48237; Patent Pending.

Solid oxide fuel cells (SOFCs) have improved over the last 10 years to the point that they are attractive options for electric power generation in automobiles, airplanes, and auxiliary power supplies. These cells operate at high temperatures and tolerate high concentrations of CO, hydrocarbons, and limited concentrations of sulfur (H2S). SOFCs can operate on reformate gas and can perform limited degrees of hydrocarbon reforming too – something that is advantageous from the stand-point of fuel logistics: it’s far easier to transport a small volume of liquid fuel than it is a large volume of H2 gas. The main problem with in-situ reforming is the danger of coking the fuel cell, a problem that gets worse when reforming is attempted with the more-desirable, heavier fuels like gasoline and JP-8. To avoid coking the fuel cell, heavier fuels are typically reformed beforehand in a separate reactor, usually by partial oxidation at auto-thermal conditions, a process that adds nitrogen and results in the inability to use the natural heat given off by the fuel cell. Steam reforming has been suggested as an option (Chick, 2011) but there is not enough heat released by the fuel cell alone to do it with the normal fuel cycles.

Another source of inefficiency in reformate-powered SOFC systems is basic to the use of carbon-containing fuels: the carbon tends to leave the fuel cell as CO instead of CO2. CO in the exhaust is undesirable from two perspectives: CO is toxic, and quite a bit of energy is wasted when the carbon leaves in this form. Normally, carbon can not leave as CO2, though, since CO is the more stable form at the high temperatures typical of SOFC operation. This patent provides solutions to all these problems through the use of a water-gas shift reactor and a CO2-extraction membrane. A drawing of one version of the process follows.

RE. Buxbaum invention: A suggested fuel cycle to allow improved fuel reforming with a solid oxide fuel cell

As depicted in Figure 1, above, the fuel enters, is mixed with steam or partially boiled water, and heated in the rectifying heat exchanger. The hot steam + fuel mix then enters a steam reformer and perhaps a sulfur removal stage. This would be typical steam reforming except for a key difference: the heat for reforming comes (at least in part) from waste heat of the SOFC. Normally there would not be enough heat, but in this system we add a recycle stream of H2-rich gas to the fuel cell. This stream is produced from waste CO in the water-gas shift reactor (the WGS shown in Figure 1). This additional H2 adds to the heat generated by the SOFC and also adds to the amount of water in the SOFC. The net effect should be to reduce coking in the fuel cell while increasing the output voltage and providing enough heat for steam reforming. At least, that is the thought.

SOFCs differ from proton-conducting FCs, e.g. PEM FCs, in that the ion that moves is oxygen, not hydrogen. As a result, water produced in the fuel cell ends up in the hydrogen-rich stream, not in the oxygen stream. Having this additional water in the fuel stream of the SOFC can promote fuel reforming within the FC, but it presents a difficulty in exhausting the waste water vapor: a means must be found to separate it from un-combusted fuel. This is unlike the case with PEM FCs, where the waste water leaves with the exhaust air. Our main solution to exhausting the water is the use of a membrane and perhaps a knockout drum to extract it from un-combusted fuel gases.

Our solution to the problem of carbon leaving the SOFC as CO is to react this CO with waste H2O to convert it to CO2 and additional H2. This is done in a water-gas shift reactor, the WGS above. We then extract the CO2 and remaining, unused water through a CO2-specific membrane, and we recycle the H2 and unconverted CO back to the SOFC using a low-temperature recycle blower. The design above was modified from one in a paper by PNNL; that paper had neither a WGS reactor nor a membrane. As a result it got much worse fuel conversion, and required a high-temperature recycle blower.

Heat must be removed from the SOFC output to cool it to a temperature suitable for the WGS reactor. In the design shown, the heat is used to heat the fuel before feeding it to the SOFC – this is done in the Rectifying HX. More heat must be removed before the gas can go to the CO2-extractor membrane; this heat is used to boil water for the steam reforming reaction. Additional heat inputs and exhausts will be needed for startup and load tracking. A solution to temporary heat imbalances is to adjust the voltage at the SOFC: the lower the voltage, the more heat will be available to radiate to the steam reformer. At steady-state operation, a heat balance suggests we will be able to provide sufficient heat to the steam reformer if we produce electricity at between 0.9 and 1.0 volts per cell. The WGS reactor allows us to convert virtually all the fuel to water and CO2, with hardly any CO output. This was not possible for any design in the PNNL study cited above.

The drawing above shows water recycle. This is not a necessary part of the cycle; what is necessary is some degree of cooling of the WGS output. Boiling recycle water is shown because it can be a logistic benefit in certain situations, e.g. where you can not remove the necessary CO2 without removing too much of the water in the membrane module, and in mobile military situations, where it’s a benefit to reduce the amount of material that must be carried. If water or fuel must be boiled, it is worthwhile to do so by cooling the output from the WGS reactor. Using this heat saves energy and helps protect the high-selectivity membranes. Cooling also extends the life of the recycle blower and allows the use of lower-temperature recycle blowers. Ideally the temperature is not lowered so much that water begins to condense, as condensed water tends to disturb gas flow through a membrane module. The gas temperature necessary to keep water from condensing in the module is about 180°C given typical, expected operating pressures of about 10 atm. The alternative is the use of a water knockout and a pressure reducer to prevent water condensation in membranes operated at lower temperatures, about 50°C.
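
As a rough check on that 180°C figure, here is a minimal Python sketch, assuming (illustratively) that the recycle stream is nearly all steam at about 10 atm, and using published Antoine-equation constants for water; the pressure and composition here are my assumptions, not numbers from the patent text.

```python
# Rough check of the ~180°C no-condensation temperature at ~10 atm.
# Assumes (illustratively) the stream's water partial pressure equals
# the full 10 atm; Antoine constants for water, T in K, P in bar.
def p_sat_water_bar(T):
    A, B, C = 3.55959, 643.748, -198.043  # valid roughly 100-374 °C
    return 10 ** (A - B / (T + C))

P_water_bar = 10.0 * 1.01325  # assumed water partial pressure, bar

T = 520.0  # K; scan downward until water would start to condense
while p_sat_water_bar(T) > P_water_bar:
    T -= 0.1
print(f"condensation begins below about {T - 273.15:.0f} °C")  # ~177 °C
```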

Extracting the water in a knockout drum separate from the CO2 extraction has the secondary advantage of making it easier to adjust the water content in the fuel-gas stream. The temperature of condensation can then be used to control the water content; alternately, a separate membrane can extract water ahead of the CO2, with water content controlled by adjusting the pressure of the liquid water in the exit stream.

Some description of the membrane is worthwhile at this point since a key aspect of this patent – perhaps the key aspect – is the use of a CO2-extraction membrane. It is this addition to the fuel cycle that allows us to use the WGS reactor effectively to reduce coking and increase efficiency. The first reasonably effective CO2-extraction membranes appeared only about 5 years ago. These are made of silicone polymers like dimethylsiloxane, e.g. the Polaris membrane from MTR Inc. We can hope that better membranes will be developed in the coming years, but the Polaris membrane is a reasonably acceptable option available today, its only major shortcoming being its low operating temperature, about 50°C. Current Polaris membranes show an H2-CO2 selectivity of about 30 and a CO2 permeance of about 1000 Barrers; these permeances suggest that high operating pressures would be desirable, and the preferred operating pressure could be 300 psi (20 atm) or higher. To operate the membrane with a humid gas stream at high pressure and 50°C will require the removal of most of the water upstream of the membrane module. For this, I’ve included a water knockout, or steam trap, shown in Figure 1. I also include a pressure reduction valve before the membrane (shown as an X in Figure 1). The pressure reduction helps prevent water condensation in the membrane modules. Better membranes may be able to operate at higher temperatures where this type of water knockout is not needed.

It seems likely that, no matter what improvements are made in membrane technology, the membrane will have to operate at pressures above about 6 atm, and likely above about 10 atm (upstream pressure), exhausting CO2 and water vapor to atmosphere. These high pressures are needed because the CO2 partial pressure in the fuel gas leaving the membrane module will have to be significantly higher than the CO2 exhaust pressure. Assuming a CO2 exhaust pressure of 0.7 atm or above and a desired 15% CO2 mol fraction in the fuel-gas recycle, we can expect to need a minimum operating pressure of 4.7 atm at the membrane. Higher pressures, like 10 or 20 atm, could be even more attractive.
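
The minimum-pressure arithmetic above is simple enough to show explicitly; this small sketch just divides the assumed 0.7 atm CO2 exhaust pressure by the desired 15% CO2 mole fraction:

```python
# Minimum membrane pressure: the CO2 partial pressure on the feed side
# (mole fraction times total pressure) must exceed the CO2 exhaust pressure.
co2_exhaust_atm = 0.7    # assumed CO2 exhaust pressure, from the text
co2_mol_fraction = 0.15  # desired CO2 mole fraction in the recycle gas

p_min_atm = co2_exhaust_atm / co2_mol_fraction
print(f"minimum operating pressure ≈ {p_min_atm:.1f} atm")  # ≈ 4.7 atm
```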

In order to reform a carbon-based fuel, I expect the fuel cell to have to operate at 800°C or higher (Chick, 2011). Most fuels require high temperatures like this for reforming – methanol being a notable exception, requiring only modest temperatures. If methanol is the fuel we will still want a rectifying heat exchanger, but it will be possible to put it after the water-gas shift reactor, and it may be desirable for the reformer of this fuel to follow the fuel cell. When reforming sulfur-containing fuels, it is likely that a sulfur-removal reactor will be needed. Several designs are available for this; I provide references to two below.

The overall system design I suggest should produce significantly more power per gram of carbon-based feed than the PNNL system (Chick, 2011). The combination of a rectifying heat exchanger, a water-gas shift reactor, and a CO2-extraction membrane recovers chemical energy that would otherwise be lost with the CO and H2 bleed stream. Further, the cooling stage allows the use of a lower-temperature recycle pump with a fairly low compression ratio, likely 2 or less. The net result is to lower the pump cost and power drain. The fuel stream, shown in orange, is reheated without the use of a combustion pre-heater, another big advantage. While PNNL (Chick, 2011) has suggested an alternative route to recover most of the chemical energy through the use of a turbine power generator following the fuel cell, this design should have several advantages, including greater reliability and less noise.

Claims:

1.   A power-producing, fuel cell system including a solid oxide fuel cell (SOFC) where a fuel-containing output stream from the fuel cell goes to a regenerative heat exchanger, followed by a water gas shift reactor, followed by a membrane means to extract waste gases including carbon dioxide (CO2) formed in said reactor; said reactor operating at temperatures between 200 and 450°C and the extracted carbon dioxide leaving at near ambient pressure; the non-extracted gases being recycled to the fuel cell.

Main References:

The most relevant reference here is “Solid Oxide Fuel Cell and Power System Development at PNNL” by Larry Chick, Pacific Northwest National Laboratory, March 29, 2011: http://www.energy.gov/sites/prod/files/2014/03/f10/apu2011_9_chick.pdf. Also see US patent 8,394,544; it’s from the same authors and somewhat similar, though not as good and only for methane, a high-hydrogen fuel.

Robert E. Buxbaum, REB Research, May 11, 2015.

No need to conserve energy

Energy conservation stamp from the early 70s

I’m reminded that one of the major ideas of Earth Day, energy conservation, is completely unnecessary: Energy is always conserved. It’s entropy that needs to be conserved.

The entropy of the universe increases for any process that occurs, for any process that you can make occur, and for any part of any process. While some parts of processes are very efficient in themselves, they are always entropy generators when considered on a global scale. Entropy is the arrow of time: if entropy ever goes backward, time has reversed.

A thought I’ve had on how you might conserve entropy: grow trees and use them for building materials, or convert them to gasoline, or just burn them for power. Under ideal conditions, photosynthesis is about 30% efficient at converting photon-energy to glucose (photons + CO2 + water –> glucose + O2). This would be nearly the same energy-conversion efficiency as solar cells if not for the energy the plant uses to live. But solar cells have inefficiency issues of their own, and as a result the land use per unit power is about the same. And it’s a lot easier to grow a tree and dispose of forest waste than it is to make a solar cell and dispose of used coated glass and broken electric components. Just some Earth Day thoughts from Robert E. Buxbaum. April 24, 2015

Much of the chemistry you learned is wrong

When you were in school, you probably learned that understanding chemistry involved understanding the bonds between atoms; that all the things of the world were made of molecules, and that these molecules were fixed-proportion combinations of the chemical elements held together by one of the 2 or 3 types of electron-sharing bonds. You were taught that water was H2O, that table salt was NaCl, that glass was SiO2, and rust was Fe2O3, and perhaps that the bonds involved an electron transferring from an electron-giver (H, Na, Si, or Fe above) to an electron receiver (O or Cl).

Sorry to say, none of that is true. These are fictions perpetrated by well-meaning, and sometimes ignorant, teachers. All of the materials mentioned above are grand polymers. Any of them can have extra or fewer atoms of any species, and as a result the stoichiometry isn’t quite fixed. They are not molecules at all in the sense you knew them. Also, ionic bonds hardly exist; not in any chemical you’re familiar with. There are no common electron compounds. The world works almost entirely on covalent, shared bonds. If bonds were ionic you could separate most materials by direct electrolysis of the pure compound, but you can not. You can not, for example, make iron by electrolysis of rust, nor can you make silicon by electrolysis of pure SiO2, or titanium by electrolysis of pure TiO. If you could, you’d make a lot of money and titanium would be very cheap. On the other hand, the fact that stoichiometry is rarely fixed allows you to make many useful devices, e.g. solid oxide fuel cells — things that should not work based on the chemistry you were taught.

Iron-zinc forms compounds, but they don’t have fixed stoichiometry. As an example, the compound at 68-80 atom% Zn is, I guess, Zn7Fe3 with many substituted atoms, especially at temperatures near 665°C.

Because most bonds are covalent, many compounds form that you would not expect. Most metal pairs form compounds with unusual stoichiometric composition. Here, for example, is the phase diagram for zinc and iron – the materials behind galvanized sheet metal: iron that does not rust readily. The delta phase has a composition between 85 and 92 atom% Zn (8 and 15 a% iron): perhaps the main compound is Zn5Fe2, not the sort of compound you’d expect, and it has a very variable composition.

You may now ask why your teachers didn’t tell you this sort of stuff, but instead told you a pack of lies and half-truths. In part it’s because we don’t quite understand this ourselves, and we don’t like to admit that. And besides, the lies serve a useful purpose: they give us something to test you on. That is, a way to tell if you are a good student. The good students are those who memorize well and spit our lies back without asking too many questions of the wrong sort. We give students who do this good grades. I’m going to guess you were a good student (congratulations, so was I). The dullards got confused by our explanations. They asked too many questions, and asked, “can you explain that again?” or “why?” We get mad at these dullards and give them low grades. Eventually, the dullards feel bad enough about themselves to allow themselves to be ruled by us. We graduates who are confident in our ignorance rule the world, but inventions come from the dullards who don’t feel bad about their ignorance. They survive despite our best efforts. A few more of these folks survive in the west, and especially in America, than survive elsewhere. If you’re one, be happy you live here. In most countries you’d be beheaded.

Back to chemistry. It’s very difficult to know where to start to un-teach someone. Let’s start with EMF and ionic bonds. While it is generally easier to remove an electron from a free metal atom than from a free non-metal atom, e.g. from a sodium atom instead of oxygen, removing an electron is always energetically unfavored, for all atoms. Similarly, while oxygen takes an extra electron more easily than iron would, adding an electron is energetically unfavored. The figure below shows the classic ionic bond (left) and two electron-sharing options (center, right); one is a bonding option, the other anti-bonding. Nature prefers electron sharing to ionic bonds, even with blatantly ionic elements like sodium and chlorine.

Bond options in NaCl. Note that covalent is the stronger bond option though it requires less ionization.

There is a very small degree of ionic bonding in NaCl (left picture), but in virtually every case, covalent bonds (center) are easier to form and stronger when formed. And then there is the key anti-bonding state (right picture). The anti-bond is hardly ever mentioned in high school or college chemistry, but it is critical — it’s this bond that keeps all matter from shrinking into nothingness.

I’ve discussed hydrogen bonds before. I find them fascinating since they make water wet and make life possible. I’d mentioned that they are just like regular bonds except that the quantum hydrogen atom (proton) plays the role that the electron plays. I now have to add that this is not a transfer, but a covalent sharing: the H atom (proton) divides up like the electron did in the NaCl above. Thus, two water molecules are attracted by having partial bits of a proton half-way between the two oxygen atoms. The proton does not stay put at the center there, but bobs between them as a quantum cloud. I should also mention that the hydrogen bond has an anti-bond state just like the electron above. We were never “taught” the hydrogen bond in high school or college — fortunately — that’s how I came to understand them. My professors at Princeton saw hydrogen atoms as solid. It was their ignorance that allowed me to discover new things and get a PhD. One must be thankful for the folly of others: without it, no talented person could succeed.

And now I get to really weird bonds: entropy bonds. Have you ever noticed that meat gets softer when it’s aged in the freezer? That’s because most of the chemicals of life are held together by a sort of anti-bond called entropy, or randomness. The molecules in meat are unstable energetically, but actually increase the entropy of the water around them by their formation. When you lower the temperature, you cause the inherent instability of the bonds to make them let go. Unfortunately, this happens only slowly at low temperatures, so you’ve got to age meat to tenderize it.

A nice thing about the entropy bond is that it is not particularly specific. A consequence of this is that all protein bonds are more-or-less the same strength. This allows proteins to form in a wide variety of compositions, but also means that deuterium oxide (heavy water) is toxic — it has a different entropic profile than regular water.

Robert Buxbaum, March 19, 2015. Unlearning false facts one lie at a time.

The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach’s assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0, and consider sound that travels short distances and dies out far from the source. Waves at these other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and whether they were faster or slower than the main wave. (If I can’t use this blog to re-think my college studies, what good is it?)

It can help to think of a shock wave or sound wave moving down a constant-area tube of still air at speed u, with us moving along at the same speed as the wave. In this view, the wave appears stationary, with a wind of speed u approaching it from the right.

As a first step to trying to re-imagine Mach’s calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change for compression can be imagined to have two parts: a pressure part at constant temperature, dS/dV at constant T = dP/dT at constant V; this part equals R/V for an ideal gas. There is also a temperature-at-constant-volume part of the entropy change: dS/dT at constant V = Cv/T. Dividing the two equations, we find that, at constant entropy, dT/dV = RT/CvV = P/Cv. For a case where ∆S > 0, dT/dV > P/Cv.

Now let’s look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy of the gas is stored only in its temperature. Let’s now consider a sound wave going down a tube from left to right, and let’s move our reference plane along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system energy is conserved though no heat is removed, and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = -d(u²)/2 = -u du, where H here is a per-mass enthalpy (enthalpy per kg).

dH = TdS + VdP. This can be rearranged to read, TdS = dH - VdP = -u du - VdP.

We now use conservation of mass to put du into terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V - u dV/V² = 0. Rearranging, we get du = u dV/V (no assumptions about entropy here). Since dH = -u du, we see that u² dV/V = -dH = -TdS - VdP. It is now common to say that dS = 0 across the sound wave, and thus find that u² = -V²(dP/dV) at constant S. For an ideal gas, this equals PVCp/Cv, so the speed of sound is u = √(PVCp/Cv), with V the volume per unit mass (m³/kg).
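
As a quick numerical check of this result, here is a minimal sketch evaluating u = √(PVCp/Cv) for air at 1 atm and 15°C; the molar mass and gas constant are standard values, not from the text. At 15°C it lands near 340 m/s, and at 20°C it comes out closer to the 343 m/s quoted earlier.

```python
# Evaluate u = sqrt(P*V*Cp/Cv) for air, with V the specific volume (m^3/kg).
P = 101325.0          # Pa, 1 atm
T = 288.15            # K, 15 °C
M = 0.02897           # kg/mol, mean molar mass of air (assumed)
R = 8.314             # J/mol-K
V = R * T / (P * M)   # ideal-gas specific volume, m^3/kg
gamma = 7.0 / 5.0     # Cp/Cv for a diatomic ideal gas

u = (gamma * P * V) ** 0.5
print(f"speed of sound ≈ {u:.0f} m/s")  # ≈ 340 m/s at 15 °C
```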

The problem comes in where we say that ∆S > 0. At this point, I would say that u² = -V(dH/dV) = VCp dT/dV > PVCp/Cv. Unless I’ve made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that becomes degraded to raising T, and it gives rise to more compression than would be expected for iso-entropic waves.

This should have some relevance to headphone design and speaker design since headphones are heard close to the ear, while speakers are heard further away. Meanwhile the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

Dr. Who’s Quantum reality viewed as diffusion

It’s very hard to get the meaning of life from science because reality is very strange. Further, science is mathematical, and the math relations for reality can be re-arranged. One arrangement of the terms will suggest a version of causality, while another will suggest a different causality. As Dr. Who points out, in non-linear, non-objective terms, there’s no causality, but rather a wibbly-wobbly ball of timey-wimey stuff.

Reality is a ball of wibbly-wobbly, timey-wimey stuff, Dr. Who.

To this end, I’ll provide my favorite way of looking at the timey-wimey way of the world by rearranging the equations of quantum mechanics into a sort of diffusion. It’s not the diffusion of something you’re quite familiar with, but rather a timey-wimey wave-stuff referred to as Ψ. It’s part real and part imaginary, and the only relationship between Ψ and life is that the chance of finding something somewhere is proportional to Ψ*Ψ. The diffusion of this half-imaginary stuff is the underpinning of reality — if viewed in a certain way.

First let’s consider the steady diffusion of a normal (un-quantum) material. If there is a lot of it, like when there’s perfume off of a prima donna, you can say that N = -D dc/dx where N is the flux of perfume (molecules per minute per area), dc/dx is a concentration gradient (there’s more perfume near her than near you), and D is a diffusivity, a number related to the mobility of those perfume molecules. 

We can further generalize the diffusion of an ordinary material to a case where concentration varies with time because of reaction, or because of a difference between the in-rate and the out-rate. With reaction added as a secondary accumulation term, we can write: dc/dt = reaction + dN/dx = reaction + D d²c/dx². For a first-order reaction, for example radioactive decay, reaction = -βc, and

dc/dt = -βc + D d²c/dx²               (1)

where β is the radioactive decay constant of the material whose concentration is c.
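
Equation (1) is easy to explore numerically. Here is a minimal sketch, with made-up values of D and β (nothing physical intended), that marches the equation forward by explicit finite differences to show the combined spreading-and-decay behavior:

```python
# Explicit finite-difference march of dc/dt = -beta*c + D d2c/dx2.
import numpy as np

D, beta = 1e-2, 0.5           # assumed diffusivity and decay constant
nx, dx, dt = 101, 0.01, 1e-3  # grid; D*dt/dx**2 = 0.1 < 0.5, so stable

c = np.zeros(nx)
c[nx // 2] = 1.0              # initial slug of material at the center
for _ in range(1000):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    lap[0] = lap[-1] = 0.0    # crude no-flux boundary at the ends
    c += dt * (-beta * c + D * lap)

print(f"material remaining after decay and diffusion: {c.sum():.3f}")
```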

Viewed in a certain way, the most relevant equation for reality, the time-dependent Schrödinger wave equation (semi-derived here), fits into the same diffusion-reaction form:

dΨ/dt = -(2iπV/h) Ψ + (hi/4πm) d²Ψ/dx²               (2)

Instead of reality involving the motion of a real material (perfume, radioactive radon, etc.) with a real concentration, c, in this relation the material can not be sensed directly, and the concentration, Ψ, is semi-imaginary. Here, h is Planck’s constant, i is the imaginary number, √-1, m is the mass of the real material, and V is potential energy. When dealing with reactions or charged materials, it’s relevant that V will vary with position (e.g. electrons’ energy is lower when they are near protons). The diffusivity term here is imaginary, hi/4πm, but that’s OK; Ψ is part imaginary, and we’d expect that potential energy is something of a destroyer of Ψ: the likelihood of finding something at a spot goes down where the energy is high.

The form of this diffusion is linear, a mathematical term that refers to equations where a solution that works for Ψ will also work for 2Ψ. Generally speaking, linear equations have exp() terms in their solutions, and that’s especially likely here as the only place where you see a time term is on the left. For most cases we can say that

Ψ = ψ exp(-2iπEt/h)               (3)

where ψ is not a function of anything but x (space) and E is the energy of the thing whose behavior is described by Ψ. If you take the derivative of equation 3 with respect to time, t, you get

dΨ/dt = ψ (-2iπE/h) exp(-2iπEt/h) = (-2iπE/h)Ψ.               (4)

If you insert this into equation 2, you’ll notice that the form of the first term is now identical to the second, with energy appearing identically in both terms. Divide now by exp(-2iπEt/h), and you get the following equation:

(E-V) ψ = -(h²/8π²m) d²ψ/dx²                      (5)

where ψ can be thought of as the physical concentration in space of the timey-wimey stuff. ψ is still wibbly-wobbly, but no longer timey-wimey. Now ψ-squared is the likelihood of finding the stuff somewhere at any time, and E is the energy of the thing. For most things in normal conditions, E is quantized and equals approximately kT. That is, E of the thing equals, typically, a quantized energy state that’s nearly Boltzmann’s constant times temperature.

You now want to check that the approximation in equations 3-5 was legitimate. You do this by checking if the length-scale implicit in exp(-2iπEt/h) is small relative to the length-scales of the action. If it is (and it usually is), you are free to solve for ψ at any E and V using normal mathematics, by analytic or digital means, for example this way. ψ will be wibbly-wobbly but won’t be timey-wimey. That is, the space behavior of the thing will be peculiar, with the item in forbidden locations, but there won’t be time reversal. For time reversal, you need small space features (like here) or entanglement.

Equation 5 can be considered a simple steady-state diffusion equation. The stuff whose concentration is ψ is created wherever E is greater than V, and is destroyed wherever V is greater than E. The stuff then continuously diffuses from the former area to the latter, establishing a time-independent concentration profile. E is quantized (can only be some specific values) since matter can never be created or destroyed, and it is only at specific values of E that this happens in Equation 5. For a particle in a flat box, E and ψ are found, typically, by realizing that the format of ψ must be a sin function (and ignoring an infinity). For more complex potential energy surfaces, it’s best to use a matrix solution for ψ along with non-continuous calculus. This avoids the infinity, and is a lot more flexible besides.
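
To make the matrix idea concrete, here is a minimal sketch for the flat-box case: discretize equation 5 on a grid (with V = 0 inside the box) and diagonalize. Units with ħ = m = 1 and a box of length 1 are my assumptions for convenience; the computed levels land near the known (kπ)²/2 values, and the lowest ψ comes out as the expected sine shape.

```python
# Matrix solution of (E - V) psi = -(1/2) d2psi/dx2 for V = 0 in a box,
# hbar = m = 1, box length 1, psi = 0 at the walls.
import numpy as np

n = 200
dx = 1.0 / (n + 1)
H = (np.diag(np.full(n, 1.0 / dx**2))            # -psi''/2 by central differences
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))

E, psi = np.linalg.eigh(H)   # columns of psi are the discretized states
print("lowest E levels:", np.round(E[:3], 2))    # ≈ 4.93, 19.74, 44.4
print("exact (k*pi)^2/2:", [round((k * np.pi)**2 / 2, 2) for k in (1, 2, 3)])
```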

When you detect a material in some spot, you can imagine that the space-function ψ collapses, but even that isn’t clear, as you can never know the position and velocity of a thing simultaneously, so it doesn’t collapse all that much. And as for what the stuff is that diffuses and has concentration ψ, no one knows, but it behaves like a stuff. And as to why it diffuses, perhaps it’s jiggled by unseen photons. I don’t know if this is what happens, but it’s a way I often choose to imagine reality — a moving, unseen material with real and imaginary (spiritual?) parts, whose concentration, ψ, is related to experience, but not directly experienced.

This is not the only way the equations can be rearranged. Another way of thinking of things is as the sum of path integrals — an approach that appears to me as a many-worlds version, with fixed points in time (another Dr. Who feature). In this view, every object takes every path possible between these points, and reality is the sum of all the versions, including some that have time reversals. Richard Feynman explains this path-integral approach here. If it doesn’t make more sense than my version, that’s OK. There is no version of the quantum equations that will make total, rational sense. All the true ones are mathematically equivalent — totally equal, but differing in the “meaning”. That is, if you were to impose meaning on the math terms, the meaning would be totally different. That’s not to say that all explanations are equally valid — most versions are totally wrong, but there are many, equally valid math versions to fit many, equally valid religious or philosophic world views. The various religions, I think, are uncomfortable with having so many completely different views being totally equal because (as I understand it) each wants exclusive ownership of truth. Since this is never so for math, I claim religion is the opposite of science. Religion is trying to find The Meaning of life, and science is trying to match experiential truth — and ideally useful truth; knowing the meaning of life isn’t that useful in a knife fight.

Dr. Robert E. Buxbaum, July 9, 2014. If nothing else, you now perhaps understand Dr. Who more than you did previously. If you liked this, see here for a view of political happiness in terms of the thermodynamics of free-energy minimization.

The future of steamships: steam

Most large ships and virtually all locomotives currently run on diesel power. But the diesel engine does not drive the wheels or propeller directly; the transmission would be too big and complex. Instead, the diesel engine is used to generate electric power, and the electric power drives the ship or train via an electric motor, generally with a battery bank to provide a buffer. Current diesel generators operate at 75-300 rpm and about 40-50% efficiency (not bad), but diesel fuel is expensive. It strikes me, therefore, that the next step is to switch to a cheaper fuel like coal or compressed natural gas, and convert these fuels to electricity by a partial or full steam cycle as used in land-based electric power plants.

Ship-board diesel engine, 100 MW for a large container ship

Steam powers all nuclear ships, and conventionally boiled steam provided the power for thousands of Liberty ships and hundreds of aircraft carriers during World War 2. Advanced steam turbine cycles are somewhat more efficient, pushing 60% efficiency for high-pressure, condensed-turbine cycles that consume vaporized fuel in a gas turbine and recover the waste heat with a steam boiler exhausting to vacuum. The higher efficiency of these gas/steam turbine engines means that, even for ships that burn ship-diesel fuel (so-called bunker oil) or natural gas, there can be a cost advantage to having a degree of steam power. There are a dozen or so steam-powered ships operating on the Great Lakes currently. These are mostly 700-800 feet long, and operate with 1950s-era steam turbines, burning bunker oil or asphalt. US Steel runs the “Arthur M Anderson”, “Carson J Callaway”, “John G Munson” and “Philip R Clarke”, all built in 1951/2. The “Upper Lakes Group” runs the “Canadian Leader”, “Canadian Provider”, “Quebecois”, and “Montrealais.” And then there is the coal-fired “Badger”. Built in 1952, the Badger is powered by two “Skinner UniFlow” double-acting piston engines operating at 450 psi. The Badger is cost-effective, with the low cost of the fuel making up for the low efficiency of the 50’s technology. With larger ships, more modern boilers and turbines, and with higher-pressure boilers and turbines, the economics of steam power would be far better, even for ships with modern pollution abatement.

Nuclear steam boilers can be very compact

Steam-powered ships can burn fuels that diesel engines can’t: coal, asphalts, or even dry wood, because fuel combustion can be external to the high-pressure region. Steam engines can cost more than diesel engines do, but lower fuel cost can make up for that, and the cost differences get smaller as the outputs get larger. Currently, coal costs 1/10 as much as bunker oil on a per-energy basis, and natural gas costs about 1/5 as much as bunker oil. One can burn coal cleanly and safely if the coal is dried before being loaded on the ship. Before burning, the coal would be powdered and gasified to town gas (CO + H2). The drying process removes much of the toxic impact of the coal by removing much of the mercury and toxic oxides. Gasification before combustion further reduces these problems, and reduces the tendency to form adhesions on boiler pipes — a bane of old-fashioned steam power. Natural gas requires no pretreatment, but costs twice as much as coal and requires a gas-turbine/boiler system for efficient energy use.

Today’s ships and locomotives are far bigger than in the 1950s. The current standard is an engine output of about 50 MW, or 170 MM Btu/hr of motive energy. Assuming a 50% efficient engine, the fuel use for a 50 MW ship or locomotive is 340 MM Btu/hr; locomotives only use this much when going up hill with a heavy load. Illinois coal costs, currently, about $60/ton, or $2.31/MM Btu. A 50 MW engine would consume about 13 tons of dry coal per hour, costing $785/hr. By comparison, bunker oil costs about $3/gallon, or $21/MM Btu. This is nearly ten times more than coal, or $7,140/hr for the same 50 MW output. Over 30 years of operation, the difference in fuel cost adds up to 1.5 billion dollars — about the cost of a modern container ship.
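
For anyone who wants to re-run that arithmetic, here is a minimal sketch using the 2014 prices quoted above (they will have drifted since); it reproduces the per-hour figures and lands within rounding of the quoted 30-year total.

```python
# Fuel-cost comparison for a 50 MW engine at 50% efficiency.
power_MW = 50.0
fuel_MMBtu_hr = power_MW * 3.412 / 0.50        # ~341 MM Btu/hr of fuel in

coal_per_MMBtu, oil_per_MMBtu = 2.31, 21.0     # $/MM Btu, from the text
coal_cost_hr = fuel_MMBtu_hr * coal_per_MMBtu  # ≈ $788/hr
oil_cost_hr = fuel_MMBtu_hr * oil_per_MMBtu    # ≈ $7,165/hr

savings = (oil_cost_hr - coal_cost_hr) * 24 * 365 * 30
print(f"30-year fuel-cost difference ≈ ${savings / 1e9:.1f} billion")  # ≈ $1.7B
```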

Robert E. Buxbaum, May 16, 2014. I have a long-term interest in economics, thermodynamics, history, and the technology of the 1800s. See my steam-pump, and this page dedicated to Peter Cooper: Engineer, citizen of New York. Wood power isn’t all that bad, by the way, but as with coal, you must dry the wood, or (ideally) convert it to charcoal. You can improve the power and efficiency of diesel and automobile engines and reduce the pollution by adding hydrogen. Normal cars do not use steam because there is more start-stop, and because it takes too long to fire up the engine before one can drive. For cars and drone airplanes, I suggest hydrogen fuel cells.

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher — which is to say, from asking most anything. By a good answer, I mean one that provides both a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and one that also gives a feel for why it should be so. I’ll try to provide this here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy. It’s generally associated with the kinetic energy plus potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. The average air at sea-level is taken to be at 1 atm, or 101,325 Pascals, and 15.0°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol ≈ 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.

Let’s consider a volume of this air at this standard condition, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine that the weightless balloon prevents it. In either case, the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air — expansion is work — but we are putting no work into the air as it takes no work to lift this air. The buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure will decrease as the balloon goes up. We now note that w = ∫f dz = -∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air — it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites Mountain, in the Alps, at the intersection of France, Italy and Switzerland, is 4000 m tall. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S= (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon rise is reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1 and a pressure part, R ln P2/P1. If the total ∆S=0 these two parts will exactly cancel.

Consider that at 4000m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101325 Pa). If the air were reduced to this pressure at constant temperature (∆S)T = -R ln P2/P1 where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1 where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T,  ∫P/TdV=  ∫R/V dV = R ln V2/V1= -R ln P2/P1. Similarly the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 =R ln .6085, or that 

T2 = T1 (0.6085)^(R/Cp)

T2 = T1 (0.6085)^(2/7),   where 0.6085 is the pressure ratio at 4000 m, and because, for air and most diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x .8676 = 250.0°K, or -23.15 °C. This is cold enough to provide snow  on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
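
Here is that two-line calculation as a minimal sketch, for anyone who wants to try other altitudes; the 61,660 Pa figure is the one quoted above for 4000 m.

```python
# Isentropic estimate of temperature at altitude: T2 = T1*(P2/P1)**(R/Cp).
T1 = 288.15                    # K, standard sea-level temperature
P_ratio = 61660.0 / 101325.0   # pressure at 4000 m / sea-level pressure

T2 = T1 * P_ratio ** (2.0 / 7.0)  # R/Cp = 2/7 for air
print(f"predicted T at 4000 m ≈ {T2:.1f} K, or {T2 - 273.15:.1f} °C")  # ≈ 250 K
```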

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air’s not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania, 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude about 7°C (12°F) warmer; and the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alp ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Entropy, the most important pattern in life

One evening at the Princeton grad college a younger fellow (an 18-year-old genius) asked the most simple, elegant question I had ever heard, one I’ve borrowed and used ever since: “tell me”, he asked, “something that’s important and true.” My answer that evening was that the entropy of the universe is always increasing. It’s a fundamentally important pattern in life; one I didn’t discover, but discovered to have a lot of applications and meaning. Let me explain why it’s true here, and then why I find it’s meaningful.

Famous entropy cartoon, Harris

The entropy of the universe is not something you can measure directly, but rather indirectly, from the availability of work in any corner of it. It’s related to randomness and the arrow of time. First off, here’s how you can tell if time is moving forward: put an ice-cube into hot water; if the cube melts and the water becomes cooler, time is moving forward — or, at least, it’s moving in the same direction as you are. If you can reach into a cup of warm water and pull out an ice-cube while making the water hot, time is moving backwards — or rather, you are living backwards. Within any closed system, one where you don’t add things or energy (sunlight say), you can tell that time is moving forward because the forward progress of time always leads to the lack of availability of work. In the case above, you could have generated some electricity from the ice-cube and the hot water, but not from the glass of warm water.

You can not extract work from a heat source alone; to extract work, some heat must be deposited in a cold sink. At best the entropy of the universe remains unchanged. More typically, it increases.

This observation is about as fundamental as any to understanding the world; it is the basis of entropy and the second law of thermodynamics: you can never extract useful work from a uniform-temperature body of water, say, just by making that water cooler. To get useful work, you always need some other transfer into or out of the system; you always need to make something else hotter, colder, or provide some chemical or altitude changes that can not be reversed without adding more energy back. Thus, so long as time moves forward, everything runs down in terms of work availability.

There is also a first law; it states that energy is conserved. That is, if you want to heat some substance, that change requires that you put in a set amount of work plus heat. Similarly, if you want to cool something, a set amount of heat plus work must be taken out. In equation form, we say that, for any change, q + w is constant, where q is heat, and w is work. It’s the sum that’s constant, not the individual values, so long as you count every 4.184 Joules of work as if it were 1 calorie of heat. If you input more heat, you have to add less work, and vice versa, but there is always the same sum. When adding heat or work, we say that q or w is positive; when extracting heat or work, we say that q or w is negative. Still, each 4.184 Joules counts as if it were 1 calorie.

Now, since for every path between two states q + w is the same, we say that q + w represents a path-independent quantity for the system, one we call internal energy, U, where ∆U = q + w. This is a mathematical form of the first law of thermodynamics: you can’t take q + w out of nothing, or add it to something without making a change in the properties of the thing. The only way to leave things the same is if q + w = 0. We notice also that, for any pure thing or mixture, the sum q + w for a change is proportional to the mass of the stuff; we can thus say that internal energy is an extensive quantity: q + w = n ∆u, where n is the grams of material, and ∆u is the change in internal energy per gram.

We are now ready to put the first and second laws together. We find we can extract work from a system if we take heat from a hot body of water and deliver some of it to something at a lower temperature (the ice-cube, say). This can be done with a thermopile, or with a steam engine (Rankine cycle, above), or a Stirling engine. That an engine can only extract work when there is a difference of temperatures is similar to the operation of a water wheel. Sadi Carnot noted that a water wheel is able to extract work only when there is a flow of water from a high level to a low one; similarly, in a heat engine, you only get work by taking in heat energy from a hot heat-source and exhausting some of it to a colder heat-sink. The remainder leaves as work. That is, q1 - q2 = w, and energy is conserved. The second law isn’t violated so long as there is no way you could run the engine without the cold sink. Accepting this as reasonable, we can now derive some very interesting, non-obvious truths.

We begin with the famous Carnot cycle. The Carnot cycle is an idealized heat engine with the interesting feature that it can be made to operate reversibly. That is, you can make it run forwards, taking a certain amount of heat from a hot source, producing a certain amount of work and delivering a certain amount of heat to the cold sink; and you can run the same process backwards, as a refrigerator, taking in the same amount of work and the same amount of heat from the cold sink and delivering the same amount to the hot source. Carnot showed by the following proof that all other reversible engines would have the same efficiency as his cycle and that no engine, reversible or not, could be more efficient. The proof: if an engine could be designed that will extract a greater percentage of the heat as work when operating between a given hot source and cold sink, it could be used to drive his Carnot cycle backwards. If the pair of engines were now combined so that the less-efficient engine removed exactly as much heat from the sink as the more-efficient engine deposited, the excess work produced by the more-efficient engine would leave with no effect besides cooling the source. This combination would be in violation of the second law, something that we’d said was impossible.

Now let us try to understand the relationship that drives useful energy production. The ratio of heat in to heat out has got to be a function of the in and out temperatures alone. That is, q1/q2 = f(T1, T2), and similarly q2/q1 = f(T2, T1). Now let’s consider what happens when two Carnot cycles are placed in series between T1 and T2, with the middle temperature at Tm. For the first engine, q1/qm = f(T1, Tm), and similarly for the second engine qm/q2 = f(Tm, T2). Combining these, we see that q1/q2 = (q1/qm) x (qm/q2), and therefore f(T1, T2) must always equal f(T1, Tm) x f(Tm, T2) = f(T1, Tm)/f(T2, Tm). In this relationship we see that the middle temperature, Tm, is irrelevant; the result is true for any Tm. We thus say that q1/q2 = T1/T2, and this is the limit of what you get at maximum (reversible) efficiency. You can now rearrange this to read q1/T1 = q2/T2, or to say that work, W = q1 - q2 = q2 (T1 - T2)/T2.
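
Here is a minimal numerical sketch of that last relation, for an illustrative engine between a 600 K source and a 300 K sink (temperatures chosen arbitrarily, not from the text):

```python
# Reversible (Carnot) work between a hot source and a cold sink:
# q1/T1 = q2/T2, so W = q1 - q2 = q2*(T1 - T2)/T2.
T1, T2 = 600.0, 300.0   # K, assumed source and sink temperatures
q1 = 100.0              # J, heat drawn from the hot source

q2 = q1 * T2 / T1       # heat that must be rejected to the cold sink
W = q2 * (T1 - T2) / T2 # maximum work out
print(f"work = {W:.0f} J, rejected heat = {q2:.0f} J")  # 50 J and 50 J
assert abs((q1 - q2) - W) < 1e-9  # energy is conserved: W = q1 - q2
```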

A strange result from this is that, since every process can be modeled as either a sum of Carnot engines or of engines that are less efficient, and since the Carnot engine will produce this same amount of reversible work when filled with any substance or combination of substances, we can say that this outcome, q1/T1 = q2/T2, is independent of path, and independent of substance, so long as the process is reversible. We can thus say that for all substances there is a property of state, S, such that the change in this property is ∆S = ∑q/T for all the heat in or out. In a more general sense, we can say ∆S = ∫dq/T, where this state property, S, is called the entropy. Since, as before, the amount of heat needed is proportional to mass, we can say that S is an extensive property: S = n s, where n is the mass of stuff, and s is the entropy per mass.

Another strange result comes from the efficiency equation. Since, for any engine or process that is less efficient than the reversible one, we get less work out for the same amount of q1, we must have more heat rejected than q2. Thus, for an irreversible engine or process, q1 - q2 < q2 (T1 - T2)/T2, and q2/T2 is greater than q1/T1. As a result, the total change in entropy, ∆S = q2/T2 - q1/T1 > 0: the entropy of the universe always goes up or stays constant; it never goes down. Another final observation is that there must be a zero temperature that nothing can go below, or both q1 and q2 could be positive and energy would not be conserved. Our observations of time and energy conservation lead us to expect that there must be a minimum temperature, T = 0, that nothing can be colder than. We find this temperature at -273.15°C. It is called absolute zero; nothing has ever been cooled to be colder than this, and now we see that, so long as time moves forward and energy is conserved, nothing ever will be found colder.

Typically we either say that S is zero at absolute zero, or at room temperature.

We’re nearly there. We can define the entropy of the universe as the sum of the entropies of everything in it. From the above treatment of work cycles, we see that this total of entropy always goes up, never down. A fundamental fact of nature, and (in my world view) a fundamental view into how God views us and the universe. First, that the entropy of the universe goes up only, and not down (in our time-forward framework) suggests there is a creator for our universe — a source of negative entropy at the start of all things, or a reverser of time (it’s the same thing in our framework). Another observation, God likes entropy a lot, and that means randomness. It’s his working principle, it seems.

But before you take me now for a total libertine and say that, since science shows that everything runs down, the only moral take-home is to teach: “Let us eat and drink,”… “for tomorrow we die!” (Isaiah 22:13), I should note that this randomness only applies to the universe as a whole. The individual parts (planets, laboratories, beakers of coffee) do not maximize entropy, but lead to a minimization of available work, and this is different. You can show that the maximization of S, the entropy of the universe, does not lead to the maximization of s, the entropy per gram of your particular closed space, but rather to the minimization of a related quantity, µ, the free energy, or usable work per gram of your stuff. You can show that, for any closed system at constant temperature, µ = h - Ts, where s is entropy per gram as before, and h is called enthalpy; h is basically the potential energy of the molecules, and is lowest at low temperature and high order. For a closed system we find there is a balance between s, something that increases with increased randomness, and h, something that decreases with increased randomness. Put water and air in a bottle, and you find that the water is mostly on the bottom of the bottle, the air is mostly on the top, and the amount of mixing in each phase is not the maximum disorder, but rather the one you’d calculate will minimize µ.

As a protein folds, its randomness and entropy decrease, but its enthalpy decreases too; the net effect is one precise fold that minimizes µ.

This is the principle that God applies to everything, including us, I’d guess: a balance. Take protein folding: some configurations have high disorder and high h; some have low disorder and very low h. The result is a temperature-dependent balance. If I were to take a moral imperative from this balance, I’d say it matches the sayings of Solomon the wise: “there is nothing better for a person under the sun than to eat, drink and be merry. Then joy will accompany them in their toil all the days of the life God has given them under the sun.” (Ecclesiastes 8:15). There is toil here as well as pleasure; directed activity balanced against personal pleasures. This is the µ = h − Ts minimization where, perhaps, T is economic wealth. Thus, the richer a society, the less toil is ideal and the more freedom. Of necessity, poor societies are repressive.

Dr. Robert E. Buxbaum, Mar 18, 2014. My previous thermodynamic post concerned the thermodynamics of hydrogen production. It’s not clear that all matter goes forward in time, by the way; antimatter may go backwards, so it’s possible that antimatter apples fall up. On the microscopic scale, time becomes flexible, so it seems you can make a time machine. Religious leaders tend to be anti-science, I’ve noticed, perhaps because scientific miracles can be done by anyone, available even to those who think “wrong” or say the wrong words. And that’s that: all having been heard, do what’s right and enjoy life too, as important a pattern for life as you’ll find, I think. The relationship between free energy and societal organization is from my thesis advisor, Dr. Ernest F. Johnson.

Ivanpah’s solar electric worse than trees

Recently the DoE committed 1.6 billion dollars to the completion of the last two of three solar-natural-gas-electric plants on a 10 mi2 site at Lake Ivanpah in California. The site is rated to produce 370 MW of power, in a facility that uses far more land than nuclear power, at a cost significantly higher than nuclear. The 3900 MW Drax plant (UK) cost 1.1 billion dollars and produces 10 times more power on a much smaller site. Ivanpah needs a lot of land because its generators require 173,500 billboard-size, sun-tracking mirrors to heat boilers atop three 750-foot towers (2 1/2 times the Statue of Liberty). The boilers feed steam to low-pressure, low-efficiency (28%) Siemens turbines. At night, natural gas provides the heat to make steam, but only at the same low efficiency. Siemens makes higher-efficiency turbine plants (59%), but these cannot be used here because the solar-oven temperature is only 900°F (500°C), while normal Siemens plants operate at 3650°F (2000°C).
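A quick cost-per-watt comparison, using only the dollar and wattage figures quoted above (the $1.6 B is just the DoE commitment for the last two towers, so Ivanpah’s true number is higher still):

```python
# Dollars per watt of rated capacity, from the figures quoted in the text.
ivanpah = 1.6e9 / 370e6   # DoE commitment alone, over the full 370 MW rating
drax = 1.1e9 / 3900e6
print(f"Ivanpah: ${ivanpah:.2f}/W, Drax: ${drax:.2f}/W")  # $4.32/W vs $0.28/W
```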

The Ivanpah thermal solar-natural gas project will look like the Crescent Dunes thermal-solar project, shown here, but will be bigger.

The first construction at the Ivanpah thermal solar-natural-gas project; each circle of mirrors extends to cover about 2 square miles of the 10 mi2 site.

So far, the first of the three towers is operational, but it has been producing at only 30% of its rated low-efficiency output. These shortfalls are described as “growing pains.” There are also problems with cooked birds, blinded pilots, and the occasional fire from the misaligned death ray — more pains, I guess. There is also the problem of lightning. When hit by lightning, the mirrors shatter into millions of shards of glass over a 30-foot radius, according to Argus, the mirror-cleaning company. This presents a less-than-attractive environmental impact.

As an exercise, I thought I’d compare this site’s electric output to the amount one could generate using a wood-burning boiler fed by trees growing on a similar-sized (10 sq. mile) site. Trees are cheap, but only about 10% efficient at converting solar power to chemical energy, so you might imagine that trees could not match the power of the Ivanpah plant. But dry wood burns hot, at 1100 – 1500°C, so the efficiency of a wood-powered steam turbine is higher: about 45%.

About 820 MW of sunlight falls on every 1 mi2 plot, or 8200 MW for the Ivanpah site. If trees convert 10% of this to chemical energy, and we convert 45% of that to electricity, we find the site would generate 369 MW of electric power, exactly the output Ivanpah is rated for. Trees are far cheaper than mirrors, electricity from wood burning typically costs about 4¢/kWh, and the environmental impact of tree farming is likely to be less than that of the solar mirrors mentioned above.
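The arithmetic, for anyone who wants to check it:

```python
# Tree-farm power estimate from the numbers above.
insolation = 820     # MW of sunlight per square mile (figure used above)
area = 10            # mi^2, the size of the Ivanpah site
tree_eff = 0.10      # solar-to-chemical efficiency of trees
turbine_eff = 0.45   # wood-fired, high-temperature steam turbine

print(f"{insolation * area * tree_eff * turbine_eff:.0f} MW")  # 369 MW
```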

There is another advantage to the high temperature of the wood fire. The use of high-temperature turbines means that any power made at night with natural gas is produced at higher efficiency. The Ivanpah turbines output at low temperature and low efficiency when burning natural gas at night, and thus deliver about half the power of a normal Siemens plant for every BTU of gas. Because of this, the Ivanpah plant may use as much natural gas to make its 370 MW during a 12-hour night as a higher-efficiency system would operating 24 hours, day and night. The additional generation from solar might thus be zero.
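Here is the gas comparison worked out: a sketch, assuming the 370 MW rating, the 28% and 59% efficiencies quoted above, and a 12-hour night.

```python
# Thermal energy from natural gas: Ivanpah at night vs. a 59% plant all day.
power = 370.0                     # MW of electric output
gas_ivanpah = power / 0.28 * 12   # MWh of gas heat for a 12-hour night
gas_modern = power / 0.59 * 24    # MWh of gas heat for a full 24 hours
print(f"Ivanpah, 12 hr:   {gas_ivanpah:.0f} MWh of gas heat")  # ~15,860
print(f"59% plant, 24 hr: {gas_modern:.0f} MWh of gas heat")   # ~15,050
# Roughly the same gas burned for half the generating hours.
```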

If you think the problems here are with the particular design, I should note that the Ivanpah solar project is just one of several that our Obama government is funding, and none are doing particularly well. As another example, the $1.45 B solar project on farmland near Gila Bend, Arizona is rated to produce 35 MW, about 1/10 of the Ivanpah project at 2/3 the cost. It was built in 2010 and so far has not produced any power.

Robert E. Buxbaum, March 12, 2014. I’ve tried using wood to make green gasoline. No luck so far. And I’ve come to doubt the likelihood that we can stop global warming.

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981) working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to think through the issues again. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least — an indication of the difficulties involved, and of a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid-70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better: safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was, and is, frighteningly simple — far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of carbon bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to split the large, unstable uranium atoms into smaller, more stable ones. Water circulating through the pile removed the heat released, and control was maintained by people lifting and lowering cadmium control rods while standing on the pile.

A fusion reactor requires high temperature or energy to make anything happen. Fusion energy is produced by combining small, unstable, heavy-hydrogen atoms into helium, a bigger, more stable atom (see figure). To run this reaction you need to operate at the equivalent of about 500,000,000°C, and containing it requires (typically) a magnetic bottle — something far more complex than a pile of graphite bricks. The reward is smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky, e.g., there was no obvious heat-transfer or control method, but fusion seemed important enough, and the problems seemed manageable enough, that fusion power seemed worth pursuing — with just enough difficulties to make it a challenge.

Basic fusion reaction: deuterium + tritium react to give helium, a neutron and energy.

The plan at Princeton, and most everywhere, was to use a TOKAMAK, a doughnut-shaped reactor like the one shown below, but roughly twice as big; TOKAMAK is a Russian acronym. The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of protons and electrons) and heated to 300,000,000°C by a current generated in the TOKAMAK by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss with fusion heating. The number density of hydrogen atoms per volume, n, is proportional to the magnetic strength. This is important because the fusion heat rate per volume is proportional to n-squared, n2, while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was the hot plasma going to the reactor surface. Because of this, a heat-balance ratio, heat in divided by heat out, was seen to be important, and it is more or less proportional to nτ. As the target temperatures increased, we found we needed larger and larger nτ to get a positive heat balance, and this translated to ever-larger reactors and ever-stronger magnetic fields. Even here there was a limit, about 1 billion Kelvin, a temperature where the fusion reaction goes backward and no net energy is produced. The Princeton design was huge, with super-strong supermagnets, and was to operate at 300 million °C, near the top of the reaction curve. If the temperature went much above or below this, the fire would go out. There was no room for error, but relatively little energy output per volume — compared to fission.
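The scaling is easy to see in a few lines (the rate constants are placeholders; only the proportionalities matter):

```python
# Heating goes as n^2; conduction loss goes as n/tau; the ratio goes as n*tau.
def heat_ratio(n, tau):
    heating = n**2         # fusion heating per volume, up to a rate constant
    loss = n / tau         # conduction loss per volume, up to a constant
    return heating / loss  # proportional to n*tau

print(heat_ratio(1.0, 1.0))  # 1.0, the baseline
print(heat_ratio(2.0, 1.0))  # 2.0: double the density, double the ratio
print(heat_ratio(1.0, 2.0))  # 2.0: double the confinement time, same gain
```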

Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above — the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction, D + T –> He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy an electron gets moving over one Volt, but only 1/13 the energy of a fission reaction. By comparison, the energy of water-forming, H2 + 1/2 O2 –> H2O, is the equivalent of two electrons moving over 1.2 Volts, or 2.4 electron volts (eV), some 7 million times less than fusion.
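Checking that ratio:

```python
# Energy per event: D-T fusion vs. burning hydrogen to water.
fusion = 17.6e6   # eV per D + T -> He + n event
water = 2 * 1.2   # eV: two electrons moving through 1.2 V
print(f"{fusion / water:.1e}")   # ~7.3 million times more energy per event
```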

The Princeton design involved reacting 40 gm/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (and relatively untried) design. Of the roughly 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid — if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could easily be absorbed by any electrical grid. The output was high and steady, and could not easily be adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be adjusted easily (and a nuclear plant’s with a little more difficulty).
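The quoted powers check out, roughly, from the 8 mol/hr of helium and 17.6 MeV per reaction:

```python
# Thermal and electric power from the Princeton design numbers above.
AVOGADRO = 6.022e23
EV = 1.602e-19                                 # Joules per electron volt

events_per_s = 8.0 * AVOGADRO / 3600           # 8 mol/hr of He -> reactions/s
thermal = events_per_s * 17.6e6 * EV           # Watts
print(f"thermal:  {thermal/1e6:.0f} MW")       # ~3770 MW, i.e. the ~4000 quoted
print(f"electric: {0.38*thermal/1e6:.0f} MW")  # ~1430 MW at 38% efficiency
```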

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds, so the heat loss by conduction was 6.2 GW per mol. This must be matched by the molar heat of reaction that stays in the plasma: 17.6 MeV times Faraday’s constant, 96,485, divided by 4 seconds (= 430 GW/mol reacted), divided by 5, since only 1/5 of the reaction energy remains in the plasma (= 86 GW/mol); the other 4/5 leaves with the neutron. Conduction alone thus demands that about 7% of the hydrogen react per pass; with radiation losses included, the number rises to 9% or more. Burn a much higher or lower fraction of the hydrogen and you had problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever-bigger reactors.
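Here is the heat balance redone in a few lines. Conduction alone comes out near 7%; it is the radiation losses (not computed here) that push the requirement up to the 9% and more quoted above:

```python
# Burn-fraction heat balance, from the numbers in the text.
FARADAY = 96485.0    # C/mol: converts eV per event into Joules per mol

loss = 82.0 * 3.0e8 / 4.0                  # W/mol of ions: Cp * T / tau
stays = 17.6e6 * FARADAY / 4.0 / 5.0       # W/mol reacted that stays in plasma
print(f"conduction loss: {loss/1e9:.1f} GW/mol")    # 6.2
print(f"plasma heating:  {stays/1e9:.1f} GW/mol")   # ~85
print(f"burn fraction:   {loss/stays:.1%}")         # ~7.2%, before radiation
```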

There was also a material-handling issue: to get enough fuel into the center of the reactor, quite a lot of radioactive hydrogen gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at ultrasonic velocity; any slower and the spheres would evaporate before reaching the center. Since the 40 grams per hour reacted was only 9% of the feed, it became clear that we had to be ready to produce and inject about 1 pound per hour of tiny spheres. These “snowballs-in-hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound or so per hour of unburned hydrogen and helium ash, keeping the pressure near total vacuum. You then had to purify the hydrogen from the helium ash and remake the little spheres to feed back to the reactor. There were no easy engineering problems here, but I found them enjoyable enough. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.
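And the fuel-handling throughput implied by that burn fraction:

```python
# Feed rate when only 9% of the injected hydrogen reacts per pass.
burned = 40.0          # g/hr actually reacted
feed = burned / 0.09   # g/hr that must be injected as pellets
print(f"{feed:.0f} g/hr, about {feed/453.6:.1f} lb/hr of frozen spheres")
```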

Yet another engineering challenge was finding a material for the first wall — the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, all the conduction and radiation heat, about 1000 MW, is deposited in the first wall and has to be conducted away. Conducting this heat means the wall must carry an enormous coolant flow and withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after (or if) the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature; it has to be made from the neutron created in the reaction and from lithium in a breeder blanket, Li + n –> He + T. I examined all the options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.

Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were claims that fusion would be safer than fission, but given the complexity of fusion and the improvements in fission, I am not convinced fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle, technologies that make it very unlikely we will run out of fission material any time soon.

The main near-term advantage I see for fusion over fission is that there are fewer radioactive products; see the comparison. A secondary advantage is neutrons: fusion reactors make excess neutrons that can be used to make tritium or other unusual elements, and a need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages, but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.