Category Archives: Physics

Of God and Hubble

Edwin Hubble and Andromeda Photograph

Perhaps my favorite proof of God is that, as best we can tell using the best science we have, everything we see today popped into existence some 14 billion years ago. The event is called “the big bang,” and before it, it appears, there was nothing. After it, there was everything, and as best we can tell, not an atom has popped into existence since. I see this as the miracle of creation: ex nihilo, Genesis, something from nothing.

The fellow who saw this miracle first was an American, Edwin P. Hubble, born 1889. Hubble got a law degree and then a PhD (physics) studying photographs of faint nebulae. That is, he studied the small, glowing, fuzzy areas of the night sky, producing a PhD thesis titled “Photographic Investigations of Faint Nebulae.” Hubble served in the army (WWI) and continued his photographic work at the Mount Wilson Observatory, home to the world’s largest telescope at the time. He concluded that many of these fuzzy nebulae were complete galaxies outside of our own. Most of the stars we see unaided are located relatively near us, in our own “Milky Way” galaxy, a swirling star blob that appears to be some 250,000 light years across. Through study of photographs of the Andromeda “nebula,” Hubble concluded it was another swirling galaxy quite like ours, but some 900,000 light years away. (A light year is about 5,900,000,000,000 miles, the distance light travels in a year.) Finding another galaxy was a wonderful discovery; better yet, there were more swirling galaxies besides Andromeda: about 100 billion of them, we now think. Each galaxy contains about 100 billion stars; there is plenty of room for intelligent life.

Emission spectrum from Galaxy NGC 5181. The bright hydrogen β line should be at 4861.3 Å, but it’s at about 4900 Å. This difference tells you the speed of the galaxy.

But the discovery of galaxies beyond our own is not what Hubble is most famous for. Hubble was able to measure the distance to some of these galaxies, mostly by their apparent brightness, and was able to measure the speed of the galaxies relative to us by use of the Doppler shift, the same phenomenon that causes a train whistle to sound different when the train is coming towards you than when it is going away. In this case, he used the frequency spectrum of light; see, for example, the spectrum at right for NGC 5181. The spectral lines of light from the galaxy are shifted to the red, to longer wavelengths. Hubble picked some recognizable spectral line, like the hydrogen β emission line, and determined the galactic velocity by the formula,

V= c (λ – λ*)/λ*.

In this equation, V is the velocity of the galaxy relative to us, c is the speed of light, 300,000,000 m/s, λ is the observed wavelength of the particular spectral line, and λ* is the wavelength observed for non-moving sources. Hubble found that all the distant galaxies were moving away from us, and some were moving quite fast. What’s more, the speed of a galaxy away from us was roughly proportional to its distance. How odd. There were only two explanations for this: (1) all other galaxies were propelled away from us by some earth-based anti-gravity that became more powerful with distance, or (2) the whole universe is expanding at a constant rate, and thus every galaxy sees itself moving away from every other galaxy at a speed proportional to the distance between them.
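As a worked example, here is a minimal sketch of the calculation, using the wavelengths in the NGC 5181 caption above:

```python
# Doppler velocity of NGC 5181 from the hydrogen beta line shift,
# V = c (lambda - lambda*) / lambda*, with numbers from the caption above.
c = 3.0e8           # speed of light, m/s
lam_rest = 4861.3   # hydrogen beta wavelength at rest, Angstroms
lam_obs = 4900.0    # approximate observed wavelength, Angstroms

V = c * (lam_obs - lam_rest) / lam_rest
print(f"V = {V:.3g} m/s")   # roughly 2.4e6 m/s, a bit under 1% of light speed
```

A shift of less than 1% in wavelength corresponds to a recession speed of over two million meters per second.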

This second explanation seems a lot more likely than the first, but it suggests something very interesting. If the speed is proportional to the distance, and you carry the motion backwards in time, it seems there must have been a time, some 14 billion years ago, when all matter was in one small bit of space. It seems there was one origin spot for everything, and one origin time when everything popped into existence. This is evidence for creation, even for God. The term “Big Bang” comes from a rival astronomer, Fred Hoyle, who found the whole creation idea silly. With each new observation of a galaxy moving away from us, the idea became that much less silly. Besides, it’s long been known that the universe can’t be uniform and endless.
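You can run the extrapolation backward yourself: if speed is proportional to distance, V = H·d, then the time since everything was together is t = d/V = 1/H. The Hubble constant below is an assumed, modern value of about 70 km/s per megaparsec (not a number quoted above); with it, the estimate lands near 14 billion years:

```python
H0 = 70.0               # assumed Hubble constant, km/s per megaparsec
km_per_Mpc = 3.086e19   # kilometers in a megaparsec
sec_per_year = 3.156e7  # seconds in a year

t = (km_per_Mpc / H0) / sec_per_year   # t = 1/H, converted to years
print(f"time since the big bang: {t:.3g} years")   # about 1.4e10
```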

Whatever we call the creation event, we can’t say it was an accident: a lot of stuff popped out at one time, and nothing at all similar has happened since. Nor can we call it a random fluctuation since there are just too many stars and too many galaxies in close proximity to us for it to be the result of random atoms moving. If it were all random, we’d expect to see only one star and our one planet. That so much stuff popped out in so little time suggests a God of creation. We’d have to go to other areas of science to suggest it’s a personal God, one nearby who might listen to prayer, but this is a start. 

If you want to go through the Hubble calculations yourself, you can find pictures and spectra here for the 24 or so original galaxies studied by Hubble: http://astro.wku.edu/astr106/Hubble_intro.html. Based on your analysis, you’ll likely calculate a slightly different time for creation from the standard 14 billion years, but you’ll find you calculate something close to what Hubble did. To do better, you’ll need to look deeper into space, and that takes a better telescope, e.g. the “Hubble space telescope.”

Robert E. Buxbaum, October 28, 2018.

Isotopic effects in hydrogen diffusion in metals

For most people, there is a fundamental difference between solids and fluids. Solids have long-term permanence with no apparent diffusion; liquids diffuse and lack permanence. Put a penny on top of a dime, and 20 years later the two coins are as distinct as ever. Put a layer of colored water on top of plain water, and within a few minutes you’ll see the coloring diffuse into the plain water, or (if you think of it the other way) the plain water diffuse into the colored.

Now consider the transport of hydrogen in metals, the technology behind REB Research’s metallic membranes and getters. The metals are clearly solid, keeping their shapes and properties for centuries. Still, hydrogen flows into and through these metals at the rate of a light breeze, about 40 cm/minute. Another way of saying this is that we transfer 30 to 50 cc/min of hydrogen through each cm² of membrane at 200 psi and 400°C; divide the volume by the area, and you’ll see that the hydrogen really moves through the metal at a nice clip. It’s like a normal filter, but it’s 100% selective to hydrogen. No other gas goes through.
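The divide-volume-by-area step, as a one-line sketch using the mid-range flow number:

```python
flow = 40.0    # cc/min of hydrogen through the membrane (mid-range of 30-50)
area = 1.0     # cm^2 of membrane
v = flow / area       # cc/min per cm^2 is a linear speed, cm/min
print(v, v / 60.0)    # 40 cm/min, about 0.67 cm/s
```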

To explain why hydrogen passes through the solid metal membrane this way, we have to start talking about quantum behavior. It was the quantum behavior of hydrogen that first interested me in hydrogen, some 42 years ago. I used it to explain why water was wet. Below, you will find something a bit more mathematical, a quantum explanation of hydrogen motion in metals. At REB we recently put these ideas towards building a membrane system for concentration of heavy hydrogen isotopes. If you like what follows, you might want to look up my thesis. This is from my 3rd appendix.

Although no one quite understands why nature should work this way, it seems that nature works by quantum mechanics (and entropy). The basic idea of quantum mechanics, as you may know, is that confined atoms can only occupy specific, quantized energy levels, as shown below. The energy difference between the lowest energy state and the next level is typically high. Thus, most of the hydrogen atoms in a metal will occupy only the lower state, the so-called zero-point-energy state.

A hydrogen atom, shown occupying an interstitial position between metal atoms (above), is also occupying quantum states (below). The lowest state, ZPE is above the bottom of the well. Higher energy states are degenerate: they appear in pairs. The rate of diffusive motion is related to ∆E* and this degeneracy.

The fraction occupying a higher energy state is calculated as c*/c = exp (-∆E*/RT), where ∆E* is the molar energy difference between the higher energy state and the ground state, R is the gas constant, and T is temperature. When thinking about diffusion, it is worthwhile to note that this energy is likely temperature dependent. Thus ∆E* = ∆G* = ∆H* – T∆S*, where the asterisk indicates the key energy level where diffusion takes place — the activated state. If ∆E* is mostly elastic strain energy, we can assume that ∆S* is related to the temperature dependence of the elastic strain.

Thus,

∆S* = -(∆E*/Y) dY/dT

where Y is the Young’s modulus of elasticity of the metal. For hydrogen diffusion in metals, I find that ∆S* is typically small, while it is often significant for the diffusion of other atoms: carbon, nitrogen, oxygen, sulfur…
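To get a feel for the c*/c fraction above, here is a quick sketch; the activation energy is an assumed, illustrative value, not one from the text:

```python
import math

R = 8.314     # gas constant, J/(mol K)
T = 673.0     # 400 C, the membrane temperature mentioned earlier
dE = 25.0e3   # assumed activation energy, J/mol (illustrative only)

fraction = math.exp(-dE / (R * T))   # c*/c, share of atoms in the upper level
print(f"c*/c = {fraction:.4f}")      # about 0.011: roughly 1% are activated
```

Even at 400°C, only about one atom in a hundred sits at the activated level with this barrier; the rest wait in the ground state.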

The rate of diffusion is now calculated assuming a three-dimensional drunkard’s walk where the step length is a constant, a. Rayleigh showed that, for a simple cubic lattice, this becomes:

D = a²/6τ

where a is the distance between interstitial sites and τ is the average time for crossing. For hydrogen in a BCC metal like niobium or iron, D = a²/9τ; for an FCC metal, like palladium or copper, it’s D = a²/3τ. A nice way to think about τ is to note that it is only at high energy that a hydrogen atom can cross from one interstitial site to another, and, as we noted, most hydrogen atoms will be at lower energies. Thus,

1/τ = ω c*/c = ω exp (-∆E*/RT)

where ω is the approach frequency, the inverse of the time it takes to go from the left interstitial position to the right one. When I was doing my PhD (and still likely today) the standard approach of physics writers was to use a classical formulation for this time scale based on the average speed of the interstitial. Thus, ω = (1/2a)√(kT/m), and

1/τ = (1/2a)√(kT/m) exp (-∆E*/RT).

In the above, m is the mass of the hydrogen atom, 1.66 × 10⁻²⁴ g for protium, and twice that for deuterium, etc., a is the distance between interstitial sites, measured in cm, T is temperature, Kelvin, and k is the Boltzmann constant, 1.38 × 10⁻¹⁶ erg/K. This formulation correctly predicts that heavier isotopes will diffuse slower than light isotopes, but it predicts, incorrectly, that at all temperatures the diffusivity of deuterium is 1/√2 that of protium, and that the diffusivity of tritium is 1/√3 that of protium. It also suggests that the activation energy of diffusion will not depend on isotope mass. I noticed that neither of these predictions is borne out by experiment, and came to wonder if it would not be more correct to assume that ω represents the motion of the lattice, breathing, and not the motion of a highly activated hydrogen atom breaking through an immobile lattice. This thought is borne out by experimental diffusion data, where you describe hydrogen diffusion as D = D° exp (-∆E*/RT).
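To see the classical prediction explicitly, here is a sketch of that formulation; the interstitial spacing and activation term are assumed, illustrative values, and the isotope ratio comes out exactly √2 no matter what they are:

```python
import math

k = 1.38e-16    # Boltzmann constant, erg/K
T = 673.0       # temperature, K
a = 2.5e-8      # assumed interstitial spacing, cm (illustrative)
m_H = 1.66e-24  # protium mass, g

def D_classical(m, dE_over_RT=8.0):   # activation term assumed, illustrative
    omega = math.sqrt(k * T / m) / (2 * a)   # classical approach frequency, 1/s
    return a**2 / 6 * omega * math.exp(-dE_over_RT)  # D = a^2/(6 tau)

ratio = D_classical(m_H) / D_classical(2 * m_H)   # protium vs deuterium
print(ratio, math.sqrt(2))   # identical: the model forces a sqrt(2) ratio
```

Since only the √(1/m) prefactor differs between isotopes, the barrier and spacing cancel, and the ratio is fixed at √2 at every temperature — exactly the behavior experiment does not show.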

[Table: experimental D° and ∆E* values for hydrogen isotope diffusion in metals.]

You’ll notice from the above that D° hardly changes with isotope mass, in complete contradiction to the classical model above. Also note that ∆E* is very isotope dependent. This too contradicts the classical formulation. Further, to the extent that D° does change with isotope mass, D° gets larger for heavier hydrogen isotopes. I assume that small difference is the entropy effect on ∆E* mentioned above. There is no simple square-root-of-mass behavior, in contrast to most of the books we had in grad school.

As for why ∆E* varies with isotope mass, I found that I could get a decent explanation of my observations if I assumed that the isotope dependence arose from the zero point energy. Heavier isotopes of hydrogen will have lower zero-point energies, and thus ∆E* will be higher for heavier isotopes of hydrogen. This seems like a far better approach than the semi-classical one, where ∆E* is isotope independent.

I will now go a bit further than I did in my PhD thesis. I’ll make the general assumption that the energy well is sinusoidal, or rather that it consists of two parabolas, one opposite the other. The ZPE is easily calculated for parabolic energy surfaces (harmonic oscillators). I find that ZPE = (h/aπ) √(∆E/m), where m is the mass of the particular hydrogen atom, h is Planck’s constant, 6.63 × 10⁻²⁷ erg-sec, and ∆E is ∆E* + ZPE. For my PhD thesis, I didn’t think to calculate the ZPE and thus the isotope effect on the activation energy. I now see how I could have done it relatively easily, e.g. by trial and error, and a quick estimate shows it would have worked nicely. Instead, for my PhD, Appendix 3, I only looked at D°, and found that the values of D° were consistent with the idea that ω is about 0.55 times the Debye frequency, ω ≈ 0.55 ωD. The slight tendency for D° to be larger for heavier isotopes was explained by the temperature dependence of the metal’s elasticity.
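A sketch of that ZPE argument, with assumed, illustrative values for the spacing and well depth (not numbers from the text); the point is only that the deuterium ZPE is 1/√2 of protium’s, so the heavier isotope starts lower in the well and faces a higher ∆E*:

```python
import math

h = 6.63e-27    # Planck's constant, erg-sec
a = 2.5e-8      # assumed interstitial spacing, cm (illustrative)
dE = 1.0e-12    # assumed well depth per atom, erg (illustrative)
m_H = 1.66e-24  # protium mass, g

def zpe(m):                      # ZPE = (h / a pi) sqrt(dE / m)
    return h / (a * math.pi) * math.sqrt(dE / m)

zpe_H, zpe_D = zpe(m_H), zpe(2 * m_H)
extra_barrier = zpe_H - zpe_D    # deuterium starts lower, so its dE* is higher
print(zpe_H, zpe_D, extra_barrier)
```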

Two more comments based on the diagram I presented above. First, notice that there is a middle, split level of energies. This was an explanation I’d put forward for quantum-tunneling atomic migration that some people had seen at energies below the activation energy. I don’t know if this observation was a reality or an optical illusion, but I present the energy picture so that you’ll have the beginnings of a description. The other thing I’d like to address is a question you may have had: why is there no zero-point energy effect at the activated energy state? Such a zero-point energy difference would cancel the one at the ground state and leave you with no isotope effect on activation energy. The simple answer is that all the data showing the isotope effect on activation energy, table A3-2, was for BCC metals. BCC metals have an activation energy barrier, but it is not caused by physical squeezing between atoms, as for an FCC metal, but by a lack of electrons. In a BCC metal there is no physical squeezing at the activated state, so you’d expect no ZPE there. This is not the case for FCC metals, like palladium, copper, or most stainless steels. For these metals there is a much smaller, or non-existent, isotope effect on ∆E*.

Robert Buxbaum, June 21, 2018. I should probably try to answer the original question about solids and fluids, too: why solids appear solid, and fluids not. My answer has to do with quantum mechanics: energies are quantized, and there is always a ∆E* for motion. Solid materials are those where the time between atomic jumps, 1/[ω exp (-∆E*/RT)], has units of centuries. Thus, our ability to understand the world is based on the least understandable bit of physics.

Magnetic separation of air

As some of you will know, oxygen is paramagnetic, attracted slightly by a magnet. Oxygen’s paramagnetism is due to the two unpaired electrons in every O2 molecule. Oxygen has a triple-bond structure as discussed here (much of the chemistry you were taught is wrong). Virtually every other common gas is diamagnetic, repelled by a magnet. These include nitrogen, water, CO2, and argon — all diamagnetic. As a result, you can do a reasonable job of extracting oxygen from air by the use of a magnet. This is awfully cool, and could make for a good science fair project, if anyone is of a mind.

But first some math, or physics, if you like. To a good approximation, the magnetization of a material is M = CH/T, where M is magnetization, H is magnetic field strength, C is the Curie constant for the material, and T is absolute temperature.

Ignoring for now the difference between entropy and internal energy, and thinking only in terms of work derived by lowering a magnet towards a volume of gas, we can say that the work extracted, and thus the decrease in energy of the magnetic gas, is ∫H dM = MH/2. At constant temperature and pressure, we can say ∆G = -CH²/2T.

The maximum magnetization you’re likely to get with any permanent magnet (not achieved to date) is about 50 Tesla, or 40,000 ampere meters. At 20°C, the per-mol magnetic susceptibility of oxygen is 1.34×10⁻⁶. This suggests that the Curie constant is 1.34×10⁻⁶ × 293 = 3.93×10⁻⁴. Applying this value to oxygen in a 50 Tesla magnet at 20°C, we find the energy difference ∆G is 1072 J/mole = RT ln ß, where ß is a concentration ratio factor between the O2 content of the magnetized and un-magnetized gas, ß = C1/C2.

At room temperature, 298 K, ß = 1.6, and thus we find that the maximum oxygen concentration you’re likely to get is about 1.6 × 21% = 33%. It’s slightly more than this due to nitrogen’s diamagnetism, but this effect is too small to matter. What does matter is that 33% O2 is a good amount for a variety of medical uses.
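The arithmetic above can be checked in a few lines. The inputs are the text’s numbers; the computed ß lands near 1.5-1.6, the small gap from the quoted 1.6 being rounding:

```python
import math

chi = 1.34e-6   # per-mol magnetic susceptibility of O2 at 20 C
T = 293.0       # 20 C, in kelvin
C = chi * T     # Curie constant, about 3.93e-4
H = 4.0e4       # field-strength figure used in the text
R = 8.314       # gas constant, J/(mol K)

dG = C * H**2 / (2 * T)          # about 1072 J/mol, as in the text
beta = math.exp(dG / (R * T))    # concentration ratio factor
print(dG, beta, beta * 21)       # ~1072 J/mol, ~1.55, ~33% O2
```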

I show below my simple design for a magnetic O2 concentrator. The dotted line is a permeable membrane of no selectivity — with a little O2 permeability the design will work better. All you need is a blower or pump. A coffee filter could serve as the membrane.

This design is as simple as the standard membrane-based O2 concentrator (those based on semi-permeable membranes), but this design should require less pressure differential: just enough to overcome the magnet. Less pressure means the blower can be smaller and less noisy, with less energy use. I figure this could be really convenient for people who need portable oxygen. With current magnets it would take 4-5 stages, or low temperatures, to reach this concentration; still, this design could have commercial use, I’d think.

On the theoretical end, an interesting thing I find concerns the effect on the entropy of the magnetic oxygen. (Please ignore this paragraph if you have not learned statistical thermodynamics.) While you might imagine that magnetization decreases entropy, other things being equal, because the molecules are somewhat aligned with the field, I’ve come to realize that, temperature and pressure being fixed, entropy is likely higher. A sea of semi-aligned molecules will have a slightly higher heat capacity than non-aligned molecules because the vibrational Cp is higher, other things being equal. Thus, unless I’m wrong, the temperature of the gas will be slightly lower in the magnetic area than in the non-magnetic area. Temperature and pressure are not the same within the separator as outside, by the way; the blower is something of a compressor, though a much less energy-intense one than used for most air separators. Because of the blower, both the magnetic and the non-magnetic air will be slightly warmer than the surroundings (∆T = blower work/Cp). This heat will be mostly lost when the gas leaves the system; that is, when it flows to lower pressure, both gas streams will be essentially at room temperature. Again, this is not the case with the classic membrane-based oxygen concentrators — there the nitrogen-rich stream is notably warm.

Robert E. Buxbaum, October 11, 2017. I find thermodynamics wonderful, both as science and as an analog for society.

How Tesla invented, I think, Tesla coils and wireless chargers.

I think I know how Tesla invented his high-frequency devices, and thought I’d show you, while also explaining the operation of some devices that developed from it. Even if I’m wrong in historical terms, at least you should come to understand some of his devices, and something of the invention process. Either can be the start of a great science fair project.

Physics drawing of a mass on a spring, left, and of a grounded capacitor and induction coil, right.

The start of Tesla’s invention process, I think, was a visual similarity – I’m guessing he noticed that the physics symbol for a spring is the same as that for an electrical induction coil, as shown at left. A normal person would notice the similarity, perhaps think about it for a few seconds, get nowhere, and think of something else. If he or she had a math background — necessary to do most any science — they might look at the relevant equations and notice that they’re different. The equation describing the force of a spring is F = -kx (I’ll define these letters in the bottom paragraph). The equation describing the voltage in an induction coil is not very similar-looking at first glance, V = L di/dt. But there is a key similarity that could appeal to some math aficionados: both equations are linear. A linear equation is one where, if you double one side, you double the other. Thus, if you double F, you double x, and if you double V, you double di/dt, and that’s a significant behavior; the equation z = at² is not linear, see the difference?

Another linear equation is the key equation of motion for a mass, Newton’s second law, F = ma = m d²x/dt². This equation looks quite complicated, since the latter term is a second derivative, but it is linear, and a mass is the likely thing for a spring to act upon. Yet another linear equation relates current to the voltage across a capacitor: V = -1/C ∫i dt. At first glance, this equation looks quite different from the others since it involves an integral. But Nikola Tesla did more than a first glance. Perhaps he knew that linear systems tend to show resonance — vibrations at a fixed frequency. Or perhaps that insight came later.

And Tesla saw something else, I imagine, something even less obvious, except in hindsight. If you take the derivative of the two electrical equations, you get dV/dt = L d²i/dt², and dV/dt = -1/C i. These equations are the same as those for the spring and mass; just replace F and x by dV/dt and i. That the derivative of the integral is the thing itself is something I demonstrate here. At this point it becomes clear that a capacitor-coil system will show the same sort of natural resonance effects as a spring and mass system, or a child’s swing, or a bouncy bridge. Tesla would have known, like anyone who’s taken college-level physics, that a small input at the right, resonant frequency will excite such systems to great swings. For a mass and spring,

Basic Tesla coil. A switch set off by magnetization of the iron core ensures resonant-frequency operation.

resonant frequency = (1/2π) √(k/m),

Children can make a swing go quite high just by pumping at the right frequency. Similarly, it should be possible to excite a coil-capacitor system to higher and higher voltages if you can find a way to excite it long enough at the right frequency. Tesla would have looked for a way to do this with a coil-capacitor system, and after a while of trying and thinking, he seems to have found the circuit shown at right, with a spark gap to impress visitors and keep the voltages from getting too far out of hand. The resonant frequency for this system is 1/(2π√LC), an equation form similar to the above. The voltage swings should grow until limited by resistance in the wires, or by the radiation of power into space. The fact that significant power is radiated into space is the basis for wireless phone chargers, but more on that later. For now, you might wish to note that power radiation is proportional to dV/dt.
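If you want to see the resonant frequency emerge, here’s a little numerical sketch: integrate the mass-and-spring equation (with made-up k and m) and compare the measured period against 2π√(m/k). The same code describes the coil-capacitor system if you relabel m→L, x→i, k→1/C:

```python
import math

# simulate m x'' = -k x with a simple semi-implicit Euler stepper
m, k = 2.0, 8.0          # made-up mass and spring stiffness (illustrative)
x, v = 1.0, 0.0          # start stretched, at rest
dt, t = 1.0e-4, 0.0
crossings = []
prev_x = x
while len(crossings) < 2 and t < 100.0:
    v += -(k / m) * x * dt
    x += v * dt
    t += dt
    if prev_x > 0.0 >= x:        # record downward zero crossings
        crossings.append(t)
    prev_x = x

period = crossings[1] - crossings[0]
print(period, 2 * math.pi * math.sqrt(m / k))   # both close to 3.14 s
```

The simulated period matches the formula: the system “wants” to oscillate at one frequency, and that is the frequency at which a small periodic push builds large swings.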

A more modern version of the above, excited by AC current. In this version, you achieve resonance by adjusting the coil, capacitor, and resistance to match the forcing frequency.

The device above provides an early, simple way to excite a coil-capacitor system. It’s designed for use with a battery or other DC power source. There’s an electromagnetic switch to provide resonance with any capacitor and coil pair. An alternative, more modern device is shown at left. It too achieves resonance, without the switch, through the use of AC input power, but you have to match the AC frequency to the resonant frequency of the coil and capacitor. If wall current is used, 60 cps, the coil and capacitor must be chosen so that 1/(2π√LC) = 60 cps. Both versions are called Tesla coils, and either can be set up to produce very large sparks (sparks make for a great science fair project; you need to put a spark gap across the capacitor, or better yet use the coil as the low-voltage part of a transformer).
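As a design sketch: given a coil, the matching capacitor for 60 cps follows from 1/(2π√LC) = 60. The inductance below is an assumed, illustrative value:

```python
import math

f = 60.0    # wall-power frequency, cps
L = 0.5     # assumed coil inductance, henries (illustrative)

C = 1.0 / (L * (2 * math.pi * f)**2)        # solve f = 1/(2 pi sqrt(LC)) for C
f_check = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(C * 1e6, f_check)    # about 14 microfarads; plugging back recovers 60 cps
```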

Another use of this circuit is as a transmitter of power into space. The coil becomes the transmission antenna, and you have to set up a similar device as a receiver; see picture at right. The black thing at left of the picture is the capacitor. One has to make sure that the coil-capacitor pair is tuned to the same frequency as the transmitter. One also needs to add a rectifier; the rectifier chosen here is designated 1N4007. This fairly standard-size rectifier allows you to sip DC power to the battery without fear that the battery will discharge on every cycle. That’s all the science you need to charge an iPhone without having to plug it in. Designing one of these is a good science fair project, especially if you can improve on the charging distance. Why should you have to put your iPhone right on top of the transmitter? Why not allow continuous charging anywhere in your home? Tesla was working on long-distance power transmission till the end of his life. What modifications would that require?

Symbols used above: a = acceleration = d²x/dt², C = capacitance of the capacitor, dV/dt = the rate of change of voltage with time, F = force, i = current, k = stiffness of the spring, L = inductance of the coil, m = mass of the weight, t = time, V = voltage, x = distance of the mass from its rest point.

Robert Buxbaum, October 2, 2017.

Heraclitus and Parmenides time joke

From Existential Comics; Parmenides believed that nothing changed, nor could it.

For those who don’t remember, Heraclitus believed that change was the essence of life, while  Parmenides believed that nothing ever changes. It’s a debate that exists to this day in physics, and also in religion (there is nothing new under the sun, etc.). In science, the view that no real change is possible is founded in Schrödinger’s wave view of quantum mechanics.

Schrödinger’s wave equation, time dependent.

In Schrödinger’s wave description of reality, every object or particle is considered a wave of probability. What appears to us as motion is nothing more than the wave oscillating back and forth in its potential field. Nothing has a position or velocity, quite, only random interactions with other waves, and all of these are reversible. Because of the time reversibility of the equation, long-term, the system is conservative. The wave returns to where it was, and no entropy is created, long-term. Anything that happens will happen again, in reverse. See here for more on Schrödinger waves.

Thermodynamics is in stark contradiction to this quantum view. To thermodynamics, and to common observation, entropy goes ever upward, and nothing is reversible without outside intervention. Things break but don’t fix themselves. It’s this entropy increase that tells you that you are going forward in time. You know that time is going forward if you can, at will, drop an ice-cube into hot tea to produce lukewarm, diluted tea. If you can do the reverse, time is going backward. It’s a problem that besets Dr. Who, but few others.

One way that I’ve seen to get out of the general problem of quantum time is to assume the observed universe is a black hole or some other closed system, and take it as an issue of reference frame. As seen from the outside of a black hole (or a closed system without observation) time stops and nothing changes. Within a black hole or closed system, there is constant observation, and there is time and change. It’s not a great way out of the contradiction, but it’s the best I know of.

Predestination makes a certain physics and religious sense, it just doesn’t match personal experience very well.

The religion version of this problem is as follows: God, in most religions, has fore-knowledge. That is, He knows what will happen, and that presumes we have no free will. The problem with that is, without free-will, there can be no fair judgment, no right or wrong. There are a few ways out of this, and these lie behind many of the religious splits of the 1700s. A lot of the humor of Calvin and Hobbes comics comes because Calvin is a Calvinist, convinced of fatalistic predestination; Hobbes believes in free will. Most religions take a position somewhere in-between, but all have their problems.

Applying the black-hole model to God gives the following, alternative answer, one that isn’t very satisfying IMHO, but at least it matches physics. One might assume predestination for a God that is outside the universe — He sees only an unchanging system, while we, inside, see time and change and free will. One of the problems with this is it posits a distant creator who cares little for us and sees none of the details. A more positive view of time appears in Dr. Who. For Dr. Who time is fluid, with some fixed points. Here’s my view of Dr. Who’s physics. Unfortunately, Dr. Who is fiction: attractive, but without basis. Time, as it were, is an issue for the ages.

Robert Buxbaum, Philosophical musings, Friday afternoon, June 30, 2017.

A clever, sorption-based, hydrogen compressor

Hydrogen-powered fuel cells provide weight and cost advantages over batteries, important e.g. for drones and extended-range vehicles, but they require highly compressed hydrogen, and it’s often a challenge compressing the hydrogen. A large-scale solution I like is pneumatic compression, e.g. this compressor. One would combine it with a membrane reactor hydrogen generator to fill tanks for fuel cells. The problem is that this pump is somewhat complex, and would likely add air impurities to the hydrogen. I’d now like to describe a different, very clever hydrogen pump, one that is suited to smaller outputs, but adds no impurities and provides very high pressure. It operates by metallic hydride sorption at low temperature, followed by desorption at high temperature.

Hydride sorption-desorption pressures vs temperature, from Dhanesh Chandra et al.

The metal hydriding reaction is M + nH2 <–> MH2n, where M is a metal or metallic alloy and MH2n is the hydride. While most metals will undergo this reaction at some appropriate temperature and pressure, the materials of practical interest are exothermic hydrides, that is, hydrides that give off heat on hydriding. They also must undergo a nearly stoichiometric absorption or desorption reaction at reasonable temperatures and pressures. The plot at right presents the plateau pressure for hydrogen absorption/desorption in several exothermic metal hydrides. The most attractive of these are shown in the red box near the center. These sorb or desorb between 1 and 10 atmospheres and 25 and 100°C.

In this plot, the slope of the sorption line is proportional to the heat of sorption. The most attractive materials for this pump are the ones in (or near) the box with a high slope to the line, implying a high heat of sorption. A high heat of sorption means you can get very high compression without too much of a temperature swing.

To me, NaAlH4 appears to be the best of the materials. Though I have not built a pump yet with this material, I’d like to. It certainly serves as a good example for how the pump might work. The basic reaction is:

NaAl + 2H2 <–> NaAlH4

suggesting that each mol of NaAl material (50 g) will absorb 2 mols of hydrogen (44.8 std liters). The sorption line for this reaction crosses the 1 atm horizontal line at about 30°C. This suggests that sorption will occur at 1 atm and normal room temperature, 20-25°C. Assume the pump contains 100 g of NaAl (2.0 mols). Under ideal conditions, these 100 g will absorb 4 mols of hydrogen gas, about 90 liters. If this material is now heated to 226°C, it will desorb the hydrogen (more like 80%, 72 liters) at a pressure in excess of 100 atm, or 1500 psi. The pressure line extends beyond the graph, but the sense is that one could reach pressures in the neighborhood of 5000 psi or more: enough for filling the high-pressure tank of a hydrogen-based fuel cell car.
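The mols-to-liters bookkeeping above, as a quick sketch:

```python
mass = 100.0     # g of NaAl sorbent in the pump
mw = 50.0        # g/mol of NaAl (23 for Na + 27 for Al)
mols_NaAl = mass / mw         # 2.0 mol
mols_H2 = 2.0 * mols_NaAl     # the reaction takes 2 H2 per NaAl
liters = mols_H2 * 22.4       # std liters of H2, about 90
per_cycle = 0.8 * liters      # ~80% desorbed each heating cycle
print(liters, per_cycle)      # 89.6 and about 72 liters
```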

The problem with this pump for larger-volume H2 users is time. It will take 2-3 hours to cycle the sorbent, that is, to absorb hydrogen at low pressure, heat the material to 226°C, desorb the H2, and cycle back to low temperature. At a pump rate of 72 liters per 2-3 hours, this will not be an effective pump for a fuel-cell car. The output, 72 liters, is only enough to generate 0.12 kWh, perhaps enough for the tank of a fuel cell drone, or for augmenting the mpg of gasoline automobiles. If you are interested in these materials, my company, REB Research, will be happy to manufacture some in research quantities (the prices below are for materials cost only; I will charge significantly more for the manufactured product, and more yet if you want a heater/cooler system).

Properties of Metal Hydride materials; Dhanesh Chandra,* Wen-Ming Chien and Anjali Talekar, Material Matters, Volume 6 Article 2

One could increase the output of a pump by using more sorbent, perhaps 10 kg distributed over 100 cells. With this much sorbent, you'd pump 100 times faster, enough to take the output of a fairly large hydrogen generator, like this one from REB. I'm not sure you get economies of scale, though. With a mechanical pump, or a pneumatic pump, you get an economy of scale: typically it costs 3 times as much for each 10-times increase in output. For the hydride pump, a ten-times increase might cost 7-8 times as much. For this reason, the sorption pump lends itself to low-volume applications. At high volume, you're going to want a mechanical pump, perhaps with a getter to remove small amounts of air impurities.

Materials with sorption lines near the middle of the graph above are suited for long-term hydrogen storage. Uranium hydride is popular in the nuclear industry, though I have also provided Pd-coated niobium for this purpose. Materials whose lines appear at the far lower left, like titanium (TiH2), can be used for permanent hydrogen removal (gettering). I have sold Pd-coated niobium screws for this application, and will be happy to provide other shapes and other materials, e.g. for reversible vacuum pumping from a fusion reactor.

Robert Buxbaum, May 26, 2017 (updated Apr. 4, 2022). 

Future airplane catapults may not be electric

President Trump got into hot water with the Navy this week for his suggestion that they should go "back to god-damn steam" for their airplane catapults as a cure for cost over-runs and delays with the Navy's aircraft carriers. The Navy had chosen to go to a more modern catapult called EMALS (electromagnetic aircraft launch system), based on a traveling coil and electromagnetic pulses. The EMALS system has run $5 billion in cost over-runs, has added 3 years to the program, and still doesn't work well. In response to the president's suggestion (explosion), the Navy did what the rest of Washington has done: blame Trump's ignorance, e.g. here, in the Navy Times. Still, for what it's worth, I think Trump's idea has merit, especially if I can modify it a bit to suggest high-pressure air (pneumatics) instead of high-pressure steam.


Tests of the Navy EMALS; notice that some launches go further than others. The problem is electronics, supposedly.

If you want to launch a 50,000 lb jet fighter at 5 g acceleration, you need to apply 250,000 lbs of force uniformly throughout the launch. For pneumatics, all that takes is 250 psi steam or air and a 1000 square-inch piston, about 3 feet in diameter: a very modest pressure and a quite modest piston size. A 50,000 lb object accelerated this way will reach launch speed (130 mph) in 1.2 seconds. It's very hard to get such fast, uniform acceleration with an electromagnetic coil, since the motion of the coil always produces a back voltage. The electromagnetic pulses can be adjusted to counter this, but it's not all that easy, as the Navy tests show. You have to know the speed and position of the airplane precisely to get it right, and adjust the firing of the pushing coils accordingly. There is no guarantee of smooth acceleration like you get with a piston, and the EMALS control circuit will always be vulnerable to electromagnetic and cyber attack. As things stand, the control system is thought to be the problem.
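The arithmetic here can be checked with a few lines of Python; the 50,000 lb weight, 5 g acceleration, 250 psi pressure, and 130 mph launch speed are the figures assumed above:

```python
import math

G_FT = 32.174                 # standard gravity, ft/s^2
weight = 50_000.0             # jet weight, lb
force = 5 * weight            # F = (W/g)*(5g) = 5W: 250,000 lbf
newtons = force * 4.448       # ~1.1 million N in SI units

pressure = 250.0              # working pressure, psi
area = force / pressure       # required piston area: 1000 in^2
diam_ft = 2 * math.sqrt(area / math.pi) / 12   # ~3 ft diameter

v = 130 * 5280 / 3600         # 130 mph in ft/s, ~190.7
a = 5 * G_FT                  # 5 g in ft/s^2
t = v / a                     # time to reach launch speed, ~1.2 s
stroke = v**2 / (2 * a)       # runway (piston stroke) needed, ~113 ft
print(round(newtons), area, round(diam_ft, 1), round(t, 2), round(stroke))
```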

A piston is invulnerable to EM and cyber attack since, if worst comes to worst, the valves can be operated manually, as was done with steam catapults throughout WWII. And pistons are very robust, far more robust than solenoid coils, because they are far less complex. As much force as you put on the plane has to be put on the coil or piston. Thus, for 5 g acceleration, the coil or piston has to experience 250,000 lbs of horizontal force. That's about 1.1 million Newtons for those who like SI units (here's a joke about SI units). A solid piston will have no problem withstanding 250,000 lbs for years; piston steamships from the 1950s are still in operation. Coils are far more delicate, and their life-span is likely to be short, at least for current designs.

The reason I suggest compressed air, pneumatics, instead of steam is that air is not as hot and corrosive as steam. Also, an air compressor can be located close to the flight deck, connected to the power center by electric wires; steam requires long runs of steam pipes, a more difficult proposition. As a possible design, one could use a multi-stage, inter-cooled air compressor connected to a ballast tank, perhaps 5 feet in diameter by 100 feet long, to guarantee uniform pressure. The ballast tank would provide the uniform pressure while allowing the use of a relatively small compressor, drawing less power than the EMALS. Those who've had freshman physics will be able to show that 5 g acceleration gets the plane to 130 mph in only about 113 feet of runway, far less than the EMALS requires. For lighter planes or greater efficiency, one could shut off the input air partway down the stroke and let the remaining air expand over the rest of a 200-foot piston.

The same pistons could be used for capturing an airplane. The cylinder could start at 250 psi, dead-ended at the top. The captured airplane would push air back into the ballast tank, or the valve could be closed, allowing pressure to build. Operated that way, the cylinder could stop a plane in 60 feet. You can't do that with an EMALS. I should also mention that the efficiency of the piston catapult can be near 100%, while the efficiency of the EMALS will be near zero at the beginning of acceleration. Low efficiency at low speed is a problem found in all electromagnetic actuators: lots of electromagnetic power is needed to get things moving, but the output work, ∫F dx, is near zero at low velocity. With EM, efficiency is high only at one speed, determined by the size of the moving coil; with pistons it's high at all speeds. I suggest the Navy keep their EMALS, but only as a secondary system, perhaps used to launch drones, until they get sea experience and demonstrate a real advantage over pneumatics.

Robert Buxbaum, May 19, 2017. The USS Princeton was the fanciest ship in the US fleet, with super high-tech cannons. When one mis-fired, the explosion killed two members of President Tyler's cabinet. Slow and steady wins the arms race.

A very clever hydrogen pump

I’d like to describe a most clever hydrogen pump. I didn’t invent it, but it’s awfully cool. I did try to buy one from “H2 Pump,” a company that is now defunct, and I tried to make one. Perhaps I’ll try again. Here is a diagram.

Electrolytic membrane H2 pump

This pump works as the reverse of a PEM fuel cell. Hydrogen gas is on both sides of a platinum-coated, proton-conducting membrane — a fuel cell membrane. As in a PEM fuel cell, the platinum splits the hydrogen molecules into H atoms. An electrode removes electrons to form H+ ions on one side of the membrane; the electrons are on the other side of the membrane (the membrane itself is chosen to not conduct electricity). The difference from the fuel cell is that, for the pump, you apply energy (a voltage) to drive hydrogen across the membrane to the higher-pressure side; in a fuel cell, the hydrogen goes on its own to form water, and you extract electric energy.

As shown, the design is amazingly simple and efficient. There are no moving parts except for the hydrogen itself. Not only do you pump hydrogen, but you can purify it as well, as most impurities (nitrogen, CO2) will not go through the membrane. Water does permeate the membrane, but for many applications, this isn’t a major impurity. The amount of hydrogen transferred per plate, per Amp-second of current is given by Faraday’s law, an equation that also shows up in my discussion of electrolysis, and of electroplating,

C= zFn.

Here, C is the charge in Amp-seconds, z is the number of electrons transferred per molecule (2 for H2), F is Faraday's constant, 96,485 Amp-seconds per mol of electrons, and n is the number of mols transferred. Each mol of hydrogen (2 grams) thus requires 192,970 Amp-seconds, or 53.6 Amp-hours. Most membranes can operate well at 1.5 Amps per cm2, so a 1000 cm2 membrane (about 1.1 square feet) carries 1500 Amps and moves about 0.47 mols per minute, roughly 10.5 slpm. To reduce the current requirement, though not the membrane area requirement, one typically stacks the membranes. A 100-membrane stack would take about 32 Amps to pump 22.4 slpm (1 mol per minute), a very manageable current.
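The Faraday's-law arithmetic can be sketched in a few lines of Python, using the standard value of Faraday's constant (96,485 coulombs per mol of electrons) and z = 2 electrons per H2 molecule; the 100-membrane stack is the example from the text:

```python
F = 96485.0                        # Faraday's constant, C/mol of electrons
Z = 2                              # electrons transferred per H2 molecule

charge_per_mol = Z * F             # ~192,970 Amp-seconds per mol H2
amp_hours = charge_per_mol / 3600  # ~53.6 Amp-hours per mol H2

# Current for a 100-membrane stack pumping 1 mol/min (22.4 slpm):
n_cells = 100
current = charge_per_mol / 60 / n_cells   # ~32 Amps
print(round(amp_hours, 1), round(current, 1))
```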

The amount of energy needed per mol is related to the pressure difference via the difference in Gibbs energy, ∆G, at the relevant temperature.

Energy needed per mol is, ideally = ∆G = RT ln Pu/Pd.

where R is the gas constant, 8.314 Joules per mol-Kelvin, T is the absolute temperature in Kelvins (298 for a room-temperature process), ln is the natural log, and Pu/Pd is the ratio of the upstream and downstream pressures. We find that, to compress 2 grams of hydrogen (one mol, or 22.4 liters) from 1 atm to 100 atm (1500 psi), you need only 11,400 Watt-seconds of energy (8.314 x 298 x 4.605 = 11,400). This is 0.00317 kWh. This energy costs only about 0.03¢ at current electric prices, by far the cheapest power requirement to pump this much hydrogen that I know of. The pump is surprisingly compact and simple, and you get purification of the hydrogen too. What could possibly go wrong? How could the H2 Pump company fail?
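The ideal compression energy works out the same way in code; a minimal check of the ∆G = RT ln(Pu/Pd) arithmetic:

```python
import math

R = 8.314                  # gas constant, J/(mol*K)
T = 298.0                  # room temperature, K
P_up, P_down = 100.0, 1.0  # upstream and downstream pressures, atm

dG = R * T * math.log(P_up / P_down)   # ideal work, ~11,400 J per mol H2
kwh = dG / 3.6e6                       # ~0.00317 kWh per mol
print(round(dG), kwh)
```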

One thing that went wrong when I tried building one of these was leakage at the seals. I found it uncommonly hard to make seals that held even 20 psi. I was using 4″ x 4″ membranes, so 20 psi was the equivalent of 320 pounds of force; at 200 psi, there would have been 3,200 lbs of force. I could never get the seals to stay put at anything more than 20 psi.

Another problem was the membranes themselves. The membranes I bought were not very strong. I used a wire-mesh backing, and a layer of steel behind that. I figured I could reach maybe 200 psi with this design, but didn't get there. These low pressures limit the range of pump applications; for many applications, you'd want 150-200 psi. Still, it's an awfully cool pump.

Robert E. Buxbaum, February 17, 2017. My company, REB Research, makes hydrogen generators and purifiers. I’ve previously pointed out that hydrogen fuel cell cars have some dramatic advantages over pure battery cars.

Boy-Girl physics humor

Girl breaking up with her boyfriend: I just need two things, more space, and time.

Boyfriend: So, what's the other thing?

Atoms build physicists in an attempt to understand themselves. That's also why physicists build physics societies and clubs.

 

Robert Buxbaum. And that, dear friend, is why science majors so rarely have normal boyfriends / girlfriends.

A female engineer friend of mine commented on the plight of dating in the department: “The odds are good, but the goods are odd.”

By the way, the solution to Einstein’s twin paradox resides in understanding that time is space. Both twins see the space ship moving at the same pace, but space shrinks for the moving twin in the space ship, not for the standing one. Thus, the moving twin finishes his (or her) journey in less time than the standing one observes.

Of horses, trucks, and horsepower

Horsepower is a unit of work-production rate, about 3/4 of a kW for those who like standard international units. It is also the pulling force of a work horse of the 1700s times its speed when pulling, perhaps 5 mph. A standard truck will develop 200 hp, but only at highway speed, about 60 mph; to develop those same 200 horsepower at 1 mph, it would have to pull with 60 times more force. That is impossible for a truck, both because of traction limitations and because of the nature of a gasoline engine attached to typical gearing. At low speed, 1 mph, a truck will barely develop as much force as a single horse, suggesting a work output of about 1 hp. This is especially true for a truck pulling in the snow, as shown in the video below.

Here, a semi-truck (of milk) is being pulled out of the snow by a team of horses going perhaps 1 mph. The majority of the work is done by the horse on the left; the others seem to be slipping. Assuming that the four horses manage to develop 1 hp each (4 hp total), their pull force is four times that of a truck at 1 mph, or as great as that of a 200 hp truck accelerating at 50 mph. That's why the horses succeed where the truck does not.
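Since 1 hp = 375 lb·mph (550 ft·lb/s), the force comparison can be checked in a couple of lines of Python; the 4 hp team and 200 hp truck are the figures assumed above:

```python
def pull_force_lbf(horsepower, speed_mph):
    """Pull force in lbf from power and speed: 1 hp = 375 lb*mph."""
    return 375.0 * horsepower / speed_mph

horses = pull_force_lbf(4, 1)     # 4 hp team at 1 mph: 1500 lbf
truck = pull_force_lbf(200, 50)   # 200 hp truck at 50 mph: also 1500 lbf
print(horses, truck)
```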

You will find other videos on the internet showing that horses produce more force or hp than trucks or tractors; they always do so at low speeds. A horse will also beat a truck or car in acceleration to about the 1/4-mile mark. That's because acceleration = force/mass: a = F/m.

I should mention that DC electric motors, like horses, also produce their highest force at very low speeds, but unlike horses, their efficiency is very low there. Electric-motor efficiency is high only at speeds quite near the maximum, and horsepower output (force times speed) peaks at about 1/2 the maximum speed.

Steam engines (I like steam engines) produce about the same force at all speeds, and more-or-less the same efficiency at all speeds. That efficiency is typically only about 20%, about that of a horse, but the feed and maintenance costs are far lower. A steam engine eats coal, while a horse must eat oats.

March 4, 2016. Robert Buxbaum, an engineer, runs REB Research, and is running for water commissioner.