
How I size heat exchangers

Heat exchange is a key part of most chemical process designs. Heat exchangers save money because they're generally cheaper than heaters plus the continuing cost of the fuel or electricity to run them. They also usually provide free, fast cooling for the product: the product is often made hot and needs to be cooled, and free, fast cooling is good.

So how do you design a heat exchanger? A common design is to weld the right number of tubes inside a shell, so it looks like the drawing below. The hot fluid might be made to go through the tubes and the cold through the shell, as shown, or the hot can flow through the shell. In either case, the flows usually run in opposite directions so there is a hot end and a cold end, as shown. In this essay, I'd like to discuss how I design our counter-current heat exchangers, beginning with a common case (for us) where the two flows have the same thermal inertia, e.g. the same mass flow rates and the same heat capacities. That's the situation with our hydrogen purifiers: impure hydrogen goes in cold and is heated to 400°C for purification. Virtually all of this hot hydrogen exits the purifier in the "pure out" stream and needs to be cooled to room temperature, or nearly.

Typical shell and tube heat exchanger design, Black Hills inc.

For our typical design, where the hot fluid flows in one direction and an equal cold flow runs opposite, the temperature difference is constant all along the heat exchanger. As a first-pass rule of thumb, I design so that this constant temperature difference is 30°C. That is, ∆THX ≈ 30°C at every point along the heat exchanger. More specifically, in our Mr Hydrogen® purifiers, the impure feed hydrogen typically enters at 20°C and is heated by the heat exchanger to 370°C, 30°C cooler than the final process temperature. The hydrogen must be heated this last 30°C with electricity. After purification, the hot, pure hydrogen, at 400°C, enters the heat exchanger and leaves at 30°C above the input temperature, that is at 50°C. It's hot, but not scalding. The last 30°C of cooling is done with air blown by a fan.

The power demand of the external heat source, the electric heater, is calculated as: Wheater = flow (mol/s) × heat capacity (J/mol·°C) × ∆Theater, where ∆Theater = ∆THX = 30°C.

The smaller the value of ∆THX, the less electric draw you need for steady-state operation, but the more you have to pay for the heat exchanger. For small flows I often use a value higher than ∆THX = 30°C, and for large flows smaller, but 30°C is a good place to start.
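To make the rule of thumb concrete, here is a quick Python sketch of the heater-power equation above. The 50 slpm flow rate is an assumed example, not a spec for any particular purifier:

```python
# Heater power for the last 30 C of heating, W = flow * Cp * dT.
# A sketch: the 50 slpm flow rate is an assumed example value.
CP_H2 = 7.0        # heat capacity of H2, cal/(mol*C), from the text
CAL_TO_J = 4.184   # joules per calorie
DT_HX = 30.0       # C, the rule-of-thumb temperature difference

def heater_power_W(flow_slpm, cp=CP_H2, dT=DT_HX):
    """Electric heater power in watts."""
    mol_per_s = flow_slpm / (22.4 * 60)  # slpm -> mol/s (22.4 L/mol)
    return mol_per_s * cp * CAL_TO_J * dT

print(round(heater_power_W(50), 1))  # ~33 W for a 50 slpm flow
```

Only a few tens of watts for the last 30°C: the heat exchanger is doing most of the work.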

Now to size the heat exchanger. Because the flow rate of hot fluid (purified hydrogen) is virtually the same as that of cold fluid (impure hydrogen), the heat capacity per mol of product coming out is the same as per mol of feed going in. Since enthalpy change equals heat capacity times temperature change, ∆H = Cp∆T, with the effective Cp the same for both fluids, and any rise in H of the cool fluid coming from the hot fluid, we can draw a temperature vs enthalpy diagram that will look like this:

The heat exchanger heats the feed from 20°C to 370°C, ∆T = 350°C. It also cools the product 350°C, that is from 400°C to 50°C. In each case the enthalpy exchanged per mol of feed (or product) is ∆H = Cp∆T = 7 × 350 = 2450 calories.

Since most heaters work in watts, not calories, at some point it's worthwhile to switch to watts. 1 cal = 4.184 J, so 1 cal/s = 4.184 W. I tend to do calculations in mixed units (English and SI) because the heat capacity per mole of most things is a simple number in English units. Cp (water), for example, = 1 cal/g = 18 cal/mol. Cp (hydrogen) = 7 cal/mol. In SI units, the heat rate, WHX, is:

WHX = flow (mol/s) × heat capacity per mol (J/mol·°C) × ∆Tin-out (350°C).

The flow rate in mols per second is the flow rate in slpm divided by 22.4 x 60. Since the driving force for transfer is 30°C, the area of the heat exchanger is WHX times the resistance divided by ∆THX:

A = WHX * R / 30°C.

Here, R is the average resistance to heat transfer, in m²·°C/W. It equals the sum of all the resistances: essentially the resistance of the steel of the heat exchanger plus that of the two gas phases:

R= δm/km + h1+ h2

Here, δm is the thickness of the metal, km is the thermal conductivity of the metal, and h1 and h2 are the gas-phase heat transfer resistances in the feed and product flows respectively. You can often estimate these as δ1/k1 and δ2/k2 respectively, with k1 and k2 the thermal conductivities of the feed and product, both hydrogen in my case. As for δ, the effective gas-layer thickness, I generally estimate this as 1/3 the thickness of the flow channel, for example:

h1 = δ1/k1 = 1/3 D1/k1.

Because δ is smaller the smaller the diameter of the tubes, h is smaller too. Small tubes also tend to be cheaper than big ones, and more compact. I thus prefer to use small-diameter tubes and small gaps. In my heat exchangers, the tubes are often 1/4″ or bigger, but the gap sizes are targeted to 1/8″ or less. If the gap size gets too low, you get excessive pressure drops and non-uniform flow, so you have to check that the pressure drop isn't too large. I tend to stick to normal tube sizes and tweak the design a few times within those parameters, considering customer needs. Only after the numbers look good to my aesthetics do I make the product. Aesthetics plays a role here: you have to have a sense of what a well-designed exchanger should look like.
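Putting the sizing chain together, here's a Python sketch of the area estimate, A = WHX·R/∆THX. The flow rate, wall thickness, channel sizes, and conductivities are all assumed, illustrative values, not the dimensions of any actual product:

```python
# Area estimate A = W_HX * R / dT_HX for the equal-flow case.
# All dimensions, flows and conductivities here are assumed,
# illustrative values, not specs of any actual product.
CAL_TO_J = 4.184

flow_slpm = 50.0
mol_s = flow_slpm / (22.4 * 60)       # slpm -> mol/s
cp = 7.0 * CAL_TO_J                   # H2 heat capacity, J/(mol*C)
W_hx = mol_s * cp * 350.0             # heat rate, W (350 C rise)

k_metal = 16.0      # W/(m*C), stainless steel (assumed)
k_h2 = 0.23         # W/(m*C), hydrogen, rough mid-range value
d_metal = 0.9e-3    # m, tube wall thickness (assumed)
D_tube = 4.6e-3     # m, tube-side channel, ~1/4" tube ID (assumed)
D_shell = 3.2e-3    # m, shell-side gap, ~1/8" (assumed)

# R = dm/km + h1 + h2, with h = (D/3)/k for each gas channel
R = d_metal/k_metal + D_tube/(3*k_h2) + D_shell/(3*k_h2)
A = W_hx * R / 30.0                   # m2, with dT_HX = 30 C
print(round(R, 4), round(A, 3))       # ~0.0114 m2*C/W, ~0.144 m2
```

Note the gas-phase resistances dominate; the steel wall contributes almost nothing, which is why shrinking the gaps pays off.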

The above calculations are fine for the simple case where ∆THX is constant. But what happens if it is not? Let's say the feed is impure, so some hot product has to be vented, leaving less hot fluid in the heat exchanger than feed. I show this in the plot at right for the case of 14% impurities. Since there is no phase change, the lines are still straight, but they are no longer parallel. Because more thermal mass enters than leaves, the hot gas is cooled completely, that is to 50°C, 30°C above room temperature, but the cool gas is heated at only 7/8 the rate that the hot gas is cooled. The hot gas gives off 2450 cal as before, but this is now only enough to heat the cold fluid by 7/8 × 350 = 306.25°C. The cool gas thus leaves the heat exchanger at 20°C + 306.25° = 326.25°C.

The simple way to size the heat exchanger now is to use an average value for ∆THX. In the diagram, ∆THX is seen to vary between 30°C at the entrance and 73.75°C at the exit. As a conservative average, I'll assume that ∆THX = 40°C, though 50 to 60°C might be more accurate. This results in a smaller heat exchanger design, 3/4 the size of before, and still overdesigned by some 25%. There is no great downside to this overdesign. With over-design, the hot fluid leaves at a lower ∆THX, that is, at a temperature below 50°C. The cold fluid will be heated a bit more than predicted, perhaps to 330°C. We save more energy, and waste a bit on materials cost. There is a "correct approach", of course, and it involves the use of calculus: A = ∫dA = ∫R/∆THX dWHX, using an analytic function for ∆THX as a function of WHX. Calculating this way takes lots of time for little benefit. My time is worth more than a few ounces of metal.
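When the T-vs-H lines are straight, the "correct" integral actually collapses to a simple closed form: the log-mean of the two end-point temperature differences. Here's a sketch checking the 40°C guess, deriving the feed outlet temperature from the 7/8 flow ratio:

```python
import math

# When the T-vs-H lines are straight, the integral A = int(R/dT)dW
# collapses to the log-mean of the two end-point temperature
# differences. A sketch checking the 40 C guess; the feed outlet
# temperature is derived from the 7/8 flow ratio in the text.
def log_mean_dT(dT1, dT2):
    if abs(dT1 - dT2) < 1e-12:
        return dT1
    return (dT1 - dT2) / math.log(dT1 / dT2)

feed_rise = 350.0 * 7 / 8         # feed heated 7/8 as much: 306.25 C
T_feed_out = 20.0 + feed_rise     # 326.25 C
dT_hot_end = 400.0 - T_feed_out   # 73.75 C at the hot end
dT_cold_end = 50.0 - 20.0         # 30 C at the cold end

print(round(log_mean_dT(dT_hot_end, dT_cold_end), 1))  # ~48.6 C
```

The log-mean comes out just under 50°C, so the 40°C average is conservative, as stated.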

The only times I do the correct analysis is with flame boilers, with major mismatches between the hot and cold flows, or when the government requires calculations. Otherwise, I make an H vs T diagram and account for the fact that ∆T varies with H by averaging. I doubt most people do any more than that. It's not like ∆THX = 30°C is etched in stone somewhere, either; it's a rule of thumb, nothing more. It's there to make your life easier, not to be worshiped.

Robert Buxbaum June 3, 2024

Einstein’s theory of diffusion in liquids, and my extension.

In 1905 and 1908, Einstein developed two formulations for the diffusion of a small particle in a liquid. As a side-benefit of the first derivation, he demonstrated the visible existence of molecules, a remarkable piece of work. In the second formulation, he derived the same result using non-equilibrium thermodynamics, something he seems to have developed on the spot. I'll give a brief version of the second derivation, and then I'll show off my own extension. It's one of my proudest intellectual achievements.

But first a little background to the problem. In 1827, a plant biologist, Robert Brown examined pollen under a microscope and noticed that it moved in a jerky manner. He gave this “Brownian motion” the obvious explanation: that the pollen was alive and swimming. Later, it was observed that the pollen moved faster in acetone. The obvious explanation: pollen doesn’t like acetone, and thus swims faster. But the pollen never stopped, and it was noticed that cigar smoke also swam. Was cigar smoke alive too?

Einstein's first version of an answer, 1905, was to consider that the liquid was composed of atoms whose energy followed a Boltzmann distribution, with an average of ½kT in every direction, where k is the Boltzmann constant, and k = R/N. That is, Boltzmann's constant equals the gas constant, R, divided by Avogadro's number, N. He was able to show that the many interactions with the molecules should cause the pollen to take a random, jerky walk, as seen, and that the velocity should be faster the less viscous the solvent, or the smaller the length-scale of observation. Einstein applied the Stokes drag equation to the solute: the drag force per particle is f = -6πrvη, where r is the radius of the solute particle, v is the velocity, and η is the solution viscosity. Using some math, he was able to show that the diffusivity of the solute should be D = kT/6πrη. This is called the Stokes-Einstein equation.

In 1908 a French physicist, Jean Baptiste Perrin confirmed Einstein’s predictions, winning the Nobel prize for his work. I will now show the 1908 Einstein derivation and will hope to get to my extension by the end of this post.

Consider the molar Gibbs free energy of a solvent, water say. The molar concentration of water is x and that of a very dilute solute is y, y << 1. For this nearly pure water, you can show that µ = µ° + RT ln x = µ° + RT ln(1-y) ≈ µ° - RTy.

Now, take a derivative with respect to some linear direction, z. Normally this is considered illegal, since thermodynamics is normally understood to apply to equilibrium systems only. Still, Einstein took the derivative, claiming it was legitimate near equilibrium, at pseudo-equilibrium. You can then calculate the force on the solvent, the force on the water generated by a concentration gradient: Fw = dµ/dz = -RT dy/dz.

Now the force on each atom of water equals -RT/N dy/dz = -kT dy/dz.

Now, let’s call f the force on each atom of solute. For dilute solutions, this force is far higher than the above, f = -kT/y dy/dz. That is, for a given concentration gradient, dy/dz, the force on each solute atom is higher than on each solvent atom in inverse proportion to the molar concentration.

For small spheres, and low velocities, the flow is laminar and the drag force, f = 6πrvη.

Now calculate the speed of each solute atom. It is related to the force on the atom by the same relationship as above: f = 6πrvη, or v = f/6πrη. Inserting our equation for f = -kT/y dy/dz, we find that the velocity of the average solute molecule is

v = -kT/6πrηy dy/dz.

Let's say that the molar concentration of solvent is C, so that, for water, C will equal about 1/18 mol/cc. The molar concentration of the dilute solute will then equal Cy. We find that the molar flux of material, the diffusive flux, equals Cyv, or that

Molar flux (mols/cm²/s) = Cy (-kT/6πrηy dy/dz) = -kTC/6πrη dy/dz = -kT/6πrη d(Cy)/dz,

where Cy is the molar concentration of solute per volume.

Classical engineering arrives at a similar equation via a property called diffusivity, so that

Molar flux of y (mols y/cm²/s) = -D d(Cy)/dz, where D is an experimentally determined constant. We thus now have a prediction for D:

D = kT/6πrη.
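To put numbers on the Stokes-Einstein prediction, here's a sketch for a small solute in room-temperature water; the 0.2 nm solute radius and the water viscosity are assumed, typical values:

```python
import math

# Stokes-Einstein estimate, D = kT / (6 pi r eta). A sketch for an
# assumed small solute, r = 0.2 nm, in room-temperature water.
K_B = 1.380649e-23    # Boltzmann constant, J/K

def stokes_einstein_D(r_m, T_K=298.0, eta_Pa_s=0.89e-3):
    """Diffusivity in m^2/s for a sphere of radius r_m."""
    return K_B * T_K / (6 * math.pi * r_m * eta_Pa_s)

print(f"{stokes_einstein_D(2.0e-10):.2e}")  # ~1.2e-09 m^2/s
```

That's the right order of magnitude for small molecules in water, which is part of why the equation is so widely used.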

This again is the Stokes-Einstein equation, the same as above but derived with far less math. I was fascinated, but felt sure there was something wrong here: macroscopic viscosity is not the same as microscopic. I just could not think of a good case where there was much difference, until I realized that in polymer solutions there is a big difference.

Polymer solutions, I reasoned, had large viscosities, but a diffusing solute probably didn't feel the liquid as anywhere near as viscous. The viscometer measures at a larger distance scale, similar to the polymer coil entanglement length, while a small solute might dart between the polymer chains like a rabbit among trees. I applied an equation for heat transfer in a dispersion that J.C. Maxwell had derived:

κeff/κl = (κp + 2κl + 2φ(κp - κl)) / (κp + 2κl - φ(κp - κl)),

where κeff is the modified effective thermal conductivity (or diffusivity in my case), κl and κp are the thermal conductivities of the liquid and the particles respectively, and φ is the volume fraction of particles.

To convert this to diffusion, I replaced κl by Dl, and κp by Dp where

Dl = kT/6πrηl

and Dp = kT/6πrη.

In the above ηl is the viscosity of the pure, liquid solvent.

The chair of the department, Don Anderson, didn't believe my equation, but agreed to help test it. A student named Kit Yam ran experiments on a variety of polymer solutions, and it turned out that the equation worked really well, down to high polymer concentrations and high viscosities.

As a simple first approximation to the above, you can take Dp = 0, since it's much smaller than Dl, and take Dl = kT/6πrηl as above. The new, first-order approximation is:

D = kT/6πrηl (1 – 3φ/2).
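A sketch comparing the two forms for a few polymer volume fractions. This assumes Maxwell's dispersion result, Deff/Dl = (Dp + 2Dl + 2φ(Dp - Dl))/(Dp + 2Dl - φ(Dp - Dl)), translated from conductivities to diffusivities:

```python
# Compare Maxwell's dispersion formula (with Dp = 0, the polymer
# transporting nothing) to the first-order form D = Dl(1 - 3*phi/2).
# Assumes Maxwell's result, Deff/Dl =
#   (Dp + 2*Dl + 2*phi*(Dp - Dl)) / (Dp + 2*Dl - phi*(Dp - Dl)).
def D_eff_maxwell(Dl, phi, Dp=0.0):
    return Dl * (Dp + 2*Dl + 2*phi*(Dp - Dl)) / (Dp + 2*Dl - phi*(Dp - Dl))

def D_eff_linear(Dl, phi):
    return Dl * (1 - 1.5 * phi)

for phi in (0.05, 0.10, 0.20):
    print(phi, round(D_eff_maxwell(1.0, phi), 4),
          round(D_eff_linear(1.0, phi), 4))
```

At 10% polymer the two forms differ by less than 1%, which is why the first-order version is usually good enough.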

We published in Science; that is, I published along with the two colleagues who tested the idea and proved the theory right, or at least useful. The reference is Yam, K., Anderson, D., Buxbaum, R. E., Science 240 (1988) p. 330 ff., "Diffusion of Small Solutes in Polymer-Containing Solutions." This result is one of my proudest achievements.

R.E. Buxbaum, March 20, 2024

Relativity’s twin paradox explained, and why time is at right angles to space.

One of the most famous paradoxes of physics is explained wrong — always. It makes people feel good to think they understand it, but the explanation is wrong and confusing, and it drives young physicists in a wrong direction. The basic paradox is an outgrowth of the special relativity prediction that time moves slower if you move faster.

Thus, if you entered a spaceship and were to travel to a distant star at 99.5% the speed of light, turn around, and get back here 30 years later, you would have aged far less than 30 years. You and everyone else on the spaceship would have aged three years, 1/10 as much as someone on earth.

The paradox part, not that the above isn't weird enough by itself, is that the person in the spaceship will imagine that he (or she) is standing still, and that everyone on earth is moving away at 99.5% the speed of light. Thus, the person on the spaceship should expect to find that the people on earth age slower. That is, the person on the spaceship should return from his (or her) three-year journey expecting to find that the people on earth have only aged 0.3 years. Obviously, only one of these expectations can be right, but it's not clear which (it's the first one), nor is it clear why.

The wrong explanation appears in an early popular book, "Mr Tompkins in Wonderland," by the physicist George Gamow. The book was written shortly after relativity was proposed, and involves a Mr Tompkins who falls asleep in a physics lecture. Mr. Tompkins dreams he's riding on a train going near the speed of light, and finds things are shorter and time is going slower. He then asks the paradox question of the conductor, who admits he doesn't quite know how it works (perhaps Gamow didn't), but that "it has something to do with the brakeman." That sounds like Gamow is saying the explanation has to do with the deceleration at the turn-around, or with general relativity generally, implying gravity could have a similarly large effect. It doesn't work that way, and the effect of 1 G gravity is small, but everyone seems content to explain the paradox this way. This is particularly unfortunate because those doing so include physicists, clouding an already cloudy issue.

In the early days of physics, physicists tried to explain things with a little legitimate math for the lay audience. Gamow did this, as did Einstein, Planck, Feynman, and most others. I try to do this too. Nowadays, physicists have removed the math and added gobbledygook. The one exception here is the cinematographers of Star Wars. They alone show the explanation correctly.

The explanation does not have to do with general relativity or the acceleration at the end of the journey (the brakeman). Instead of working through some acceleration or general-relativity effect, the twin paradox works with simple, special relativity: all space contracts for the duration of the trip, and everything in it gets shorter. The person in the spaceship will see the distance to the star shrink by 90%. Traveling there thus takes 1/10th the time because the distance is 1/10th. There and back at 99.5% the speed of light takes exactly 3 years.

The equation for time contraction is: t' = (x°/v) √(1-(v/c)²) = t° √(1-(v/c)²), where t' is the time in the spaceship, v is the speed, x° is the distance traveled (as measured from earth), and c is the speed of light. For v/c = 0.995, we find that √(1-(v/c)²) is 0.1. We thus find that t' = 0.1 t°. When dealing with the twin paradox, it's better to say that x' = 0.1 x°, where x' is the distance to the star as seen from the spaceship. In either case, when the people on the spaceship accelerate, they see the distance in front of them shrink, as shown in Star Wars, below.
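A two-line sketch of the contraction factor shows why a clean factor of ten wants v/c = 0.995; at v/c = 0.99 the factor is closer to 0.14:

```python
import math

# The contraction factor sqrt(1 - (v/c)^2). A sketch: a clean
# factor of ten wants v/c = 0.995; at v/c = 0.99 it is ~0.14.
def contraction(v_over_c):
    return math.sqrt(1 - v_over_c ** 2)

print(round(contraction(0.99), 3))    # ~0.141
print(round(contraction(0.995), 3))   # ~0.1
```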

Star Wars. The millennium falcon jumps to light speed, and beyond.

That time is at right angles to space was a comment in one of Einstein's popular articles and books; he wrote several, all with some minimal mathematics. (Current science writing has no math, and a lot of politics, IMHO, and thus is not science.)

He showed that time and space are at right angles by analogy from Pythagoras. Pythagoras showed that the distance along a diagonal, d, between two points at right angles, x and y, is d = √(x² + y²). Another way of saying this is d² = x² + y². The relationship is similar for relativistic distances. To explain the twin paradox, we find that the square of the effective distance is x'² = x°² (1 - (v/c)²) = x°² - (x°v/c)². Here, x°² is the square of the original distance, and it comes out that the subtracted term, (x°v/c)², behaves like the square of an imaginary distance at right angles to it. Since x° = vt°, that term is proportional to t°²: co-frame time, t°, behaves as if it were a distance with an imaginary scale factor, as in Minkowski's formulation, where time enters as the distance ict.

For some reason, people today read books on science by non-scientist 'explainers.' These books have no math, and I guess they sell. Publishers think they are helping democratize science, perhaps. You are better off reading the original thinkers, IMHO.

Robert Buxbaum, July 16, 2023. In his autobiography, Einstein claimed to be a fan of the scientist-philosopher Ernst Mach. Mach derived the speed of sound from a mathematical analysis of thermodynamics. Einstein followed, considering that it must be equally true to consider an empty box traveling in space to be one that carries its emptiness with it, as to assume that fresh emptiness comes in at one end and leaves by the other. If you set the two to be equal mathematically, you conclude that both time and space vary with velocity. Similar analysis will show that atoms are real, and that energy must travel in packets, quanta. Einstein also did fun work on the curvature of rivers, and was a fan of this sail ship design. Here is some more on the scientific method.

Dark matter: why our galaxy still has its arms

Our galaxy may have two arms, or perhaps four. It was thought to be four until 2008, when it was reduced to two. Then, in 2015, it was expanded again to four arms, but recent research suggests it’s only two again. About 70% of galaxies have arms, easily counted from the outside, as in the picture below. Apparently it’s hard to get a good view from the inside.

Four armed, spiral galaxy, NGC 2008. There is a debate over whether our galaxy looks like this, or if there are only two arms. Over 70% of all galaxies are spiral galaxies. 

Logically speaking, we should not expect a galaxy to have arms at all. For a galaxy to have arms, it must rotate as a unit. Otherwise, even if the galaxy had arms when it formed, it would lose them by the time the outer rim rotated even once. As it happens we know the speed of rotation and age of galaxies; they’ve all rotated 10 to 50 times since they formed.

For stable rotation, the rotational acceleration must match the force of gravity, and this force decreases with distance from the massive center. Thus, we'd expect the stars to circle much faster the closer they are to the center of the galaxy. We see that Mercury circles the sun much faster than we do, and that we circle much faster than the outer planets. If stars circled the galactic core this way, any arm structure would be long gone. We see that the galactic arms are stable, and to explain this, we've proposed the existence of lots of unseen, dark matter. This matter has to have some peculiar properties, behaving as a light gas that doesn't spin with the rest of the galaxy, or absorb light, or reflect it. Some years ago, I came to believe that there was only one gas distribution that fit, and challenged folks to figure out the distribution.

The mass of the particles that make up this gas has to be very light, about 10⁻⁷ eV, some 2 × 10¹² times lighter than an electron, and very slippery. Some researchers had posited large, dark rocks, but I preferred to imagine a particle called the axion, and I expected it would be found soon. The particle mass had to be about this, or the gas would shrink down to the center of the galaxy, or start to spin, or fill the universe; in any of these cases, galaxies would not be stable. The problem is, we've been looking for years, and we have not seen any particle like this. What's more, continued work on the structure of matter suggests that no such particle should exist. At this point, galactic stability is a bigger mystery than it was 40 years ago.

So how to explain galactic stability if there is no axion? One thought, from Mordechai Milgrom, is that gravity does not work as we thought. This is an annoying explanation: it involves a complex revision of general relativity, a beautiful theory that seems to be generally valid. Another, more recent explanation is that the dark matter is regular matter that somehow became an entangled superfluid, despite the low density and relatively warm temperatures of interstellar space. This has been proposed by Justin Khoury, here. Either theory would explain the slipperiness, and the fact that the gas does not interact with light, but the details don't quite work. For one, I'd still think that the entangled particle mass would have to be quite light; maybe a neutrino would fit (entangled neutrinos?). But superfluids don't usually exist at space temperatures and pressures, long distances (light years) should preclude entanglement, and neutrinos don't seem to interact at all.

Sabine Hossenfelder suggests a combination of modified gravity and superfluidity. Some version of this might fit the observations better, but it doubles the amount of new physics required. Sabine does a good science video blog, BTW, with humor and less math. She doesn't believe in free will or religion, or entropy. By her, the Big Bang was caused by a mystery particle called an inflaton that creates mass and energy from nothing. She claims that the worst thing you can do in terms of resource depletion is have children, and she seems to believe religious education is child abuse. Some of her views I agree with; with many, I do not. I think entropy is fundamental, and I think people are good. Also, I see no advantage in saying "In the beginning an inflaton created the heavens and the earth," but there you go. It's not like I know what dark matter is any better than she does.

There are some 200 billion galaxies, generally with 100 billion stars each. Our galaxy is about 150,000 light years across, 1.5 × 10¹⁸ km. It appears to behave, more or less, as a solid disk, having rotated about 15 full turns since its formation 10 billion years ago. The speed at the edge is thus about π × 1.5 × 10¹⁸ km / 3 × 10¹⁶ s ≈ 160 km/s. That's not relativistic, but it is 16 times the speed of our fastest rockets. The vast majority of the mass of our galaxy would have to be dark matter, with relatively little between galaxies. Go figure.
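For those who like to check the arithmetic, here's a sketch of the rim-speed estimate; the exact number depends on the turn count and age assumed, and lands in the same range as the figure above:

```python
import math

# Rim speed of the galaxy as a solid disk: 15 turns in 10 billion
# years around a 1.5e18 km diameter. A sketch; the result depends
# on the turn count and age assumed.
diameter_km = 1.5e18
age_s = 10e9 * 3.15e7     # 10 billion years, in seconds
turns = 15

rim_speed = math.pi * diameter_km * turns / age_s   # km/s
print(round(rim_speed))   # ~224 km/s, same order as the text
```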

Robert Buxbaum, May 24, 2023. I’m a chemical engineer, PhD, but studied some physics and philosophy.

Rotating sail ships and why your curve ball doesn’t curve.

The Flettner-sail ship, Barbara, 1926.

Sailing ships are wonderfully economical and non-polluting. They have unlimited range because they use virtually no fuel, but they tend to be slow, about 5-12 knots, roughly half as fast as Diesel-powered ships, and they can be stranded for weeks if the wind dies. Classic sailing ships also require a lot of manpower: many skilled sailors to adjust the sails. What's wanted is an easily manned, economical, hybrid ship: one that's powered by Diesel when the wind is light, and by a simple sail system when the wind blows. Anton Flettner invented an easily managed sail and built two ships with it. The Barbara, above, used a 530 hp Diesel and got additional thrust, about another 500 hp worth, from three rotating, cylindrical sails. The rotating sails produced thrust via the same Magnus force that makes a curve ball curve. Barbara went 9 knots without the wind, or about 12.5 knots when the wind blew. Einstein thought it one of the most brilliant ideas he'd seen.

Force diagram of Flettner rotor (Lele & Rao, 2017)

The source of the force can be understood with the help of the figure at left and the graph below. When a simple cylinder sits in the wind with no spin, α = 0, the wind force is essentially drag. It is 1/2 the wind speed squared, times the cross-sectional area of the cylinder, D × h, times the density of air, all times a drag coefficient, CD, that is about 1 for a non-spinning cylinder and about 2 for a fast-spinning one: FD = CD D h ρ v²/2.

A spinning cylinder has a lift force too: FL = CL D h ρ v²/2.

Numerical lift coefficients versus time (seconds) for different ratios of surface speed to wind speed, α. (Mittal & Kumar 2003), Journal of Fluid Mechanics.

As graphed in the figure at right, CL is effectively zero, with sustained vibrations, at zero spin, α = 0. Vibrations are useless for propulsion and can be damaging to the sail, though they are helpful in baseball pitching, producing the erratic flight of knuckle balls. If you spin a cylindrical mast at more than α = 2.1, the vibrations disappear and you get significant lift, CL = 6. At this rotation speed, the fast surface moves with the wind at 2.1 times the wind speed; that is, it moves significantly faster than the wind. The other side of the rotor moves opposite the wind, 1.1 times as fast as the wind. The coefficient of lift, CL = 6, is more than twice that found with a typical, triangular, non-rotating sail. Rotation increases the drag too, but not as much: the lift is about 4 times the drag, far better than in a typical sail. Another plus is that the ship can be propelled forward or backward; just reverse the spin direction. This is very good for close-in sailing.

The sail lift, and the lift-to-drag ratio, increase with rotation speed, reaching very high values of 10 to 18 at α values of 3 to 4. Flettner considered α = 3.5 optimal. At this α you get far more thrust than with a normal sail, and you can go faster than the wind, and far closer to the wind, than with any normal sail. You don't want α values above 4.2 because you start seeing vibrations again. Also, more rotation power is needed (rotation power goes as ω²); unless the wind is strong, you might as well use a normal propeller.
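To get a feel for the thrust, here's a sketch of the lift formula for a single rotor. The rotor dimensions and wind speed are assumed examples, not the Barbara's actual numbers:

```python
# Magnus lift on one rotor sail, F_L = C_L * D * h * rho * v^2 / 2.
# The rotor size and wind speed here are assumed examples, not the
# Barbara's actual dimensions.
def rotor_lift_N(D_m, h_m, wind_m_s, C_L=6.0, rho_air=1.225):
    return C_L * D_m * h_m * rho_air * wind_m_s ** 2 / 2

# A 3 m diameter x 15 m tall rotor in a 10 m/s (~20 knot) wind:
print(round(rotor_lift_N(3.0, 15.0, 10.0)))  # ~16,500 N per rotor
```

Three such rotors at a ship speed of a few m/s work out to a few hundred horsepower of propulsion, consistent with the ~500 hp figure quoted for the Barbara.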

The driving force is always at right angles to the perceived wind, called the “fair wind”, and the fair wind moves towards the front as the ship speed increases. Controlling the rotation speed is somewhat difficult but important. Flettner sails were no longer used by the 1930s because fuel became cheaper and control was difficult. Normal sails weren’t being used either for the same reasons.

In the early 1980s, there was a return to the romantic. The famous underwater explorer Jacques Cousteau revived a version of the Flettner sail for his exploratory ship, the Alcyone. He used aluminum sails and an electric motor for rotation. He claimed that the ship drew more than half of its power from the wind, and claimed that, because of computer control, it could sail with no crew. This claim was likely bragging, but he bragged a lot. Even with today's computer systems, people are needed to steer and manage things in case something goes wrong. The energy savings were impressive, though, enough so that some have begun to put Flettner sails on cargo ships, as a retrofit. This is an ideal use, since cargo ships go about as fast as a typical wind, 10-20 knots. It's reported that Flettner-powered cargo ships get about 20% of their propulsion from wind power, not an insignificant amount.

And this gets us to the reason your curve ball does not curve: it’s likely you’re not spinning it fast enough. To get a good curve, you want the ball to spin at α =3, or about 1.5 times the rate you’d get by rolling the ball off your fingers. You have to snap your wrist hard to get it to spin this fast. As another approach, you can aim for α=0, a knuckle ball, achieved with zero rotation. At α=0, the ball will oscillate. It’s hard to do, but your pitch will be nearly impossible to hit or catch. Good luck.

Robert Buxbaum, March 22, 2023. There are also Flettner airplane designs, where horizontal, cylindrical "wings" rotate to provide high lift with short wings and a relatively low power draw. So far, these planes are less efficient and slower than a normal helicopter. The idea could bear more development work, IMHO. Einstein had an eye for good ideas.

Hydrogen transport in metallic membranes

The main products of my company, REB Research, involve metallic membranes, often palladium-based, that provide 100% selective hydrogen filtering or long-term hydrogen storage. One way to understand why these metallic membranes provide 100% selectivity is that metal atoms are much bigger than hydrogen ions, with absolutely regular, small spaces between them that fit hydrogen and nothing else.

Palladium atoms are essentially spheres. In the metallic form, the atoms pack in an FCC structure (face-centered cubic) with a radius of 1.375 Å. There is a cloud of free electrons that provides conductivity and heat transfer, but as far as the structure of the metal goes, there is only a tiny space of 0.426 Å between the atoms, see below. This hole is too small for any molecule, or any inert gas. In the gas phase, hydrogen molecules are about 1.06 Å in diameter, and other molecules are bigger. Hydrogen atoms shrink when inside a metal, though, to 0.3 to 0.4 Å, just small enough to fit through the holes.
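As a rough geometric check, the standard FCC interstitial ratios give the hole radii directly from the atomic radius. This is a sketch; these octahedral and tetrahedral hole radii measure the gaps a bit differently from the 0.426 Å figure quoted above:

```python
# Interstitial hole radii in an FCC lattice, from standard geometry:
# an octahedral hole fits a sphere of radius 0.414*r, a tetrahedral
# hole 0.225*r. A sketch using the Pd atomic radius from the text;
# these measure the gaps differently from the 0.426 Angstrom figure.
r_pd = 1.375   # Pd atomic radius, Angstrom
oct_hole = 0.414 * r_pd
tet_hole = 0.225 * r_pd
print(round(oct_hole, 3), round(tet_hole, 3))  # 0.569 0.309 Angstrom
```

Either way, the holes are a fraction of an angstrom: big enough for a shrunken hydrogen atom, far too small for anything else.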

The reason that hydrogen shrinks has to do with its electron leaving to join palladium's conduction cloud. Hydrogen is usually put on the upper left of the periodic table because, in most cases, it behaves as a metal. Like a metal, it reacts with oxygen and chlorine, forming stoichiometric compounds like H2O and HCl. It also behaves like a metal in that it alloys, non-stoichiometrically, with other metals. Not with all metals, but with many, Pd and the transition metals in particular. Metal atoms are a lot bigger than hydrogen, so there is little metallic expansion on alloying; the hydrogen fits in the tiny spaces between atoms. I've previously written about hydrogen transport through transition metals (we provide membranes for this too).

No other atom or molecule fits in the tiny space between palladium atoms. Other atoms and molecules are bigger, 1.5 Å or more in size. This is far too big to fit in a hole 0.426 Å in diameter. The result is that palladium is basically 100% selective to hydrogen. Other metals are too, but palladium is particularly good in that it does not readily oxidize. We sometimes sell transition metal membranes and sorbers, but typically coat the underlying metal with palladium.

We don’t typically sell products of pure palladium, by the way. Instead, most of our products use Pd-25%Ag or Pd-Cu. These alloys are slightly cheaper than pure Pd and more stable. Pd-25%Ag is also slightly more permeable to hydrogen than pure Pd is — a win-win-win for the alloy.

Robert Buxbaum, January 22, 2023

Fusion advance: LLNL’s small H-bomb, 1.5 lb TNT didn’t destroy the lab.

There was a major advance in nuclear fusion this month at the National Ignition Facility of Lawrence Livermore National Laboratory (LLNL), but the press could not quite figure out what it was. They claimed ignition, and it was not. They claimed that it opened the door to limitless power. It did not. Some heat-energy was produced, but not much: 2.5 MJ was reported. Translated to the English system, that’s 600 kCal, about as much heat as in a “Big Mac”. That’s far less energy than went into the lasers that set the reaction off. The importance wasn’t the amount of energy produced, in my opinion; it’s that the folks at LLNL fired off a small hydrogen bomb, in house, and survived the explosion. 600 kCal is about the explosive power of 1.5 lb of TNT.

Many laser beams converge on a droplet of deuterium-tritium, setting off the explosion of a small fraction of the fuel. The explosion had about the power of 1.2 kg of TNT. Drawing from IEEE Spectrum.

The process, as reported in the Financial Times, involved a “BB-sized” droplet of holmium-enclosed deuterium and tritium. The folks at LLNL fast-cooked this droplet using some 100 lasers (see figure), 2.1 MJ total output, converging on one spot simultaneously. As I understand it, 4.6 MJ came out, 2.5 MJ more than went in. The impressive part is that the delicate lasers survived the event. By comparison, the blast that brought down Pan Am flight 103 over Lockerbie took only 2-3 ounces of explosive, about 70 g. The folks at LLNL say they can do this once per day, something I find impressive.

The New York Times seemed to think this was ignition. It was not. Given the size of a BB, and the density of liquid deuterium-tritium, it would seem the weight of the drop was about 0.022 g. This is not much, but if it were all fused, it would release 12 GJ, the equivalent of about 3 tons of TNT. That the energy released was only 2.5 MJ suggests that only 0.02% of the droplet was fused. It is possible, though unlikely, that the folks at LLNL could have ignited the entire droplet. If they had, the damage from 3 tons of TNT equivalent would certainly have wrecked the facility. And that’s part of the problem; to make practical energy, you need to ignite the whole droplet and do it every second or so. That’s to say, you have to burn the equivalent of 5000 Big Macs per second.
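The arithmetic in this paragraph is easy to check with a short script. All the inputs (the 2.5 MJ reported yield, the ~12 GJ full-burn estimate, the ~2.5 MJ Big Mac) are the numbers from the text; the TNT equivalence of 4.184 GJ per ton is the standard convention:

```python
# Sanity-check of the LLNL shot arithmetic, using the numbers in the text.
E_RELEASED_J = 2.5e6     # reported fusion energy output, 2.5 MJ
E_FULL_BURN_J = 12e9     # estimated yield if the whole 0.022 g droplet fused
TNT_J_PER_TON = 4.184e9  # standard TNT equivalence, J per metric ton
BIG_MAC_J = 2.5e6        # ~600 kCal per Big Mac, in joules

burn_fraction = E_RELEASED_J / E_FULL_BURN_J          # fraction of fuel fused
tnt_tons_full = E_FULL_BURN_J / TNT_J_PER_TON         # full-burn TNT equivalent
big_macs_per_droplet = E_FULL_BURN_J / BIG_MAC_J      # Big Macs per full droplet

print(f"fraction of droplet fused: {burn_fraction:.2%}")         # ~0.02%
print(f"full-burn TNT equivalent: {tnt_tons_full:.1f} tons")     # ~2.9 tons
print(f"Big Macs per droplet, fully burnt: {big_macs_per_droplet:.0f}")  # ~4800
```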

You also need the droplets to be a lot cheaper than they are. Today, these holmium capsules cost about $100,000 each. We will need to make them, one per second, at a cost around $1 for this to make any sort of sense. Not to say that the experiments are useless. This is a great way to test H-bomb designs without destroying the environment. But it’s not a practical energy production method. Even ignoring the energy input to the laser, it is impossible to deal with energy when it comes in the form of huge explosions. In a sense we got unlimited power. Unfortunately it’s in the form of H-bombs.

Robert Buxbaum, January 5, 2023

Of covalent bonds and muon catalyzed cold fusion.

A hydrogen molecule consists of two protons held together by a covalent bond. One way to think of such bonds is to imagine that only one electron is directly involved, as shown below. The bonding electron spends only 1/7 of its time between the protons, making the bond; the other 6/7 of the time, the electron shields the two protons by 3/7 e each, reducing the effective charge of each proton to 4/7 e+.

We see that the two shielded protons will repel each other with a force FR = Ke (16/49) e²/r², where e is the charge of an electron or proton, r is the distance between the protons (r = 0.74 Å = 0.74×10⁻¹⁰ m), and Ke is Coulomb’s electrical constant, Ke ≈ 8.988×10⁹ N⋅m²⋅C⁻². The attractive force is calculated similarly, as each proton attracts the central electron by FA = -Ke (4/49) e²/(r/2)². The forces are seen to be in balance; the net force is zero.
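This force balance can be verified numerically. The sketch below uses the constants stated in the text; the 4/49 factor in the attraction is the 1/7 e central electron charge times the 4/7 e effective proton charge:

```python
KE = 8.988e9   # Coulomb's electrical constant, N·m²/C²
E = 1.602e-19  # elementary charge, C
R = 0.74e-10   # H-H bond length, m

# Repulsion between the two shielded protons, each at effective charge 4/7 e:
F_repel = KE * (4/7)**2 * E**2 / R**2

# Attraction of a 4/7 e proton to the 1/7 e central electron, at distance r/2:
F_attract = KE * (4/7) * (1/7) * E**2 / (R/2)**2

# The (r/2)² in the denominator cancels the factor-of-4 charge difference,
# so the two forces come out exactly equal, ≈ 1.4e-8 N each.
print(F_repel, F_attract)
```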

It is because of quantum mechanics that the bond is the length that it is. If the atoms were to move closer than r = 0.74 Å, the central electron would be confined to less space and would gain energy, causing it to spend less time between the two protons. With less of an electron between them, FR would be greater than FA and the protons would repel. If the atoms moved further apart than 0.74 Å, a greater fraction of the electron would move to the center, FA would increase, and the atoms would attract. This is a fairly pleasant way to understand why the hydrogen side of all hydrogen covalent bonds is the same length. It’s also a nice introduction to muon-catalyzed cold fusion.

Most fusion takes place only at high temperatures: at 100 million °C in a TOKAMAK fusion reactor, or at about 15 million °C in the high-pressure interior of the sun. Muon-catalyzed fusion creates the equivalent of a much higher pressure, so that fusion occurs at room temperature. The trick to muon-catalyzed fusion is to replace one of the electrons with a muon, an unstable, heavy electron-like particle discovered in 1936. The muon, designated µ-, behaves just like an electron but has about 207 times the mass. As a result, when it replaces an electron in hydrogen, it forms a covalent bond that is about 1/207th the length of a normal bond. This is the equivalent of extreme pressure. At this closer distance, hydrogen nuclei fuse even at room temperature.

In normal hydrogen, the nuclei are just protons. When they fuse, one of them becomes a neutron. You get a deuteron (a proton-neutron pair), plus an anti-electron, and 1.44 MeV of energy after the anti-electron has annihilated (for more on antimatter see here). The muon is released most of the time, and can catalyze many more fusion reactions. See figure at right.

While 1.44MeV per reaction is a lot by ordinary standards — roughly one million times more energy than is released per atom when hydrogen is burnt — it’s very little compared to the energy it takes to make a muon. Making a muon takes a minimum of 1000 MeV, and more typically 4000 MeV using current technology. You need to get a lot more energy per muon if this process is to be useful.

You get quite a lot more energy when a muon catalyzes deuterium-deuterium fusion. With these reactions, you get 3.3 to 4 MeV worth of energy per fusion, and the muon is ejected with enough force to support about eight D-D fusions before it decays or sticks to a helium atom. That’s better than before, but still not enough to justify the cost of making the muon.

The next reactions to consider are D-T fusion and Li-D fusion. Tritium is an even heavier isotope of hydrogen. It undergoes muon-catalyzed fusion with deuterium via the reaction D + T –> ⁴He + n + 17.6 MeV. Because of the higher energy of the reaction, the muons are even less likely to stick to a helium atom, and you get about 100 fusions per muon. 100 x 17.6 MeV = 1.76 GeV, barely break-even for the high energy cost of making the muon, but there is no reason to stop there. You can use the high-energy fusion neutrons to catalyze LiD fusion. For example, 2 LiD + n –> 3 ⁴He + T + D + n, producing 19.9 MeV and a tritium atom.

With this additional 19.9 MeV per DT fusion, the system can start to produce usable energy for sale. It is also important that tritium is made in the process. You need tritium for the fusion reactions, and there are not many other supplies. The spare neutron is interesting too. It can be used to make additional tritium or for other purposes. It’s a direction I’d like to explore further. I worked on making tritium for my PhD, and in my opinion, this sort of hybrid operation is the most attractive route to clean nuclear fusion power.

Robert Buxbaum, September 8, 2022. For my appraisal of hot fusion, see here.

A more accurate permeation tester

There are two ASTM-approved methods for measuring the gas permeability of a material. The equipment is very similar, and REB Research makes equipment for either. In one of these methods (described in detail here), you measure the rate of pressure rise in a small volume. This method is ideal for high permeation rate materials. It’s fast, reliable, and as a bonus, allows you to infer diffusivity and solubility as well, based on the permeation and breakthrough time.

Exploded view of the permeation cell.

For slower permeation materials, I’ve found you are better off with the other method: using a flow of sampling gas (helium typically, though argon can be used as well) and a gas-sampling gas chromatograph. We sell the cells for this, though not the gas chromatograph. For my own work, I use helium as the carrier gas and sampling gas, along with a GC with a 1 cc sampling loop (a coil of stainless steel tube) and an automatic, gas-operated valve, called a sampling valve. I use a VECO ionization detector since it provides the greatest sensitivity for differentiating hydrogen from helium.

When doing an experiment, the permeate gas is put into the upper chamber. That’s typically hydrogen for my experiments. The sampling gas (helium in my setup) is made to flow past the lower chamber at a fixed flow rate, 20 sccm or less. The sampling gas then flows to the sampling loop of the GC, and from there up the hood. Every 20 minutes or so, the sampling valve switches, sending the sampling gas directly out the hood. When the valve switches, the carrier gas (helium) now passes through the sampling loop on its way to the column. This sends the 1 cc of sample directly to the GC column as a single “injection”. The GC column separates the various gases in the sample and determines the components and the concentration of each. From the helium flow rate, and the hydrogen concentration in it, I determine the permeation rate and, from that, the permeability of the material.

As an example, let’s assume that the sample gas flow is 20 sccm, as in the diagram above, and that the GC determines the H2 concentration to be 1 ppm. The permeation rate is thus 20 x 10⁻⁶ std cc/minute, or 3.33 x 10⁻⁷ std cc/s. The permeability is now calculated from the permeation area (12.56 cm² for the cells I make), from the material thickness, and from the upstream pressure. Typically, one measures the thickness in cm and the pressure in cm of Hg, so that 1 atm is 76 cmHg. The result is that permeability is determined in a unit called the barrer. Continuing the example above, if the upstream hydrogen is 15 psig, that’s 2 atmospheres absolute, or 152 cmHg. Let’s say that the material is a polymer of thickness 0.3 cm; we thus conclude that the permeability is 0.524 x 10⁻¹⁰ scc·cm/(s·cm²·cmHg) = 0.524 barrer.

This method is capable of measuring permeabilities lower than the previous method, easily lower than 1 barrer, because the results are not fogged by small air leaks or degassing from the membrane material. Leaks of oxygen and nitrogen show up on the GC output as peaks that are distinct from the permeate peak (hydrogen, or whatever you’re studying as a permeate gas). Another plus of this method is that you can measure the permeability of multiple gas species simultaneously, a useful feature when evaluating gas separation polymers. If this type of approach seems attractive, you can build a cell like this yourself, or buy one from us. Send us an email to reb@rebresearch.com, or give us a call at 248-545-0155.

Robert Buxbaum, April 27, 2022.

Low temperature hydrogen removal

Platinum catalysts can be very effective at removing hydrogen from air. Platinum promotes the irreversible reaction of hydrogen with oxygen to make water: H2 + 1/2 O2 –> H2O, a reaction that can take off, at great rates, even at temperatures well below freezing. In the 1800s, when platinum was cheap, platinum powder was used to light town-gas street lamps. In those days, street lamps were not fueled by methane, ‘natural gas’, but by ‘town gas’, a mix of hydrogen and carbon monoxide with many impurities like H2S. It was made by reacting coal and steam in a gas plant, and it is a testament to the catalytic power of Pt that it could light this town gas. These impurities are catalytic poisons: when exposed to them, any catalyst, including platinum, loses its power to react. This is especially true at low temperatures, where product water condenses, and this too poisons the catalytic surface.

Nowadays, platinum is expensive and platinum catalysts are no longer made of Pt powder, but rather by coating a thin layer of Pt metal on a high surface area substrate like alumina, ceria, or activated carbon. At higher temperatures, this distribution of Pt improves the reaction rate per gram Pt. Unfortunately, at low temperatures, the substrate seems to be part of the poisoning problem. I think I’ve found a partial way around it though.

My company, REB Research, sells Pt catalysts for hydrogen removal use down to about 0°C, 32°F. For those needing lower temperature hydrogen removal, we offer a palladium-hydrocarbon getter that continues to work down to -30°C and works both in air and in the absence of air. It’s pretty good, but poisons more readily than Pt does when exposed to H2S. For years, I had wanted to develop a version of the platinum catalyst that works well down to -30°C or so, and ideally that worked both in air and without air. I got to do some of this development work during the COVID downtime year.

My current approach is to add a small amount of teflon and other hydrophobic materials. My theory is that normal Pt catalysts form water so readily that the water coats the catalytic surface and substrate pores, choking the catalyst off from contact with oxygen or hydrogen. My thought on why our Pd-organic getter works better than Pt is that it’s in part because Pd is a slower water-former, and in part because the organic compounds prevent water condensation. If so, teflon + Pt should be more active than uncoated Pt catalyst. And it is so.

Think of this in terms of the Van der Waals equation of state: (p + a/Vm²)(Vm − b) = RT,

where Vm is the molar volume. The substance-specific constants a and b can be understood as an attraction force between molecules and a molecular volume, respectively. Alternately, they can be calculated from the critical temperature and pressure as

a = 27(RTc)²/64pc and b = RTc/8pc.

Now, I’m going to assume that the effect of a hydrophobic surface near the Pt is to reduce the effective value of a. This is to say that water molecules still attract as before, but there are fewer water molecules around. I’ll assume that b remains the same. Thus the ratio of Tc to pc remains the same, but the values drop by a factor related to the decrease in water density. If we imagine using enough teflon to decrease the number of water molecules by 60%, that would be enough to reduce the critical temperature by 60%, that is, from 647 K (374°C) to 259 K, or -14°C. This might be enough to allow Pt catalysts to be used for H2 removal from the gas within a nuclear waste cask. I’m into nuclear, both because of its clean power density and its space density. As for nuclear waste, you need these casks.
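The scaling step above can be made explicit. Combining a = 27(RTc)²/64pc and b = RTc/8pc gives Tc = 8a/(27Rb), which is linear in a, so if a drops to 40% of its value while b is unchanged, Tc drops to 40% as well. A minimal sketch of that arithmetic, under the 60%-fewer-water-molecules assumption from the text:

```python
TC_WATER_K = 647.0  # critical temperature of water, K

def tc_scaled(a_fraction):
    """Tc = 8a/(27Rb) is linear in a; scaling a scales Tc by the same factor
    (b, and hence the Tc/pc ratio, assumed unchanged)."""
    return TC_WATER_K * a_fraction

tc_new = tc_scaled(0.40)  # 60% fewer water molecules near the hydrophobic surface
print(tc_new, tc_new - 273.15)  # ≈ 258.8 K, about -14 °C
```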

I’ve begun to test my theory by making hydrogen removal catalysts that use both platinum and palladium along with unsaturated hydrocarbons. I find this works far better than the palladium-hydrocarbon getter, at least at room temperature. It works well even when the catalyst is completely soaked in water, but the real experiments are yet to come: how does this work in the cold? Originally I planned to use a freezer for these tests, but I now have a better method: wait for winter and use God’s giant freezer.

Robert E. Buxbaum October 20, 2021. I did a fuller treatment of the thermo above, a few weeks back.