Category Archives: thermodynamics

How I size heat exchangers

Heat exchange is a key part of most chemical process designs. Heat exchangers save money because they’re generally cheaper than heaters plus the continuing cost of the fuel or electricity to run those heaters. They also usually provide free, fast cooling for the product; often the product is made hot and needs to be cooled. Hot products are usually undesirable, and free, fast cooling is good.

So how do you design a heat exchanger? A common design is to weld the right number of tubes inside a shell, so it looks like the drawing below. The hot fluid might be made to go through the tubes, and the cold through the shell, as shown, or the hot can flow through the shell. In either case, the flows usually run in opposite directions so there is a hot end and a cold end, as shown. In this essay, I’d like to discuss how I design our counter-current heat exchangers, beginning with a common case (for us) where the two flows have the same thermal inertia, e.g. the same mass flow rates and the same heat capacities. That’s the situation with our hydrogen purifiers: impure hydrogen goes in cold and is heated to 400°C for purification. Virtually all of this hot hydrogen exits the purifier in the “pure out” stream and needs to be cooled to room temperature, or nearly so.

Typical shell and tube heat exchanger design, Black Hills inc.

For our typical design, where the hot flow in one direction is matched by an equal cold flow in the other, I will show that the temperature difference is constant all along the heat exchanger. As a first-pass rule of thumb, I design so that this constant temperature difference is 30°C. That is, ∆THX ≈ 30°C at every point along the heat exchanger. More specifically, in our Mr Hydrogen® purifiers, the impure feed hydrogen typically enters at 20°C and is heated by the heat exchanger to 370°C, that is, to 30°C cooler than the final process temperature. The hydrogen must be heated this last 30°C with electricity. After purification, the hot, pure hydrogen, at 400°C, enters the heat exchanger and leaves at 30°C above the input temperature, that is, at 50°C. It’s hot, but not scalding. The last 30°C of cooling is done with air blown by a fan.

The power demand of the external heat source, the electric heater, is calculated as: Wheater = flow (mols/second) * heat capacity (J/mol-°C) * ∆Theater, where ∆Theater = ∆THX = 30°C.

The smaller the value of ∆THX, the less electric draw you need for steady state operation, but the more you have to pay for the heat exchanger. For small flows, I often use a higher value of ∆THX, and for large flows a smaller one, but 30°C is a good place to start.

Now to size the heat exchanger. Because the flow rate of hot fluid (purified hydrogen) is virtually the same as that of cold fluid (impure hydrogen), the heat capacity per mol of product coming out is the same as per mol of feed going in. Since enthalpy change equals heat capacity times temperature change, ∆H = Cp∆T, with the effective Cp the same for both fluids, and any rise in H of the cool fluid coming from the hot fluid, we can draw a temperature vs enthalpy diagram that will look like this:

The heat exchanger heats the feed from 20°C to 370°C. ∆T = 350°C. It also cools the product 350°C, that is from 400 to 50°C. In each case the enthalpy exchanged per mol of feed (or product) is ∆H = Cp*∆T = 7*350 = 2450 calories.

Since most heaters work in Watts, not calories, at some point it’s worthwhile to switch to Watts. 1 cal = 4.184 J, so 1 cal/sec = 4.184 W. I tend to do calculations in mixed units (English and SI) because the heat capacity per mole of most things is a simple number in English units. Cp (water), for example, = 1 cal/g = 18 cal/mol; Cp (hydrogen) = 7 cal/mol. In SI units, the heat rate, WHX, is:

WHX = flow (mols/second) * heat capacity per mol (J/mol-°C) * ∆Tin-out (350°C).

The flow rate in mols per second is the flow rate in slpm divided by 22.4 x 60. Since the driving force for transfer is 30°C, the area of the heat exchanger is WHX times the resistance divided by ∆THX:

A = WHX * R / 30°C.

Here, R is the average resistance to heat transfer, in m2*°C/Watt. It equals the sum of all the resistances, essentially the sum of the resistance of the steel of the heat exchanger plus that of the two gas phases:

R= δm/km + h1+ h2

Here, δm is the thickness of the metal, km is the thermal conductivity of the metal, and h1 and h2 are the gas-phase heat transfer parameters in the feed and product flow respectively. You can often estimate these as δ1/k1 and δ2/k2 respectively, with k1 and k2 as the thermal conductivity of the feed and product, both hydrogen in my case. As for δ, the effective gas-layer thickness, I generally estimate this as 1/3 the thickness of the flow channel, for example:

h1 = δ1/k1 = 1/3 D1/k1.

Because δ is smaller the smaller the diameter of the tubes, h is smaller too. Also, small tubes tend to be cheaper than big ones, and more compact. I thus prefer to use small-diameter tubes and small gaps. In my heat exchangers, the tubes are often 1/4″ or bigger, but the gap sizes are targeted to 1/8″ or less. If the gap size gets too small, you get excessive pressure drops and non-uniform flow, so you have to check that the pressure drop isn’t too large. I tend to stick to normal tube sizes, and tweak the design a few times within those parameters, considering customer needs. Only after the numbers look good to my aesthetics do I make the product. Aesthetics plays a role here: you have to have a sense of what a well-designed exchanger should look like.
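
To make the chain of numbers concrete, here is a minimal Python sketch of the sizing procedure above. The flow rate, wall thickness, gap sizes and conductivities are illustrative assumptions, not the specs of any particular purifier.

```python
# A rough sizing pass for a counter-current exchanger with equal flows on both
# sides, following the steps above. All inputs are illustrative assumptions.

flow_slpm = 50.0             # assumed feed rate, standard liters per minute
cp = 7.0 * 4.184             # Cp of H2: 7 cal/mol-C, converted to J/mol-C
dT_process = 350.0           # feed heated 20 -> 370 C; product cooled 400 -> 50 C
dT_hx = 30.0                 # design temperature difference along the exchanger, C

mol_per_s = flow_slpm / (22.4 * 60.0)     # slpm to mol/s
W_hx = mol_per_s * cp * dT_process        # duty carried by the exchanger, W
W_heater = mol_per_s * cp * dT_hx         # electric heater covers the last 30 C, W

# Resistance per unit area, R (m2*C/W): metal wall plus the two gas films,
# each gas film taken as ~1/3 of its channel dimension divided by k of the gas.
wall, k_steel = 0.0012, 16.0              # assumed 1.2 mm stainless wall, W/m-C
tube_id, gap = 0.005, 0.003               # assumed ~1/4" tube ID and ~1/8" shell gap, m
k_h2 = 0.26                               # assumed average H2 conductivity, W/m-C
R = wall / k_steel + (tube_id / 3) / k_h2 + (gap / 3) / k_h2

A = W_hx * R / dT_hx                      # exchanger area, m2
print(f"W_hx = {W_hx:.0f} W, W_heater = {W_heater:.0f} W, R = {R:.4f} m2C/W, A = {A:.2f} m2")
```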

The above calculations are fine for the simple case where ∆THX is constant. But what happens if it is not? Let’s say the feed is impure, so some hot product has to be vented, leaving less hot fluid in the heat exchanger than feed. I show this in the plot at right for the case of 14% impurities. Since there is no phase change, the lines are still straight, but they are no longer parallel. Because more thermal mass enters than leaves, the hot gas is cooled completely, that is to 50°C, 30°C above room temperature, but the cool gas is heated at only 7/8 the rate that the hot gas is cooled. The hot gas gives off 2450 cal per mol as before, but this is now only enough to heat the cold fluid by 2450/8 = 306.25°. The cool gas thus leaves the heat exchanger at 20°C + 306.25° = 326.25°C.

The simple way to size the heat exchanger now is to use an average value for ∆THX. In the diagram, ∆THX is seen to vary between 30°C at one end and about 74°C at the other. As a conservative average, I’ll assume that ∆THX = 40°C, though 50 to 60°C might be more accurate. This results in a smaller heat exchanger design, 3/4 the size of before, that is still overdesigned by about 25%. There is no great downside to this overdesign. With over-design, the hot fluid leaves at a lower ∆THX, that is, at a temperature below 50°C. The cold fluid will be heated to a bit more than the 326.25°C predicted, perhaps to 330°C. We save more energy, and waste a bit on materials cost. There is a “correct approach”, of course, and it involves the use of calculus: A = ∫dA = ∫(R/∆THX) dWHX, using an analytic function for ∆THX as a function of WHX. Calculating this way takes lots of time for little benefit. My time is worth more than a few ounces of metal.
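
For the straight-line, unequal-flow case, the integral above works out to the familiar log-mean temperature difference, so a quick way to sanity-check the averaging shortcut is to compare the guessed average against the log-mean of the two end-point ∆T values. A sketch, using the end-point values from the example above:

```python
import math

dT_cold_end = 30.0             # hot gas out at 50 C, feed in at 20 C
dT_hot_end = 400.0 - 326.25    # hot gas in at 400 C, feed out near 326 C

# Log-mean temperature difference: the exact average when both T-H lines are straight.
lmtd = (dT_hot_end - dT_cold_end) / math.log(dT_hot_end / dT_cold_end)
print(f"hot-end dT = {dT_hot_end:.1f} C, log-mean dT = {lmtd:.1f} C")

# Area scales as 1/dT, so a 40 C guess oversizes the exchanger by roughly this factor,
# close to the ~25% overdesign mentioned above.
print(f"oversize factor for a 40 C guess: {lmtd / 40.0:.2f}")
```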

The only times that I do the correct analysis are with flame boilers, with major mismatches between the hot and cold flows, or when the government requires calculations. Otherwise, I make an H vs T diagram and account for the fact that ∆T varies with H by averaging. I doubt most people do any more than that. It’s not like ∆THX = 30°C is etched in stone somewhere, either; it’s a rule of thumb, nothing more. It’s there to make your life easier, not to be worshiped.

Robert Buxbaum June 3, 2024

Einstein’s theory of diffusion in liquids, and my extension.

In 1905 and 1908, Einstein developed two formulations for the diffusion of a small particle in a liquid. As a side-benefit of the first derivation, he demonstrated the visible existence of molecules, a remarkable piece of work. In the second formulation, he derived the same result using non-equilibrium thermodynamics, something he seems to have developed on the spot. I’ll give a brief version of the second derivation, and then I’ll show off my own extension. It’s one of my proudest intellectual achievements.

But first, a little background to the problem. In 1827, a plant biologist, Robert Brown, examined pollen under a microscope and noticed that it moved in a jerky manner. He gave this “Brownian motion” the obvious explanation: that the pollen was alive and swimming. Later, it was observed that the pollen moved faster in acetone. The obvious explanation: pollen doesn’t like acetone, and thus swims faster. But the pollen never stopped, and it was noticed that cigar smoke also swam. Was cigar smoke alive too?

Einstein’s first version of an answer, 1905, was to consider that the liquid was composed of atoms whose energy was a Boltzmann distribution with an average of E = ½kT in every direction, where k is the Boltzmann constant, and k = R/N. That is, Boltzmann’s constant equals the gas constant, R, divided by Avogadro’s number, N. He was able to show that the many interactions with the molecules should cause the pollen to take a random, jerky walk as seen, and that the velocity should be faster the less viscous the solvent, or the smaller the length-scale of observation. Einstein applied the Stokes drag equation to the solute; the drag force per particle was f = -6πrvη where r is the radius of the solute particle, v is the velocity, and η is the solution viscosity. Using some math, he was able to show that the diffusivity of the solute should be D = kT/6πrη. This is called the Stokes-Einstein equation.
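
As a quick worked example of the Stokes-Einstein equation, here is the diffusivity of a small sphere in room-temperature water; the particle radius is an assumed, illustrative value.

```python
import math

k_B = 1.381e-23     # Boltzmann constant, J/K
T = 293.0           # 20 C
eta = 1.0e-3        # viscosity of water, Pa*s (about 1 centipoise)
r = 0.5e-6          # assumed particle radius, 0.5 micron

# Stokes-Einstein: D = kT / (6 pi r eta)
D = k_B * T / (6.0 * math.pi * r * eta)
print(f"D = {D:.2e} m^2/s")   # roughly 4e-13 m^2/s for these inputs
```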

In 1908, a French physicist, Jean Baptiste Perrin, confirmed Einstein’s predictions, winning the Nobel prize for his work. I will now show the 1908 Einstein derivation, and will hope to get to my extension by the end of this post.

Consider the molar Gibbs free energy of a solvent, water say. The mol fraction of water is x and that of a very dilute solute is y, with y << 1. For this nearly pure water, you can show that µ = µ° + RT ln x = µ° + RT ln (1-y) ≈ µ° - RTy.

Now, take a derivative with respect to some linear direction, z. Normally this is considered illegal, since thermodynamics is normally understood to apply to equilibrium systems only. Still, Einstein took the derivative, and claimed it was legitimate for systems near equilibrium, pseudo-equilibrium. You can calculate the force on the solvent, the force on the water generated by a concentration gradient, Fw = dµ/dz = -RT dy/dz.

Now the force on each atom of water equals -RT/N dy/dz = -kT dy/dz.

Now, let’s call f the force on each atom of solute. For dilute solutions, this force is far higher than the above, f = -kT/y dy/dz. That is, for a given concentration gradient, dy/dz, the force on each solute atom is higher than on each solvent atom in inverse proportion to the molar concentration.

For small spheres and low velocities, the flow is laminar and the drag force is f = 6πrvη.

Now calculate the speed of each solute atom. It is proportional to the force on the atom by the same relationship as appeared above: f = 6πrvη or v = f/6πrη. Inserting our equation for f= -kT/y dy/dz, we find that the velocity of the average solute molecule,

v = -kT/6πrηy dy/dz.

Let’s say that the molar concentration of solvent is C, so that, for water, C will equal about 1/18 mols/cc. The molar concentration of the dilute solute will then equal Cy. We find that the molar flux of material, the diffusive flux, equals Cyv, or that

Molar flux (mols/cm2/s) = Cy (-kT/6πrηy dy/dz) = -kTC/6πrη dy/dz = -kT/6πrη d(Cy)/dz,

where Cy is the molar concentration of solute per volume.

Classical engineering comes to a similar equation with a property called diffusivity. So that

Molar flux of y (mols y/cm2/s) = -D dCy/dz, and D is an experimentally determined constant. We thus now have a prediction for D:

D = kT/6πrη.

This again is the Stokes-Einstein equation, the same as above but derived with far less math. I was fascinated, but felt sure there was something wrong here. Macroscopic viscosity was not the same as microscopic. I just could not think of a great case where there was much difference until I realized that, in polymer solutions, there was a big difference.

Polymer solutions, I reasoned, had large viscosities, but a diffusing solute probably didn’t feel the liquid as anywhere near as viscous. The viscometer measured at a larger distance, more similar to that of the polymer coil entanglement length, while a small solute might dart between the polymer chains like a rabbit among trees. I applied an equation for heat transfer in a dispersion that JK Maxwell had derived,

κeff = κl (κp + 2κl + 2φ(κp - κl)) / (κp + 2κl - φ(κp - κl)),

where κeff is the modified effective thermal conductivity (or diffusivity in my case), κl and κp are the thermal conductivity of the liquid and the particles respectively, and φ is the volume fraction of particles.

To convert this to diffusion, I replaced κl by Dl, and κp by Dp where

Dl = kT/6πrηl

and Dp = kT/6πrη.

In the above, ηl is the viscosity of the pure liquid solvent.

The chair of the department, Don Anderson didn’t believe my equation, but agreed to help test it. A student named Kit Yam ran experiments on a variety of polymer solutions, and it turned out that the equation worked really well down to high polymer concentrations, and high viscosity.

As a simple, first approximation to the above, you can take Dp = 0, since it’s much smaller than Dl, and you can take Dl = kT/6πrηl as above. The new, first-order approximation is:

D = kT/6πrηl (1 – 3φ/2).
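
Here is a small sketch of the correction, using the Maxwell form quoted above with diffusivities in place of conductivities. The solute radius and polymer volume fraction are assumed numbers, just to show the size of the effect; the last line prints the first-order estimate for comparison.

```python
import math

def stokes_einstein(r, eta, T=298.0, k_B=1.381e-23):
    """Diffusivity of a sphere of radius r (m) in a fluid of viscosity eta (Pa*s)."""
    return k_B * T / (6.0 * math.pi * r * eta)

def maxwell_mixture(D_l, D_p, phi):
    """Maxwell's dispersion formula with diffusivities in place of conductivities."""
    return D_l * (D_p + 2*D_l + 2*phi*(D_p - D_l)) / (D_p + 2*D_l - phi*(D_p - D_l))

r = 0.3e-9            # assumed small-solute radius, ~0.3 nm
eta_solvent = 1.0e-3  # pure water, Pa*s
phi = 0.10            # assumed 10 vol% polymer

D_l = stokes_einstein(r, eta_solvent)   # solute diffusivity in the polymer-free liquid
D_p = 0.0                               # first approximation: no diffusion through the coils
D = maxwell_mixture(D_l, D_p, phi)

print(f"D_l = {D_l:.2e} m^2/s, corrected D = {D:.2e} m^2/s")
print(f"ratio = {D / D_l:.3f}  vs first-order 1 - 3*phi/2 = {1 - 1.5*phi:.3f}")
```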

We published in Science. That is, I published along with the two colleagues who tested the idea and proved the theory right, or at least useful. The reference is Yam, K., Anderson, D., Buxbaum, R. E., Science 240 (1988) p. 330 ff. “Diffusion of Small Solutes in Polymer-Containing Solutions”. This result is one of my proudest achievements.

R.E. Buxbaum, March 20, 2024

Low temperature hydrogen removal

Platinum catalysts can be very effective at removing hydrogen from air. Platinum promotes the irreversible reaction of hydrogen with oxygen to make water: H2 + 1/2 O2 –> H2O, a reaction that can take off, at great rates, even at temperatures well below freezing. In the 1800s, when platinum was cheap, platinum powder was used to light town-gas street lamps. In those days, street lamps were not fueled by methane, ‘natural gas’, but by ‘town gas’, a mix of hydrogen and carbon monoxide and many impurities like H2S. It was made by reacting coal and steam in a gas plant, and it is a testament to the catalytic power of Pt that it could light this town gas. These impurities are catalytic poisons: when exposed to them, any catalyst, including platinum, loses its activity. This is especially true at low temperatures where product water condenses, and this too poisons the catalytic surface.

Nowadays, platinum is expensive and platinum catalysts are no longer made of Pt powder, but rather by coating a thin layer of Pt metal on a high surface area substrate like alumina, ceria, or activated carbon. At higher temperatures, this distribution of Pt improves the reaction rate per gram Pt. Unfortunately, at low temperatures, the substrate seems to be part of the poisoning problem. I think I’ve found a partial way around it though.

My company, REB Research, sells Pt catalysts for hydrogen removal use down to about 0°C, 32°F. For those needing lower temperature hydrogen removal, we offer a palladium-hydrocarbon getter that continues to work down to -30°C and works both in air and in the absence of air. It’s pretty good, but poisons more readily than Pt does when exposed to H2S. For years, I had wanted to develop a version of the platinum catalyst that works well down to -30°C or so, and ideally that worked both in air and without air. I got to do some of this development work during the COVID downtime year.

My current approach is to add a small amount of teflon and other hydrophobic materials. My theory is that normal Pt catalysts form water so readily that the water coats the catalytic surface and substrate pores, choking the catalyst off from contact with oxygen or hydrogen. My thought on why our Pd-organic getter works better than Pt is that it’s in part because Pd is a slower water former, and in part because the organic compounds prevent water condensation. If so, teflon + Pt should be more active than uncoated Pt catalyst. And it is so.

Think of this in terms of the Van der Waals equation of state:

(P + a/Vm²)(Vm - b) = RT

where Vm is the molar volume. The substance-specific constants a and b can be understood as an attraction force between molecules and a molecular volume, respectively. Alternately, they can be calculated from the critical temperature and pressure as

a = 27(RTc)²/64Pc,   b = RTc/8Pc.

Now, I’m going to assume that the effect of a hydrophobic surface near the Pt is to reduce the effective value of a. This is to say that water molecules still attract as before, but there are fewer water molecules around. I’ll assume that b remains the same. Thus the ratio of Tc and Pc remains the same, but the values drop by a factor related to the decrease in water density. If we imagine the use of enough teflon to decrease the number of water molecules by 60%, that would be enough to reduce the critical temperature by 60%. That is, from 647 K (374°C) to about 259 K, or -14°C. This might be enough to allow Pt catalysts to be used for H2 removal from the gas within a nuclear waste cask. I’m into nuclear, both because of its clean power density and its space density. As for nuclear waste, you need these casks.
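
A one-line check of the scaling: the Van der Waals critical temperature is Tc = 8a/(27Rb), so if b is unchanged, Tc scales directly with a. The 60% reduction below is the assumed figure discussed above.

```python
Tc_water = 647.0              # K, bulk critical temperature of water
fraction_a_remaining = 0.40   # assumed: 60% fewer water neighbors near the teflon

# Tc = 8a/(27*R*b) is linear in a, so Tc drops by the same factor as a.
Tc_effective = Tc_water * fraction_a_remaining
print(f"effective Tc near the surface: {Tc_effective:.0f} K = {Tc_effective - 273.15:.0f} C")
```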

I’ve begun to test my theory by making hydrogen removal catalysts that use both platinum and palladium along with unsaturated hydrocarbons. I find they work far better than the palladium-hydrocarbon getter, at least at room temperature. They work well even when the catalyst is completely soaked in water, but the real experiments are yet to come — how does this work in the cold? Originally I planned to use a freezer for these tests, but I now have a better method: wait for winter and use God’s giant freezer.

Robert E. Buxbaum October 20, 2021. I did a fuller treatment of the thermo above, a few weeks back.

Weird thermodynamics near surfaces can prevent condensation and make water more slippery.

It is a fundamental of science that the properties of any pure, one-phase material are totally fixed at any given temperature and pressure. Thus, for example, water at 20°C is accepted to always have a density of 0.998 gm/cc, a vapor pressure of 17.5 Torr, a viscosity of 1.002 centipoise (milliPascal seconds) and a speed of sound of 1481 m/s. Set the temperature and pressure of any other material and every other property is set. But things go screwy near surfaces, and this is particularly true for water where the hydrogen bond — a quantum bond — predominates.

Near a surface, water’s vapor pressure rises and it becomes less inclined to condense or freeze. I use this odd aspect of thermodynamics to keep my platinum-based hydrogen getter catalysts active at low temperatures where they would normally clog. Normal platinum catalysts are not suitable for hydrogen removal at normal temperatures, e.g. room temperature, because the water that forms from hydrogen oxidation chokes off the catalytic surface. Hydrophobic additions prevent this, and I’d like to show you why this works, and why other odd things happen, based on an approximation called the Van der Waals equation of state:

(P + a/Vm²)(Vm - b) = RT     (1)

This equation describes the molar volume, Vm, of any pure material based on the pressure, the absolute temperature (Kelvin), and two substance-specific constants, a and b. These constants can be understood as an attraction force term and a molecular volume, respectively. It is common to calculate a and b from the critical temperature and pressure as follows, where Tc is an absolute temperature:

a = 27(RTc)²/64Pc,   b = RTc/8Pc.     (2 a,b)

For water, Tc = 647 K (374°C) and Pc = 220.5 bar. Plugging in these numbers, the Van der Waals equation gives reasonable values for the density of water both as a liquid and as a gas, and thus gives a reasonable value for the boiling point.

Now consider the effect that an inert surface would have on the effective values of a and b near that surface. The volume of the molecules will not change, and thus b will not change, but the value of a will drop, likely by about half. This is because the number of molecules surrounding any other molecule is reduced by about half, while the inert surface adds nothing to the attraction. Near a surface, surrounding molecules still attract each other the same as before, but there are about half as many molecules at any temperature and pressure.

To get a physical sense of what the surface does, consider using the new values of a and b to determine a new value for Tc and Pc, for materials near the surface. Since b does not change, we see that the presence of a surface does not affect the ratio of Tc and Pc, but it decreases the effective value of Tc — by about half. For water, that is a change from 647 K to 323.5 K (50.5°C), very close to room temperature. Pc changes to 110 bar, about 1600 psi. Since the new value of Tc is close to room temperature, the density of water will be much lower near the surface, and the viscosity can be expected to drop. The net result is that water flows more readily through a teflon pipe than through an ordinary pipe, a difference that is particularly apparent at small diameters.
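
Here is a short sketch of that arithmetic: compute a and b for water from the critical point using equations (2 a,b), halve a, and recompute the effective critical point near the surface.

```python
R = 8.314                   # J/mol-K
Tc, Pc = 647.0, 220.5e5     # bulk critical point of water: 647 K, 220.5 bar (in Pa)

# Van der Waals constants from the critical point, equations (2 a,b) above
a = 27 * (R * Tc)**2 / (64 * Pc)
b = R * Tc / (8 * Pc)

# Near an inert surface, take a at about half its bulk value; b is unchanged.
a_surf = a / 2
Tc_surf = 8 * a_surf / (27 * R * b)     # inverting Tc = 8a/(27Rb)
Pc_surf = a_surf / (27 * b**2)          # inverting Pc = a/(27b^2)

print(f"bulk constants: a = {a:.3f} Pa m^6/mol^2, b = {b*1e6:.1f} cm^3/mol")
print(f"near-surface:   Tc = {Tc_surf:.0f} K ({Tc_surf - 273.15:.1f} C), Pc = {Pc_surf/1e5:.0f} bar")
```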

This decrease in effective Tc is useful for fire hoses, and for making sailing ships go faster (use teflon paint) and for making my hydrogen removal catalysts more active at low temperatures. Condensed water can block the pores to the catalyst; teflon can forestall this condensation. It’s a general trick of thermodynamics, reasonably useful. Now you know it, and now you know why it works.

Robert Buxbaum August 30, 2021

The remarkable efficiency of 22 caliber ammunition.

22 long rifle shells contain hardly any propellant.

The most popular rifle cartridge in the US today is the 22lr, a round that first appeared in 1887. It is suitable for small game hunting, and while it is less deadly than larger calibers, data suggests it is effective for personal protection. It is also remarkably low cost. This is because the cartridge is almost entirely empty, as shown in the figure at right. It is also incredibly energy efficient, that is to say, it’s incredibly good at transforming the heat energy of the powder into mechanical energy in the bullet.

The normal bullet weight of a 22lr is 40 grains, or 2.6 grams; a grain is the weight of a barley grain, 1/15.4 gram. Virtually every brand of 22lr will send its bullet at about the speed of sound, 1200 ft/second, with a kinetic energy of about 120 foot-pounds, or 162 Joules. This is about twice the energy of a hunting bow, and it will go through a deer. Think of a spike driven by a 120 lb hammer dropped from one foot. That’s the bullet from a typical 22lr.

The explosive combustion heat of several Hodgdon propellants.

The Hodgdon powder company is the largest reseller of smokeless powder in the US, with products from all major manufacturers selling for an average of $30/lb, or 0.43¢ per grain. The CCI Mini-Mag, shown above, uses 0.8 grains of some powder, 0.052 grams, or about 1/3¢ worth, assuming that CCI bought from Hodgdon rather than directly from the manufacturer. You will notice that the energies of the powders hardly vary from type to type, from a low of 3545 J/gram to a high of 4060 J/gram. While I don’t know which powder is used, I will assume CCI uses a high-energy propellant, 4000 J/gram. I now calculate the heat energy available as 0.052*4000 = 208 Joules. To calculate the efficiency, divide the kinetic energy of the bullet by these 208 Joules. The 40 grain CCI MiniMag bullet has been clocked at 1224 feet per second, indicating 130 foot-pounds of kinetic energy, or 176 J. Divide by the thermal energy and you find an 85% efficiency: 176 J / 208 J = 85%. That’s far better than your car engine. If the powder were weaker, the efficiency would have to be higher.
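
A sketch of the same arithmetic in Python; the powder energy is the assumed 4000 J/g figure from above, and the answer lands within a couple of points of the 85% quoted (the difference is just rounding of the kinetic energy).

```python
grain = 1.0 / 15432.4                  # kg per grain (one grain is about 0.0648 g)
bullet_mass = 40 * grain               # 40 grain bullet, kg
powder_mass_g = 0.8 * grain * 1000     # ~0.8 grains of propellant, in grams
powder_energy = 4000.0                 # assumed high-energy smokeless powder, J/g

v = 1224 * 0.3048                      # 1224 ft/s muzzle velocity, in m/s

kinetic = 0.5 * bullet_mass * v**2     # J
heat = powder_mass_g * powder_energy   # J

print(f"kinetic {kinetic:.0f} J, heat {heat:.0f} J, efficiency {kinetic / heat:.0%}")
```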

The energy content of various 22lr bullets shot from different length barrels.

I will now calculate the pressure of the gas behind a 22lr. I note that the force on the bullet is equal to the pressure times the cross-sectional area of the barrel. Since energy equals force times distance, we can expect that the kinetic energy gained per inch of barrel equals this force times this distance (1 inch). Because of friction this is an under-estimate of the pressure, but based on the high efficiency, 85%, it’s clear that the pressure can be no more than 15% higher than I will calculate. As it happens, the maximum allowable pressure for 22lr cartridges is set by law at 24,000 psi. When I calculate the actual pressure (below) I find it is about half this maximum.

The change in kinetic energy per inch of barrel is calculated as the change in 1/2 mv2, where m is the mass of the bullet and v is the velocity. There is a web-site with bullet velocity information for many brands of ammunition, “ballistics by the inch”. Data is available for many brands of bullet shot from gun barrels that they cut shorter inch by inch; data for several 22lr are shown here. For the 40 grain CCI MiniMag, they find a velocity of 862 ft/second for a 2″ barrel, 965 ft/second for a 3″ barrel, 1043 ft/second for a 4″ barrel, etc. The cross-section area of the barrel is about 0.038 square inches.

Every 22 cartridge has space to spare.

Based on change in kinetic energy, the average pressure in the first two inches of barrel must be 10,845 psi, 5,485 psi in the next inch, and 4,565 psi in the next inch, etc. If I add a 15% correction for friction, I find that the highest pressure is still only half the maximum pressure allowable. Strain gauge deformation data (here) gives a slightly lower value. It appears to me that, by adding more propellant, one could make a legal, higher-performance version of the 22lr — one with perhaps twice the kinetic energy. Given the 1/3¢ cost of powder relative to the 5 to 20¢ price of ammo, I suspect that making a higher power 22lr would be a success.
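
The pressure arithmetic, using the ballistics-by-the-inch velocities quoted above and a bore area of about 0.038 square inches; it comes out close to the pressures quoted, with the small differences coming from rounding of the inputs.

```python
grain = 1.0 / 15432.4          # kg per grain
m = 40 * grain                 # bullet mass, kg
inch = 0.0254                  # m
psi = 6894.76                  # Pa per psi
bore_area = 0.038 * inch**2    # ~0.038 square inches, converted to m^2

# (barrel length in inches, measured velocity in ft/s), starting from rest
data = [(0, 0.0), (2, 862.0), (3, 965.0), (4, 1043.0)]

for (L0, v0), (L1, v1) in zip(data, data[1:]):
    dKE = 0.5 * m * ((v1 * 0.3048)**2 - (v0 * 0.3048)**2)  # kinetic energy gained, J
    force = dKE / ((L1 - L0) * inch)                       # average force over the segment, N
    print(f'{L0}"-{L1}": average pressure ~{force / bore_area / psi:,.0f} psi')
```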

Robert Buxbaum, March 18, 2021. About 10% of Michigan hunts deer every year during hunting season. Another 20%, as best I can tell, own guns for target shooting or personal protection. Just about every lawyer I know carries a gun. They’re afraid people don’t like them. I’m afraid they’re right.

Water Towers, usually a good thing.

Most towns have at least one water tower. Oakland county, Michigan has four. When they are sized right, they serve several valuable purposes. They provide water in case of a power failure; they provide increased pressure in the morning when people use a lot of water showering etc.; and they allow a town to use smaller pumps and to pump with cheaper electricity, e.g. at night. If a town has no tower, all these benefits are gone, but a town can still have water. It’s also possible to have a situation that’s worse than nothing. My plan is to show, at the end of this essay, one of the ways that can happen. It involves thermodynamic properties of state in a situation where there is no expansion headspace or excess drain (most towers have both).

A typical water tower — spheroidal design. A tower of the dimensions shown would contain about 1/2 million gallons of water.

The typical tower stands at the highest point in the town, with the water level about 170 feet above street level. Its usable volume should be about as much water as the town uses in a typical day. The reason for the height has to do with the operating pressure of most city-level water pipes. It’s about 75 psi, and each foot of water “head” gives you about 0.43 psi. You want pressures of about 75 psi for fire fighting, and to provide for folks in apartment buildings. If you have significantly higher pressures, you pay a cost in electricity, and you start losing a lot of water to leaks. These leaks should be avoided. They can undermine the roads and swallow houses. Bob Dadow estimates that, for our water system, the leakage rate is between 15 and 25%.

Oakland county has four water towers with considerably less volume than the 130 million gallons per day that the county uses. I estimate that the South-east Oakland county tower, located near my home, contains perhaps 2 million gallons. The other three towers are similar in size. Because our county’s towers are so undersized, we pay a lot for water, and our water pressure is typically quite low in the mornings. We also have regular pressure excursions and that leads to regular water-boil emergencies. In some parts of Oakland county this happens fairly often.

There are other reasons why a system like ours should have water towers with something more like one day’s water. Having a large water reserve means you can benefit from the fact that electric prices are the lowest at night. With a day’s volume, you can avoid running the pumps during high-priced daytimes. Oakland county loses this advantage. The other advantage to having a large volume is that it gives you more time to correct problems, e.g. in case of an electric outage or a cyber attack. Perhaps Oakland thinks that only one pump can be attacked at one time or that the entire electric grid will not go out at one time, but these are clearly false assumptions. A big system also means you can have pumps powered by solar cells or other renewable power. Renewable power is a good thing for reliability and air pollution avoidance. Given the benefits, you’d expect Oakland county would reward towns that add water towers, but they don’t, as best I can tell.

Here’s one way that a water column can cause problems. You really need those pressure reliefs.

Now for an example of the sort of things that can go wrong in a water tower with no expansion relief. Every stand-pipe is a small water tower, and since water itself is incompressible, it’s easy to see that a small expansion in the system could produce a large pressure rise. The law requires that every apartment house water system has to have expansion relief to limit these increases; the water tower above had two forms of relief: a roof vent and an overflow pipe, both high up so that pressure could be maintained. But you can easily imagine a plumber making a mistake and installing a stand pipe without an expansion relief. I show a system like that at left, a 1000 foot tall water pipe, within a skyscraper, with a pump at the bottom, and pipes leading off at the sides to various faucets.

Let’s assume that the pressure at the top is 20 psi; the pressure at the bottom will then be about 450 psi. The difference in pressure (430 psi) equals the weight of the water divided by the area of the pipe. Now let’s imagine that a bubble of air at the bottom of the pipe detaches and rises to the top of the pipe when all of the faucets are closed. Since air is compressible, while water is not, the pressure at the bubble will remain the same as the bubble rises. By the time the bubble reaches the top of the pipe, the pressure there will rise to 450 psi. Since water has weight, 430 psi worth, the pressure at the bottom will rise to 880 psi = 450 + 430. This is enough to damage the pump and may blow the pipes as well. A scenario like this likely destroyed the Deepwater Horizon oil platform, with deadly consequences. You really want those pressure reliefs, and you want a competent plumber / designer for any water system, even a small one.
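
A short sketch of that arithmetic; the numbers come out near the 450 and 880 psi quoted above.

```python
psi_per_ft = 0.433            # pressure added per foot of water head
height_ft = 1000.0
top_pressure = 20.0           # psi at the top faucet, before the bubble rises

column = psi_per_ft * height_ft          # weight of the water column, ~433 psi
bottom_before = top_pressure + column    # ~450 psi at the pump

# The pipe is closed and water is incompressible, so the trapped bubble keeps its
# volume, and therefore its pressure, as it rises. Once it reaches the top, the top
# of the pipe sits at the old bottom pressure, and the water column adds its weight below.
top_after = bottom_before
bottom_after = top_after + column        # ~880 psi at the pump

print(f"before: top {top_pressure:.0f} psi, bottom {bottom_before:.0f} psi")
print(f"after:  top {top_after:.0f} psi, bottom {bottom_after:.0f} psi")
```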

Robert Buxbaum, September 28 - October 6, 2019. I ran for water commissioner in 2016.

Magnetic separation of air

As some of you will know, oxygen is paramagnetic, attracted slightly by a magnet. Oxygen’s paramagnetism is due to the two unpaired electrons in every O2 molecule. Oxygen has a triple-bond structure as discussed here (much of the chemistry you were taught is wrong). Virtually every other common gas is diamagnetic, repelled by a magnet. These include nitrogen, water, CO2, and argon — all diamagnetic. As a result, you can do a reasonable job of extracting oxygen from air by the use of a magnet. This is awfully cool, and could make for a good science fair project, if anyone is of a mind.

But first some math, or physics, if you like. To a good approximation the magnetization of a material, M = CH/T where M is magnetization, H is magnetic field strength, C is the Curie constant for the material, and T is absolute temperature.

Ignoring, for now, the difference between entropy and internal energy, and thinking only in terms of work derived by lowering a magnet towards a volume of gas, we can say that the work extracted, and thus the decrease in energy of the magnetic gas, is ∫HdM = MH/2. At constant temperature and pressure, we can say ∆G = -CH2/2T.

The maximum magnetization you’re likely to get with any permanent magnet (not achieved to date) is about 50 Tesla, or 40,000 amperes per meter. At 20°C, the per-mol magnetic susceptibility of oxygen is 1.34×10−6. This suggests that the Curie constant is 1.34×10−6 x 293 = 3.93×10−4. Applying this value to oxygen in a 50 Tesla magnet at 20°C, we find the energy difference, ∆G, is 1072 J/mole = RT ln ß, where ß is a concentration ratio factor between the O2 content of the magnetized and un-magnetized gas, ß = C1/C2.

At room temperature, 298 K, ß = 1.6, and thus we find that the maximum oxygen concentration you’re likely to get is about 1.6 x 21% = 33%. It’s slightly more than this due to nitrogen’s diamagnetism, but this effect is too small to matter. What does matter is that 33% O2 is a good amount for a variety of medical uses.
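
A two-line check of the enrichment estimate, inverting ∆G = RT ln ß with the 1072 J/mol figure from above:

```python
import math

R = 8.314      # J/mol-K
T = 293.0      # K, about 20 C
dG = 1072.0    # J/mol, magnetic energy difference estimated above

beta = math.exp(dG / (R * T))        # concentration ratio, magnetized vs not
print(f"beta = {beta:.2f}, enriched O2 fraction ~ {0.21 * beta:.0%}")
```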

I show below my simple design for a magnetic O2 concentrator. The dotted line is a permeable membrane of no selectivity – with a little O2 permeability the design will work better. All you need is a blower or pump. A coffee filter could serve as a membrane.

This design is as simple as the standard membrane-based O2 concentrators – those based on semi-permeable membranes – but this design should require less pressure differential, just enough to overcome the magnet. Less pressure means the blower should be smaller and less noisy, with less energy use. I figure this could be really convenient for people who need portable oxygen. With current magnets it would take 4-5 stages or low temperatures to reach this concentration; still, this design could have commercial use, I’d think.

On the theoretical end, an interesting thing I find concerns the effect on the entropy of the magnetic oxygen. (Please ignore this paragraph if you have not learned statistical thermodynamics.) While you might imagine that magnetization decreases entropy, other things being equal, because the molecules are somewhat aligned with the field, temperature and pressure being fixed, I’ve come to realize that entropy is likely higher. A sea of semi-aligned molecules will have a slightly higher heat capacity than nonaligned molecules because the vibrational Cp is higher, other things being equal. Thus, unless I’m wrong, the temperature of the gas will be slightly lower in the magnetic area than in the non-magnetic field area. Temperature and pressure are not the same within the separator as outside, by the way; the blower is something of a compressor, though a much less energy-intense one than used for most air separators. Because of the blower, both the magnetic and the non-magnetic air will be slightly warmer than the surroundings (blower work: ∆T = W/Cp). This heat will be mostly lost when the gas leaves the system; that is, when it flows to lower pressure, both gas streams will be essentially at room temperature. Again, this is not the case with the classic membrane-based oxygen concentrators — there the nitrogen-rich stream is notably warm.

Robert E. Buxbaum, October 11, 2017. I find thermodynamics wonderful, both as science and as an analog for society.

The chemistry of sewage treatment

The first thing to know about sewage is that it’s mostly water and only about 250 ppm solids. That is, if you boiled down a pot of sewage, only about 1/40 of 1% of it would remain as solids at the bottom of the pot. There would be some dried poop, some bits of lint and soap, the remains of potato peelings… Mostly, the sewage is water, and mostly it would have boiled away. The second thing to know, is that the solids, the bio-solids, are a lot like soil but better: more valuable, brown gold if used right. While our county mostly burns and landfills the solids remnant of our treated sewage, the wiser choice would be to convert it to fertilizer. Here is a comparison between the composition of soil and bio-solids.

The composition of soil and the composition of bio-solid waste. Biosolids are like soil, just better.

Most of Oakland’s sewage goes to Detroit where they mostly dry and burn it, and landfill the rest. These processes are expensive and problematic from an engineering standpoint. It takes a lot of energy to dry these solids to the point where they burn (they’re like really wet wood), and even then they don’t burn nicely. As shown above, the biosolids contain lots of sulfur and that makes combustion smelly. They also contain nitrate, and that makes combustion dangerous. It’s sort of like burning natural gun powder.

The preferred solution is partial combustion (oxidation) at room temperature by bacteria followed by conversion to fertilizer. In Detroit we do this first stage of treatment, the slow partial combustion by bacteria. Consider glucose, a typical carbohydrate,

-HCOH- + O2 –> CO2 + H2O.    ∆G° = -114.6 kcal/mol.

The value of ∆G° is relevant as a determinant of whether the reaction will proceed. A negative value of ∆G°, as above, indicates that the reaction can progress substantially to completion at standard conditions of 25°C and 1 atm pressure. In a sewage plant, many different carbohydrates are treated by many different bacteria (amoebae, paramecia, and lactobacilli), and the temperature is slightly cooler than room, about 10-15°C, but this value of ∆G° suggests that near total biological oxidation is possible.

The Detroit plant, like most others, does this biological oxidation treatment using either large stirred tanks, of million-gallon volume or so, or flow reactors with a large fraction of cellular material returning as recycle. Recycle is needed also in the stirred tank process because of the low solid content. The reaction is approximately first order in oxygen, carbohydrate, and bacteria. Thus a 50% cell recycle more or less doubles the speed of the reaction. Air is typically bubbled through the reactor to provide the oxygen, but in Detroit, pure oxygen is used. About half the organic carbon is oxidized and the remainder is sent to a settling pond. The decant (top) water is sent for “polishing” and dumped in the river, while the goop (the bottom) is currently dried for burning or carted off for landfill. The Holly, MI sewage plant uses heterogeneous reactors for the oxidation: a trickle bed followed by a rotating disk contactor. These have higher bio-content and thus lower area demands and separation costs, but there is a somewhat higher capital cost.

A major component of bio-solids is nitrogen. Much of this enters in the form of urea, NH2-CO-NH2. In an oxidizing environment, bacteria turn the urea and other nitrogen compounds into nitrate. Consider the reaction in the presence of washing soda, Na2CO3. The urea is turned into nitrate, a product suitable for gun powder manufacture. The value of ∆G° is negative, and the reaction is highly favorable.

NH2-CO-NH2 + Na2CO3 + 4 O2 –> 2 Na(NO3) + 2 CO2 + 2 H2O.     ∆G° = -177.5 kcal/mol

The mixture of nitrates and dry bio-solids is highly flammable, and there was recently a fire in the Detroit biosolids dryer. If we wished to make fertilizer, we’d probably want to replace the drier with a further stage of bio-treatment. In Wisconsin, and on a smaller scale in Oakland MI, biosolids are treated by higher temperature (thermophilic) bacteria in the absence of air, that is anaerobically. Anaerobic digestion produces hydrogen and methane, and produces highly useful forms of organic carbon.

2 (-HCOH-) –> CO2 + CH4        ∆G° = -33.7 kcal/mol

3 (-HCOH-) + H2O –> -CH2COOH + CO2 +  2 1/2 H2        ∆G° = -21.9 kcal/mol

In a well-designed plant, the methane is recovered to provide heat to the plant, and sometimes to generate power. In Wisconsin, enough methane is produced to cook the fertilizer to sterilization. The product is called “Milorganite” as much of it comes from Milwaukee and much of the nitrate is bound to organics.

Egg-shaped, anaerobic biosolid digestors, Singapore.

The hydrogen could be recovered too, but it typically reacts further within the anaerobic digester, reducing the iron oxide in the biosolids from the brown, ferric form, Fe2O3, to black FeO. In a reducing atmosphere,

Fe2O3 + H2 –> 2 FeO + H2O.

Fe2O3 is the reason leaves turn brown in the fall and is the reason that most poop is brown. FeO is the reason that composted soil is typically black. You’ll notice that swamps are filled with black goo; that’s because of a lack of oxygen at the bottom. Sulfate and phosphorus can be bound to ferrous iron, and this is good for fertilizer. Generally you want the reduction reactions to go no further.

Weir dam on the river Dour in Scotland. Dams of this type increase residence time, and oxygenate the flow. They’re good for fish, pollution, and flooding.

When allowed to continue, the hydrogen produced by anaerobic digestion begins to reduce sulfate to H2S.

Na2SO4 + 4 H2 –>  2 NaOH + 2 H2O + H2S.

I’m running for Oakland county, MI water commissioner, and one of my aims is to stop wasting our biosolids. Oakland produces nearly 1,000,000 pounds of dry biosolids per day. This is either a blessing or a curse depending on how we use it.

Another issue, Oakland county dumps unpasteurized, smelly black goo into Lake St. Clair every other week, whenever it rains more than one inch. I’d like to stop this by separating the storm and “sanitary” sewage. There is a capital cost, but it can save money because we’d no longer have to pay to treat our rainwater at the Detroit sewage plant. To clean the storm runoff, I’d use mini wetlands and weir dams to increase residence time and provide oxygen. Done right, it would look beautiful and would avoid the flash floods. It should also bring natural fish back to the Clinton River.

Robert Buxbaum, May 24 – Sept. 15, 2016 Thermodynamics plays a big role in my posts. You can show that, when the global ∆G is negative, there is an increase in the entropy of the universe.

Alcohol and gasoline don’t mix in the cold

One of the worst ideas to come out of the Iowa caucuses, I thought, was Ted Cruz claiming he’d allow farmers to blend as much alcohol into their gasoline as they liked. While this may have sounded good in Iowa, and while it’s consistent with his non-regulation theme, it’s horribly bad engineering.

At low temperatures ethanol and gasoline are no longer quite miscible

Ethanol and gasoline are not fully miscible at temperatures below freezing, 0°C. The tendency to separate is greater if the ethanol is wet or the gasoline contains benzenes.

We add alcohol to gasoline, not to save money, mostly, but so that farmers will produce excess so we’ll have secure food for wartime or famine — or so I understand it. But the government only allows 10% alcohol in the blend because alcohol and gasoline don’t mix well when it’s cold. You may notice, even with the 10% mixture we use, that your car starts poorly on the coldest winter days. The engine turns over and almost catches, but dies. A major reason is that the alcohol separates from the rest of the gasoline. The concentrated alcohol layer screws up combustion because alcohol doesn’t burn all that well. With Cruz’s higher alcohol allowance, you’d get separation more often, at temperatures as high as 13°C (55°F) for a 65 mol percent mix, see chart at right. Things get worse yet if the gasoline gets wet, or contains benzene. Gasoline blending is complex stuff: something the average joe should not do.

Solubility of dry alcohol (ethanol) in gasoline. The solubility is worse at low temperature and if the gasoline is wet or aromatic.

Solubility of alcohol (ethanol) in gasoline; an extrapolation based on the data above.

To estimate the separation temperature of our normal, 10% alcohol-gasoline mix, I extended the data from the chart above using linear regression. From thermodynamics, I extrapolated ln-concentration vs 1/T, and found that a 10% by volume mix (5% mol fraction alcohol) will separate at about -40°F. Chances are, you won’t see that temperature this winter (and if you do, try to find a gas mix that has no alcohol). Another thought: add hydrogen or other combustible gas to get the engine going.
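
For anyone who wants to repeat the extrapolation, here is the shape of the calculation: fit ln(mol fraction at separation) against 1/T and solve for the temperature where a 5 mol% blend separates. The 65 mol% point at 13°C is quoted above; the second point is a placeholder standing in for a value read off the chart, not real data, so treat the output as illustrative.

```python
import math

# (mol fraction alcohol, separation temperature in K)
points = [(0.65, 286.0),    # 65 mol% separates at about 13 C, as quoted above
          (0.30, 268.0)]    # placeholder point, standing in for a chart reading

(x1, T1), (x2, T2) = points
slope = (math.log(x1) - math.log(x2)) / (1 / T1 - 1 / T2)   # ln x = intercept + slope/T
intercept = math.log(x1) - slope / T1

x_target = 0.05                                  # 5 mol%, roughly a 10 vol% blend
T_sep = slope / (math.log(x_target) - intercept)
print(f"estimated separation temperature: {T_sep:.0f} K "
      f"= {(T_sep - 273.15) * 9 / 5 + 32:.0f} F")
```

With these placeholder inputs the answer lands near the -40°F quoted above.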

Robert E. Buxbaum, February 10, 2016. Two more thoughts: 1) Thermodynamics is a beautiful subject to learn, and (2) Avoid people who stick to foolish consistency. Too much regulation is bad, as is too little: it’s a common pattern: The difference between a cure and a poison is often just the dose.

It’s rocket science

Here are six or so rocket science insights, some simple, some advanced. It’s a fun area of engineering that touches many areas of science and politics. Besides, some people seem to think I’m a rocket scientist.

A basic question I get asked by kids is how a rocket goes up. My answer is it does not go up. That’s mostly an illusion. The majority of the rocket — the fuel — goes down, and only the light shell goes up. People imagine they are seeing the rocket go up. Taken as a whole, fuel and shell, they both go down at 1 G: 9.8 m/s2, 32 ft/sec2.

Because 1 G of upward acceleration is always lost to gravity, you need more thrust from the rocket engine than the weight of rocket and fuel. This can be difficult at the beginning when the rocket is heaviest. If your engine provides less thrust than the weight of your rocket, your rocket sits on the launch pad, burning. If your thrust is merely twice the weight of the rocket, you waste half of your fuel doing nothing useful, just fighting gravity. The upward acceleration you’ll see, a = F/m - 1G, where F is the force of the engine and m is the mass of the rocket shell plus whatever fuel is in it. 1G = 9.8 m/s2 is the upward acceleration lost to gravity. For model rocketry, you want to design a rocket engine so that the upward acceleration, a, is in the range 5-10 G. This range avoids wasting lots of fuel without requiring you to build the rocket too sturdy.

For NASA moon rockets, a = 0.2G approximately at liftoff, increasing as fuel was used. The Saturn V rose, rather majestically, into the sky with a rocket structure that had to be only strong enough to support 1.2 times the rocket weight. Higher initial accelerations would have required more structure and bigger engines. As it was, the Saturn V was the size of a skyscraper. You want the structure to be light so that the majority of weight is fuel. What makes it tricky is that the acceleration weight has to sit on an engine that gimbals (slants) and runs really hot, about 3000°C. Most engineering projects have fewer constraints than this, and are thus “not rocket science.”

Basic force balance on a rocket going up.

A space rocket has to reach very high, orbital speed if the rocket is to stay up indefinitely, or nearly orbital speed for long-range, military uses. You can calculate the orbital speed by balancing the acceleration of gravity, 9.8 m/s2, against the orbital acceleration of going around the earth, a sphere 40,000 km in circumference (that’s how the meter was defined). Orbital acceleration, a = v2/r, and r = 40,000,000 m/2π = 6,366,000 m. Thus, the speed you need to stay up indefinitely is v = √(6,366,000 x 9.8) = 7900 m/s = 17,800 mph. That’s roughly Mach 23, or 23 times the speed of sound at sea level (343 m/s). You need some altitude too, just to keep air friction from killing you, but for most missions, the main thing you need is velocity, kinetic energy, not potential energy, as I’ll show below. If your speed exceeds 7,900 m/s, you go higher up, but the stable orbital velocity there is lower. The gravity force is lower higher up, and the radius to the earth is higher too, but you’re balancing this lower gravity force against v2/r, so v2 has to be reduced to stay stable high up, even though you needed extra speed to get there. This all makes docking space-ships tricky, as I’ll explain also. Rockets are the only practical way to reach Mach 23 or anything near it. No current cannon or gun gets close.
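
The orbital-speed arithmetic, as a few lines you can check:

```python
import math

g = 9.8                             # m/s^2
circumference = 40_000_000.0        # m, the earth's circumference
r = circumference / (2 * math.pi)   # radius, ~6,366 km

# Balance gravity against the centripetal acceleration of the orbit: g = v^2 / r
v = math.sqrt(g * r)
print(f"r = {r / 1000:.0f} km, orbital speed = {v:.0f} m/s = {v * 2.23694:.0f} mph")
```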

Kinetic energy is a lot more important than potential energy for sending an object into orbit. To get a sense of the comparison, consider a one kg mass at orbital speed, 7900 m/s, and 200 km altitude. For these conditions, the kinetic energy, 1/2mv2 is 31,205 kJ, while the potential energy, mgh, is only 1,960 kJ . The potential energy is thus only 1/16 the kinetic energy.

Not that it’s easy to reach 200 km altitude, but you can do it with a sophisticated cannon. The Germans did it with “simple”, one stage, V2-style rockets. To reach orbit, you generally need multiple stages. As a way to see this, consider that the energy content of gasoline + oxygen is about 10.5 MJ/kg (10,500 kJ/kg); this is only 1/3 of the kinetic energy of the orbital rocket, but it’s 5 times the potential energy. A fairly efficient gasoline + oxygen powered cannon could not provide orbital kinetic energy since the bullet can move no faster than the explosive vapor. In a rocket this is not a constraint since most of the mass is ejected.

A shell fired at a 45° angle that reaches 200 km altitude would go about 800 km — the distance between North Korea and Japan, or between Iran and Israel. That would require twice as much energy as a shell fired straight up, about 4000 kJ/kg. This is still within the range for a (very large) cannon or a single-stage rocket. For Russia or China to hit the US would take much more: orbital, or near orbital rocketry. To reach the moon, you need more total energy, but less kinetic energy. Moon rockets have taken the approach of first going into orbit, and only later going on. While most of the kinetic energy isn’t lost, it’s likely not the best trajectory in terms of energy use.

The force produced by a rocket is equal to the rate of mass shot out times its velocity, F = ∆(mv)/∆t. To get a lot of force for each bit of fuel, you want the gas exit velocity to be as fast as possible. A typical maximum is about 2,500 m/s, roughly Mach 7, for a gasoline – oxygen engine. The acceleration of the rocket itself is this ∆mv force divided by the total remaining mass in the rocket (rocket shell plus remaining fuel), minus 1 G (gravity). Thus, if the exhaust from a rocket leaves at 2,500 m/s, and you want the rocket to accelerate upward at an average of 10 G, you must exhaust fast enough to develop 10 G, 98 m/s2. The rate of mass exhaust is the average mass of the rocket times 98/2500 = .0392/second. That is, about 3.92% of the rocket mass must be ejected each second. Assuming that the fuel for your first stage engine is less than 80% of the total mass, the first stage will flare out in about 20 seconds. Typically, the acceleration at the end of the 20 second burn is much greater than at the beginning since the rocket gets lighter as fuel is burnt. This was the case with the Apollo missions. The Saturn V started up at 0.2G but reached a maximum of 4G by the time most of the fuel was used.

If you have a good math background, you can develop a differential equation for the relation between fuel consumption and altitude or final speed. This is readily done if you know calculus, or reasonably done if you use differential methods. By either method, it turns out that, for no air friction or gravity resistance, you will reach the same speed as the exhaust when 64% of the rocket mass is exhausted. In the real world, your rocket will have to exhaust 75 or 80% of its mass as first stage fuel to reach a final speed of 2,500 m/s. This is less than 1/3 orbital speed, and reaching it requires that the rest of your rocket mass: the engine, 2nd stage, payload, and any spare fuel to handle descent (Elon Musk’s approach) must weigh less than 20-25% of the original weight of the rocket on the launch pad. This gasoline and oxygen is expensive, but not horribly so if you can reuse the rocket; that’s the motivation for NASA’s and SpaceX’s work on reusable rockets. Most orbital rocket designs require three stages to accelerate to the 7900 m/s orbital speed calculated above. The second stage is dropped from high altitude and almost invariably lost. If you can set-up and solve the differential equation above, a career in science may be for you.
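
A sketch of the ideal (no gravity loss, no drag) solution of that differential equation, the Tsiolkovsky rocket equation, which is where the ~64% figure comes from; real first stages need the 75-80% mentioned above because gravity and drag eat part of the gain.

```python
import math

def ideal_delta_v(exhaust_speed, fraction_burned):
    """Tsiolkovsky rocket equation: delta-v with no gravity loss or drag."""
    return exhaust_speed * math.log(1.0 / (1.0 - fraction_burned))

ve = 2500.0   # m/s exhaust speed, as above
for burned in (0.63, 0.75, 0.80):
    print(f"burn {burned:.0%} of liftoff mass -> delta-v = {ideal_delta_v(ve, burned):.0f} m/s")
```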

Now, you might wonder about the exhaust speed I’ve been using, 2500 m/s. You’ll typically want a speed at least this high as it’s associated with a high value of thrust-seconds per weight of fuel. Thrust-seconds per weight is called specific impulse, SI: SI = lb-seconds of thrust/lb of fuel. This approximately equals the speed of exhaust (m/s) divided by 9.8 m/s2. For a high molecular weight burn it’s not easy to reach gas speeds much above 2500, or values of SI much above 250, but you can get high thrust since thrust is related to momentum transfer. High thrust is why US and Russian engines typically use gasoline + oxygen. The heat of combustion of gasoline is 42 MJ/kg, but burning a kg of gasoline requires roughly 2.5 kg of oxygen. Thus, for a rocket fueled by gasoline + oxygen, the heat of combustion per kg is 42/3.5 = 12,000,000 J/kg. A typical rocket engine is 30% efficient (V2 efficiency was lower, Saturn V higher). Per corrected unit of fuel+oxygen mass, 1/2 v2 = .3 x 12,000,000; v = √7,200,000 = 2680 m/s. Adding some mass for the engine and fuel tanks, the specific impulse for this engine will be about 250 s. This is fairly typical. Higher exhaust speeds have been achieved with hydrogen fuel; it has a higher combustion energy per weight. It is also possible to increase the engine efficiency; the Saturn V, stage 2 efficiency was nearly 50%, but the thrust was low. The sources of inefficiency include inefficiencies in compression, incomplete combustion, friction flows in the engine, and back-pressure of the atmosphere. If you can make a reliable, high efficiency engine with good lift, a career in engineering may be for you. A yet bigger challenge is doing this at a reasonable cost.

At an average acceleration of 5G = 49 m/s2 and a first stage that reaches 2500 m/s, you’ll find that the first stage burns out after 51 seconds. If the rocket were going straight up (bad idea), you’d find you are at an altitude of about 63.7 km. A better idea would be an average trajectory of 30°, leaving you at an altitude of 32 km or so. At that altitude you can expect to have far less air friction, and you can expect the second stage engine to be more efficient. It seems to me, you may want to wait another 10 seconds before firing the second stage: you’ll be 12 km higher up and it seems to me that the benefit of this will be significant. I notice that space launches wait a few seconds before firing their second stage.

As a final bit, I’d mentioned that docking a rocket with a space station is difficult, in part, because docking requires an increase in angular speed, w, but this generally goes along with a decrease in altitude, a counter-intuitive outcome. Setting the acceleration due to gravity equal to the centripetal acceleration, we find GM/r2 = w2r, where G is the gravitational constant and M is the mass of the earth. Rearranging, we find that w2 = GM/r3. For high angular speed, you need small r: a low altitude. When we first went to dock a space-ship, in the early 60s, we had not realized this. When the astronauts fired the engines to dock, they found that they’d accelerate in velocity, but not in angular speed: v = wr. The faster they went, the higher up they went, but the lower the angular speed got: the fewer the orbits per day. Eventually they realized that, to dock with another ship or a space-station that is in front of you, you do not accelerate, but decelerate. When you decelerate you lose altitude and gain angular speed: you catch up with the station, but at a lower altitude. Your next step is to angle your ship near-radially to the earth, and accelerate by firing engines to the side till you dock. Like much of orbital rocketry, it’s simple, but not intuitive or easy.
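
A quick illustration of why the docking maneuver feels backwards: compare two circular orbits using w2 = GM/r3. The lower orbit is both faster and completes more orbits per day. GM here is the standard gravitational parameter of the earth.

```python
import math

GM = 3.986e14         # m^3/s^2, gravitational parameter of the earth
R_earth = 6.371e6     # m

for altitude_km in (200, 400):
    r = R_earth + altitude_km * 1000.0
    w = math.sqrt(GM / r**3)        # angular speed, rad/s
    v = w * r                       # orbital speed, m/s
    minutes = 2 * math.pi / w / 60  # orbital period
    print(f"{altitude_km} km altitude: v = {v:.0f} m/s, one orbit every {minutes:.0f} minutes")
```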

Robert Buxbaum, August 12, 2015. A cannon that could reach from North Korea to Japan, say, would have to be on the order of 10 km long, running along the slope of a mountain. Even at that length, the shell would have to fire at 45 G or so, and reach a speed of about 3000 m/s, or 1/3 orbital.