Category Archives: Engineering

Nuclear fusion

I got my PhD at Princeton University 33 years ago (1981) working on the engineering of nuclear fusion reactors, and I thought I’d use this blog to rethink through the issues. I find I’m still of the opinion that developing fusion is important, as it seems the best long-range power option. Civilization will still need significant electric power 300 to 3000 years from now, it seems, when most other fuel sources are gone. Fusion is also one of the few options for long-range space exploration, needed if we ever decide to send colonies to Alpha Centauri or Saturn. I thought fusion would be ready by now, but it is not, and commercial use seems unlikely for the next ten years at least — an indication of the difficulties involved, and of a certain lack of urgency.

Oil, gas, and uranium didn’t run out like we’d predicted in the mid-70s. Instead, population growth slowed, new supplies were found, and better methods were developed to recover and use them. Shale oil and fracking unlocked hydrocarbons we thought were unusable, and nuclear fission reactors got better — safer and more efficient. At the same time, the more we studied, the clearer it became that fusion’s technical problems are much harder to tame than uranium fission’s.

Uranium fission was, and is, frighteningly simple — far simpler than even the most basic fusion reactor. The first nuclear fission reactor (1942) involved nothing more than uranium pellets in a pile of carbon bricks stacked in a converted squash court at the University of Chicago. No outside effort was needed to get the large, unstable uranium atoms to split into smaller, more stable ones. The pile ran at low power, with little heat to remove (later power reactors would simply circulate water through the core to carry the heat away), and control was maintained by workers raising and lowering cadmium control rods by hand.

A fusion reactor requires high temperature, or energy, to make anything happen. Fusion energy is produced by combining small heavy-hydrogen atoms into helium, a bigger, more stable atom (see figure). To do this reaction you need to operate at the equivalent of about 500,000,000 degrees C, and containing that requires (typically) a magnetic bottle — something far more complex than a pile of graphite bricks. The reward was smaller too: “only” about 1/13th as much energy per event as fission. We knew the magnetic bottles were going to be tricky — there was no obvious heat-transfer or control method, for example — but fusion seemed important enough, and the problems seemed manageable enough, that fusion power looked worth pursuing, with just enough difficulties to make it a challenge.

Basic fusion reaction: deuterium + tritium react to give helium, a neutron, and energy.

The plan at Princeton, and most everywhere else, was to use a TOKAMAK, a doughnut-shaped reactor like the one shown below, but roughly twice as big (TOKAMAK is a Russian acronym). The doughnut served as one side of an enormous transformer. Hydrogen fuel was ionized into a plasma (a neutral soup of ions and electrons) and heated to 300,000,000°C by a current in the TOKAMAK generated by varying the current in the other side of the transformer. Plasma containment was provided by enormous magnets on the top and bottom, and by ring-shaped magnets arranged around the torus.

As development went on, we found we kept needing bigger and bigger doughnuts and stronger and stronger magnets in an effort to balance heat loss with fusion heating. The number density of hydrogen atoms per unit volume, n, is proportional to the magnetic field strength. This is important because the fusion heat rate per volume is proportional to n squared, n², while heat loss is proportional to n divided by the residence time, something we called tau, τ. The main heat loss was from the hot plasma reaching the reactor surface. Because of this, a heat-balance ratio — heat generated divided by heat lost — was seen to be important, and that ratio is more-or-less proportional to nτ. As the target temperatures increased, we found we needed larger and larger nτ to achieve a positive heat balance, and this translated to ever larger reactors and ever stronger magnetic fields. Even here there was a limit, around 1 billion Kelvin, a temperature where the reaction rate falls off and no net energy is produced. The Princeton design was huge, with super-strong magnets, and was to operate at 300 million°C, near the top of the reaction-rate curve. If the temperature went too far above or below this point, the fire would go out. There was no room for error, yet relatively little energy output per volume compared to fission.
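For readers who want to see the scaling written out, here’s a minimal Python sketch of the heat-balance ratio described above. The two rate coefficients are illustrative placeholders, not Princeton design values; the point is only the n² heating versus n/τ loss scaling.

```python
# Illustrative scaling of the plasma heat balance described above.
# Fusion heating per volume goes as n^2; conduction loss per volume goes as n*T/tau,
# so the ratio of the two scales as n*tau at a fixed temperature.

def heat_balance_ratio(n, tau, T_keV, rate_coeff=1.0e-22, loss_coeff=2.0e-23):
    """Heating/loss in arbitrary-but-consistent units.
    rate_coeff and loss_coeff are placeholders, not design values."""
    heating = rate_coeff * n**2            # ~ n^2
    loss = loss_coeff * n * T_keV / tau    # ~ n*T/tau
    return heating / loss                  # ~ n*tau / T

# Doubling n*tau doubles the ratio at a fixed temperature:
print(heat_balance_ratio(1.0e20, 1.0, 25.0))
print(heat_balance_ratio(2.0e20, 1.0, 25.0))
```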

Fusion reaction options and reaction rates.

The most likely reaction involved deuterium and tritium, referred to as D and T. This is the reaction of the two heavy isotopes of hydrogen shown in the figure above — the same reaction used in hydrogen bombs, a point we rarely made to the public. For each reaction, D + T → He + n, you get 17.6 million electron volts (17.6 MeV). This is 17.6 million times the energy you get from an electron moving across one volt, but only about 1/13 the energy of a fission event. By comparison, the energy of water formation, H₂ + ½ O₂ → H₂O, is the equivalent of two electrons moving across 1.2 volts, or 2.4 electron volts (eV), over 7 million times less than fusion.
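For those who like to see the arithmetic, here it is in two lines of Python (standard constants only):

```python
eV = 1.602e-19           # joules per electron-volt
fusion = 17.6e6          # eV per D + T -> He + n event
combustion = 2.4         # eV per H2 + 1/2 O2 -> H2O event

print(fusion * eV)             # ~2.8e-12 J per fusion event
print(fusion / combustion)     # ~7.3 million times the chemical energy
```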

The Princeton design involved reacting 40 gm/hr of heavy hydrogen to produce 8 mol/hr of helium and 4000 MW of heat. The heat was to be converted to electricity at 38% efficiency using a topping cycle, a modern (but relatively untried) design. Of the roughly 1500 MW of electricity that was supposed to be produced, all but about 400 MW was to be delivered to the power grid — if everything worked right. Sorry to say, the value of the electricity did not rise anywhere near as fast as the cost of the reactor and turbines. Another problem: 1100 MW was more than could be easily absorbed by any one electrical grid. The output was high and steady, and could not be easily adjusted to match fluctuating customer demand. By contrast, a coal plant’s or fuel cell’s output can be adjusted easily (and a nuclear plant’s with a little more difficulty).
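As a back-of-envelope check of the power figure, using only standard constants and the feed rate above:

```python
# Thermal power from 40 g/hr of a 50/50 D-T feed.
N_A = 6.022e23                      # reactions per mol
eV = 1.602e-19                      # J per eV
E_reaction = 17.6e6 * eV            # J per D + T -> He + n

mol_He_per_hr = 40.0 / 5.0          # ~5 g per (D + T) pair -> 8 mol/hr of helium
power_W = mol_He_per_hr * N_A * E_reaction / 3600.0
print(power_W / 1e6)                # ~3,800 MW thermal, i.e. the ~4000 MW design figure
print(0.38 * power_W / 1e6)         # ~1,400-1,500 MW of electricity at 38% efficiency
```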

Because of the need for heat balance, it turned out that at least 9% of the hydrogen had to be burnt per pass through the reactor. The heat lost per mol by conduction to the wall was, to a good approximation, the heat capacity of each mol of hydrogen ions, 82 J/°C·mol, times the temperature of the ions, 300 million °C, divided by the containment time, τ. The Princeton design was supposed to have a containment time of about 4 seconds, so the heat loss by conduction was about 6.2 GW per mol of plasma. This must be matched by the part of the heat of reaction that stays in the plasma: 17.6 MeV times Faraday’s constant, 96,500 (about 1.7 TJ per mol reacted), divided by the 4-second containment time (= 430 GW per mol reacted), divided by 5. Of the energy produced by each fusion event, only 1/5 remains in the plasma (= 86 GW/mol); the other 4/5 leaves with the neutron. Balancing the conduction loss alone requires that roughly 7% of the hydrogen react per pass through the reactor; add the losses from radiation and the required burn-up approaches 9%. Burn a much larger or smaller fraction of the hydrogen and you have problems. The only other solution was to increase τ beyond 4 seconds, but this meant ever bigger reactors.
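Here is that heat balance as a short calculation, using the numbers quoted above; conduction alone gives about 7%, and the radiation losses mentioned push the required burn-up toward 9%.

```python
# Burn-fraction heat balance for the D-T plasma, using the numbers in the text.
Cp_ion = 82.0            # J/(mol*K), heat capacity used above for the plasma ions
T_ion = 3.0e8            # K (300 million C, close enough at these temperatures)
tau = 4.0                # s, design containment time

conduction_loss = Cp_ion * T_ion / tau          # ~6.2e9 W per mol of plasma present

F = 96485.0              # C/mol; converts eV per particle to J per mol
E_per_mol = 17.6e6 * F                          # ~1.7e12 J per mol reacted
heating_in_plasma = (E_per_mol / tau) / 5.0     # only the He (1/5 of the energy) stays

burn_fraction = conduction_loss / heating_in_plasma
print(burn_fraction)     # ~0.07, i.e. ~7% from conduction alone; radiation pushes it to ~9%
```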

There was also a materials-handling issue: to get enough fuel hydrogen into the center of the reactor, quite a lot of radioactive gas had to be handled and extracted from the plasma chamber. The fuel was to be frozen into tiny spheres of near-solid hydrogen and injected into the reactor at supersonic velocity. Any slower and the spheres would evaporate before reaching the center. As the 40 grams per hour reacted was only 9% of the feed, it became clear that we had to be ready to produce and inject about a pound per hour of tiny spheres. These “snowballs-in-hell” had to be small so they didn’t dampen the fire. The vacuum system had to be big enough to handle the pound or so per hour of unburned hydrogen and ash while keeping the chamber pressure near total vacuum. You then had to separate the hydrogen from the helium ash and remake the little spheres to be fed back to the reactor. None of these were easy engineering problems, but I found them enjoyable enough. With a colleague, I came up with a cute, efficient high-vacuum pump and recycling system, and published it here.

Yet another engineering challenge concerned finding a material for the first wall — the inner wall of the doughnut facing the plasma. Of the 4000 MW of heat energy produced, all of the conduction and radiation heat, about 1000 MW, is deposited in the first wall and has to be conducted away. Conducting this heat means that the wall must have an enormous coolant flow and must withstand an enormous amount of thermal stress. One possible approach was to use a liquid wall, but I’ve recently come up with a rather nicer solid-wall solution (I think) and have filed a patent; more on that later, perhaps after (or if) the patent is accepted. Another engineering challenge was making T, tritium, for the D-T reaction. Tritium is not found in nature; it has to be made from the neutron created in the reaction and from lithium in a breeder blanket, Li + n → He + T. I examined all the practical options for extracting this tritium from the lithium at low concentrations as part of my PhD thesis, and eventually found a nice solution. The education I got in the process is used in my REB Research hydrogen engineering business.

Man inside the fusion reactor doughnut at ITER. He’d better leave before the 8,000,000°C plasma turns on.

Because of its complexity and all these engineering challenges, fusion power never reached the maturity of fission power; and then Three Mile Island happened and ruined the enthusiasm for all things nuclear. There were some claims that fusion would be safer than fission, but because of fusion’s complexity and the improvements in fission, I am not convinced that fusion would ever be even as safe. And the long-term need keeps moving out: we keep finding more uranium, and we’ve developed breeder reactors and a thorium cycle — technologies that make it very unlikely we will run out of fission material any time soon.

The main near-term advantage I see for fusion over fission is that there are fewer radioactive products, see comparison. A secondary advantage is neutrons: fusion reactors make excess neutrons that can be used to make tritium or other unusual elements, and a need for one of these could favor the development of fusion power. And finally, there’s the long-term need: space exploration, or basic power when we run out of coal, uranium, and thorium. Fine advantages, but unlikely to be important for a hundred years.

Robert E. Buxbaum, March 1, 2014. Here’s a post on land use, on the aesthetics of engineering design, and on the health risks of nuclear power. The sun’s nuclear fusion reactor is unstable too — one possible source of the chaotic behavior of the climate. Here’s a control joke.

Hydrogen cars and buses are better than Tesla

Hydrogen-fueled cars and buses are as clean to drive as battery vehicles and have better range and faster fueling times. Cost-wise, a hydrogen fuel tank is far cheaper and lighter than an equivalent battery and lasts far longer. Hydrogen is likely safer too, because the tanks do not carry their oxidant with them. And the price of hydrogen is relatively low, about that of gasoline on a per-mile basis — far lower than batteries when the cost of battery wear-out is included. Both Presidents Clinton and Bush preferred hydrogen over batteries, but the current administration favors batteries. Perhaps history will show them correct, but I think otherwise. Currently, there is no hydrogen bus, car, or boat making runs at Disney’s Experimental Community of Tomorrow (EPCOT), nor is there an electric bus, car, or boat. I suspect that’s a mistake, at least concerning the lack of a hydrogen vehicle.

The best hydrogen vehicles on the road have more range than the best electric vehicles, and fuel faster. The hydrogen-powered Honda Clarity debuted in 2008. It has a 270-mile range and takes 3-5 minutes to fuel with hydrogen at 350 atm (5,150 psi). By contrast, the Tesla S sedan that debuted in 2012 claims only a 208-mile range for its standard 60 kWh configuration (the EPA says 190 miles) and requires three hours to charge using Tesla’s fastest home charger, 20 kW.

What limits the range of battery vehicles is that the battery packs are very heavy and expensive. Despite using modern lithium-ion technology, Tesla’s 60 kWh battery weighs 1,050 lbs including internal cooling, and adds another 250 lbs to the car for extra structural support. The Clarity’s fuel system weighs a lot less: the hydrogen cylinders weigh 150 lbs, the fuel cell stack 30 lbs, and a smaller lithium-ion battery for start-up another 90 lbs. The net effect is that the Clarity weighs 3,582 lbs vs 4,647 lbs for the Tesla S. The extra weight of the Tesla seems to hurt its mileage by about 10%. The Tesla gets about 3.3 mi/kWh, or 0.19 miles per pound of battery, versus 60 miles/kg of hydrogen for the Clarity, which suggests about 3.6 mi/kWh at typical fuel cell efficiencies.
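Put as a single figure of merit — range per pound of on-board energy system — the comparison looks like this, using the weights and ranges quoted above:

```python
# Range per pound of the on-board energy system, using the figures above.
tesla_range, tesla_battery_lb = 208.0, 1050.0
clarity_range, clarity_fuel_system_lb = 270.0, 150.0 + 30.0 + 90.0

print(tesla_range / tesla_battery_lb)            # ~0.2 mi per lb of battery (the 0.19 figure above)
print(clarity_range / clarity_fuel_system_lb)    # ~1.0 mi per lb of tanks + stack + start-up battery
```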

High-pressure hydrogen tanks are smaller than batteries and cheaper per unit range; the higher the pressure, the smaller the tank. The current Clarity fuels with 350 atm (5,150 psi) hydrogen, and the next generation (shown below) will use higher pressure to save space. But even with 340 atm (5,000 psi) hydrogen, a Clarity could deliver its 270-mile range with four tanks of 8″ inner diameter, 4′ long. I don’t know how Honda makes its hydrogen tanks, but suitable tanks might be made from 0.065″ maraging (aged) stainless steel (UTS = 350,000 psi, density 8 g/cc) surrounded by 0.1″ of aramid fiber (UTS = 250,000 psi, density 1.6 g/cc). With this construction, each tank would weigh 14.0 kg (30.5 lbs) empty, and would hold 11,400 standard liters, 1.14 kg (2.5 lbs), of hydrogen at pressure. These tanks could cost $1,500 total; the 270-mile range is 40% more than the Tesla S at about 1/10 the cost of current Tesla S batteries. The current price of a replacement Tesla battery pack is $12,000, subsidized by the DoE; without the subsidy, the likely price would be about $40,000.
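In case it’s helpful, here’s a rough check of the tank numbers — a thin-wall sketch using the dimensions and material strengths above. It ignores the domed ends and fittings, which is why the weight comes out a little under the 14 kg quoted; the safety factor is the standard hoop-stress estimate, not a certified design calculation.

```python
import math

# Rough check of the 8" ID x 4' hydrogen tank described above (cylinder wall only).
P = 5000.0            # psi service pressure
r = 4.0               # in, inside radius
L = 48.0              # in, length
t_steel, t_aramid = 0.065, 0.10          # in, wall thicknesses
rho_steel, rho_aramid = 8.0, 1.6         # g/cc
UTS_steel, UTS_aramid = 350e3, 250e3     # psi

# Hoop load per inch of length the wall must carry, vs. what it can carry:
load = P * r                                             # ~20,000 lb/in
capacity = UTS_steel * t_steel + UTS_aramid * t_aramid   # ~47,800 lb/in
print(capacity / load)                                   # safety factor ~2.4

# Cylinder wall weight (ignoring the ends):
in3_to_cc = 16.39
vol_steel = math.pi * 2 * r * t_steel * L * in3_to_cc
vol_aramid = math.pi * 2 * (r + t_steel) * t_aramid * L * in3_to_cc
print((vol_steel * rho_steel + vol_aramid * rho_aramid) / 1000.0)   # ~13.5 kg per tank
```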

Next generation Honda fuel cell vehicle prototype at the 2014 Detroit Auto Show.

Currently hydrogen is more expensive than electricity per unit of energy, but my company has technology to make it cheaply and more cleanly than grid electricity. My company, REB Research, makes hydrogen generators that produce ultra-pure hydrogen by steam reforming methanol-water in a membrane reactor. A standard generator, suitable for a small fueling station, outputs 9.5 kg of hydrogen per day, consuming 69 gal of methanol-water. At 80¢/gal for methanol-water and 12¢/kWh for electricity, the output hydrogen costs $2.50/kg. A car owner who drove 120,000 miles would spend $5,000 on hydrogen fuel. For that distance, a Tesla owner would spend only $4,400 on electricity, but would have to spend another $12,000 to replace the battery. Tesla batteries have about a 120,000-mile life, and their range decreases with age.
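Here is that fuel-cost comparison as a short calculation, using the prices and mileages quoted above:

```python
# Fuel cost over a 120,000 mile vehicle life, using the prices quoted above.
miles = 120_000.0

h2_price = 2.50        # $/kg from the methanol reformer
h2_mileage = 60.0      # mi/kg (Clarity figure above)
print(miles / h2_mileage * h2_price)        # ~$5,000 on hydrogen

elec_price = 0.12      # $/kWh
ev_mileage = 3.3       # mi/kWh (Tesla figure above)
battery_pack = 12_000.0                     # subsidized replacement price quoted above
print(miles / ev_mileage * elec_price)                 # ~$4,400 on electricity
print(miles / ev_mileage * elec_price + battery_pack)  # ~$16,400 including the pack
```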

For a bus or truck at EPCOT, the advantages of hydrogen grow fast. A typical bus is expected to travel much further than 120,000 miles, and to operate for 18-hour shifts in stop-and-go service, getting perhaps 1/4 the miles/kWh of a sedan. The charge-time and range advantages of hydrogen build up fast. It’s common to build a hydrogen bus with five 20-foot × 8″ tanks. Fueled at 5,000 psi, such buses have a range of about 420 miles between fill-ups, with a total tank weight and cost of about 600 lbs and $4,000 respectively. By comparison, the range of an electric bus is unlikely to exceed 300 miles, and even this requires a 6,000 lb, 360 kWh lithium-ion battery that takes 4.5 hours to charge on an 80 kW charger (200 A at 400 V, for example). That’s excessive compared to 10-20 minutes for fueling with hydrogen.
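The charging arithmetic in that comparison is simple enough to check in a couple of lines:

```python
# Bus figures from the paragraph above: charge time and line current.
battery_kWh, charger_kW = 360.0, 80.0
print(battery_kWh / charger_kW)   # 4.5 hours to recharge
print(80e3 / 400.0)               # an 80 kW DC charger at 400 V draws 200 A
```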

While my hydrogen generators are not cheap — the one above costs about $500,000 including a compressor — the cost of an 80 kW DC charger is similar once you include the cost of running a 200 A, 400 V power line. Tesla has shown there are a lot of people who value clean, futuristic transport if it comes with comfort and style. A hydrogen car can meet that handily, and can provide the extra comforts of longer range and faster refueling.

Robert E. Buxbaum, February 12, 2014 (Lincoln’s birthday). Here’s an essay on Lincoln’s Gettysburg address, on the safety of batteries, and on battery cost vs hydrogen. My company, REB Research makes hydrogen generators and purifiers; we also consult.

Nerves are tensegrity structures and grow when pulled

No one quite knows how nerve cells learn things. It is commonly, and incorrectly, thought that you cannot grow new nerve cells in the brain, or get existing brain cells to extend further, but people have made new nerve cells, and when I was a professor at Michigan State, a physiology colleague and I got brain and sensory nerves to grow out axons by pulling on them, without the use of drugs.

I had just moved to Michigan State as a fresh PhD (Princeton), an assistant professor of chemical engineering. Steve Heidemann was a few years ahead of me, a physiology professor with a PhD from Princeton. We were both New Yorkers. He had been studying nerve structure, and wondered how the growth cone makes nerves grow out axons (the axon is the long, stringy part of the nerve). One thought was that nerves were structured as Snelson-Fuller tensegrity structures, though it was not obvious how that would relate to growth or anything else. A Snelson-Fuller structure is shown below; it stands erect not by compression, as in a pyramid or igloo, but because tension in the wires lifts the metal pipes and puts them in compression. The nerve cell, shown further below, is similar, with actin protein as the outer, tensed skin and a microtubule-protein core as the compressed pipes.

A Snelson-Fuller tensegrity sculpture in the graduate college courtyard at Princeton, where Steve and I got our PhDs — an inspiration for our work.

Biothermodynamics was pretty basic 30 years ago (it still is today), and it was generally, incorrectly thought that objects were more stable when put in compression. It didn’t take too much thermodynamics on my part to show otherwise, and so I started a part-time career in cell physiology. Consider first how mechanical force should affect the Gibbs free energy, G, of assembled microtubules. For any process at constant temperature and pressure, ∆G = work. If a force is applied, some elastic work is put into the assembled microtubules in an amount ∫f dz, where f is the force at every compression and ∫dz is the integral over the distance traveled. Assuming a small force, or a constant spring, f = kz with k the spring constant. Integrating gives ∆G = ∫kz dz = ½kz². This ∆G is positive whether z is positive or negative; that is, the microtubule is most stable with no force, and is made less stable by any force, tension or compression.

A cell showing what appears to be tensegrity: the microtubules (green) surrounded by actin (red). In nerves, Heidemann and I showed the actin is in tension and the microtubules in compression. From here.

Assuming that the microtubules in the nerve axon are generally in compression, as in the Snelson-Fuller structure, then pulling on the axon could potentially reduce that compression. Normally, we posited, this is done by the growth cone, but we could also do it by pulling. In either case, a decrease in the compression of the assembled microtubules should favor microtubule assembly.

To calculate the rates, I used absolute rate theory, something I’d learned from Dr. Mortimer Kostin, a most excellent thermodynamics professor. I assumed that the free energy of the monomer was unaffected by force, and that the microtubules were in pseudo-equilibrium with the monomer. Growth rates were predicted to be proportional to the decrease in G, and the prediction matched experimental data.

Our few efforts to cure nerve disease by pulling did not produce immediate results; it turns out to be hard to pull on nerves in the body. Still, we gained some publicity, and a variety of people seem to have found scientific and/or philosophical inspiration in this sort of tensegrity model for nerve growth. I particularly like this review article by Don Ingber in Scientific American. A little more out there is this view of consciousness, life, and the fate of the universe (where I got the cell picture). In general, tensegrity structures are tougher and more flexible than normal construction. A tensegrity structure will bend easily, but rarely break. It seems likely that your body is held together this way, and because of this you can carry heavy things and still move with flexibility. It also seems likely that bones are structured this way; as with nerves, they are reasonably flexible, and can be made to grow by pulling.

Now that I think about it, we should have done more theoretical or experimental work in this direction. I imagine that pulling on the nerve also affects the stability of the actin network by affecting the chain-configuration entropy. This might slow actin assembly, or perhaps not. It might have been worthwhile to look at new ways to pull, or at bone growth. In our in-vivo work we used an external magnetic field to pull. We might have looked at NASA funding too, since it’s been observed that astronauts grow in outer space by a solid inch or two, while their bodies deteriorate. Presumably, the lack of gravity lets the bone mineral grow, making a person less of a tensegrity structure. The muscle must grow too, just to keep up, but I don’t have a theory for muscle.

Robert Buxbaum, February 2, 2014. Vaguely related to this, I’ve written about architecture, art, and mechanical design.

Land use: nuclear vs. wind and solar

An advantage of nuclear power over solar and wind is that it uses a lot less land; see the graphic below. While I am doubtful that industrial gas causes global warming, I am not a fan of pollution, and that’s why I like nuclear power. Nuclear power adds no water or air pollution when it runs right, and takes up a lot less land than wind or solar. Consider the newly approved Hinkley Point C (England), shown in the graphic. The site covers 430 acres, 1.74 km², and is currently the home of Hinkley Point B, a nuclear plant slated for retirement. When Hinkley Point C is built on the same site, it will add 26 trillion Watt-hr/year (3,200 MW at 93% up-time), about 7% of total UK demand. Yet more power would be provided from these 430 acres if Hinkley B is not shut down.

Nuclear land use vs solar and wind; British Gov’t. figures regarding their latest plant.

A solar farm producing 26 trillion W-hr/year would require about 130,000 acres, 526 km². This area implies the equivalent of about 1.36 hours per day of full-sun electrical output from every m², not unreasonable given the space needed for roads and energy storage, and how cloudy England is. Solar power requires a lot of energy storage since you only get full power in the daytime, when there are no clouds.
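For readers who want to check that arithmetic: a minimal sketch, assuming full sun is 1 kW/m² and that the panels plus the spacing between them convert it to electricity at roughly 10% net. The 10% is my assumption for illustration, not a British government figure.

```python
# Rough check of the solar land-use figure above.
annual_Wh = 26e12          # 26 trillion W-hr/year, the Hinkley C output
area_m2 = 526e6            # 130,000 acres ~= 526 km^2
full_sun_W_per_m2 = 1000.0 # insolation at full sun
net_efficiency = 0.10      # assumed: panels + spacing + wiring losses

Wh_per_m2_per_day = annual_Wh / area_m2 / 365.0
print(Wh_per_m2_per_day / (full_sun_W_per_m2 * net_efficiency))  # ~1.35 full-sun hours/day
```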

A wind farm requires even more land than solar: about 250,000 acres, or somewhat more than 1,000 km². Wind farms require less storage, but the turbines must be spaced well apart. Storage options include hydrogen, batteries, and pumped hydro; I make the case that hydrogen is better. While wind-farm land can be dual-use — allowing farming, for example — 1,000 square km is still a lot of space to carve up with roads and turbines. It’s nearly the size of greater London; the tourist area, the City of London, is only 2.9 km².

All these power sources produce pollution during construction and decommissioning, but nuclear produces somewhat less, as the plants are less massive in total and work for more years without the need for major rebuilds. Hinkley C will generate about 30,000 kg/year of waste assuming 35 MW-days/kg burn-up, but the cost to bury it in salt domes should not be excessive. Salt domes are needed because Hinkley waste will still generate about 100 kW of after-heat even 16 years out. Nuclear fusion, when it comes, should produce about 1/10,000 as much after-heat — 100 W one year out — but fusion isn’t here yet.
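A quick check of that 30,000 kg figure, treating the 26 trillion W-hr per year and the 35 MW-day/kg burn-up on the same basis, as the paragraph does:

```python
# Spent-fuel mass from the output and burn-up figures above.
annual_Wh = 26e12                    # W-hr per year
annual_MW_days = annual_Wh / 1e6 / 24.0
burnup_MW_days_per_kg = 35.0
print(annual_MW_days / burnup_MW_days_per_kg)   # ~31,000 kg/year of spent fuel
```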

There is also the problem of accidents. In the worst nuclear disaster, Chernobyl, only 31 people died as a direct result, and now (strange to say) the people downwind appear healthier than the average upwind; it seems small amounts of radiation may be good for you. By comparison, in Iowa alone there were 317 driving fatalities in 2013. And even wind and solar have accidents, e.g. people falling from wind turbines.

Robert Buxbaum, January 22, 2014. I’m president of REB Research, a manufacturer of hydrogen generators and purifiers — mostly membrane-reactor based. I also do contract research, mostly on hydrogen, and I write this blog. My PhD research was on nuclear fusion power. I’ve also written about conservation, e.g. curtains, insulation, and painting your roof white.

Ocean levels down from 3000 years ago; up from 20,000 BC

In 2006 Al Gore claimed that industry was causing 2-5°C of global warming per century, and that this, in turn, would cause the oceans to rise by 8 m by 2100. Despite a record cold snap this week, and record ice levels in the Antarctic, the US this week banned all incandescent light bulbs of 40 W and over in an effort to stop the tragedy. This was a bad move, in my opinion, for a variety of reasons, not least because it seems the preferred replacement, compact fluorescents, produce more pollution than incandescents when you include disposal of the mercury and heavy metals they contain. And then there is the weak connection between US industry and global warming.

From the geologic record, we know that 2-5° higher temperatures have been seen without major industrial outputs of pollution. These temperatures do produce the sea level rises that Al Gore warns about. Temperatures and sea levels were higher 3200 years ago (the Trojan war period), without any significant technology. Temperatures and sea levels were also higher 1900 years ago during the Roman warming. In those days Pevensey Castle (England), shown below, was surrounded by water.

During Roman times the world was warmer, and Pevensey Castle (at right) was surrounded by water at high tide. If Al Gore is right about global warming, it will be surrounded by water again by 2100.

From a plot of sea level and global temperature, below, we see that during cooler periods the sea was much shallower than today: 140 m shallower 20,000 years ago at the end of the last ice age, for example. In those days, people could walk from Asia to Alaska. Climate, like weather, appears to be cyclically chaotic. I don’t think the last ice age ended because of industry, but it is possible that industry might help the earth warm by 2-5°C by 2100, as Gore predicts. That would raise sea levels, assuming there is no new ice age.

Global temperatures and ocean levels rise and sink together, changing by a lot over thousands of years.

While I doubt there is much we could do to stop the next ice age — it is very hard to change a chaotic cycle — trying to stop global cooling seems more worthwhile than trying to stop warming. We could survive a 2 m rise in the seas, e.g. by building dykes, but 2° of cooling would be disastrous. It would come with a drastic reduction in crops, as in the famine year of 1816, the year without a summer. And if the drop continued to a new ice age, that would be much worse. The last ice age included mile-high glaciers that extended over all of Canada and reached to New York. Only the polar bear and the saber-toothed tiger did well (here’s a Canada joke, and my saber-toothed tiger sculpture).

The good news is that the current global temperature models appear to be wrong, or highly over-estimated. Average global temperatures have not changed in the last 16 years, though the Chinese keep polluting the air (for some reason, Gore doesn’t mind Chinese pollution). It is true that Arctic ice extent is low, but Antarctic ice is at record high levels. Perhaps it’s time to do nothing. While I don’t want more air pollution, I’d certainly re-allow US incandescent light bulbs. In cases where you don’t know otherwise, perhaps the wisest course is to do nothing.

Robert Buxbaum, January 8, 2014

Fractal power laws and radioactive waste decay

Here’s a fairly simple model for nuclear-reactor decay heat versus time. It’s based on a fractal model I came up with for dealing with the statistics of crime, fires, etc. The start was to notice that radioactive waste is typically a mixture of isotopes with different decay times and different decay heats. I then came to suspect that there would be a general fractal relation, and that the fractal relation would hold throughout, as the elements of the mixed waste decay to more stable, less radioactive products. After looking a bit, it seems that the fractal time characteristic is time to the 1/4 power, that is

heat output = H° exp(−a t^(1/4)).

Here H° is the heat output rate at time t = 0 and “a” is a characteristic of the waste. Different waste mixes will have different values of this decay characteristic.

If nuclear waste consisted of one isotope and one decay path, the number of atoms decaying per day would decrease exponentially with time to the power of 1. If there were only one daughter product, and it were non-radioactive, the heat output of a sample would also decay with time to the power of 1. Heat output would then equal H° exp(−a t), and a plot of the log of the decay heat against linear time would be a straight line — you could plot it all conveniently on semi-log paper.

But nuclear waste generally consists of many radioactive components with different half-lives, and these components decay into other radioactive isotopes, all of which have half-lives that vary by quite a lot. The result is that a semi-log plot is rarely helpful. Some people therefore plot radioactivity on a log-log plot, typically including a curve for each major isotope and decay mode. I find these plots hardly useful; they are certainly impossible to extrapolate. What I’d like to propose instead is a fractal variation of the original semi-log plot: a plot of the log of the heat rate against a fractal time. As shown below, the use of time to the 1/4 power seems to be helpful. The plot is similar to a fractal decay model that I’d developed for crimes and fires a few weeks ago.
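If you want to try this sort of plot yourself, here’s a minimal sketch. The heat numbers in it are made-up placeholders chosen to follow the model exactly — substitute the tabulated after-heat from NRC Regulatory Guide 3.54, referenced in the figure below.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fractal semi-log plot: log of decay heat against t^(1/4).
# Placeholder data for illustration only -- replace with the NRC Guide 3.54 tables.
t_days = np.array([10., 30., 100., 300., 1000., 3000., 10000., 30000.])
heat_W_per_kg = np.array([12.8, 11.1, 9.1, 7.1, 4.9, 3.1, 1.6, 0.75])

x = t_days**0.25
plt.semilogy(x, heat_W_per_kg, 'o-')
plt.xlabel('time$^{1/4}$  (days$^{1/4}$)')
plt.ylabel('decay heat (W per kg U)')
plt.title('If the model fits, the points fall on a straight line')
plt.show()

# A straight line means heat = H0 * exp(-a * t**0.25); the slope gives "a".
slope, lnH0 = np.polyfit(x, np.log(heat_W_per_kg), 1)
print(-slope, np.exp(lnH0))   # the decay characteristic "a" and H0
```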

After-heat of fuel rods used to generate 20 kW/kg U; top graph 35 MW-days/kg U, bottom graph 20 MW-days/kg U. Data from US NRC Regulatory Guide 3.54 — Spent Fuel Heat Generation in an Independent Spent Fuel Storage Installation, rev. 1, 1999. http://www.nrc.gov/reading-rm/doc-collections/reg-guides/fuels-materials/rg/03-054/ A typical reactor holds about 200,000 kg of uranium.

A plausible justification for this fractal semi-log plot is to observe that the half-lives of the daughter isotopes relate to those of the parent isotopes. Unless I find that someone else has come up with this sort of plot or analysis before, I’ll name it after myself: a Buxbaum-Mandelbrot plot — why not?

Nuclear power is attractive because it is far more energy-dense than any normal fuel. Still, the graph at right illustrates the problem of radioactive waste. With nuclear, you generate about 35 MW-days of power per kg of uranium. This is enough to power an average US home for 8 years, but it leaves behind 1 kg of radioactive waste. Even after 81 years that kg of waste is generating about 1/2 W of decay heat. It should be easier to handle and store the 1 kg of spent uranium than to deal with the many tons of coal smoke produced when 35 MW-days of electricity is made from coal; still, there is reason to worry about the decay heat.

I’ve made a similar plot of the decay heat of a fusion reactor, see below. Fusion looks better in this regard. A fission-based nuclear reactor big enough to power half of Detroit would hold some 200,000 kg of uranium, replaced every 5 years. Even 81 years after removal, the after-heat of that fuel would be about 100 kW, and that’s a lot.

After-heat of a 4,000 MWth fusion reactor built from Nb-1%Zr (niobium-1% zirconium, a fairly common high-temperature engineering material of construction); from the UWMAK-III report. The after-heat is far less than with uranium fission.

The plot of the after-heat of a similar-power fusion reactor (right) shows a far greater slope, but the same time-to-the-1/4-power dependence. The heat output drops from 1 MW at 3 weeks to only 100 W after 1 year, and to far less than 1 W after 81 years. Nuclear fusion is still a few years off, but the plot shows the advantages fairly clearly, I think.

This plot was really designed to look at the statistics of crime, fires, and the need for servers / checkout people.

Dr. R.E. Buxbaum, January 2, 2014, edited Aug 30, 2022. *A final, final thought about theory from Yogi Berra: “In theory, it matches reality.”

Physics of no fear, no fall ladders

I recently achieved a degree of mastery over my fear of heights while working on the flat roof of our lab building / factory. I decided to fix the flat roof of our hydrogen engineering company, REB Research (with help from employees), and that required me to climb some 20 feet to the roof to do some of the work myself and to inspect the work of others. I was pretty sure we could tar the roof cheaper and better than the companies we’d used in the past, and decided that the roof should be painted white over the tar, or that silvered tar should be used — see why. So far the roof is holding up pretty well (looks good, no leaks) and my summer air-conditioning bills were lower as well.

Perhaps the main part of overcoming my fear of heights was practice, but another part was understanding the physics of what it takes to climb a tall ladder safely. Once I was sure I knew what to do, I was far less afraid. As Emil Faber famously said, “Knowledge is good.”

Me on the tall ladder, and the forces. It helps to use the step above the roof, and to have a ladder that extends 3-4 feet past roof level.

One big thing I learned (and this isn’t physics) was not to look down, especially when going down the ladder. It’s best to look at the ladder and make sure your hands and feet are going where they should. The next trick I learned was to use a tall ladder — one that I could angle at 20° and that extends 4 feet above the roof, see figure. Those 4 feet gave me something to hold on to, and something to look at while getting on and off the ladder. I found I preferred to step to or from the roof from a rung that was either at the level of the roof or a half-step above (see figure). By contrast, I found it quite scary to step on a ladder rung that was significantly below roof level, even with an extended ladder. I bought my ladder from Acme Ladder of Capital St. in Oak Park: a fiberglass ladder, light-weight and rot-proof.

I preferred to set the ladder level (with the help of a shim if needed) at an angle of about 20° to the wall, see figure. At this angle, I felt certain the ladder would not tip over from the wind or my motion, and that it would not slip at the bottom; see the calculations below.

If the force of the wall acts at right angles to the ladder (mostly horizontally), the wall force will depend only on the lever angle and the center of mass of me plus the ladder. It will be somewhat less than our total weight times sin 20°. Since sin 20° is 0.342, I’ll say the wall force will be less than 30% of the total weight — about 65 lb in my case. The wall force provides some lift to the ladder: 34.2% of the wall force, about 22 lb, or 10% of the total weight. Mostly, though, the wall provides horizontal force, 65 lb × cos 20°, or about 60 lb. This is what keeps the ladder from tipping backward if I make a sudden motion, and this is the force that must be restrained by friction at the ladder feet. At a steeper angle the anti-tip force would be smaller, but so would the tendency to slip.

The rest of the total weight of me and the ladder — the 90% of the weight that is not supported by the roof — rests on the ground. This is called the “normal force,” the force in the vertical direction from the ground. The friction force, what keeps the ladder from slipping out while I’m on it, is this normal force times the friction factor of the ground. The bottom of my ladder has rubber pads, suggesting a friction factor of 0.8, and perhaps more. As the normal force will be about 90% of the total weight, the slip-restraining force works out to at least 72% of that weight, more than double the 28% of the weight that the wall pushes with. The difference, some 44% of the weight (100 lbs or so), is what keeps the ladder from slipping, even as I get on and off it. I find that I don’t need a person on the ground for physics reasons, but it sometimes helped to steady my nerves, especially in a strong wind.
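For those who want the arithmetic, here’s the same estimate as a short calculation; the 220 lb combined weight and the 85% center-of-mass height are my assumptions for illustration, not measured values.

```python
import math

# Ladder force estimates at 20 degrees from the wall, following the reasoning above.
W = 220.0                  # lb, me plus ladder (assumed for illustration)
theta = math.radians(20.0) # ladder angle from the wall
f_cm = 0.85                # center of mass position along the ladder (assumed)
mu = 0.8                   # friction factor of the rubber feet (from the text)

R_wall = W * f_cm * math.sin(theta)     # reaction at the roof edge, ~65 lb
lift = R_wall * math.sin(theta)         # vertical part of that reaction, ~22 lb
push = R_wall * math.cos(theta)         # horizontal part the feet must resist, ~61 lb

N = W - lift                            # normal force on the ground, ~198 lb
friction_available = mu * N             # ~158 lb
print(R_wall, push, friction_available - push)   # ~100 lb of margin against slipping
```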

Things are not so rosy if you use a near-vertical ladder (less than 10° to the wall) or a widely inclined one (more than 40°). The vertical ladder can tip over, and the widely inclined ladder can slip at the bottom, especially if you climb past the top of the roof or if your ladder sits on a slippery surface without rubber feet.

Robert E. Buxbaum Nov 20, 2013. For a visit to our lab, see here. For some thoughts on wind force, and comments on Engineering aesthetics. I owe to Th. Roosevelt the manly idea that overcoming fear is a worthy achievement. Here he is riding a moose. Here are some advantages of our hydrogen generators for gas chromatography.

Ab Normal Statistics and joke

The normal distribution of observation data looks sort of like a ghost. A distribution that really looks like a ghost is scary.

It’s funny because … the normal distribution curve looks sort of like a ghost. It’s also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that — abnormal statistics. They’d find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.

Take the following example: you’re interested in buying a house near a river. You’d like to analyze river flood data to know your risks. How high will the river rise in 100 years, or 1,000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal-distribution math from your college statistics book. This looks awfully daunting (it doesn’t have to be) and may be wrong, but it’s all you’ve got.

The normal distribution graph is considered normal, in part, because it’s fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variation derives from multiple small errors around a common norm, and not from some single, giant issue. It’s not clear this is a realistic assumption in most cases, but it is comforting. I’ll show you how to do the common math as it’s normally done, and then how to do it better and quicker with almost no math at all, and without those assumptions.

Let’s say you want to know the hundred-year maximum flood height of a river near your house. You don’t want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let’s say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.

The “normal” approach (pardon the pun) is to take a quick look at the data and see that it is sort-of normal (many people don’t bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8−5.8)² + (6−5.8)² + (3−5.8)² + (5−5.8)² + (7−5.8)²]/4} = 1.92. The use of 4 in the denominator instead of 5 is called Bessel’s correction — it reflects the fact that a standard deviation is meaningless if there is only one data point.

For normal data, the one-hundred-year maximum height of the river (the 1% maximum) is the average height plus about 2.33 times the deviation; in this case, 5.8 + 2.33 × 1.92 = 10.3 feet. If your house is any higher than this you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don’t want to go too far. How far is too far?
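For reference, here is that standard calculation in a few lines of Python (2.33 is the one-sided 1% value of the standard normal):

```python
import statistics

heights = [8, 6, 3, 5, 7]            # measured yearly flood maxima, feet
mean = statistics.mean(heights)      # 5.8 ft
s = statistics.stdev(heights)        # sample standard deviation (Bessel's n-1), ~1.92 ft

z_1pct = 2.33                        # standard-normal value exceeded 1% of the time
print(mean + z_1pct * s)             # ~10.3 ft estimate of the 100-year flood height
```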

So let’s do this better — with less math — through the use of probability paper. As with any good science, we begin with data, not with assumptions like normality. Arrange the river-height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is, on paper where likelihoods from 0.01% to 99.99% are arranged along the bottom (x) axis, and your other numbers — in this case the river heights — are the y values listed at the left. Graph paper of this sort is sold in university bookstores; you can also get jpeg versions online, but they don’t look as nice.

Probability plot of the maximum river height over 5 years — it looks reasonably normal, but slightly ghost-like. If the data falls on a straight line, as here, it is reasonably normal. Extrapolating to the 1% exceedance suggests the 100-year flood height would be 9.5 to 10.2 feet, and that it is 99.99% unlikely to reach 11 feet. That’s once in 10,000 years, other things being equal.

For the x-axis values of the 5 data points above, I’ve taken the likelihood to be the middle of its percentile. Since there are 5 data points, each point is taken to represent its own 20-percentile band; the middles appear at 10%, 30%, 50%, etc. I’ve plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 feet, at 50%; the fourth at 70%; and the drought-year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than a normal distribution would predict. The points will also have to drop off at the right, since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we’d want to build our house higher than a normal distribution would suggest.
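If you’d rather let the computer draw the probability paper, here’s a minimal sketch using the same mid-percentile plotting positions; scipy’s norm.ppf supplies the probability-paper spacing.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Normal probability plot of the five flood maxima, using mid-percentile
# plotting positions: the i-th ranked value sits at (i - 0.5)/n exceedance.
heights = np.sort([8, 6, 3, 5, 7])[::-1]          # ranked high to low: 8, 7, 6, 5, 3
n = len(heights)
exceedance = (np.arange(n) + 0.5) / n             # 10%, 30%, 50%, 70%, 90%
z = stats.norm.ppf(exceedance)                    # the spacing probability paper uses

plt.plot(z, heights, 'o')
plt.xlabel('standard-normal quantile (the probability-paper axis)')
plt.ylabel('yearly maximum river height, ft')
plt.show()

# A straight line through the points, extended to 1% exceedance, gives the
# graphical 100-year estimate.
slope, intercept = np.polyfit(z, heights, 1)
print(intercept + slope * stats.norm.ppf(0.01))   # ~10.3 ft, close to the read-off above
```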

You can now find the 100-year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that’s two major lines from the left in the graph above — you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 feet and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and was done graphically, without the need for a spreadsheet or math. What’s more, the prediction is more accurate, since we were in a position to evaluate the normality of the data and thus to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve in some way. Most weather data is curved — e.g. normal against a fractal time, I think — and this affects your predictions. You might expect to have an ice age in 10,000 years.

The standard deviation we calculated above is related to a quality standard called six sigma — something you may have heard of. If we had a lot of parts to make, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is 2.33 standard deviations from the norm, 99% of your product will be within spec; good, but not great. If you’ve got six sigmas, there is a one-in-a-billion confidence of meeting the spec, other things being equal. Some companies (like Starbucks) aim for this low variation, a six-sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion. The average is rarely an optimum, and you want a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that’s subject matter for a further essay). If you want help with statistics, or with a quality engineering project, contact us.
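As a quick check of those two confidence levels (one-sided, other things being equal):

```python
from scipy import stats

# One-sided fraction of a normal distribution lying beyond k standard deviations.
for k in (2.33, 6.0):
    print(k, stats.norm.sf(k))   # 2.33 -> ~0.01 (1%); 6 -> ~1e-9 (one in a billion)
```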

I’ve also been meaning to write about the phrase “other things being equal” — ceteris paribus in Latin. All this math only makes sense so long as the general parameters don’t change much. Your home won’t flood so long as no one builds a new mall upriver with its runoff going into the river, and so long as the dam doesn’t break. If these are concerns (and they should be), you still need to use statistics and probability paper, but you will now have to use other data, like the likelihood of malls going up, or of dams breaking. When you input this other data, you will find the probability curve is not normal but typically has a long tail (when the dam breaks, the water goes up by a lot). That’s outside of standard statistical analysis, but it’s why those hundred-year floods come a lot more often than once in 100 years. I’ve noticed that, even at Starbucks, more than one in a billion cups of coffee comes out wrong. Even in analyzing a common snafu like this, you still use probability paper. It may be “situation normal,” but the distribution curve it describes has an abnormal tail.

by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/ joke, by the way. The first one dealt with bombs on airplanes — well, take a look.

An Aesthetic of Mechanical Strength

Back when I taught materials science to chemical engineers, I used the following poem to teach my aesthetic for the strength target for product design:

The secret to design, as the parson explained, is that the weakest part must withstand the strain. And if that part is to withstand the test, then it must be made as strong as all the rest. (by R.E. Buxbaum, based on “The Wonderful One-Hoss Shay,” by Oliver Wendell Holmes, 1858).

My thought was, if my students had no idea what good mechanical design looked like, they’d never be able to do it well. I wanted them to realize that there is always a weakest part of any device or process for every type of failure. Good design accepts this and designs everything else around it. You make sure that the device will fail at a part of your choosing, preferably one that you can repair easily and cheaply (a fuse, or a door hinge), and one that doesn’t cause too much mayhem when it fails. Once this failure part is chosen and in place, I taught that the rest should be stronger, but there is no point in making any other part of that failure chain significantly stronger than the weakest link. Thus, for example, once you’ve decided to use a fuse of a certain amperage, there is no point in making the rest of the wiring take more than 2-3 times the amperage of the fuse.

This is an aesthetic argument, of course, but it’s important for a person to know what good work looks like (to me, and perhaps to the student) — beyond just compliments from the boss or grades from me. Some day I’ll be gone, and the boss won’t be looking. There are other design issues too: if you don’t know where the failure point is, make a prototype and test it to failure, and if you don’t like what you see, remodel accordingly. If you like the point of failure but decide you really want to make the device stronger or more robust, be aware that this may involve strengthening that part only, or strengthening the entire chain of parts so they are as failure-resistant as that part (the former is cheaper).

I also wanted to teach that there are many failure chains to look out for: many ways that things can go wrong beyond breaking. Check for failure by fire, melting, explosion, smell, shock, rust, and even color change. Color change should not be ignored, BTW; there are many products that people won’t use as soon as they look bad (cars, for example). Make sure that each failure chain has its own known, chosen weak link. In a car, the paint should fade, chip, or peel some (small) time before the metal underneath starts rusting or sagging (at least that’s my aesthetic). And in the DuPont gunpowder mill below, one wall was made weaker so that an explosion would blow outward the right way (away from traffic). Be aware that human error is the most common failure mode: design to make things acceptably idiot-proof.

DuPont powder mills had a thinner wall and a stronger wall so that, if there were an explosion, it would blow out “safely,” towards the river. This mill has a second wall to protect workers. The thinner wall must be barely strong enough to stand up to wind and rain; the stronger walls should stand up to all likely explosions.

Related to my aesthetic of mechanical strength, I tried to teach an aesthetic of cost, weight, appearance, and green: choose materials that are cheaper rather than more expensive; use less weight rather than more if both ways work equally well. Use materials that look better if you’ve got the choice, and use recyclable materials. These all derive from the well-known axiom: omit needless stuff. Or, as William of Occam put it, “Entia non sunt multiplicanda sine necessitate.” As an aside, I’ve found that when engineers use Latin, we look smart: “lingua bona lingua mortua est” (a good language is a dead language). It’s the same with quoting 19th-century poets, BTW: dead 19th-century poets are far better than undead ones, but I digress.

Use of recyclable materials gets you out of lots of problems relative to materials that must be disposed of. E.g. if you use aluminum insulation (recyclable) instead of ceramic fiber, you will have an easier time getting rid of the scrap. As a result, you are not as likely to expose your workers (or yourself) to mesothelioma or a similar disease. You should not have to pay someone to haul away excess or damaged product; a scrapper will oblige, and he may even pay you for it if you have enough. Recycling helps cash flow with decommissioning too, when money is tight. It’s better to find your $1 worth of scrap is now worth $2 than to discover that your $1 worth of garbage now costs $2 to haul away. By the way, most heat loss from hot equipment is by black-body radiation, so aluminum foil may actually work better than ceramics of the same thermal conductivity.
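To put rough numbers on that last point — a minimal sketch with assumed temperatures and typical emissivities (the 500°C surface and the emissivity values are illustrative assumptions):

```python
# Radiative heat loss q = eps * sigma * (T_hot^4 - T_cold^4).
sigma = 5.67e-8                     # W/(m^2 K^4), Stefan-Boltzmann constant
T_hot, T_cold = 773.0, 298.0        # K: a 500 C surface radiating to a 25 C room (assumed)

for surface, eps in [("bright aluminum foil, eps ~ 0.05", 0.05),
                     ("ceramic fiber blanket, eps ~ 0.9", 0.9)]:
    print(surface, round(eps * sigma * (T_hot**4 - T_cold**4)), "W/m^2")
```

The low-emissivity foil surface radiates roughly twenty times less than the high-emissivity ceramic at the same temperature, which is the point of the comparison.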

Buildings can be recycled too. Buy them and sell them as needed. Shipping containers make for great lab buildings because they are cheap, strong, and movable. You can sell them off-site when you’re done. We have a shipping-container lab building and a shipping-container storage building — both worth more now than when I bought them. They are also rather attractive with our advertising on them — attractive according to my design aesthetic, anyway. Here’s an insight into why chemical engineers earn more than chemists, an insight into the difference between mechanical engineering and civil engineering, an architecture aesthetic, and one about the scientific method.

Robert E. Buxbaum, October 31, 2013

Let’s make a Northwest Passage

The Northwest Passage opened briefly last year, and the two years before, allowing some minimal shipping between the Atlantic and the Pacific by way of the Arctic Ocean, but it was closed in 2013 because there was too much ice. I have a business / commercial thought, though: we could make a semi-permanent northwest passage if we dredged a canal across the Boothia Peninsula at Taloyoak, Nunavut (Canada).

Map of Northern Canada showing cities and the Parry Channel, the current Northwest Passage route. A canal north or south of the Boothia Peninsula would seem worthwhile.

As things currently stand, ships must sail 500 miles north of Taloyoak and traverse the Parry Channel. Shown below is a picture of ice levels in August 2012 and 2013. The proposed channel could have been kept open even in 2013, providing a route for valuable shipping commerce. As a cheaper alternative, one could maintain the old Hudson’s Bay trading channel at Fort Ross, between the Boothia Peninsula and Somerset Island. This is about 250 miles north of Taloyoak, but still 250 miles south of the current route.

Arctic ice, August 2012 vs. 2013; both Taloyoak and Igloolik appear open this year. The NW passage was open by way of the Parry Channel north of Somerset Island and Baffin Island in 2012, but not in 2013. The proposed channels could have been kept open even this year.

Dr. Robert E. Buxbaum, October 2013. Here are some random thoughts on Canadian crime, the true north, and the Canadian pastime (Ice fishing).