Category Archives: Physics

If hot air rises, why is it cold on mountain-tops?

This is a child’s question that’s rarely answered to anyone’s satisfaction. To answer it well requires college-level science, and by college the child has usually been dissuaded from asking anything scientific that would likely embarrass the teacher; which is to say, from asking most anything. By a good answer, I mean here one that provides a mathematical, checkable prediction of the temperature you’d expect to find on mountain tops, and that also gives a feel for why it should be so. I’ll try to provide this here, as previously when explaining “why is the sky blue.” A word of warning: real science involves mathematics, something that’s often left behind, perhaps in an effort to build self-esteem. If I do a poor job, please text me back: “if hot air rises, what’s keeping you down?”

As a touchy-feely answer, please note that all materials have internal energy. It’s generally associated with the kinetic energy + potential energy of the molecules. It enters whenever a material is heated or has work done on it, and for gases, to good approximation, it equals the heat capacity of the gas times its temperature. For air, this is about 7 cal/mol°K times the temperature in degrees Kelvin. Average air at sea-level is taken to be at 1 atm, or 101,325 Pascals, and 15.0°C, or 288.15°K; the internal energy of this air is thus 288.15 x 7 = 2017 cal/mol = 8440 J/mol. The internal energy of the air will decrease as the air rises, and the temperature drops for reasons I will explain below. Most diatomic gases have a heat capacity of 7 cal/mol°K, a fact that is only explained by quantum mechanics; if not for quantum mechanics, the heat capacities of diatomic gases would be about 9 cal/mol°K.
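
For those who want to check the arithmetic, here it is as a few lines of Python; the 7 cal/mol°K heat capacity and the 4.184 J/cal conversion are the values used in this post:

```python
# Internal energy of a mole of air at sea-level conditions,
# using u = Cp * T with Cp = 7 cal/mol-K for a diatomic gas.
Cp = 7.0           # heat capacity, cal/mol-K
T_sea = 288.15     # 15 C in Kelvin
J_PER_CAL = 4.184  # mechanical equivalent of heat, J per cal

u_cal = Cp * T_sea           # ~2017 cal/mol
u_joule = u_cal * J_PER_CAL  # ~8440 J/mol
print(round(u_cal), round(u_joule))
```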

Let’s consider a volume of this air at standard conditions, and imagine that it is held within a weightless balloon, or plastic bag. As we pull that air up, by pulling up the bag, the bag starts to expand because the pressure is lower at high altitude (air pressure is just the weight of the air above). No heat is exchanged with the surrounding air because our air will always be about as warm as its surroundings; or, if you like, you can imagine the weightless balloon prevents it. In either case the molecules lose energy as the bag expands because they always collide with an outwardly moving wall. Alternately, you can say that the air in the bag is doing work on the exterior air (expansion is work), but we are putting no work into the air, as it takes no work to lift it: the buoyancy of the air in our balloon is always about that of the surrounding air, or so we’ll assume for now.

A classic, difficult way to calculate the temperature change with altitude is to calculate the work being done by the air in the rising balloon. Work done is force times distance: w = ∫f dz, and this work should equal the effective cooling, since heat and work are interchangeable. There’s an integral sign here to account for the fact that force is proportional to pressure, and the air pressure decreases as the balloon goes up. We now note that w = ∫f dz = -∫P dV because pressure, P, is force per unit area, and volume, V, is area times distance. The minus sign is because the work is being done by the air, not done on the air; it involves a loss of internal energy. Sorry to say, the temperature and pressure of the air keep changing with volume and altitude, so it’s hard to solve the integral, but there is a simple approach based on entropy, S.

Les Droites, a 4000 m mountain in the Mont Blanc range of the Alps. The top is generally snow-covered.

I discussed entropy last month, and showed it was a property of state, and further, that for any reversible path, ∆S = (Q/T)rev. That is, the entropy change for any reversible process equals the heat that enters divided by the temperature. Now, we expect the balloon’s rise to be reversible, and since we’ve assumed no heat transfer, Q = 0. We thus expect that the entropy of the air will be the same at all altitudes. Now entropy has two parts, a temperature part, Cp ln T2/T1, and a pressure part, R ln P2/P1. If the total ∆S = 0, these two parts will exactly cancel.

Consider that at 4000m, the height of Les Droites, a mountain in the Mont Blanc range, the typical pressure is 61,660 Pa, about 60.85% of sea level pressure (101325 Pa). If the air were reduced to this pressure at constant temperature (∆S)T = -R ln P2/P1 where R is the gas constant, about 2 cal/mol°K, and P2/P1 = .6085; (∆S)T = -2 ln .6085. Since the total entropy change is zero, this part must equal Cp ln T2/T1 where Cp is the heat capacity of air at constant pressure, about 7 cal/mol°K for all diatomic gases, and T1 and T2 are the temperatures (Kelvin) of the air at sea level and 4000 m. (These equations are derived in most thermodynamics texts. The short version is that the entropy change from compression at constant T equals the work at constant temperature divided by T,  ∫P/TdV=  ∫R/V dV = R ln V2/V1= -R ln P2/P1. Similarly the entropy change at constant pressure = ∫dQ/T where dQ = Cp dT. This component of entropy is thus ∫dQ/T = Cp ∫dT/T = Cp ln T2/T1.) Setting the sum to equal zero, we can say that Cp ln T2/T1 =R ln .6085, or that 

T2 = T1 (.6085)R/Cp

T2 = T1 (.6085)2/7, where 0.6085 is the pressure ratio at 4000 m, and because, for air and most other diatomic gases, R/Cp = 2/7 to very good approximation, matching the prediction from quantum mechanics.

From the above, we calculate T2 = 288.15 x .8676 = 250.0°K, or -23.15 °C. This is cold enough to provide snow  on Les Droites nearly year round, and it’s pretty accurate. The typical temperature at 4000 m is 262.17 K (-11°C). That’s 26°C colder than at sea-level, and only 12°C warmer than we’d predicted.
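
If you’d like to check this result, here is the whole calculation as a short Python sketch, using only the numbers quoted above:

```python
# Isentropic temperature drop from sea level to 4000 m.
# Setting dS = 0 gives Cp*ln(T2/T1) = R*ln(P2/P1), so T2 = T1*(P2/P1)**(R/Cp).
T1 = 288.15                 # sea-level temperature, K
P1, P2 = 101325.0, 61660.0  # pressures at sea level and 4000 m, Pa
R_over_Cp = 2.0 / 7.0       # for air and most diatomic gases

ratio = P2 / P1               # ~0.6085
T2 = T1 * ratio ** R_over_Cp  # ~250.0 K, i.e. about -23 C
print(round(ratio, 4), round(T2, 1))
```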

There are three weak assumptions behind the 12°C error in our prediction: (1) that the air that rises is no hotter than the air that does not, (2) that the air is not heated by radiation from the sun or earth, and (3) that there is no heat exchange with the surrounding air, e.g. from rain or snow formation. The last of these errors is thought to be the largest, but it’s still not large enough to cause serious problems.

Snow on Kilimanjaro, Tanzania, 2013. If global warming models were true, the ground should be 4°C warmer than 100 years ago, and the air at this altitude about 7°C (12°F) warmer; the snow should be gone.

You can use this approach, with different exponents, to estimate the temperature at the center of Jupiter, or at the center of neutron stars. This iso-entropic calculation is the model that’s used here, though it’s understood that it may be off by a fair percentage. You can also ask questions about global warming: increased CO2 at this level is supposed to cause extreme heating at 4000 m, enough to heat the earth below by 4°C/century or more. As it happens, the temperature and snow cover on Les Droites and other Alp ski areas have been studied carefully for many decades; they are not warming as best we can tell (here’s a discussion). By all rights, Mt Blanc should be Mt Green by now; no one knows why. The earth too seems to have stopped warming. My theory: clouds.

Robert Buxbaum, May 10, 2014. Science requires you check your theory for internal and external weakness. Here’s why the sky is blue, not green.

Entropy, the most important pattern in life

One evening at the Princeton grad college a younger fellow (an 18-year-old genius) asked the most simple, elegant question I had ever heard, one I’ve borrowed and used ever since: “tell me”, he asked, “something that’s important and true.” My answer that evening was that the entropy of the universe is always increasing. It’s a fundamentally important pattern in life; one I didn’t discover, but discovered to have a lot of applications and meaning. Let me explain why it’s true here, and then why I find it’s meaningful.

Famous entropy cartoon, Harris

The entropy of the universe is not something you can measure directly, but rather indirectly, from the availability of work in any corner of it. It’s related to randomness and the arrow of time. First off, here’s how you can tell if time is moving forward: put an ice-cube into hot water; if the cube melts and the water becomes cooler, time is moving forward, or at least it’s moving in the same direction as you are. If you could reach into a cup of warm water and pull out an ice-cube while making the water hot, time would be moving backwards, or rather, you would be living backwards. Within any closed system, one where you don’t add things or energy (sunlight, say), you can tell that time is moving forward because the forward progress of time always leads to a loss of availability of work. In the case above, you could have generated some electricity from the ice-cube and the hot water, but not from the glass of warm water.

You can not extract work from a heat source alone; to extract work, some heat must be deposited in a cold sink. At best the entropy of the universe remains unchanged. More typically, it increases.

This observation is about as fundamental as any to understanding the world; it is the basis of entropy and the second law of thermodynamics: you can never extract useful work from a uniform-temperature body of water, say, just by making that water cooler. To get useful work, you always need some other transfer into or out of the system; you always need to make something else hotter, colder, or provide some chemical or altitude change that can not be reversed without adding more energy back. Thus, so long as time moves forward, everything runs down in terms of work availability.

There is also a first law; it states that energy is conserved. That is, if you want to heat some substance, that change requires that you put in a set amount of work plus heat. Similarly, if you want to cool something, a set amount of heat plus work must be taken out. In equation form, we say that, for any change, q + w is constant, where q is heat and w is work. It’s the sum that’s constant, not the individual values, so long as you count every 4.184 Joules of work as if it were 1 calorie of heat. If you input more heat, you have to add less work, and vice versa, but there is always the same sum. When adding heat or work, we say that q or w is positive; when extracting heat or work, we say that q or w is negative. Still, each 4.184 Joules counts as if it were 1 calorie.
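
This bookkeeping can be sketched in a few lines of Python; the heats and works below are made-up numbers for illustration, with only the 4.184 J/cal conversion being real:

```python
J_PER_CAL = 4.184  # count every 4.184 J of work as 1 calorie of heat

def delta_U(q_cal, w_joule):
    """Change in internal energy, in calories, for heat q (cal) plus work w (J)."""
    return q_cal + w_joule / J_PER_CAL

# Two different paths between the same two states: all heat, or
# part heat and part work. The sum q + w is what stays the same.
path_a = delta_U(q_cal=100.0, w_joule=0.0)
path_b = delta_U(q_cal=60.0, w_joule=40.0 * J_PER_CAL)
print(path_a, path_b)  # both ~100 cal
```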

Now, since for every path between two states, q + w is the same, we say that q + w represents a path-independent quantity for the system, one we call internal energy, U, where ∆U = q + w. This is a mathematical form of the first law of thermodynamics: you can’t take q + w out of nothing, or add it to something without making a change in the properties of the thing. The only way to leave things the same is if q + w = 0. We notice also that, for any pure thing or mixture, the sum q + w for the change is proportional to the mass of the stuff; internal energy is thus an extensive quantity: q + w = n ∆u, where n is the grams of material and ∆u, the change in internal energy per gram, is intensive.

We are now ready to put the first and second laws together. We find we can extract work from a system if we take heat from a hot body of water and deliver some of it to something at a lower temperature (the ice-cube, say). This can be done with a thermopile, or with a steam engine (Rankine cycle, above), or a Stirling engine. That an engine can only extract work when there is a difference of temperatures is similar to the operation of a water wheel. Sadi Carnot noted that a water wheel is able to extract work only when there is a flow of water from a high level to a low one; similarly, in a heat engine, you only get work by taking in heat energy from a hot heat-source and exhausting some of it to a colder heat-sink. The remainder leaves as work. That is, q1 - q2 = w, and energy is conserved. The second law isn’t violated so long as there is no way you could run the engine without the cold sink. Accepting this as reasonable, we can now derive some very interesting, non-obvious truths.

We begin with the famous Carnot cycle. The Carnot cycle is an idealized heat engine with the interesting feature that it can be made to operate reversibly. That is, you can make it run forwards, taking a certain amount of heat from a hot source, producing a certain amount of work and delivering a certain amount of heat to the cold sink; and you can run the same process backwards, as a refrigerator, taking in the same amount of work and the same amount of heat from the cold sink and delivering the same amount of heat to the hot source. Carnot showed by the following proof that all other reversible engines would have the same efficiency as his cycle, and that no engine, reversible or not, could be more efficient. The proof: if an engine could be designed that would extract a greater percentage of the heat as work when operating between a given hot source and cold sink, it could be used to drive his Carnot cycle backwards. If the pair of engines were now combined so that the less efficient engine removed exactly as much heat from the sink as the more efficient engine deposited, the excess work produced by the more efficient engine would leave with no effect besides cooling the source. This combination would be in violation of the second law, something that we’d said was impossible.

Now let us try to understand the relationship that drives useful energy production. The ratio of heat in to heat out has got to be a function of the in and out temperatures alone. That is, q1/q2 = f(T1, T2), and similarly q2/q1 = f(T2, T1). Now let’s consider what happens when two Carnot cycles are placed in series between T1 and T2, with the middle temperature at Tm. For the first engine, q1/qm = f(T1, Tm), and similarly for the second engine, qm/q2 = f(Tm, T2). Combining these we see that q1/q2 = (q1/qm) x (qm/q2), and therefore f(T1, T2) must always equal f(T1, Tm) x f(Tm, T2) = f(T1, Tm)/f(T2, Tm). In this relationship we see that the middle temperature Tm is irrelevant; it is true for any Tm. The only way that can happen is if f(T1, T2) = g(T1)/g(T2) for some function g of one temperature; choosing g(T) = T defines the absolute temperature scale. We thus say that q1/q2 = T1/T2, and this is the limit of what you get at maximum (reversible) efficiency. You can now rearrange this to read q1/T1 = q2/T2, or to say that work, W = q1 - q2 = q2 (T1 - T2)/T2.
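
Here is that limit as a numerical sketch, with temperatures and heat made up purely for illustration:

```python
# Reversible (Carnot) engine between a hot source T1 and a cold sink T2:
# q1/q2 = T1/T2, and the work extracted is w = q1 - q2 = q2*(T1 - T2)/T2.
T1, T2 = 500.0, 300.0  # temperatures in K (illustrative values)
q1 = 1000.0            # heat taken from the hot source, cal

q2 = q1 * T2 / T1  # heat rejected to the cold sink: 600 cal
w = q1 - q2        # work out: 400 cal
assert abs(w - q2 * (T1 - T2) / T2) < 1e-9  # same answer both ways
print(q2, w)
```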

A strange result from this is that, since every process can be modeled as either a sum of Carnot engines or of engines that are less efficient, and since the Carnot engine will produce this same amount of reversible work when filled with any substance or combination of substances, we can say that this outcome, q1/T1 = q2/T2, is independent of path, and independent of substance, so long as the process is reversible. We can thus say that for all substances there is a property of state, S, such that the change in this property is ∆S = ∑q/T for all the heat in or out. In a more general sense, we can say ∆S = ∫dq/T, where this state property, S, is called the entropy. Since, as before, the amount of heat is proportional to mass, S is an extensive property; S = n s, where n is the mass of stuff and s, the entropy per mass, is intensive.

Another strange result comes from the efficiency equation. Since, for any engine or process that is less efficient than the reversible one, we get less work out for the same amount of q1, we must have more heat rejected than q2. Thus, for an irreversible engine or process, q1 - q2 < q2 (T1 - T2)/T2, and q2/T2 is greater than q1/T1. As a result, the total change in entropy, ∆S = q2/T2 - q1/T1 > 0: the source loses entropy q1/T1, the sink gains the larger amount q2/T2, and the entropy of the universe always goes up or stays constant. It never goes down. A final observation is that there must be a zero of temperature that nothing can go below, or else both q1 and q2 could be heat flows into the engine, with all of it leaving as work, in violation of the laws above. Our observations of time and energy conservation lead us to expect a minimum temperature, T = 0, that nothing can be colder than. We find this temperature at -273.15 °C. It is called absolute zero; nothing has ever been cooled to be colder than this, and now we see that, so long as time moves forward and energy is conserved, nothing ever will be.
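
Here is that bookkeeping as a sketch, counting the entropy the hot source loses against what the cold sink gains; the numbers are again made up for illustration:

```python
# Entropy bookkeeping for engines between a 500 K source and a 300 K sink.
# The source loses q1/T1 of entropy; the sink gains q2/T2.
T1, T2, q1 = 500.0, 300.0, 1000.0

# Reversible engine: q2 = q1*T2/T1, and the entropy changes cancel exactly.
q2_rev = q1 * T2 / T1
dS_rev = -q1 / T1 + q2_rev / T2  # 0: entropy of the universe unchanged

# Irreversible engine: less work out, so more heat rejected than q2_rev.
q2_irr = q2_rev + 100.0
dS_irr = -q1 / T1 + q2_irr / T2  # > 0: the universe's entropy went up
print(dS_rev, round(dS_irr, 3))
```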

Typically we either say that S is zero at absolute zero, or at room temperature.

We’re nearly there. We can define the entropy of the universe as the sum of the entropies of everything in it. From the above treatment of work cycles, we see that this total of entropy always goes up, never down. A fundamental fact of nature, and (in my world view) a fundamental view into how God views us and the universe. First, that the entropy of the universe goes up only, and not down (in our time-forward framework) suggests there is a creator for our universe — a source of negative entropy at the start of all things, or a reverser of time (it’s the same thing in our framework). Another observation, God likes entropy a lot, and that means randomness. It’s his working principle, it seems.

But before you take me now for a total libertine and say that, since science shows that everything runs down, the only moral take-home is to teach: “Let us eat and drink,”… “for tomorrow we die!” (Isaiah 22:13), I should note that his randomness only applies to the universe as a whole. The individual parts (planets, laboratories, beakers of coffee) do not maximize entropy; they minimize available work, and this is different. You can show that the maximization of S, the entropy of the universe, does not lead to the maximization of s, the entropy per gram of your particular closed space, but rather to the minimization of a related quantity µ, the free energy, or usable work per gram of your stuff. You can show that, for any closed system at constant temperature, µ = h - Ts, where s is entropy per gram as before, and h is called enthalpy. h is basically the potential energy of the molecules; it is lowest at low temperature and high order. For a closed system we find there is a balance between s, something that increases with increased randomness, and h, something that decreases with increased randomness. Put water and air in a bottle, and you find that the water is mostly on the bottom of the bottle, the air is mostly on the top, and the amount of mixing in each phase is not the maximum disorder, but rather the one you’d calculate will minimize µ.

As a protein folds, its randomness and entropy decrease, but its enthalpy decreases too; the net effect is one precise fold that minimizes µ.

This is the principle that God applies to everything, including us, I’d guess: a balance. Take protein folding; some patterns have big disorder, and high h; some have low disorder and very low h. The result is a temperature-dependent  balance. If I were to take a moral imperative from this balance, I’d say it matches better with the sayings of Solomon the wise: “there is nothing better for a person under the sun than to eat, drink and be merry. Then joy will accompany them in their toil all the days of the life God has given them under the sun.” (Ecclesiastes 8:15). There is toil here as well as pleasure; directed activity balanced against personal pleasures. This is the µ = h -Ts minimization where, perhaps, T is economic wealth. Thus, the richer a society, the less toil is ideal and the more freedom. Of necessity, poor societies are repressive. 
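
Here is that balance as a toy calculation; the enthalpy and entropy numbers are invented purely for illustration, not measured protein values:

```python
# Toy two-state picture of protein folding: the ordered (folded) state has
# low enthalpy h and low entropy s; the disordered (unfolded) state has
# high h and high s. The state the system adopts is the one with the
# lower mu = h - T*s. All numbers are made up for illustration.
h_fold, s_fold = 0.0, 0.0       # relative enthalpy (cal/g) and entropy (cal/g-K)
h_unfold, s_unfold = 30.0, 0.1  # crossover where the two mu values meet: T = 300 K

def mu(h, s, T):
    return h - T * s

for T in (250.0, 350.0):
    state = "folded" if mu(h_fold, s_fold, T) < mu(h_unfold, s_unfold, T) else "unfolded"
    print(T, state)  # folded below the crossover, unfolded above it
```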

Dr. Robert E. Buxbaum, Mar 18, 2014. My previous thermodynamic post concerned the thermodynamics of hydrogen production. It’s not clear that all matter goes forward in time, by the way; antimatter may go backwards, so it’s possible that antimatter apples may fall up. On the microscopic scale, time becomes flexible, so it seems you can make a time machine. Religious leaders tend to be anti-science, I’ve noticed, perhaps because scientific miracles can be done by anyone, available even to those who think “wrong,” or say the wrong words. And that’s that; all being heard, do what’s right and enjoy life too: as important a pattern in life as you’ll find, I think. The relationship between free energy and societal organization is from my thesis advisor, Dr. Ernest F. Johnson.

Toxic electrochemistry and biology at home

A few weeks back, I decided to do something about the low quality of experiments in modern chemistry and science sets; I posted to this blog some interesting science experiments, and some more-interesting experiments that could be done at home using the toxic (poisonous, dangerous) chemicals available under the sink or at the hardware store. Here are some more. As previously, the chemicals are toxic and dangerous, but available. As previously, these experiments should be done only with parental (adult) supervision. Some of these next experiments involve some math, a key aspect of science; others involve some new equipment as well as the stuff you used previously. To do them all, you will want a stop watch, a volt-amp meter, and a small transformer, available at RadioShack; you’ll also want some test tubes or similar clear cigar tubes, wire, and baking soda; for the coating experiment you’ll want copper drain cleaner, or copper-containing fertilizer, and some washers, available at the hardware store; for the metal-casting experiment you’ll need a tin can, pliers, a gas stove and some pennies, plus a mold, some sand, good shoes, and a floor cover; and for the biology experiment you will need several 9 V batteries, and you will have to get a frog and kill it. You can skip any of these experiments if you like and do the others. If you have not done the previous experiments, look them over or do them now.

1) The first experiments aim to add some numerical observations to our previous studies of electrolysis. Here is where you will see why we think that molecules like water are made of fixed compositions of atoms. Let’s redo the water electrolysis experiment, now with an ammeter in line between the battery and one of the electrodes. With the ammeter connected, put both electrodes deep into a solution of water with a little lye, and then (while watching the ammeter) lift one electrode half out, place it back, and lift the other. You will find, I think, that one or the other electrode is the limiting electrode, and that the amperage goes to 1/2 its previous value when this electrode is half lifted. Lifting the other electrode changes neither the amperage nor the amount of bubbles, but lifting this limiting electrode changes both the amount of bubbles and the amperage. If you watch closely, though, you’ll see it changes the amount of bubbles at both electrodes in proportion, and that the amount of bubbles is in proportion to the amperage. If you collect the two gases simultaneously, you’ll see that the volume of gas collected is always in a ratio of 2 to 1. For other electrolyses (H2 and Cl2) it will be 1 to 1; it’s always a ratio of small numbers. See the diagram below on how to make and collect oxygen and hydrogen simultaneously by electrolyzing water with lye or baking soda as electrolyte. With lye or baking soda, you’ll find that there is always twice as much hydrogen produced as oxygen, exactly.
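
If you’d like numbers to go with this, Faraday’s law of electrolysis relates the gas produced to the amperage and time; here is a sketch of why the ratio comes out 2 to 1 (the half-amp-for-an-hour figures are just for illustration):

```python
# Why electrolysis of water gives 2 volumes of H2 per volume of O2:
# each H2 molecule takes 2 electrons (2H+ + 2e- -> H2), while each O2
# takes 4 (2H2O -> O2 + 4H+ + 4e-). Equal charge -> twice the H2.
F = 96485.0  # Faraday constant: coulombs per mole of electrons

def moles_of_gas(amps, seconds, electrons_per_molecule):
    return amps * seconds / F / electrons_per_molecule

amps, seconds = 0.5, 3600  # half an amp for an hour (illustrative)
h2 = moles_of_gas(amps, seconds, electrons_per_molecule=2)
o2 = moles_of_gas(amps, seconds, electrons_per_molecule=4)
print(h2 / o2)  # 2.0: a ratio of small whole numbers
```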

You can also do electrolysis with table salt or muriatic acid as an electrolyte, but for this you’ll need carbon or platinum electrodes. If you do it right, you’ll get hydrogen and chlorine, a green gas that smells bad. If you don’t do this right, using a wire instead of a carbon or platinum electrode, you’ll still get hydrogen, but no chlorine. Instead of chlorine, you’ll corrode the wire on that end, making e.g. copper chloride. With a carbon electrode and any chloride compound as the electrolyte, you’ll produce chlorine; without a chloride electrolyte, you will not produce chlorine at any voltage, or with any electrode. And if you make chlorine and check the volumes, you’ll find you always make one volume of chlorine for every volume of hydrogen. We imagine from this that the compounds are made of fixed atoms that transfer electrons in fixed whole numbers per molecule. You always make two volumes of hydrogen for every volume of oxygen because (we think) making oxygen requires twice as many electrons as making hydrogen.

At home electrolysis experiment

We get the same volume of chlorine as hydrogen because making chlorine and hydrogen requires the same number of electrons to be transferred. These are the sort of experiments that caused people to believe in atoms and molecules as the fundamental unchanging components of matter. Different solutes, voltages, and electrodes will affect how fast you make hydrogen and oxygen, as will the amount of dissolved solute, but the gases produced are always the same, the rate of production is always proportional to the amperage, and the volumes are always in a fixed ratio of small whole numbers.

As always, don’t let significant quantities of hydrogen and oxygen, or of hydrogen and chlorine, mix in a closed space. Hydrogen plus oxygen is quite explosive (it’s called Brown’s gas); hydrogen and chlorine are reactive as well. When working with chlorine it is best to work outside or near an open window: chlorine is a poison gas.

You may also want to try this with non-electrolytes, pure water or water with sugar or alcohol dissolved. You will find there is hardly any amperage or gas with these, but the small amount of gas produced will retain the same ratio. For college level folks, here is some physics/math relating to the minimum voltage and relating to the quantities you should expect at any amperage.

2) Now let’s try electroplating metals. Using the right solutes, metals can be made to coat your electrodes the same way that bubbles of gas coated your electrodes in the experiments above. The key is to find the right chemical, and as a start let me suggest the copper sulphate sold in hardware stores to stop root growth. Alternatively, copper sulphate is often sold as part of a fertilizer solution like “Miracle-Gro.” Look for copper on the label, or for a blue-colored fertilizer. Make a solution with enough copper that it is recognizably blue-green. Use two steel washers as electrodes (that is, connect the wires from your battery to the washers) and put them in the solution. You will find that one washer turns red, as it is coated with copper. Depending on what else your copper solution contained, bubbles may appear at the other washer, or the other washer will corrode.

You are now ready to take this to a higher level: silver coating. Take a piece of metal that you want to silver-coat, and clean it nicely with soap and water. Connect it to the electrode where you previously coated copper. Now clean out the solution carefully. Buy some silver nitrate from a drug store, and dissolve a few grams (1/8 tsp for a start) in pure water; place the item and the same electrodes as before in the solution, connected to the battery. For a nicer coat use a 1 1/2 volt battery; the 6 V battery will work too, but the silver won’t look as nice. With silver nitrate, you’ll notice that one electrode produces gas (oxygen) and the other turns silvery. Now disconnect the silvery electrode. You can use this method to silver coat a ring, fork, or cup, anything you want to have silver coated. This process is called electroplating. As with hydrogen production, there is a proportional relationship between the time, the amperage, and the amount of metal you deposit, until all the silver nitrate in solution is used up.
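
That proportionality between time, amperage and metal deposited is Faraday’s law again; here is a sketch with illustrative numbers (the half amp and 30 minutes are assumptions, not a recipe):

```python
# How much silver deposits in an electroplating run, by Faraday's law.
# Ag+ + e- -> Ag, so one mole of electrons deposits one mole of silver.
F = 96485.0    # Faraday constant, C per mole of electrons
M_AG = 107.87  # molar mass of silver, g/mol
amps, minutes = 0.5, 30.0  # illustrative values

grams = amps * (minutes * 60) / F * M_AG
print(round(grams, 2))  # about 1 g of silver
```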

As a yet-more-complex version, you can also electroplate without using a battery. This was my simple electroplating experiment, presented previously. Consider this only after you understand most everything else I’ve done. When I saw this the first time, in high school, I was confused.

3) Casting metal objects using melted pennies, heat from a gas stove, and sand or plaster as a cast. This is pretty easy, but sort of dangerous, so you need a parent’s help, if only as a watcher. This is a version of an experiment I did as a kid. I did metal casting using lead that some plumbers had left over. I melted it in a tin can on our gas stove and cast “quarters” in a plaster mold. Plumbers no longer use lead, but modern pennies are mostly zinc, and will melt about as well as my lead did. They are also much safer.

As a preparation for this experiment, get a bucket full of sand. This is where you’ll put your metal when you’re done. Now get some pennies (1982 or later; these are mostly zinc), a pair of pliers, an empty clean tin can, and a gas stove. If you like, you can make a plaster mold of some small object: a ring, a 50¢ piece, anything you might want to cast from your pennies. With parents’ help, light your gas stove, put 5-8 pennies in the empty tin can, and hold the can over the lit gas burner using your pliers. Turn the gas to high. In a few minutes the bottom of the can will burn and become red-hot. About this point, the pennies will soften and melt into a silvery puddle. By tilting the can, you can stir the metal around (don’t get it on you!). When it looks completely melted, you can pour the molten pennies into your sand bucket (carefully), or over your plaster mold (carefully). If you use a mold, you’ll get a zinc copy of whatever your mold was: jewelry, coins, etc. If you work at it, you’ll learn to make fancier and fancier casts. Adult help is welcome to avoid accidents. Once the metal solidifies, you can help cool it faster by dripping water on it from a faucet. Don’t touch it while it’s hot!

A plaster mold can be made by putting a 50¢ piece at the bottom of a paper cup, pouring plaster over the coin, and waiting for it to dry. Tear off the cup, turn the plaster over, and pull out the coin; you’ve got a one-sided mold, good enough to make a one-sided coin. If you enjoy this, you can learn more about casting on Wikipedia; it’s an endeavor that only costs 4 or 5 cents per try. As a safety note: wear solid leather shoes and cover the floor near the stove with a board. If you drop the metal on the floor you’ll have a permanent burn mark on the floor, and your mother will not be happy. If you drop hot metal on yourself, you’ll have a permanent injury, and you won’t be happy. Older pennies are made of copper and will not melt this way. Here’s a video of someone pouring a lot of metal into an ant-hill (it kills lots of ants, but makes a mold of the hill).

It's often helpful to ask yourself, "what would Dr. Frankenstein do?"

It’s nice to have assistants, friends and adult help in the laboratory when you do science. Even without the castle, it’s what Dr. Frankenstein did.

4) Bringing a dead frog back to life (sort of). Make a high-voltage battery of 45 to 90 V by attaching 5-10 9 V batteries in a daisy chain; they will snap together. If you touch both exposed contacts you’ll give yourself a wicked shock. If you touch the electrodes to a newly killed frog, the frog’s legs will kick. This is sort of groovy. It was the inspiration for Dr. Frankenstein (at right), who decides he could bring a person back from the dead with “more power.” Frankenstein’s monster is brought back to life this way, but ends up killing the good doctor. Shocks are sometimes helpful in reanimating people stricken by heart attacks, and many buildings have shockers (defibrillators) for this purpose. But don’t try to bring back the long-dead. By all accounts, the results are less than pleasing. Try dissecting the rest of the frog and guess what each part is (a World Book encyclopedia helps). As I recall, the heart keeps going for a while after it’s out of the frog; spooky.

5) Another version of this shocker is made with a small transformer (1″ square, say, RadioShack) and a small battery (1.5-6 V). Don’t use the 90 V battery; you’ll kill someone. As a first version of this shocker, strip 1″ of insulation off the ends of some wire, 12″ long say, and attach one end to two paired wires of the transformer (there will usually be a diagram in the box). If the transformer already has some wires coming out, all you have to do is strip more insulation off the ends so 1″ is uninsulated. Take two paired ends in your hand, holding onto the uninsulated part, and touch both to the battery for a second or two. Then disconnect them while holding the bare wires; you’ll get a shock. As a nastier version, get a friend to hold the opposite pair of wires by the uninsulated parts, while you hold the insulated parts of your two. Touch your two to the battery and disconnect while holding the insulation; you will see a nice spark, and your friend will get a nice shock. Play with it; different arrangements give more sparks or bigger shocks. Another thing you can do: put your experiment near a radio or TV. The transformer sparks will interfere with most nearby electronics; you can really mess up a computer this way, so keep it far from your computer. This is how wireless radio worked long ago, and how modern warfare will probably go. The atom bomb was detonated with a spark like this.

If you want to do more advanced science, it’s a good idea to learn math. This is important for statistics, for engineering, for quantum mechanics, and can even help in music. Get a few good high school or college books and read them cover to cover. An approach to science is to try to make something cool that sort-of works, and then try to improve it. You then decide what a better version would work like, modify your original semi-randomly, and see if you’re going in the right direction. Don’t redesign with only one approach – it may not work. Read whatever you can, but don’t believe all you read. Often books are misleading, or wrong, and blogs are worse (I ought to know). When you find mistakes, note them in the margin and try to explain them. You may find you were right, or that the book was right; either way it’s a learning experience. If you like, you can write the author and inform him/her of the errors. I find mailed letters are more respectful than e-mails — it shows you put in more effort.

Robert Buxbaum, February 20, 2014. Here’s the difference between metals and non-metals, and a periodic table cup that I made, and sell. And here’s a difference between science and religion – reproducibility.

Nerves are tensegrity structures and grow when pulled

No one quite knows how nerve cells learn stuff. It is incorrectly thought that you can not grow new nerve cells in the brain, or get existing brain cells to grow out further; but people have made new nerve cells, and when I was a professor at Michigan State, a physiology colleague and I got brain and sensory nerves to grow out axons by pulling on them, without the use of drugs.

I had just moved to Michigan State, a fresh PhD (Princeton), as an assistant professor of chemical engineering. Steve Heidemann was a few years ahead of me, a physiology professor, also a Princeton PhD. We were both New Yorkers. He had been studying nerve structure and wondered how the growth cone makes nerves grow out axons (the axon is the long, stringy part of the nerve). A thought was that nerves were structured as Snelson-Fuller tensegrity structures, but it was not obvious how that would relate to growth or anything else. A Snelson-Fuller structure is shown below; the structure stands erect not by compression, as in a pyramid or igloo, but because tension in the wires helps lift the metal pipes and puts them in compression. The nerve cell, shown further below, is similar, with actin protein as the outer, tensed skin, and a microtubule-protein core as the compressed pipes.

A Snelson-Fuller tensegrity sculpture in the graduate college courtyard at Princeton, where Steve and I got our PhDs, and an inspiration for our work.

Biothermodynamics was pretty basic 30 years ago (it still is today), and it was incorrectly thought that objects were more stable when put in compression. It didn’t take too much thermodynamics on my part to show otherwise, and so I started a part-time career in cell physiology. Consider first how mechanical force should affect the Gibbs free energy, G, of assembled microtubules. For any process at constant temperature and pressure, ∆G = work. If force is applied, we expect some elastic work will be put into the assembled MTs in an amount ∫f dz, where f is the force at each displacement and ∫dz is the integral over the distance traveled. Assuming a small force, or a constant spring, f = kz with k as the spring constant. Integrating, ∆G = ∫kz dz = ½kz²; ∆G is positive whether z is positive or negative. That is, the microtubule is most stable with no force, and is made less stable by any force, tension or compression.
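This stability argument is easy to check numerically. Here’s a minimal sketch (the spring constant is an arbitrary placeholder — only the sign of ∆G matters to the argument):

```python
# Sketch of the argument above: Delta G = (1/2)*k*z^2 is positive for
# stretch (z > 0) and compression (z < 0) alike, so any applied force
# destabilizes the assembled microtubule relative to the force-free state.

def delta_G(k, z):
    """Elastic free energy put into the microtubule by a displacement z."""
    return 0.5 * k * z**2

k = 1.0  # arbitrary spring constant; its value doesn't affect the sign
for z in (-0.5, -0.1, 0.1, 0.5):
    assert delta_G(k, z) > 0  # tension or compression, Delta G > 0
```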

A cell showing what appears to be tensegrity: the microtubules (green) surrounded by actin (red). If the actin is under tension, the microtubules are in compression; in nerves, Heidemann and I showed that actin is in tension and the microtubules in compression. From here.

Assuming that microtubules in the nerve axon are generally in compression, as in the Snelson-Fuller structure, pulling on the axon could potentially reduce that compression. Normally, we posited, this is done by the growth cone, but we could also do it by pulling. In either case, a decrease in the compression of the assembled microtubules should favor microtubule assembly.

To calculate the rates, I used absolute rate theory, something I’d learned from Dr. Mortimer Kostin, a most-excellent thermodynamics professor. I assumed that the free energy of the monomer was unaffected by force, and that the microtubules were in pseudo-equilibrium with the monomer. Growth rates were predicted to be proportional to the decrease in G, and the prediction matched experimental data.

Our few efforts to cure nerve disease by pulling did not produce immediate results; it turns out to be hard to pull on nerves in the body. Still, we gained some publicity, and a variety of people seem to have found scientific and/or philosophical inspiration in this sort of tensegrity model for nerve growth. I particularly like this review article by Don Ingber in Scientific American. A little more out there is this view of consciousness, life, and the fate of the universe (where I got the cell picture). In general, tensegrity structures are tougher and more flexible than normal construction. A tensegrity structure will bend easily, but rarely break. It seems likely that your body is held together this way, and because of this you can carry heavy things and still move with flexibility. It also seems likely that bones are structured this way; as with nerves, they are reasonably flexible, and can be made to grow by pulling.

Now that I think about it, we should have done more theoretical or experimental work in this direction. I imagine that pulling on the nerve also affects the stability of the actin network by affecting the chain-configuration entropy. This might slow actin assembly, or perhaps not. It might have been worthwhile to look at new ways to pull, or at bone growth. In our in-vivo work we used an external magnetic field to pull. We might have looked at NASA funding too, since it’s been observed that astronauts grow in outer space by a solid inch or two, and their bodies deteriorate. Presumably, the lack of gravity causes the calcium in the bones to grow, making a person less of a tensegrity structure. The muscle must grow too, just to keep up, but I don’t have a theory for muscle.

Robert Buxbaum, February 2, 2014. Vaguely related to this, I’ve written about architecture, art, and mechanical design.

Physics of no fear, no fall ladders

I recently achieved some mastery over my fear of heights while working on the flat roof of our lab building / factory. I decided to fix the flat roof of our hydrogen engineering company, REB Research (with help from employees); that required me to climb some 20 feet to the roof, do some work myself, and inspect the work of others. I was pretty sure we could tar the roof cheaper and better than the companies we’d used in the past, and decided that the roof should be painted white over the tar, or that silvered tar should be used — see why. So far the roof is holding up pretty well (looks good, no leaks), and my summer air-conditioning bills were lowered as well.

Perhaps the main part of overcoming my fear of heights was practice, but another part was understanding the physics of what it takes to climb a tall ladder safely. Once I was sure I knew what to do, I was far less afraid. As Emil Faber famously said, “Knowledge is good.”

me on tall ladder

Me on tall ladder and forces. It helps to use the step above the roof, and to have a ladder that extends 3-4 feet past roof level

One big thing I learned (and this isn’t physics) was to not look down, especially when going down the ladder. It’s best to look at the ladder and make sure your hands and feet are going where they should. The next trick I learned was to use a tall ladder — one that I could angle at 20° and that extends 4 feet above the roof, see figure. Those 4 feet gave me something to hold on to, and something to look at while going on and off the ladder. I found I preferred to go to or from the roof from a rung that was either at the level of the roof, or a half-step above (see figure). By contrast, I found it quite scary to step on a ladder rung that was significantly below roof level, even with an extended ladder. I bought my ladder from Acme Ladder of Capital St. in Oak Park: a fiberglass ladder, lightweight and rot-proof.

I preferred to set the ladder level (with the help of a shim if needed) at an angle about 20° to the wall, see figure. At this angle, I felt certain the ladder would not tip over from the wind or my motion, and that it would not slip at the bottom, see calculations below.

If the force of the wall acts at right angles to the ladder (mostly horizontally), the wall force will depend only on the lever angle and the center of mass for me and the ladder. It will be somewhat less than the total weight of me and the ladder times sin 20°. Since sin 20° is 0.342, I’ll say the wall force will be less than 30% of the total weight, about 65 lb. The wall force provides some lift to the ladder — sin 20° (34.2%) of the wall force — about 22 lb, or 10% of the total weight. Mostly, the wall provides horizontal force: 65 lb × cos 20°, or about 60 lb. This is what keeps the ladder from tipping backward if I make a sudden motion, and this is the force that must be restrained by friction at the ladder feet. At a steeper angle the anti-tip force would be less, but the slip tendency would be less too.

The rest of the total weight of me and the ladder, the 90% of the weight that is not supported by the roof, rests on the ground. This is called the “normal force,” the force in the vertical direction from the ground. The friction force, what keeps the ladder from slipping out while I’m on it, is this normal force times the “friction factor” between the ladder feet and the ground. The bottom of my ladder has rubber pads, suggesting a friction factor of 0.8, and perhaps more. As the normal force will be about 90% of the total weight, the slip-restraining force is calculated to be at least 72% of this weight, more than double the 28% of the weight that the wall pushes with. The difference, some 44% of the weight (100 lbs or so), is what keeps the ladder from slipping, even when I get on and off the ladder. I find that I don’t need a person on the ground for physics reasons, but sometimes found it helped to steady my nerves, especially in a strong wind.
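The force balance above can be checked with a few lines of arithmetic. This is a sketch with assumed numbers — a 220 lb total for me plus the ladder, and the 0.8 friction factor of the rubber feet — not exact measurements:

```python
import math

W = 220.0                  # lb, total weight of climber + ladder (assumed)
theta = math.radians(20)   # ladder angle from the wall
mu = 0.8                   # friction factor of the rubber feet (assumed)

wall_force = W * math.sin(theta)           # upper bound on the wall force
lift = wall_force * math.sin(theta)        # vertical component at the roof edge
horizontal = wall_force * math.cos(theta)  # push that friction must resist
normal = W - lift                          # ~90% of the weight rests on the ground
friction_max = mu * normal                 # available slip-restraining force

print(f"wall force <= {wall_force:.0f} lb; horizontal push <= {horizontal:.0f} lb")
print(f"friction available ~ {friction_max:.0f} lb, a comfortable margin")
```

Using the exact sin 20° upper bound gives slightly larger numbers than the rounded 30% in the text, but the conclusion is the same: the available friction exceeds the horizontal push by a wide margin, so the ladder stays put.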

Things are not so rosy if you use a near vertical ladder, with <10° to the wall, or a widely inclined one, >40°. The vertical ladder can tip over, and the widely inclined ladder can slip at the bottom, especially if you climb past the top of the roof or if your ladder is on a slippery surface without rubber feet.

Robert E. Buxbaum Nov 20, 2013. For a visit to our lab, see here. For some thoughts on wind force, and comments on Engineering aesthetics. I owe to Th. Roosevelt the manly idea that overcoming fear is a worthy achievement. Here he is riding a moose. Here are some advantages of our hydrogen generators for gas chromatography.

Calculus is taught wrong, and is often wrong

The high point of most people’s college math is The Calculus. Typically this is a weeder course that separates the science-minded students from the rest. It determines which students are admitted to medical and engineering courses, and which will be directed to English or communications — majors from which they can hope to become lawyers, bankers, politicians, and spokespeople (the generally distrusted). While calculus is very useful to know, my sense is that it is taught poorly: it is built up on a year of unnecessary pre-calculus and on several shady assumptions that are not necessary for the development, and that are not generally true in the physical world. The material is presented in a way that confuses and turns off many of the top students — often the ones most attached to the reality of life.

The most untenable assumption in calculus teaching, in my opinion, is that the world involves continuous functions. That is, for example, that at every instant in time an object has one position only, and that its motion from point to point is continuous, defining a slow-changing quantity called velocity. That is, every x value defines one and only one y value, and there is never more than a small change in y in the limit of a small change in x. Does the world work this way? Some parts do, others do not. Commodity prices are not really defined except at the moment of sale, and can jump significantly between two sales a micro-second apart. Objects do not really have one position, in the quantum sense, at any time, but spread out, sometimes occupying several positions, and sometimes jumping between positions without ever occupying the space in-between.

These are annoying facts, but calculus works just fine in a discontinuous world — and I believe that a discontinuous calculus is easier to teach and understand too. Consider the fundamental theorem of calculus. This states that, for a continuous function, the integral of the derivative equals the net change of the function itself (nearly incomprehensible, no?). Now consider the same law taught for a discontinuous group of changes: the sum of the changes that take place over a period equals the total change. This statement is more general, since it applies to discrete and continuous functions, and it’s easier to teach. Any idiot can see that this is true. By contrast, it takes weeks of hard thinking to see that the integral of all the derivatives equals the function — and then it takes more years to be exposed to delta functions and realize that the statement is still true for discrete change. Why don’t we teach so that people will understand? Teach discrete first, and then smooth as a special case where the discrete changes happen at a slow rate. Is calculus taught this way to make us look smart, or because we want this to be a weeder course?
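The discrete statement is so simple a few lines of Python can demonstrate it (the positions here are made-up numbers for illustration):

```python
# The discrete fundamental theorem: the sum of the changes over a period
# equals the total change. No limits, continuity, or delta functions needed.
positions = [0.0, 1.5, 1.2, 4.0, 3.3]

changes = [b - a for a, b in zip(positions, positions[1:])]
total_change = positions[-1] - positions[0]

# The sum telescopes: every intermediate position cancels out.
assert abs(sum(changes) - total_change) < 1e-9
print(f"sum of changes = {sum(changes):.1f}, total change = {total_change:.1f}")
```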

Because most students are not introduced to discrete change, they are in a very poor position to understand, or model, activities that are discrete, like climate change or heart rate. Climate only makes sense year to year, as day-to-day behavior is mostly affected by seasons, weather, and day vs. night. We really want to model the big picture and leave out the noise by considering each day or year as a whole, keeping track of the average temperature for noon on September 21, for example. Similarly with heart rate: the rate has no meaning if measured every microsecond; its only meaning is as a measure of the time between beats. If we taught calculus in terms of discrete functions, our students would be in a better place to deal with these things, and in a better place to deal with totally discontinuous behaviors, like chaos and fractals, important phenomena when dealing with economics, for example.

A fundamental truth of quantum mechanics is that there is no defined speed and position of an object at any given time. Students accept this, but (because they are used to continuous change) they come to wonder how it is that over time energy is conserved. It’s simple: quantum motion involves gross discrete changes in position that leave energy conserved by the end, but where an item goes from here to there without ever having to be in the middle. This helps explain the old joke about Heisenberg and his car.

Calculus-based physics is taught in terms of limits and the mean value theorem: that if x is the position of a thing at any time t, then the derivative of these positions, the velocity, will approach ∆x/∆t more and more closely as ∆x and ∆t become more tightly defined. When this is found to be untrue in a quantum sense, the remnant of the belief in it hinders students when they try to solve real-world problems. Normal physics is the limit of quantum physics because velocity is really a macroscopic ratio: a difference in position divided by a macroscopic difference in time. Because of this, it is obvious that the sum of these differences is the total distance traveled, even when summed over many simultaneous paths. A feature of electromagnetism, Green’s theorem, becomes similarly obvious: the sum effect of a field of changes is the total change. It’s only confusing if you try to take limits to find the exact values of these change rates in some infinitesimal space.

This idea is also helpful in finance, likely a chaotic and fractal system. Finance is not continuous: just because a stock price moved from $1 to $2 per share in one day does not mean that the price was ever $1.50 per share. And while there is probably no small change in sales rate caused by a 1¢ change in sales price at any given time, this does not mean you won’t find it useful to consider the relation between the price and the sales of a product. Though the details may be untrue, the price-demand curve is still a very useful (if unjustified) abstraction.

This is not to say that there are no real-world things that are continuous functions, but believing that they are, just because calculus is useful in describing them, can blind you to some important insights, e.g. of phenomena where the butterfly effect predominates. That is, where an insignificant change in one place (a butterfly wing in China) seems to result in a major change elsewhere (e.g. a hurricane in New York). Recognizing that some conclusions follow from non-continuous math may help students recognize places where some parts of basic calculus apply, while others do not.

Dr. Robert Buxbaum (my thanks to Dr. John Klein for showing me discrete calculus).

How to make a simple time machine

I’d been in science fairs from the time I was in elementary school until 9th grade, and  usually did quite well. One trick: I always like to do cool, unexpected things. I didn’t have money, but tried for the gee-whiz factor. Sorry to say, the winning ideas of my youth are probably old hat, but here’s a project that I never got to do, but is simple and cheap and good enough to win today. It’s a basic time machine, or rather a quantum eraser — it lets you go back in time and erase something.

The first thing you should know is that the whole aspect of time rests on rather shaky footing in modern science. It is possible therefore that antimatter, positrons say, are just regular matter moving backwards in time.

The trick behind this machine is the creation of entangled states, an idea that Einstein and Rosen proposed in the 1930s (they thought the trick could not work, and that this disproved quantum mechanics; it turned out the trick works). The original version of the trick was this: start with a particle that splits in half at a given, known energy. If you measure the energy of either of the halves, they are always the same, assuming the source particle starts at rest. The thing is, if you start with the original particle at absolute zero and were to measure the position of one half, and the velocity of the other, you’d certainly know the position and velocity of the original particle. Actually, you should not need to measure the velocity, since that’s fixed by the energy of the split, but we’re doing it just to be sure. But quantum mechanics is based on the idea that you can not know both the velocity and position, even just before the split. What happens? If you measure the position of one half, the velocity of the other changes; but if you measure the velocity of both halves, it is the same, and this even works backward in time. QM seems to know if you intend to measure the position, and you measure an odd velocity even before you do so. Weird. There is another trick for making time machines, one found in Einstein’s own relativity by Gödel. It involves black holes, and we’re not sure it works since we’ve never had a black hole to work with. With the QM time machine, you’re never able to go back in time before the creation of the time machine.

To make the mini-version of this time machine, we’re going to split a few photons and play with the halves. This is not as cool as splitting an elephant, or even a proton, but money don’t grow on trees, and costs go up fast as the mass of the thing being split increases. You’re not going back in time more than 10 attoseconds (that’s a hundredth of a femtosecond), but that’s good enough for the science fair judges (you’re a kid, and that’s your lunch money at work). You’ll need a piece of thick aluminum foil, a sharp knife or a pin, a bright lamp, superglue (or, in a pinch, Elmer’s), a polarizing sunglass lens, some colored Saran wrap or colored glass, a shoe-box worth of cardboard, and wood + nails  to build some sort of wooden frame to hold everything together. Make your fixture steady and hard to break; judges are clumsy. Use decent wood (judges don’t like splinters). Keep spares for the moving parts in case someone breaks them (not uncommon). Ideally you’ll want to attach some focussing lenses a few inches from the lamp (a small magnifier or reading glass lens will do). You’ll want to lay the colored plastic smoothly over this lens, away from the lamp heat.

First make a point light source: take the 4″ square of shoe-box cardboard and put a quarter-inch hole in it near the center. Attach it in front of your strong electric light at 6″ if there is no lens, or at the focus if there is a lens. If you have no lens, you’ll want to put the Saran over this cardboard.

Take two strips of aluminum foil about 6″ square, and in the center of each cut two slits, perhaps 4 mm long by 0.1 mm wide, 1 mm apart from each other. Back both strips with cardboard that has a 1″ hole in the middle (use glue to hold it there). Now take the sunglass lens; cut two strips, 2 mm × 10 mm, on opposite 45° diagonals to the vertical of the lens. Confirm that this is a polarized lens by rotating one strip against the other; at some rotation the pair should be opaque, and at 90° from there fairly clear. If this is not so, get different sunglasses.

Paste these two strips over the two slits on one of the aluminum foil sheets with a drop of super-glue. The polarization of the sunglasses is normally up and down, so when these strips are glued next to one another, the polarization of the strips will be opposing 45° angles. Look at the point light source through both of your aluminum foils (the one with the polarized filter and the one without); they should look different. One should look like two pin-points (or strips) of light. The other should look like a fog of dots or lines.

The reason for the difference is that, generally speaking a photon passes through two nearby slits as two entangled halves, or its quantum equivalent. When you use the foil without the polarizers, the halves recombine to give an interference pattern. The result with the polarization is different though since polarization means you can (in theory at least) tell the photons apart. The photons know this and thus behave like they were not two entangled halves, but rather like they passed either through one slit or the other. Your device will go back in time after the light has gone through the holes and will erase this knowledge.
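For a rough idea of what to expect, the fringe spacing in a two-slit pattern is λL/d, where d is the slit separation and L the distance to your eye or screen. The wavelength and viewing distance below are my assumptions — the build above doesn’t fix them:

```python
wavelength = 550e-9  # m, green light through the colored Saran (assumed)
d = 1e-3             # m, the 1 mm slit separation from the build above
L = 1.0              # m, an assumed comfortable viewing distance

fringe_spacing = wavelength * L / d  # standard two-slit result, lambda*L/d
print(f"fringe spacing ~ {fringe_spacing * 1000:.2f} mm")  # about half a millimeter
```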

Now cut another 3″ x 3″ cardboard square and cut a 1/4″ hole in the center. Cut a bit of sunglass lens, 1/2″ square and attach it over the hole of this 3×3″ cardboard square. If you view the aluminum square through this cardboard, you should be able to make one hole or the other go black by rotating this polarized piece appropriately. If it does not, there is a problem.

Set up the lamp (with the lens) on one side so that a bright light shines on the slits. Look at the light from the other side of the aluminum foil. You will notice that the light that comes through the foil with the polarized film looks like two dots, while the one that comes through the other one shows a complex interference pattern; putting the other polarizing lens in front of the foil or behind it does not change the behavior of the foil without the polarizing filters, but if done right it will change things if put behind the other foil, the one with the filters.

Robert Buxbaum, of the future.

Yet another quantum joke

Why do you get more energy from a steak than from the same amount of hamburger?

 

Hamburger is steak in the ground state.

 

Is funny because….. it’s a pun on the word ground. Hamburger is ground-up meat, of course, but the reference to a ground state also relates to a basic discovery of quantum mechanics (QM): that all things exist in quantized energy states. The lowest of these is called the ground state, and you get less energy out of a process if you start with things at this ground state. Lasers, as an example, get their energy by electrons being made to drop to their ground state at the same time; you can’t get any energy from a laser if the electrons start in the ground state.

The total energy of a thing can be thought of as having a kinetic and a potential part. The potential energy is usually higher the more an item moves from its ideal (lowest-potential) point. The kinetic energy, though, tends to get lower when more space is available because, from Heisenberg uncertainty, ∆l•∆(mv) ≈ h. That is, the more space there is, the less the uncertainty of speed, and thus the less kinetic energy, other things being equal. The ground state is the lowest sum of potential and kinetic energy, and thus all things occupy a cloud of some size, even at the ground state. Without this size, the world would cease to exist: atoms would radiate energy and shrink until they vanished.
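As a sketch of the “more room, less kinetic energy” claim, the textbook particle-in-a-box formula, E = h²/8ml², shows the ground-state kinetic energy falling with the square of the space available (the box sizes below are my illustrative choices):

```python
# A particle confined to a box of width l has ground-state kinetic energy
# E = h^2 / (8*m*l^2): more room means less kinetic energy, per Heisenberg.
h = 6.626e-34   # J*s, Planck's constant
m = 9.109e-31   # kg, electron mass

def ground_state_energy(l):
    return h**2 / (8 * m * l**2)

E_small = ground_state_energy(1e-10)  # roughly atom-sized box
E_big = ground_state_energy(2e-10)    # twice the room
print(round(E_small / E_big, 3))      # doubling the box quarters the energy
```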

In grad school, I got into understanding thermodynamics, transport phenomena, and quantum mechanics, particularly involving hydrogen. This led to my hydrogen production and purification inventions, which my company sells.

Click here for a quantum cartoon on waves and particles, an old Heisenberg joke, or a joke about how many quantum mechanicians it takes to change a lightbulb.

R. E. Buxbaum, July 16, 2013. I once claimed that the unseen process that maintains existence could be called God; this did not go well with the religious.

 

Thermodynamics of hydrogen generation

Perhaps the simplest way to make hydrogen is by electrolysis: you run some current through water with a little sulfuric acid or KOH added, and for every two electrons transferred, you get a molecule of hydrogen from one electrode and half a molecule of oxygen from the other.

2 OH- –> 2e- + 1/2 O2 + H2O

2H2O + 2e- –>  H2 + 2OH-

The ratio between amps, seconds and mols of electrons (or hydrogen) is called the Faraday constant, F = 96500; 96500 amp-seconds transfers a mol of electrons. For hydrogen production, you need 2 mols of electrons for each mol of hydrogen, n= 2, so

it = 2F, where i is the current in amps and t is the time in seconds. In general, it = nF, with n the number of electrons per molecule of desired product; thus t = nF/i, and for hydrogen t = 96500*2/i.

96500 is a large number, and it takes a fair amount of time to make any substantial amount of hydrogen by electrolysis. At 1 amp, it takes 96500*2 = 193,000 seconds, a bit over 2 days, to generate one mol of hydrogen (that’s 2 grams of H2, or 22.4 liters, enough to fill a garment bag). We can reduce the time by using a higher current, but there are limits. At 25 amps, about the maximum current you can carry with house wiring, it takes 2.14 hours to generate 2 grams. (You’ll have to rectify your electricity to DC, or you’ll get a nasty H2/O2 mix called Brown’s gas. While normal H2 isn’t that dangerous, Brown’s gas is a mix of H2 and O2 and is quite explosive. Here’s an essay I wrote on separating Brown’s gas.)
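The Faraday arithmetic in this paragraph is easy to script (a minimal sketch of the numbers above):

```python
F = 96500  # amp-seconds per mol of electrons (Faraday constant)
n = 2      # electrons per molecule of H2

def seconds_per_mol_H2(amps):
    """Time to electrolyze one mol (2 g, 22.4 L) of hydrogen at a given current."""
    return n * F / amps

print(seconds_per_mol_H2(1))          # 193000 s, a bit over two days at 1 A
print(seconds_per_mol_H2(25) / 3600)  # about 2.14 hours at 25 A
```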

Electrolysis takes a fair amount of electric energy too; the minimum energy needed to make hydrogen at a given temperature and pressure is called the reversible energy, or the Gibbs free energy, ∆G, of the reaction. ∆G = ∆H − T∆S; that is, ∆G equals the heat of hydrogen production, ∆H, minus an entropy effect, T∆S. Since energy is the product of voltage, current, and time, Vit = ∆G, where ∆G is the Gibbs free energy measured in Joules and V, i, and t are measured in Volts, Amps, and seconds respectively.

Since it = nF, we can rewrite the relationship as V = ∆G/nF for a process that has no energy losses, a reversible process. This is the form found in most thermodynamics textbooks; the value of V calculated this way is the minimum voltage to generate hydrogen, and the maximum voltage you could get in a fuel cell putting water back together.

To calculate this voltage, and the power requirements to make hydrogen, we use the Gibbs free energy for water formation found in Wikipedia, copied below (in my day, we used the CRC Handbook of Chemistry and Physics, or a table in our P-chem book). You’ll notice that there are two different values for ∆G depending on whether the water is a gas or a liquid, and you’ll notice a small zero at the upper right (∆G°). This shows that the values are for an imaginary standard state: 25°C and 1 atm pressure. You can’t get 1 atm steam at 25°C; it’s an extrapolation. Behavior at typical temperatures, 40°C and above, is similar but not identical. I’ll leave it to a reader to send this voltage as a comment.

Liquid H2O formation ∆G° = -237.14 kJ/mol
Gaseous H2O formation ∆G° = -228.61 kJ/mol

The reversible voltage for creating liquid water in a reversible fuel cell is found to be -237,140/(2 × 96,500) = -1.23 V. We find that 1.23 Volts is about the minimum voltage you need to do electrolysis near 0°C, because you need liquid water to carry the current; -1.18 V is about the maximum voltage you can get in a fuel cell, because fuel cells operate at higher temperature with oxygen pressures significantly below 1 atm (typically). The minus sign is kept for accounting; it differentiates the power-out case (fuel cells) from power-in (electrolysis). It is typical to find that fuel cells operate at lower voltages, between about 0.5 V and 1.0 V, depending on the fuel cell and the power load.
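The V = ∆G/nF arithmetic, as a quick check (the two ∆G° values are the Wikipedia numbers quoted above):

```python
F = 96500  # C per mol of electrons (Faraday constant)
n = 2      # electrons per molecule of H2

def reversible_voltage(dG_kJ_per_mol):
    """V = dG/nF; the sign separates power-out (fuel cell) from power-in (electrolysis)."""
    return dG_kJ_per_mol * 1000 / (n * F)

print(round(reversible_voltage(-237.14), 2))  # -1.23 V, liquid water
print(round(reversible_voltage(-228.61), 2))  # -1.18 V, water vapor
```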

Most electrolysis is done at voltages above about 1.48 V. Just as fuel cells always give off heat (they are exothermic), electrolysis will absorb heat if run reversibly. That is, electrolysis can act as a refrigerator if run reversibly, but it is not a very good refrigerator (the refrigeration ability is tied up in the entropy term mentioned above). To do electrolysis at reasonably fast rates, people give up on refrigeration (sucking heat from the environment) and provide all the entropy needed for electrolysis in the electricity they supply. This is to say, they operate at a voltage V’ where nFV’ ≥ ∆H, the enthalpy of water formation. Since ∆H is greater than ∆G, V’, the voltage for electrolysis, is higher than V. Based on the enthalpy of liquid water formation, −285.8 kJ/mol, we find V’ = 1.48 V. The figure below shows that, for any reasonably fast rate of hydrogen production, operation must be at 1.48 V or above.

Electrolyzer performance; C-Pt catalyst on a thin, nafion membrane


If you figure out the energy that this voltage and amperage represent (shown below), you're likely to come to a conclusion I came to several years ago: that it's far better to generate large amounts of hydrogen chemically, ideally from membrane reactors like my company makes.

The electric power to make each 2 grams of hydrogen at 1.5 volts is 1.5 V x 193,000 A·s = 289,500 J = 0.080 kWh, or about 0.9¢ at current rates, but filling a car takes 20 kg, or 10,000 times as much. That's 800 kWh, or $90 at current rates. The electricity is twice as expensive as current gasoline, and the infrastructure cost is staggering too: a station that fuels ten cars per hour would require 8 MW, far more power than any normal distributor could provide.
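A sketch of that arithmetic; the ~11.25¢/kWh electricity price is my assumption, chosen to reproduce the 0.9¢ and $90 figures:

```python
# Rough cost of electrolytic hydrogen at 1.5 V operating voltage.
F = 96485.0                     # Faraday constant, C/mol
charge = 2 * F                  # coulombs per mol H2 (2 g), ~193,000 A*s
energy_J = 1.5 * charge         # joules per 2 g H2, ~289,500 J
energy_kWh = energy_J / 3.6e6   # ~0.080 kWh per 2 g
fill_kWh = energy_kWh * 10000   # a 20 kg fill, ~800 kWh
cost = fill_kWh * 0.1125        # assumed 11.25 cents/kWh, ~$90 per fill
print(f"{energy_kWh:.3f} kWh per 2 g; {fill_kWh:.0f} kWh per fill, ~${cost:.0f}")
```

Ten such fills per hour is the 8 MW figure quoted above.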

By contrast, methanol costs about 2/3 as much as gasoline, and it's easy to deliver many giga-joules of methanol energy to a gas station by truck. Our company's membrane reactor hydrogen generators would convert methanol-water to hydrogen efficiently by the reaction CH3OH + H2O → 3H2 + CO2. This is not to say that electrolysis isn't worthwhile for lower-demand applications: see, e.g., gas chromatography and electric generator cooling. Here's how membrane reactors work.
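A back-of-envelope check on the hydrogen yield, using nothing but the stoichiometry of that reaction:

```python
# CH3OH + H2O -> 3 H2 + CO2: three mol of H2 per mol of methanol.
M_methanol = 32.04                     # g/mol
M_h2 = 2.016                           # g/mol
mol_per_kg = 1000 / M_methanol         # ~31.2 mol methanol per kg
h2_g_per_kg = 3 * mol_per_kg * M_h2    # ~189 g H2 per kg of methanol
print(f"{h2_g_per_kg:.0f} g H2 per kg methanol")
```

So a 20 kg hydrogen fill corresponds to roughly 106 kg of methanol (plus the water), an easy truckload-scale quantity.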

R. E. Buxbaum July 1, 2013; Those who want to show off, should post the temperature and pressure corrections to my calculations for the reversible voltage of typical fuel cells and electrolysis.

Another Quantum Joke, and Schrödinger’s waves derived

Quantum mechanics joke. from xkcd.


It's funny because it's a double entendre on the words grain (as in grainy) and waves, as in Schrödinger waves or the "amber waves of grain" in the song "America the Beautiful." In Schrödinger's view of the quantum world, everything seems to exist or move as a wave until you observe it, and then it always becomes a particle. The math to solve for the energy of things is simple, and thus the equation is useful, but it's hard to understand why. For example, when you solve for the behavior of a particle (atom) in a double-slit experiment, you have to imagine that the particle behaves as an insubstantial wave traveling through both slits until it's observed, and only then behaves as a completely solid particle.

Math equations can always be rewritten, though, and science works in the language of math. The different forms appear to have different meanings, but they don't, since they make the same practical predictions. Because of this freedom of meaning (and some other things), science is the opposite of religion. Other mathematical formalisms for quantum mechanics may be more comforting, or less, but most avoid the wave-particle duality.

The first formalism was Heisenberg's uncertainty. At the end of this post, I show that it is mathematically identical to Schrödinger's wave view. Heisenberg's version showed up in two quantum jokes that I explained (beat into the ground): one about a lightbulb and one about Heisenberg in a car (which also explains why water is wet and why hydrogen diffuses through metals so quickly).

Yet another quantum formalism involves Feynman's little diagrams. One assumes that matter follows every possible path (the sum-over-histories view) and that time can go backwards. As a result, we expect that antimatter apples should fall up. Experiments are underway at CERN to test whether they do, and by next year we should finally know. Even if anti-apples don't fall up, that won't mean this formalism is wrong, BTW: identical math forms make identical predictions, and we don't understand gravity well in any of them.

Yet another identical formalism (my favorite) involves imagining that matter has a real and an imaginary part. In this formalism, the components move independently by diffusion, and as a result look like waves: exp(−it) = cos t − i sin t. You can't observe the two parts independently, though, only the following product of the real and imaginary parts: (the real + imaginary part) x (the real − imaginary part). Slightly different math, same results, different ways of thinking of things.
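A few lines of Python illustrate the point: the two components oscillate, but the only observable, the product of the wave with its conjugate, never changes:

```python
# exp(-it) = cos t - i sin t: two "hidden" components, one observable product.
import cmath
import math

t = 0.7                        # any time works; the observable is the same
psi = cmath.exp(-1j * t)
# the real and imaginary parts individually oscillate with t
assert abs(psi.real - math.cos(t)) < 1e-12
assert abs(psi.imag + math.sin(t)) < 1e-12
# but the product psi * conj(psi) = |psi|^2 is constant: the standing wave
# "vibrates" invisibly while nothing observable changes
prob = (psi * psi.conjugate()).real
print(round(prob, 12))   # 1.0 for any t
```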

Because of quantum mechanics, hydrogen diffuses very quickly in metals: in some metals, quicker than most anything diffuses in water. This is the basis of REB Research metal membrane hydrogen purifiers, and it also causes hydrogen embrittlement (to be explained, perhaps, in some later post). All other elements go through metals much slower than hydrogen, allowing us to make hydrogen purifiers that are effectively 100% selective. Our membranes also separate different hydrogen isotopes from each other by quantum effects (big things tunnel slower). Among the uses for our hydrogen filters are gas chromatography, dynamo cooling, and reducing the likelihood of nuclear accidents.

Dr. Robert E. Buxbaum, June 18, 2013.

To see Schrödinger’s wave equation derived from Heisenberg for non-changing (time-independent) items, go here, and note that, for a standing wave, there is a vibration in time though no net change. Start with a version of Heisenberg uncertainty: h = λp, where the uncertainty in length = wavelength = λ and the uncertainty in momentum = momentum = p. The kinetic energy, KE = p²/2m, and KE + U(x) = E, where E is the total energy of the particle or atom, and U(x) is the potential energy, some function of position only. Thus, p = √(2m(E − U(x))). Assume that the particle can be described by a standing wave with a physical description, ψ, and an imaginary vibration you can't ever see, exp(−iωt). And assume that time and space are completely separable, an OK assumption if you ignore gravity and if your potential fields move slowly relative to the speed of light. Now read the section, follow the derivation, and go through the worked problems. Most useful applications of QM can be derived using this time-independent version of Schrödinger's wave equation.
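For readers who don't follow the link, here's a condensed sketch of that derivation, using the symbols above and ħ = h/2π:

```latex
% de Broglie / Heisenberg: h = \lambda p, so for a particle of total energy E,
p = \frac{h}{\lambda} = \sqrt{2m\,\bigl(E - U(x)\bigr)}
% A standing wave of local wavelength \lambda obeys
\frac{d^2\psi}{dx^2} = -\left(\frac{2\pi}{\lambda}\right)^{2}\psi
    = -\frac{2m\,\bigl(E - U(x)\bigr)}{\hbar^2}\,\psi
% which rearranges to the time-independent Schrodinger equation:
-\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} + U(x)\,\psi = E\,\psi
```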