Category Archives: Physics

Highest temperature superconductor so far: H2S

The new champion of high-temperature superconductivity is a fairly common gas, hydrogen sulphide, H2S. By compressing it to 150 GPa, about 1.5 million atm, a team led by Alexander Drozdov and M. Eremets of the Max Planck Institute coaxed superconductivity from H2S at temperatures as high as 203.5 K (-70°C). This is, by far, the warmest temperature of any superconductor discovered to date, and its main significance is to open the door for finding superconductivity in other, related hydrogen compounds, ideally at warmer temperatures and/or less-difficult pressures. Among the interesting compounds that will certainly get more attention: PH3, BH3, methyl mercaptan, and even water, either alone or in combination with H2S.

Relation between pressure and critical temperature for superconductivity, Tc, in H2S (filled squares) and D2S (open red squares). The magenta point was measured by magnetic susceptibility. (Nature)

H2S superconductivity appears to follow the standard Bardeen–Cooper–Schrieffer (B-C-S) theory. According to this theory, superconductivity derives from the formation of pairs of opposite-spinning electrons (Cooper pairs), particularly in light, stiff materials. The light, positively charged lattice quickly moves inward to follow the motion of the electrons, see figure below. This synchronicity of motion is posited to create an effective bond between the electrons, enough to counter their natural repulsion, and allows the pairs to condense to a low-energy quantum state where they behave as if they were very large and very spread out. In this large, spread-out state, they slide through the lattice without interacting with the atoms or with the few local vibrations and unpaired electrons found at low temperatures. From this theory, we would expect to find the highest temperature superconductivity in the lightest lattices, materials like ice, boron hydride, magnesium hydride, or H2S, and we expect higher temperature behavior in the hydrogen versions, H2O or H2S, than in the heavier deuterium analogs, D2O or D2S. Experiments with H2S and D2S (shown at right) confirm this expectation, suggesting that H2S superconductivity is of the B-C-S type. Sorry to say, water has not shown any comparable superconductivity in experiments to date.

We have found high temperature superconductivity in only a few of the materials that B-C-S theory would suggest, and yet-higher temperatures are seen in many unexpected materials. While hydride materials generally do become superconducting, they mostly do so only at low temperatures. The highest temperature B-C-S superconductor discovered until now was magnesium diboride, MgB2, with Tc = 39 K. More bothersome, the most-used superconductor, Nb3Sn, and the world record holder until now, the copper-oxide ceramics (Tc = 133 K at ambient pressure; 164 K at 35 GPa, 350,000 atm), were not B-C-S. There is no version of B-C-S theory to explain why these materials behave as well as they do, or why pressure affects Tc in them. Pressure affects Tc in B-C-S materials by raising the energy of the small-scale vibrations that would be needed to break the pairs. Why should pressure affect copper ceramics? No one knows.

The standard theory of superconductivity relies on Cooper pairs of electrons held together by lattice elasticity. The lighter and stiffer the lattice, the higher temperature the superconductivity.

The assumption is that high-pressure H2S acts as a sort of metallic hydrogen. From B-C-S theory, metallic hydrogen was predicted to be a room-temperature superconductor because the material would likely be a semi-metal, and thus a conductor at all temperatures. Hydrogen's low atomic weight would mean that there would be no significant localized vibrations even at room temperature, suggesting room temperature superconductivity. Sorry to say, we have yet to reach the astronomical pressures necessary to make metallic hydrogen, so we don't know if this prediction is true. But now it seems H2S behaves nearly the same without requiring such extremely high pressures. It is thought that high temperature H2S superconductivity occurs because H2S partially decomposes to H3S and S, and that the H3S provides a metallic-hydrogen-like operative lattice. The sulfur, it's thought, just goes along for the ride. If this is the explanation, we might hope to find the same behaviors in water or phosphine, PH3, perhaps when mixed with H2S.

One last issue, I guess, is what this high temperature superconductivity is good for. As far as H2S superconductivity goes, the simple answer is that it's probably good for nothing: the pressures are too high. In general though, high temperature superconductors like Nb3Sn are important. They have been valuable for making high strength magnets, and for prosaic applications like long distance power transmission. The big magnets are used for submarine hunting, nuclear fusion, and (potentially) for levitation trains. See my essay on fusion here; it's what I did my PhD on, in chemical engineering. Levitation trains, potentially, will revolutionize transport.

Robert Buxbaum, December 24, 2015. My company, REB Research, does a lot with hydrogen. Not that we make superconductors, but we make hydrogen generators and purifiers, and I try to keep up with the relevant hydrogen research.

Why I don’t like the Iran deal

Treaties, I suspect, do not exist to create love between nations, but rather to preserve, in mummified form, the love that once existed between leaders. They are useful for display, and as a guide to the future; their main purpose is to allow a politician to help his friends while casting blame on someone else when problems show up. In the case of the US-Iran deal that seems certain to pass in a day or two with only Democratic-party support, and little popular support, I see no love between the nations. On a depressingly regular basis, Iranian leaders promise "Death to America," and death to America's sometime ally, Israel. Iran has acted on these statements too, funding Hezbollah missiles and suicide bombers, and hanging its dissidents: practices that have led it to become something of a pariah among its neighbors. It also displays the sort of nuclear factories and ICBMs (long-range rockets) that could make it a much bigger threat if it chooses to become one. The deal just signed by the US Secretary of State and his counterpart in Iran (read in full here) seems to preserve this state. It releases to Iran $100,000,000,000 to $150,000,000,000 that it claims it will use against Israel, and Iran claims to have no interest in developing multi-point compression atom bombs. This is a tiny concession given that our atom bomb at Hiroshima was single-point compression, first generation, and killed 90,000 people.

Iranian intercontinental ballistic missile, several stories high, brought out during negotiations in 2015. Should easily deliver warheads far beyond Israel, even to the US.

The deal itself is about 170 pages long and semi-legalistic, but I found it easy to read. The print is large, Iran has few obligations, and the last 100 pages or so are a list of companies that will no longer be sanctioned. The treaty asserts that we will defend Iran against attacks, including military and cyber attacks, and sabotage, presumably from Israel, but gives no specifics. Also, we are to help them with oil, naval, and fusion technology, while leaving them with 1500 kg of 20% enriched U235. That's enough for quick conversion to 8 to 10 Hiroshima-size A-bombs (atom bombs) containing 25-30 kg each of 90% U235. The argument in favor of the deal seems to be that, by giving Iran the money and technology, and agreeing with their plans, Iran will come to like us. My sense is that this is wishful thinking, and unlikely (as Jimmy Carter discovered). The unwritten contract isn't worth the paper it's written on.

The deal, as currently written, does not require Iran to recognize Israel's right to exist. To the contrary, John Kerry has stated that a likely consequence is further attacks on Israel. Given that Hezbollah's current military budget is only about $150,000,000 and Hamas's only about $15,000,000 (virtually all from Iran), we can expect a very significant increase in attacks once the money is released, unless Iran's leaders prove to be cheapskates or traitors to their own revolution (unlikely). Given our president's and Ms Clinton's comments against Zionist racism, I assume that they hope to cow Israel into being less militant and less racist, i.e. less Jewish. I doubt it, but you never know. I also expect an arms race in the Middle East to result. As for Iran's statements that they seek to kill every Jew and wipe out the Great Satan, the USA: our leaders may come to regret that they ignored such statements. I guess they hope that none of their friends or relatives will be among those killed.

Kerry on why we give Iran the ability to self-inspect.

I'd now like to turn to fusion technology, an area I know better than most. Nowhere does the treaty say what Iran will do with nuclear fusion technology, but it specifies we are to provide it, and there seem to be only two possibilities of what they might do with it: (1) build a controlled fusion reactor like the TFTR at Princeton, a very complex, expensive option, or (2) develop a hydrogen fusion bomb of the sort that vaporized an island of Bikini Atoll: an H-bomb. I suspect Iran means to do the latter, while I imagine John Kerry is thinking of the former. Controlled fusion is very difficult; uncontrolled fusion is a lot easier. With a little thought, you'll see how to build a decent H-bomb.

My speculation of why Iran would want to make an H-bomb is this: they may not trust their A-bombs to win a war with Israel. As things stand, their A-bomb scientists are unlikely to coax more than 25 to 100 kilotons of explosive power out of each bomb, perhaps double that of Hiroshima and Nagasaki. But our WWII bombs “only” killed 70,000 to 90,000 people each, even with the radiation deaths. Used against Israel, such bombs could level the core of Jerusalem or Tel Aviv. But most Israelis would survive, and they would strike back, hard.

To beat the Israelis, you'd need a megaton-size hydrogen bomb. Just one megaton bomb would vaporize Jerusalem and its suburbs, kill a million inhabitants at a shot, level the hills, vaporize the artifacts in the Jewish museum, and destroy anything we now associate with Israel. If Iran did that, while retaining a second bomb for Tel Aviv, it is quite possible Israel would surrender. As for our aim, perhaps we hope Iran will attack Israel and leave us alone. Very bright people pushed for WWI on hopes like this.

Robert E. Buxbaum. September 9, 2015. Here's a thought about why peace in the Middle East is so hard to achieve.

It’s rocket science

Here are six or so rocket science insights, some simple, some advanced. It’s a fun area of engineering that touches many areas of science and politics. Besides, some people seem to think I’m a rocket scientist.

A basic question I get asked by kids is how a rocket goes up. My answer is that it does not go up; that's mostly an illusion. The majority of the rocket, the fuel, goes down, and only the light shell goes up, so people imagine they are seeing the rocket go up. Taken as a whole, fuel and shell together, the rocket goes down at 1 G: 9.8 m/s², 32 ft/s².

Because 1 G of upward acceleration is always lost to gravity, you need more thrust from the rocket engine than the weight of rocket and fuel. This can be difficult at the beginning, when the rocket is heaviest. If your engine provides less thrust than the weight of your rocket, your rocket sits on the launch pad, burning. If your thrust is merely twice the weight of the rocket, you waste half of your fuel doing nothing useful, just fighting gravity. The upward acceleration you'll see is a = F/m − 1 G, where F is the force of the engine and m is the mass of the rocket shell plus whatever fuel is in it. 1 G = 9.8 m/s² is the upward acceleration lost to gravity. For model rocketry, you want to design a rocket engine so that the upward acceleration, a, is in the range 5-10 G. This range avoids wasting lots of fuel without requiring you to build the rocket overly sturdy.
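
That force balance fits in a few lines of code. Here's a minimal sketch in Python; the 700 N thrust and 10 kg mass are made-up illustrative numbers, not from any particular rocket.

```python
G = 9.8  # m/s^2, the 1 G of acceleration always lost to gravity

def upward_acceleration(thrust_N, mass_kg):
    """Net upward acceleration, a = F/m - 1 G, for a given thrust and mass."""
    return thrust_N / mass_kg - G

# A hypothetical 10 kg model rocket with 700 N of thrust:
a = upward_acceleration(700, 10)   # 70 - 9.8 = 60.2 m/s^2
print(round(a / G, 1), "G")        # about 6 G, in the recommended 5-10 G range
```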

For NASA moon rockets, a was approximately 0.2 G at liftoff, increasing as fuel was used. The Saturn V rose, rather majestically, into the sky with a rocket structure that had to be only strong enough to support 1.2 times the rocket weight. Higher initial accelerations would have required more structure and bigger engines. As it was, the Saturn V was the size of a skyscraper. You want the structure to be light so that the majority of the weight is fuel. What makes it tricky is that the accelerated weight has to sit on an engine that gimbals (slants) and runs really hot, about 3000°C. Most engineering projects have fewer constraints than this, and are thus "not rocket science."

Basic force balance on a rocket going up.

A space rocket has to reach very high, orbital speed if it is to stay up indefinitely, or nearly orbital speed for long-range, military uses. You can calculate the orbital speed by balancing the acceleration of gravity, 9.8 m/s², against the centripetal acceleration of going around the earth, a sphere of 40,000 km in circumference (that's how the meter was defined). Orbital acceleration is a = v²/r, and r = 40,000,000 m/2π = 6,366,000 m. Thus, the speed you need to stay up indefinitely is v = √(6,366,000 × 9.8) = 7900 m/s = 17,800 mph. That's roughly Mach 23, or 23 times the speed of sound at sea level (343 m/s). You need some altitude too, just to keep air friction from killing you, but for most missions, the main thing you need is velocity, kinetic energy, not potential energy, as I'll show below. If your speed exceeds 7900 m/s, you go higher up, but the stable orbital velocity there is lower. The gravity force is lower higher up, and the radius to the earth is higher too, but since you're balancing this lower gravity force against v²/r, v² has to be smaller to stay stable high up, even though you needed extra speed to get there. This all makes docking space-ships tricky, as I'll explain below. Rockets are the only practical way to reach Mach 23 or anything near it. No current cannon or gun gets close.
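
As a check on the arithmetic, here's that same balance, g = v²/r, in Python:

```python
import math

g = 9.8                          # m/s^2, acceleration of gravity
r = 40_000_000 / (2 * math.pi)   # earth's radius from its 40,000 km circumference, ~6,366,000 m

v_orbit = math.sqrt(g * r)       # balancing g = v^2/r gives v = sqrt(g * r)
print(round(v_orbit), "m/s")     # ~7900 m/s, about 17,800 mph
```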

Kinetic energy is a lot more important than potential energy for sending an object into orbit. To get a sense of the comparison, consider a one kg mass at orbital speed, 7900 m/s, and 200 km altitude. For these conditions, the kinetic energy, ½mv², is 31,205 kJ, while the potential energy, mgh, is only 1,960 kJ. The potential energy is thus only about 1/16 of the kinetic energy.
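
The same comparison in Python:

```python
m, v, g, h = 1.0, 7900.0, 9.8, 200_000.0   # 1 kg at orbital speed and 200 km altitude

kinetic = 0.5 * m * v**2    # 31,205,000 J = 31,205 kJ
potential = m * g * h       # 1,960,000 J = 1,960 kJ
print(round(kinetic / potential, 1))   # ~15.9: potential is about 1/16 of kinetic
```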

Not that it's easy to reach 200 km altitude, but you can do it with a sophisticated cannon. The Germans did it with "simple", one-stage, V2-style rockets. To reach orbit, you generally need multiple stages. As a way to see this, consider that the energy content of gasoline + oxygen is about 10.5 MJ/kg (10,500 kJ/kg); this is only 1/3 of the kinetic energy of the orbital rocket, though it's 5 times the potential energy. A fairly efficient gasoline + oxygen powered cannon could not provide orbital kinetic energy since the bullet can move no faster than the explosive vapor. In a rocket this is not a constraint, since most of the mass is ejected.

A shell fired at a 45° angle that reaches 200 km altitude would go about 800 km — the distance between North Korea and Japan, or between Iran and Israel. That would require twice as much energy as a shell fired straight up, about 4000 kJ/kg. This is still within the range for a (very large) cannon or a single-stage rocket. For Russia or China to hit the US would take much more: orbital, or near orbital rocketry. To reach the moon, you need more total energy, but less kinetic energy. Moon rockets have taken the approach of first going into orbit, and only later going on. While most of the kinetic energy isn’t lost, it’s likely not the best trajectory in terms of energy use.
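
The 45° shell numbers check out quickly. For a friction-free shell at 45°, the range is v²/g and the peak height is v²/4g, so the range is four times the peak height:

```python
g = 9.8             # m/s^2
height = 200_000.0  # m, the 200 km peak altitude above

v_squared = 4 * g * height       # at 45°, peak height = v^2 / (4g)
range_m = v_squared / g          # range = v^2 / g = 4 x height = 800 km
energy_per_kg = 0.5 * v_squared  # ~3,920,000 J/kg, the "about 4000 kJ/kg" above
print(range_m / 1000, "km")      # 800.0 km
```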

The force produced by a rocket is equal to the rate at which mass is shot out times its exit velocity: F = v Δm/Δt. To get a lot of force for each bit of fuel, you want the gas exit velocity to be as fast as possible. A typical maximum is about 2,500 m/s, roughly Mach 7, for a gasoline + oxygen engine. The acceleration of the rocket itself is this force divided by the total remaining mass of the rocket (rocket shell plus remaining fuel), minus 1 G for gravity. Thus, if the exhaust from a rocket leaves at 2,500 m/s, and you want the rocket to accelerate upward at an average of 10 G, 98 m/s², the rate of mass exhaust must be the average mass of the rocket times 98/2500 = 0.0392 per second. That is, about 3.92% of the rocket mass must be ejected each second. Assuming that the fuel for your first stage engine is about 80% of the total mass, the first stage will burn out in about 20 seconds. Typically, the acceleration at the end of the 20-second burn is much greater than at the beginning, since the rocket gets lighter as fuel is burnt. This was the case with the Apollo missions: the Saturn V started up at 0.2 G but reached a maximum of 4 G by the time most of the fuel was used.
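
A quick sketch of that bookkeeping, using only the numbers from the text:

```python
v_exhaust = 2500.0   # m/s, gas exit velocity
a_wanted = 98.0      # m/s^2, the 10 G of average upward acceleration

mass_frac_per_s = a_wanted / v_exhaust   # 0.0392: 3.92% of the rocket mass per second
fuel_frac = 0.80                         # first stage fuel as a fraction of total mass
burn_time = fuel_frac / mass_frac_per_s  # ~20.4 seconds
print(round(burn_time, 1), "s")
```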

If you have a good math background, you can develop a differential equation for the relation between fuel consumption and altitude or final speed. This is readily done if you know calculus, or reasonably done if you use finite-difference methods. By either method, it turns out that, with no air friction or gravity resistance, you will reach the same speed as the exhaust when 64% of the rocket mass is exhausted. In the real world, your rocket will have to exhaust 75 or 80% of its mass as first stage fuel to reach a final speed of 2,500 m/s. This is less than 1/3 of orbital speed, and reaching it requires that the rest of your rocket mass, the engine, 2nd stage, payload, and any spare fuel to handle descent (Elon Musk's approach), weigh less than 20-25% of the original weight of the rocket on the launch pad. The gasoline and oxygen are expensive, but not horribly so if you can reuse the rocket; that's the motivation for NASA's and SpaceX's work on reusable rockets. Most orbital rocket designs require three stages to accelerate to the 7900 m/s orbital speed calculated above. The second stage is dropped from high altitude and almost invariably lost. If you can set up and solve the differential equation above, a career in science may be for you.
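
The solution of that differential equation is the classic rocket equation, Δv = v_exhaust × ln(m0/mf). A short check in Python that it reproduces the 64% figure:

```python
import math

def delta_v(v_exhaust, m_start, m_end):
    """Ideal rocket equation: speed gained with no gravity or air friction."""
    return v_exhaust * math.log(m_start / m_end)

# You match the exhaust speed when m_start/m_end = e, i.e. 1 - 1/e of the mass is gone:
frac_exhausted = 1 - 1 / math.e
print(round(frac_exhausted, 2))                        # 0.63, the ~64% in the text
print(round(delta_v(2500, 1.0, 1 - frac_exhausted)))   # 2500 m/s, the exhaust speed
```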

Now, you might wonder about the exhaust speed I've been using, 2500 m/s. You'll typically want a speed at least this high, as it's associated with a high value of thrust-seconds per weight of fuel. Thrust-seconds per weight is called specific impulse, SI: SI = lb-seconds of thrust/lb of fuel. This approximately equals the speed of exhaust (m/s) divided by 9.8 m/s². For a high molecular weight burn it's not easy to reach gas speeds much above 2500 m/s, or values of SI much above 250, but you can get high thrust, since thrust is related to momentum transfer. High thrust is why US and Russian engines typically use gasoline + oxygen. The heat of combustion of gasoline is 42 MJ/kg, but burning a kg of gasoline requires roughly 2.5 kg of oxygen. Thus, for a rocket fueled by gasoline + oxygen, the heat of combustion per kg of propellant is 42/3.5 = 12,000,000 J/kg. A typical rocket engine is 30% efficient (V2 efficiency was lower, Saturn V higher). Per corrected unit of fuel+oxygen mass, ½v² = 0.3 × 12,000,000; v = √7,200,000 = 2680 m/s. Adding some mass for the engine and fuel tanks, the specific impulse for this engine will be about 250 s. This is fairly typical. Higher exhaust speeds have been achieved with hydrogen fuel, as it has a higher combustion energy per weight. It is also possible to increase the engine efficiency; the Saturn V stage 2 efficiency was nearly 50%, but the thrust was low. The sources of inefficiency include inefficiencies in compression, incomplete combustion, friction flows in the engine, and back-pressure of the atmosphere. If you can make a reliable, high efficiency engine with good lift, a career in engineering may be for you. A yet bigger challenge is doing this at a reasonable cost.
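
Here's that energy-to-exhaust-speed estimate in Python:

```python
import math

heat_gasoline = 42e6    # J per kg of gasoline burned
kg_per_kg_fuel = 3.5    # kg of gasoline + oxygen per kg of gasoline (1 + 2.5)
efficiency = 0.30       # typical fraction of combustion heat turned into jet kinetic energy

energy_per_kg = heat_gasoline / kg_per_kg_fuel          # 12 MJ per kg of propellant
v_exhaust = math.sqrt(2 * efficiency * energy_per_kg)   # ~2680 m/s
si = v_exhaust / 9.8                                    # ~270 s, before engine and tank mass
print(round(v_exhaust), "m/s;", round(si), "s")
```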

At an average acceleration of 5 G = 49 m/s² and a first stage that reaches 2500 m/s, you'll find that the first stage burns out after 51 seconds. If the rocket were going straight up (a bad idea), you'd find you are at an altitude of about 63.7 km. A better idea would be an average trajectory of 30°, leaving you at an altitude of 32 km or so. At that altitude you can expect far less air friction, and you can expect the second stage engine to be more efficient. It seems to me you may want to wait another 10 seconds before firing the second stage: you'll be 12 km higher up, and the benefit of this should be significant. I notice that space launches do wait a few seconds before firing their second stage.
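
Checking those burnout numbers:

```python
a = 49.0          # m/s^2, average net acceleration of 5 G
v_final = 2500.0  # m/s, first stage final speed

t_burnout = v_final / a                    # ~51 s
alt_straight_up = 0.5 * a * t_burnout**2   # ~63,800 m if fired straight up
alt_30deg = alt_straight_up * 0.5          # sin(30°) = 0.5, so ~32 km on a 30° path
print(round(t_burnout), "s;", round(alt_30deg / 1000), "km")
```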

As a final bit, I'd mentioned that docking a rocket with a space station is difficult, in part, because docking requires an increase in angular speed, w, but this generally goes along with a decrease in altitude: a counter-intuitive outcome. Setting the acceleration due to gravity equal to the centripetal acceleration, we find GM/r² = w²r, where G is the gravitational constant, and M is the mass of the earth. Rearranging, we find that w² = GM/r³. For high angular speed, you need small r: a low altitude. When we first went to dock a space-ship, in the early 60s, we had not realized this. When the astronauts fired the engines to dock, they found that they'd accelerate in velocity, but not in angular speed: v = wr. The faster they went, the higher up they went, but the lower the angular speed got: the fewer the orbits per day. Eventually they realized that, to dock with another ship or a space-station that is in front of you, you do not accelerate, but decelerate. When you decelerate you lose altitude and gain angular speed: you catch up with the station, but at a lower altitude. Your next step is to angle your ship near-radially to the earth, and accelerate by firing engines to the side till you dock. Like much of orbital rocketry, it's simple, but not intuitive or easy.
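
The counter-intuitive relation w² = GM/r³ is easy to verify numerically. In the sketch below, the standard value GM ≈ 3.986×10¹⁴ m³/s² and the two orbital radii are filled in by me as illustrations, not from the text:

```python
import math

GM = 3.986e14   # m^3/s^2, the earth's gravitational parameter (G times M)

def angular_speed(r_m):
    """Orbital angular speed from GM/r^2 = w^2 * r, i.e. w = sqrt(GM/r^3)."""
    return math.sqrt(GM / r_m**3)

r_low, r_high = 6.57e6, 6.77e6   # roughly 200 km and 400 km altitude orbits, in meters
print(angular_speed(r_low) > angular_speed(r_high))  # True: the lower orbit circles faster
```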

Robert Buxbaum, August 12, 2015. A cannon that could reach from North Korea to Japan, say, would have to be on the order of 10 km long, running along the slope of a mountain. Even at that length, the shell would have to fire at about 45 G (450 m/s², or so), and reach a speed of about 3000 m/s, over 1/3 orbital.

Much of the chemistry you learned is wrong

When you were in school, you probably learned that understanding chemistry involved understanding the bonds between atoms; that all the things of the world were made of molecules, and that these molecules were fixed-proportion combinations of the chemical elements held together by one of the 2 or 3 types of electron-sharing bonds. You were taught that water was H2O, that table salt was NaCl, that glass was SiO2, and rust was Fe2O3, and perhaps that the bonds involved an electron transferring from an electron-giver: H, Na, Si, or Fe, to an electron receiver: O or Cl.

Sorry to say, none of that is true. These are fictions perpetrated by well-meaning, and sometimes ignorant, teachers. All of the materials mentioned above are grand polymers. Any of them can have extra or fewer atoms of any species, and as a result the stoichiometry isn't quite fixed. They are not molecules at all in the sense you knew them. Also, ionic bonds hardly exist, not in any chemical you're familiar with; there are no common electron-transfer compounds. The world works almost entirely on covalent, shared bonds. If bonds were ionic, you could separate most materials by direct electrolysis of the pure compound, but you can not. You can not, for example, make iron by electrolysis of rust, nor can you make silicon by electrolysis of pure SiO2, or titanium by electrolysis of pure TiO. If you could, you'd make a lot of money and titanium would be very cheap. On the other hand, the fact that stoichiometry is rarely fixed allows you to make many useful devices, e.g. solid oxide fuel cells, things that should not work based on the chemistry you were taught.

Iron-zinc compounds do not have fixed stoichiometry. As an example, the compound at 68-80 atom% Zn is, I guess, Zn7Fe3 with many substituted atoms, especially at temperatures near 665°C.

Because most bonds are covalent, many compounds form that you would not expect. Most metal pairs form compounds with unusual stoichiometric compositions. Here, for example, is the phase diagram for zinc and iron, the materials behind galvanized sheet metal: iron that does not rust readily. The delta phase has a composition between 85 and 92 atom% Zn (8 and 15 a% iron); perhaps the main compound is near FeZn10, not the sort of compound you'd expect, and it has a very variable composition.

You may now ask why your teachers didn't tell you this sort of stuff, but instead told you a pack of lies and half-truths. In part it's because we don't quite understand this ourselves, and we don't like to admit that. And besides, the lies serve a useful purpose: they give us something to test you on. That is, a way to tell if you are a good student. The good students are those who memorize well and spit our lies back without asking too many questions of the wrong sort. We give students who do this good grades. I'm going to guess you were a good student (congratulations, so was I). The dullards got confused by our explanations. They asked too many questions, like "can you explain that again?" or "why?" We get mad at these dullards and give them low grades. Eventually, the dullards feel bad enough about themselves to allow themselves to be ruled by us. We graduates who are confident in our ignorance rule the world, but inventions come from the dullards who don't feel bad about their ignorance. They survive despite our best efforts. A few more of these folks survive in the west, and especially in America, than survive elsewhere. If you're one, be happy you live here. In most countries you'd be beheaded.

Back to chemistry. It's very difficult to know where to start to un-teach someone. Let's start with EMF and ionic bonds. While it is generally easier to remove an electron from a free metal atom than from a free non-metal atom, e.g. from a sodium atom rather than from oxygen, removing an electron is always energetically unfavored, for all atoms. Similarly, while oxygen takes an extra electron more easily than iron would, adding an electron is also energetically unfavored. The figure below shows the classic ionic bond (left) and two electron-sharing options (center and right); one is a bonding option, the other anti-bonding. Nature prefers electron sharing to ionic bonds, even with blatantly ionic elements like sodium and chlorine.

Bond options in NaCl. Note that covalent is the stronger bond option though it requires less ionization.

There is a very small degree of ionic bonding in NaCl (left picture), but in virtually every case, covalent bonds (center) are easier to form and stronger when formed. And then there is the key anti-bonding state (right picture). The anti-bond is hardly ever mentioned in high school or college chemistry, but it is critical: it's this bond that keeps all matter from shrinking into nothingness.

I've discussed hydrogen bonds before. I find them fascinating since they make water wet and make life possible. I'd mentioned that they are just like regular bonds except that the quantum hydrogen atom (proton) plays the role that the electron usually plays. I now have to add that this is not a transfer, but covalent sharing. The H atom (proton) divides up like the electron did in the NaCl above. Thus, two water molecules are attracted by having partial bits of a proton half-way between the two oxygen atoms. The proton does not stay put at the center there, but bobs between them as a quantum cloud. I should also mention that the hydrogen bond has an anti-bond state, just like the electron bond above. We were never "taught" the hydrogen bond in high school or college. Fortunately: that's how I came to understand them. My professors at Princeton saw hydrogen atoms as solid. It was their ignorance that allowed me to discover new things and get a PhD. One must be thankful for the folly of others: without it, no talented person could succeed.

And now I get to really weird bonds: entropy bonds. Have you ever noticed that meat gets softer when it's aged in the freezer? That's because many of the chemicals of life are held together by a sort of anti-bond called entropy, or randomness. The molecules in meat are unstable energetically, but actually increase the entropy of the water around them by their formation. When you lower the temperature, you allow the inherent instability of the bonds to make them let go. Unfortunately, this happens only slowly at low temperatures, so you've got to age meat a while to tenderize it.

A nice thing about the entropy bond is that it is not particularly specific. A consequence of this is that all protein bonds are more-or-less the same strength. This allows proteins to form in a wide variety of compositions, but it also means that deuterium oxide (heavy water) is toxic: it has a different entropic profile than regular water.

Robert Buxbaum, March 19, 2015. Unlearning false facts one lie at a time.

Brass monkey cold

In case it should ever come up in conversation, only the picture at left shows a brass monkey. The other is a bronze statue of some sort of a primate. A brass monkey is a rack used to stack cannon balls into a face-centered pyramid. A cannon crew could fire about once per minute, and an engagement could last 5 hours, so you could hope to go through a lot of cannon balls during an engagement (assuming you survived).

A brass monkey cannonball holder. The classic monkey, made of navy brass, might have 9 × 9 or 10 × 10 cannon balls on the lower level.

Bronze sculpture of a primate playing with balls, but look what the balls are sitting on: it's a dada art joke.

But brass monkeys typically show up in conversation in terms of it being cold enough to freeze the balls off of a brass monkey, and if you imagine an ornamental statue, you'd never guess how cold that could be. Well, for a cannonball holder, the answer has to do with the thermal expansion of metals. Cannon balls were made of iron, and the classic brass monkey was made of brass, an alloy with a much greater thermal expansion than iron. As the temperature drops, the brass monkey contracts more than the iron balls. When the drop is enough, the balls will fall off and roll around.

The thermal expansion coefficient of brass is 18.9×10⁻⁶/°C while that of iron is 11.7×10⁻⁶/°C. The difference, 7.2×10⁻⁶/°C, will determine the key temperature. Now consider a large brass monkey, one with 400 × 400 holes on the lower level, 399 × 399 on the second, and so on. Though it doesn't affect the result, we'll consider a monkey that holds 12 lb cannon balls, a typical size of 1750 to 1830. Each 12 lb ball is 4.4″ in diameter at room temperature, 20°C in those days. At 20°C, this monkey is about 1760″ wide. The balls will fall off when the monkey shrinks more than the balls by about 1/3 of a diameter, 1.5″.

We can calculate ∆T, the temperature change, °C, that is required to lower the width-difference by 1.5″ as follows:

-1.5″ = ∆T × 1760″ × 7.2 × 10⁻⁶/°C

We find that ∆T = -118°C. The temperature where this happens is 118 degrees cooler than 20°C, or -98°C. That's a temperature you could perhaps reach at the South Pole, or maybe in deepest Russia. It's not likely to be a problem, especially with a smaller brass monkey.
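As a check on the arithmetic, here is a short Python sketch of the calculation above, using the same coefficients, 400-ball width, and 1.5″ shrink criterion as the text:

```python
# Check the brass-monkey estimate: differential thermal contraction of a
# brass rack vs. the iron balls it holds. Values are those from the text.
alpha_brass = 18.9e-6   # thermal expansion of brass, 1/degC
alpha_iron = 11.7e-6    # thermal expansion of iron, 1/degC
d_alpha = alpha_brass - alpha_iron   # 7.2e-6 per degC

width = 400 * 4.4       # width of a 400 x 400 monkey at 20 degC, inches
shrink = 1.5            # inches of differential shrink before balls fall off

dT = -shrink / (width * d_alpha)
print(round(dT))        # -118, i.e., 118 degC below 20 degC, or -98 degC
```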

Robert E. Buxbaum, February 21, 2015 (modified Apr. 28, 2021). Some fun thoughts: Convince yourself that the key temperature is independent of the size of the cannon balls. That is, that I didn't need to choose 12 pounders. A bit more advanced: what is the equation for the number of balls on any particular base-size monkey? Show that the packing is no more efficient if the bottom layer were an equilateral triangle, and not a square. If you liked this, you might want to know how much wood a woodchuck chucks if a woodchuck could chuck wood, or on the relationship between mustaches and WWII diplomacy.

Our expanding, black hole universe

In a previous post I showed a classical derivation of the mass-to-size relationship for black holes and gave evidence to suggest that our universe (all the galaxies together) constitutes a single, large black hole. Everything is inside the black hole and nothing outside but empty space — we can tell this because you can see out from inside a black hole; it's only those outside who cannot see in (Finkelstein, Phys. Rev. 1958). Not that there appear to be others outside the universe, but if there were, they would not be able to see us.

In several ways having a private, black-hole universe is a gratifying thought. It provides privacy and a nice answer to an easily proved conundrum: that the universe is not infinitely big. A black-hole universe ends as the math requires, but not with a brick wall, as in the Hitchhiker's Guide (one of badly-laid brick). There are one or two problems with this nice, tidy solution. One is that the universe appears to be expanding, and black holes are not supposed to expand. Further, the universe appears to be bigger than it should be, suggesting that it expanded faster than the speed of light at some point. Its radius now appears to be 40-46 billion light years despite the universe appearing to have started as a point some 14 billion years ago. That these are deeply disturbing questions does not stop NASA and Nova from publishing the picture below for use by teachers. This picture makes little sense, but it's found in Wikipedia and most newer books.

Standard picture of the big bang theory. Expansions, but no contractions.

Standard picture of the big bang theory: A period of faster than light expansion (inflation) then light-speed, accelerating expansion. NASA, and Wikipedia.

We think the creation event occurred some 14 billion years ago because we observe that the majority of galaxies are expanding from us at a rate proportional to their distance from us. From this proportionality between the rate of motion and the distance from us, we conclude that we were all in one spot some 14 billion years ago. Unfortunately, some of the most distant galaxies are really dim — dimmer than they would be if they were only 14 billion light years away. The model “explains this” by a period of inflation, where the universe expanded faster than the speed of light. The current expansion then slowed, but is accelerating again; not slowing as would be expected if it were held back by gravity of the galaxies. Why hasn’t the speed of the galaxies slowed, and how does the faster-than-light part work? No one knows. Like Dr. Who’s Tardis, our universe is bigger on the inside than seems possible.

Einstein’s oscillating universe: it expands and contracts at some (large) frequency. Oscillations would explain why the universe is near-uniform, but not why it’s so big or moving outward so fast.

Einstein's preferred view was of an infinite-space universe where the mass within expands and contracts. He joked that two things were infinite, the universe and stupidity… see my explanation... In theory, gravity could drive the regular contractions to an extent that would turn entropy backward. Einstein's oscillating model would explain how the universe is reasonably stable and near-uniform in temperature, but it's not clear how his universe could be bigger than 14 billion light years across, or how it could continue to expand as fast as it does. A new view, published this month, suggests that there are two universes, one going forward in time, the other backward. The backward-in-time part of the universe could be antimatter, or regular matter running entropy backward (that's how I understand it — if it's antimatter, we'd run into it all the time). Random other ideas float through the physics literature: that we're connected to other space through a black hole/worm hole, perhaps to many other universes by many worm holes in fractal chaos; see, for example, Physics Reports, 1992.

The forward-in-time expansion part of the two universes model. This drawing, like the first, is from NASA.

For all I know, there are these many black-hole tunnels to parallel universes. Perhaps the universal constants and all these black-hole tunnels are windows on quantum mechanics. At some point the logic of the universe seems as perverse as in the Hitchhiker's Guide.

Something I didn't mention yet is the Higgs boson, the so-called God particle. As in the joke, it's supposed to be responsible for mass. The idea is that all particles have mass only by interaction with these near-invisible Higgs particles. Strong interactions with the Higgs are what make particles heavier, while weaker-interacting particles are perceived to have less gravity and inertia. But this seems to me to be the sort of theory that Einstein's relativity and the 1919 eclipse put to rest. There is no easy way for a particle model like this to explain relativistic warping of space-time. Without mass being able to warp space-time you'd see various degrees of light bending around the sun, and preferential gravity in the direction of our planet's motion: things we do not see. We're back in 1900, looking for some plausible explanation for the uniform speed of light and Lorentz contraction of space.

As likely an explanation as any: the Hitchhiker's Guide to the Galaxy.

Dr. r µ ßuxbaum. December 20, 2014. The meaning of the universe could be 42 for all I know, or just pickles down the worm hole. No religion seems to accept the 14-billion-year-old universe, and for all I know the God of creation has a wicked sense of humor. Carry a towel and don't think too much.

A simple, classical view of and into black holes

Black holes are regions of the universe where gravity is so strong that light can not emerge. And, since the motion of light is related to the fundamental structure of space and time, they must also be regions where space curves in on itself, and where time appears to stop — at least as seen by us, from outside the black hole. But what does space-time look like inside the black hole?

NASA’s semi-useless depiction of a black hole — one they created for educators. Though it’s sort of true, I’m not sure what you’re supposed to understand from this. I hope to present a better version.

From our outside perspective, an object tossed into a black hole will appear to move slower as it approaches the hole, and at the hole horizon it will appear to have stopped. From the inside of the hole, the object appears to just fall right in. Some claim that tidal force will rip it apart, but I think that’s a mistake. Here’s a simple, classical way to calculate the size of a black hole, and to understand why things look like they do and do what they do.

Let's begin with light, and accept, for now, that light travels in particle form. We call these particles photons; they have both an energy and a mass, and mostly move in straight lines. The energy of a photon is related to its frequency by way of Planck's constant: E = hν, where E is the photon energy, h is Planck's constant, and ν is frequency. The photon mass is related to its energy by way of the formula m = E/c², a formula that is surprisingly easy to derive, and often shown as E = mc². The version that's relevant to photons and black holes is:

m = hν/c².

Now consider that gravity affects ν by affecting the energy of the photon. As a photon rises, its energy and frequency go down as energy is lost. The gravitational force between a star, mass M, and this photon, mass m, is described as follows:

F = -GMm/r²

where F is force, G is the gravitational constant, and r is the distance of the photon from the center of the star. The amount of photon energy lost to gravity as the photon rises from the surface is the integral of the force:

∆E = -∫F dr = ∫GMm/r² dr = -GMm/r

Let's consider a photon of original energy E° and original mass m° = E°/c². If ∆E = m°c², all the energy of the original photon is lost and the photon disappears. Now, let's figure out the radius, r°, such that all of the original energy, E°, is lost in rising away from the center of a star, mass M. That is, let's calculate the r for which ∆E = -E°. We'll assume, for now, that the photon mass remains constant at m°.

E° = GMm°/r° = GME°/c²r°.

We now eliminate E° from the equation and solve for this special radius, r°:

r° = GM/c².

This would be the radius of a black hole if space didn’t curve and if the mass of the photon didn’t decrease as it rose. While neither of these assumptions is true, the errors nearly cancel, and the true value for r° is double the size calculated this way.

r° = 2GM/c²

r° = 2.95 km × (M/Msun).

Karl Schwarzschild 1873-1916.

The first person to do this calculation was Karl Schwarzschild, and r° is called the Schwarzschild radius. This is the minimal radius for a star of mass M to produce closed space-time; a black hole. Msun is the mass of our sun, Sol, 2 × 10³⁰ kg. To make a black hole, one would have to compress the mass of our sun into a ball of 2.95 km radius, about the size of a small asteroid. Space-time would close around it, and light starting from the surface would not be able to escape.
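For readers who want to check the number, here is a minimal Python sketch of r° = 2GM/c². The constants are standard values; with the round 2 × 10³⁰ kg solar mass used above it gives about 2.97 km, close to the 2.95 km quoted:

```python
# Schwarzschild radius r = 2GM/c^2, using standard constants.
G = 6.674e-11    # gravitational constant, m^3/(kg s^2)
c = 2.998e8      # speed of light, m/s
M_sun = 2e30     # solar mass as used in the text, kg

def r_schwarzschild_km(mass_kg):
    """Radius below which a mass closes space-time, in km."""
    return 2 * G * mass_kg / c**2 / 1000

print(r_schwarzschild_km(M_sun))   # about 2.97 km per solar mass
```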

As it happens, our sun is far bigger than an asteroid and is not a black hole: we can see light from the sun's surface with minimal space-time deformation (there is some, seen in the orbit of Mercury). Still, if the mass were a lot bigger, the radius would be a lot bigger and the density would be less. Consider a black hole the same mass as our galaxy, about 1 × 10¹² solar masses, or 2 × 10⁴² kg. This number is ten times what you might expect, since our galaxy is 90% dark matter. The Schwarzschild radius for the mass of our galaxy would be 3 × 10¹² km, or 0.3 light years. That's far bigger than our solar system, and about 1/20 the distance to the nearest star, Alpha Centauri. This is a very big black hole, though it is far smaller than our galaxy, 5 × 10¹⁷ km, or 50,000 light years across. The density, though, is not all that high.

Now let's consider a black hole comprising 15 billion galaxies, the mass of the known universe. The folks at Cornell estimate the sum of dark and luminous matter in the universe to be 3 × 10⁵² kg, about 15 billion times the mass of our galaxy. This does not include the mass hidden in the form of dark energy, but no one's sure what dark energy is, or even if it really exists. A black hole encompassing this known mass would have a Schwarzschild radius of about 4.5 billion light years, or about 1/3 the actual size of the universe when size is calculated based on its Hubble-constant age, 14 billion years. The universe may be 2-3 times bigger than this on the inside because space is curved and, rather like Dr. Who's Tardis, it's bigger on the inside. But in astronomical terms a factor of 3 or 10 is nothing: the actual size of the known universe is remarkably similar to its Schwarzschild radius, and this is without considering the mass its dark energy must have if it exists.
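The same formula scales directly to the galaxy-mass and universe-mass figures above. A quick sketch, using the mass estimates quoted in the text:

```python
# Schwarzschild radii for the galaxy-mass and universe-mass black holes
# discussed above; masses are the estimates quoted in the text.
G = 6.674e-11          # m^3/(kg s^2)
c = 2.998e8            # m/s
km_per_ly = 9.46e12    # kilometers per light year

M_galaxy = 2e42        # kg, ~1e12 solar masses including dark matter
M_universe = 3e52      # kg, the Cornell estimate

for M in (M_galaxy, M_universe):
    r_km = 2 * G * M / c**2 / 1000
    print(r_km, "km =", r_km / km_per_ly, "light years")
# galaxy mass: ~3e12 km, ~0.3 light years
# universe mass: ~4.7 billion light years
```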

Standard picture of the big bang theory. Dark energy causes the latter-stage expansion.

The evidence for dark energy is that the universe is expanding faster and faster instead of slowing. See figure. There is no visible reason for the acceleration, but it's there. The source of the energy might be some zero-point effect, but wherever it comes from, the significant amount of energy must have significant mass, E = mc². If the mass of this energy is 3 to 10 times the physical mass, as seems possible, we are living inside a large black hole, something many physicists, including Einstein, considered extremely likely and aesthetically pleasing. Einstein originally didn't consider the possibility that the hole could be expanding, but a reviewer of one of his articles convinced him it was possible.

Based on the above, we now know how to calculate the size of a black hole of any mass, and we now know what a black hole the size of the universe would look like from the inside. It looks just like home. Wait for further posts on curved space-time. For some reason, no religion seems to embrace science's 14-billion-year-old, black-hole universe (expanding or not). As for the tidal forces around black holes, they are horrific only for the small black holes that most people write about. If the black hole is big, the tidal forces are small.

Dr. µß Buxbaum, Nov 17, 2014. The idea for this post came from an essay by Isaac Asimov that I read in a collection called "Buy Jupiter." You can drink to the Schwarzschild radius with my new R° cocktail.

The speed of sound, Buxbaum’s correction

Ernst Mach showed that sound must travel at a particular speed through any material, one determined by the conservation of energy and of entropy. At room temperature and 1 atm, that speed is theoretically predicted to be 343 m/s. For a wave to move at any other speed, either the laws of energy conservation would have to fail, or ∆S ≠ 0 and the wave would die out. This is the only speed where you could say there is a traveling wave, and experimentally, this is found to be the speed of sound in air, to good accuracy.

Still, it strikes me that Mach's assumptions may have been too restrictive for short-distance sound waves. Perhaps there is room for other sound speeds if you allow ∆S > 0, and consider sound that travels short distances and dies out far from the source. Waves at these other speeds might affect music appreciation, or headphone design. As these waves were never treated in my thermodynamics textbooks, I wondered if I could derive their speed in any nice way, and whether they would be faster or slower than the main wave. (If I can't use this blog to re-think my college studies, what good is it?)

Imagine the sound-wave moving to the right, down a constant area tube at speed u, with us moving along at the same speed. Thus, the wave appears stationary, with a wind of speed u from the right.

As a first step to trying to re-imagine Mach's calculation, here is one way to derive the original, ∆S = 0, speed of sound. I showed in a previous post that the entropy change for compression can be imagined to have two parts. There is a pressure part at constant temperature: dS/dV at constant T = dP/dT at constant V; this part equals R/V for an ideal gas. There is also a temperature-at-constant-volume part of the entropy change: dS/dT at constant V = Cv/T. Combining the two, we find that, at constant entropy, dT/dV = -RT/CvV = -P/Cv. For a case where ∆S > 0, the temperature rises more on compression: |dT/dV| > P/Cv.

Now let's look at the conservation of mechanical energy. A compression wave gives off a certain amount of mechanical energy, or work on expansion, and this work accelerates the gas within the wave. For an ideal gas, the internal energy of the gas is stored only in its temperature. Let's now consider a sound wave going down a tube, flowing left to right, and let's move our reference plane along with the wave at the same speed, so the wave seems to sit still while a flow of gas moves toward it from the right at the speed of the sound wave, u. For this flow system energy is conserved though no heat is removed, and no useful work is done. Thus, any change in enthalpy only results in a change in kinetic energy: dH = -d(u²)/2 = -u du, where H here is a per-mass enthalpy (enthalpy per kg).

dH = TdS + VdP. This can be rearranged to read, TdS = dH - VdP = -u du - VdP.

We now use conservation of mass to put du into terms of P, V, and T. By conservation of mass, u/V is constant, or d(u/V) = 0. Taking the derivative of this quotient, du/V - u dV/V² = 0. Rearranging, we get du = u dV/V (no assumptions about entropy here). Since dH = -u du, we find that u² dV/V = -dH = -TdS - VdP. It is now common to say that dS = 0 across the sound wave, and thus find that u² = -V²(dP/dV) at constant S. For an ideal gas, this last derivative equals -PCp/VCv, so the speed of sound, u = √(PVCp/Cv), with the volume in terms of mass (m³/kg).
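Plugging in numbers for air at 20°C and 1 atm recovers the familiar speed. The specific volume comes from the ideal gas law; γ = Cp/Cv = 1.4 for air is an assumed textbook value:

```python
# Isentropic sound speed u = sqrt(P * V * Cp/Cv) for air, with V the
# specific volume (m^3/kg) from the ideal gas law.
P = 101325       # Pa, 1 atm
R = 8.314        # J/(mol K)
M = 0.02897      # kg/mol, mean molar mass of air
T = 293.15       # K, 20 degC
gamma = 1.4      # Cp/Cv for air, textbook value

V = R * T / (M * P)            # specific volume, m^3/kg
u = (P * V * gamma) ** 0.5
print(round(u))                # 343 m/s
```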

The problem comes in when we say that ∆S > 0. At this point, I would say that u² = -V(dH/dV) = -VCp dT/dV > PVCp/Cv. Unless I've made a mistake (always possible), I find that there is a small, leading, non-adiabatic sound wave that goes ahead of the ordinary sound wave and is experienced only close to the source. It is caused by mechanical energy that becomes degraded to raising T, and it gives rise to more compression than would be expected for isentropic waves.

This should have some relevance to headphone design and speaker design, since headphones are heard close to the ear, while speakers are heard further away. Meanwhile, the recordings are made by microphones right next to the singers or instruments.

Robert E. Buxbaum, August 26, 2014

Dr. Who’s Quantum reality viewed as diffusion

It's very hard to get the meaning of life from science because reality is very strange. Further, science is mathematical, and the math relations for reality can be rearranged. One arrangement of the terms will suggest one version of causality, while another will suggest a different causality. As Dr. Who points out, in non-linear, non-objective terms, there's no causality, but rather a wibbly-wobbly ball of timey-wimey stuff.

Reality is a ball of timey wimey stuff, Dr. Who.

To this end, I'll provide my favorite way of looking at the timey-wimey way of the world by rearranging the equations of quantum mechanics into a sort of diffusion. It's not the diffusion of something you're quite familiar with, but rather of a timey-wimey wave-stuff referred to as Ψ. It's part real and part imaginary, and the only relationship between Ψ and life is that the chance of finding something somewhere is proportional to Ψ*Ψ. The diffusion of this half-imaginary stuff is the underpinning of reality — if viewed in a certain way.

First let’s consider the steady diffusion of a normal (un-quantum) material. If there is a lot of it, like when there’s perfume off of a prima donna, you can say that N = -D dc/dx where N is the flux of perfume (molecules per minute per area), dc/dx is a concentration gradient (there’s more perfume near her than near you), and D is a diffusivity, a number related to the mobility of those perfume molecules. 

We can further generalize the diffusion of an ordinary material to a case where concentration varies with time because of reaction, or because of a difference between the in-rate and the out-rate. With reaction added as a secondary accumulator, we can write: dc/dt = reaction + dN/dx = reaction + D d²c/dx². For a first-order reaction, for example radioactive decay, reaction = -ßc, and

dc/dt = -ßc + D d²c/dx²               (1)

where ß is the radioactive decay constant of the material whose concentration is c.
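Equation 1 is easy to explore numerically. Here is a minimal sketch using an explicit finite-difference step; the grid, D, and ß values are arbitrary illustrations, not from the text:

```python
# Explicit finite-difference integration of dc/dt = -bc + D d2c/dx2 (eq. 1).
# Grid spacing, D, and the decay constant are arbitrary illustrative values.
import numpy as np

D, beta = 1e-2, 0.1     # diffusivity and first-order decay constant
dx, dt = 0.1, 0.01      # step sizes; stable since D*dt/dx**2 < 0.5
c = np.zeros(101)
c[50] = 1.0             # initial puff of material in the middle

for _ in range(1000):   # integrate to t = 10
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    c = c + dt * (-beta * c + D * lap)

print(c.sum())          # decay has destroyed material: total is now < 1
```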

Viewed in a certain way, the most relevant equation for reality, the time-dependent Schrödinger wave equation (semi-derived here), fits into the same diffusion-reaction form:

dΨ/dt = -2iπV/h Ψ + hi/4πm d²Ψ/dx²               (2)

Instead of reality involving the motion of a real material (perfume, radioactive radon, etc.) with a real concentration, c, in this relation the material can not be sensed directly, and the concentration, Ψ, is semi-imaginary. Here, h is Planck's constant, i is the imaginary number, √-1, m is the mass of the real material, and V is potential energy. When dealing with reactions or charged materials, it's relevant that V will vary with position (e.g. electrons' energy is lower when they are near protons). The diffusivity term here is imaginary, hi/4πm, but that's OK; Ψ is part imaginary, and we'd expect that potential energy is something of a destroyer of Ψ: the likelihood of finding something at a spot goes down where the energy is high.

The form of this diffusion is linear, a mathematical term that refers to equations where a solution that works for Ψ will also work for 2Ψ. Generally speaking, linear equations have exp() terms in their solutions, and that's especially likely here as the only place where you see a time term is on the left. For most cases we can say that

Ψ = ψ exp(-2iπEt/h)               (3)

where ψ is not a function of anything but x (space), and E is the energy of the thing whose behavior is described by Ψ. If you take the derivative of equation 3 with respect to time, t, you get

dΨ/dt = ψ (-2iπE/h) exp(-2iπEt/h) = (-2iπE/h)Ψ.               (4)

If you insert this into equation 2, you'll notice that the form of the first term is now identical to the second, with energy appearing identically in both terms. Divide now by exp(-2iπEt/h), and you get the following equation:

(E-V) ψ = -h²/8π²m d²ψ/dx²                      (5)

where ψ can be thought of as the physical concentration in space of the timey-wimey stuff. ψ is still wibbly-wobbly, but no longer timey-wimey. Now ψ-squared is the likelihood of finding the stuff somewhere at any time, and E is the energy of the thing. For most things in normal conditions, E is quantized and equals approximately kT. That is, E of the thing equals, typically, a quantized energy state that's nearly Boltzmann's constant times temperature.

You now want to check that the approximation in equations 3-5 was legitimate. You do this by checking if the length-scale implicit in exp(-2iπEt/h) is small relative to the length-scales of the action. If it is (and it usually is), you are free to solve for ψ at any E and V using normal mathematics, by analytic or digital means, for example this way. ψ will be wibbly-wobbly but won't be timey-wimey. That is, the space behavior of the thing will be peculiar, with the item in forbidden locations, but there won't be time reversal. For time reversal, you need small space features (like here) or entanglement.

Equation 5 can be considered a simple steady-state diffusion equation. The stuff whose concentration is ψ is created wherever E is greater than V, and is destroyed wherever V is greater than E. The stuff then continuously diffuses from the former area to the latter, establishing a time-independent concentration profile. E is quantized (can only be some specific values) since matter can never be created or destroyed, and it is only at specific values of E that this balance happens in Equation 5. For a particle in a flat box, E and ψ are found, typically, by realizing that the form of ψ must be a sine function (and ignoring an infinity). For more complex potential energy surfaces, it's best to use a matrix solution for ψ along with non-continuous calculus. This avoids the infinity, and is a lot more flexible besides.
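As a sketch of that matrix approach (not the author's own code; an electron in a 1 nm box is assumed purely for illustration), one can discretize equation 5 on a grid and get the quantized E values and ψ from an eigenvalue solve, then check against the analytic particle-in-a-box levels:

```python
# Matrix (finite-difference) solution of (E-V)psi = -h^2/8pi^2m d2psi/dx2
# for a particle in a flat box with V = 0, checked against analytic levels.
import numpy as np

h = 6.626e-34    # Planck's constant, J s
m = 9.109e-31    # electron mass, kg (illustrative choice)
L = 1e-9         # box width, 1 nm (illustrative choice)
n = 500          # interior grid points; the box walls force psi = 0
dx = L / (n + 1)

# Central-difference second derivative as a tridiagonal matrix.
D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dx**2
H = -(h**2 / (8 * np.pi**2 * m)) * D2    # V = 0 inside the box

E = np.linalg.eigvalsh(H)                # quantized energies, ascending
E1_exact = h**2 / (8 * m * L**2)         # analytic ground state of the box
print(E[0] / E1_exact)                   # close to 1.0
```

The eigenvalues come out quantized automatically, with E2/E1 ≈ 4 as the sine-function solution requires, and the same machinery works unchanged for any V(x) added to the diagonal.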

When you detect a material in some spot, you can imagine that the space-function ψ collapses, but even that isn't clear, as you can never know the position and velocity of a thing simultaneously, so it doesn't collapse all that much. And as for what the stuff is that diffuses and has concentration ψ, no one knows, but it behaves like a stuff. And as to why it diffuses, perhaps it's jiggled by unseen photons. I don't know if this is what happens, but it's a way I often choose to imagine reality — a moving, unseen material with real and imaginary (spiritual?) parts, whose concentration, ψ, is related to experience, but not directly experienced.

This is not the only way the equations can be rearranged. Another way of thinking of things is as the sum of path integrals — an approach that appears to me as a many-worlds version, with fixed points in time (another Dr. Who feature). In this view, every object takes every path possible between these points, and reality is the sum of all the versions, including some that have time reversals. Richard Feynman explains this path-integral approach here. If it doesn't make more sense than my version, that's OK. There is no version of the quantum equations that will make total, rational sense. All the true ones are mathematically equivalent — totally equal, but they differ in the "meaning". That is, if you were to impose meaning on the math terms, the meaning would be totally different. That's not to say that all explanations are equally valid — most versions are totally wrong, but there are many, equally valid math versions to fit many, equally valid religious or philosophic world views. The various religions, I think, are uncomfortable with having so many completely different views being totally equal because (as I understand it) each wants exclusive ownership of truth. Since this is never so for math, I claim religion is the opposite of science. Religion is trying to find The Meaning of life, and science is trying to match experiential truth — and ideally useful truth; knowing the meaning of life isn't that useful in a knife fight.

Dr. Robert E. Buxbaum, July 9, 2014. If nothing else, you now perhaps understand Dr. Who more than you did previously. If you liked this, see here for a view of political happiness in terms of the thermodynamics of free-energy minimization.

American education: how do we succeed?

As the product of a top American college, Princeton University, I see that my education lacks in languages and history compared to Europeans. I can claim to know a little Latin and a little Greek, like they do, but I’m referring to Manuel Ramos and Stanos Platsis, two short people, one of Spanish descent, the other of Greek.

Americans hate math.

It was recently reported that one fourth of college-educated Americans did not know that the earth spun on an axis, a degree of science ignorance that would be inconceivable in any other country. Strange to say, despite these lacks, the US does quite well commercially, militarily, and scientifically. US productivity is the world's highest. Our GNP and GNP per capita, too, are higher than virtually any other country's (we've got the grossest national product). How do we do it with so little education?

One part of US success is clearly imported talent: immigration. We import Nobel chemists, Russian dancers, and German rocket scientists, but we don't import that many. They help our per-capita GNP, but the majority of our immigrants are more in the wretched-refuse category. Even these appear to do better here than the colleagues they left behind. Otto von Bismarck once joked that, "God protects children, drunks, and the United States of America." But I'd like to suggest that our success is based on advantages our outlook and education provide for our more creative citizens.

Most of our successful businesses are not started by the A students, but by the C student who is able to use the little he (or she) knows. Consider the simple question of whether the earth goes round the sun. It’s an important fact, but only relevant if you can use it, as Sherlock Holmes points out. I suspect that few Europeans could use the knowledge that the earth spins (try to think of some applications; at the end of this essay I’ll provide some).

Benjamin Jowett. His students included the heads of 6 colleges and the head of Eton.

Benjamin Jowett, Master of Balliol College, Oxford.

A classic poem about European education describes Benjamin Jowett, shown at right. It goes: "First come I, my name is Jowett. There is no knowledge, but that I know it. I am master of this college. What I don't know isn't knowledge." Benjamin Jowett was Master of Balliol College, Oxford. By the time he died in 1893, his ex-student pallbearers included the heads of 6 colleges, and the head of Eton. Most English heads of state and industry were his students, directly or second-hand. All learned a passing knowledge of Greek, Latin, Plato, law, science, theology, classics, math, rhetoric, logic, and grammar. Only people so educated were deemed suited to run banks or manage backward nations like India or Rhodesia. It worked for a while, but showed its limitations, e.g. in the Boer Wars.

In France and continental Europe the education system is similar to England's under Jowett. There is a fixed set of knowledge and a fixed rate to learn it. Government and industry jobs go largely to those who've demonstrated their ability to give the fixed, correct answers to tests on this knowledge. In schools across France, the same page is turned virtually simultaneously in every school; no student is left behind, but none jump ahead either. As new knowledge is integrated, the approved textbooks are updated and the correct answers are adjusted. Until then, the answers in the book are God's truth, and those who master it can comfort themselves to have mastered the truth. The only people hurt are the very few dummies who see a new truth a year before the test acknowledges it. "College is a place where pebbles are polished but diamonds are dimmed." The European system appears to benefit the many, providing useful skills (and useless tidbits), but it is oppressive to others with forward-thinking, imaginative minds. The system appears to work best in areas that barely change year-to-year, like French grammar, geometry, law, and the map of Europe. It does not work so well in music, computers, or the art of war. For these students, schooling is "another brick in the wall"; the schools should teach more of how to get along without a teacher.

The American approach to education leans toward independence of thought, for good or bad. American graduates can live without the teacher, but leave school knowing no language but English, hardly any math or science, hardly any grammar, and we can hardly find another country on a map. Teachers will take incorrect answers as correct as a way to build self-esteem, so students leave with the view that there is no such thing as truth. This model works well in music, engineering, and science, where change is fast, creativity is king, and nature itself is a teacher. American graduate schools are preeminent in these areas. In reading, history, and math our graduates might well be described as galumphing ignorants.

Every now and again the US tries to correct this, by the way, and join the rest of the world. The "no child left behind" movement was a Republican-led effort to teach reading and math on the French model. It never caught on. Drugs are another approach to making American students less obstreperous, but they too work only temporarily. Despite these best efforts, American graduates leave school ignorant, but not stupid; respectful of those who can do things, and suspicious of those with lengthy degrees. We survive as managers of the most complex operations with our bumptious optimism and disdain for hierarchy. As viewed from abroad, our method is to greet colleagues in a loud, cheerful voice, appoint a subordinate to "get things done," and then get in the way until lunchtime.

“In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing.” An American attitude, usually attributed to Theodore Roosevelt, that sometimes blows up but works surprisingly well at times.

Often the inability to act is worse than acting wrong.

The American-educated boss will do some damage by his ignorance, but it is no more than the damage that comes from group-think: non-truths passed along as truths. America stopped burning witches far sooner than Europe did, and never burned Jews. America dropped its nobles quicker, and transitioned to electric lights and motor cars faster, perhaps because we put less weight on what nobles and universities did.

European scholars accepted that nobility gave one a better handle on leadership, and this held them back. Since religion was part of education, they accepted that the state should have an established religion: Anglicanism in England, Catholicism in France, scientific atheism now. They learned and accepted that divorce was unnecessary and that homosexuality should be punished by prison or worse. As late as the early 1950s, Turing, the brilliant mathematician and computer scientist, was chemically castrated as a way to cure his homosexuality. In America, our “Yankee ingenuity,” as we call it, has had a tendency to blow up too (prohibition, McCarthyism, and disco), but the problems resolved relatively soon. “Ready, fire, aim” is a European description of the American method. It’s not great, but it works after a fashion.

The best option, I think, is to work together with those from “across the pond.” It worked well for us in WWI, WWII, and the American Revolution, where we benefitted from the training of Baron von Steuben, for example. Heading into the World Cup of football (FIFA soccer) this week, we’re expected to lose badly due to our lack of stars and general inability to pass, dribble, or strategize. Still, we’ve got enthusiasm, and we’ve got a German coach. The world’s bookies give us 0.05% odds, but our chances are 10 times that, I’d say: 0.5%. God protects our galumphing side of corn-fed ignorants when, as in the Revolution, it’s attached to German coaching.

Some practical aspects of the earth spinning: geosynchronous satellites (they work only because the earth spins), weather prediction (hurricanes spin because the earth spins), and cyclone lifting. It amazes me that people ever thought everything went around the earth, by the way; Mercury and Venus never appear overhead. If the authorities could be so wrong about this for so long, what might they be wrong about today?
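The geosynchronous-satellite point lends itself to a back-of-envelope check: a satellite hovers over one spot only if it circles the earth in exactly one sidereal day, and balancing gravity against the required centripetal force fixes the orbit radius. A minimal sketch (the constants are standard textbook values, not from this post):

```python
import math

# Geostationary orbit: gravity supplies the centripetal force for one
# revolution per sidereal day, so GM/r^2 = (2*pi/T)^2 * r,
# giving r = (GM * T^2 / (4*pi^2))^(1/3).
GM = 3.986004e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1        # sidereal day, s (slightly shorter than 86,400 s)
R_EARTH = 6.371e6  # mean earth radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)  # orbit radius from earth's center
altitude_km = (r - R_EARTH) / 1000

print(f"Geostationary altitude: {altitude_km:,.0f} km")  # roughly 35,800 km
```

The answer, about 35,800 km up, is why all geostationary satellites sit in one crowded ring above the equator; on a non-spinning planet no such orbit would exist.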

Dr. Robert Buxbaum, June 10, 2014. I’ve also written about ADHD, on Lincoln’s Gettysburg Address, on Theodore Roosevelt, and how he survived a gunshot.