Category Archives: Science: Physics, Astronomy, etc.

Most traffic deaths are from driving too slow

About 40,100 Americans lose their lives to traffic accidents every year. About 10,000 of these losses involve alcohol, and about the same number involve pedestrians, but far more people have their lives sucked away by waiting in traffic, IMHO. Hours are spent staring at a light, hoping it will change, or slowly plodding between destinations with their minds near blank. This slow loss of life is as real as the accidental type, but less dramatic.

Consider that Americans drive about 3.2 trillion miles each year. I’ll assume an average speed of 30 mph (the average speed registered on my car is 29 mph). Considering only the drivers of these vehicles, I calculate some 107 billion man-hours of driving per year; that’s about 12 million man-years, or 174,000 man-lifetimes of 70 years each. If people were to drive a little faster, perhaps 10% faster, travel time would drop by about 9%, saving some 16,000 man-lifetimes per year now wasted. The simple change of raising the maximum highway speed to 80 mph from 70, I’d expect, would save half this, maybe 8,000 lifetimes. There would likely be some more accidental deaths, but not more accidents. Tiredness is a big part of highway accidents, as is highway congestion. Faster speeds decrease both, decreasing the number of accidents, but one expects an increase in the deadliness of the accidents that remain.
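
Here’s this arithmetic as a short Python sketch; the 70-year lifetime is my assumed constant:

```python
# Rough estimate of time Americans spend driving, and of time saved by driving faster.
miles_per_year = 3.2e12          # total US vehicle-miles driven per year
avg_speed_mph = 30               # assumed average speed
lifetime_years = 70              # assumed man-lifetime

man_hours = miles_per_year / avg_speed_mph     # ~1.1e11 man-hours
man_years = man_hours / (24 * 365)             # ~12 million man-years
lifetimes = man_years / lifetime_years         # ~174,000 man-lifetimes

# Driving 10% faster cuts travel time by 1 - 1/1.1, about 9%
saved = lifetimes * (1 - 1 / 1.1)              # ~16,000 lifetimes per year
print(f"{man_hours:.3g} man-hours = {lifetimes:,.0f} lifetimes; {saved:,.0f} saved")
```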


Highway deaths for the years before and after speed limits were relaxed in Nov. 1995. At that time most states raised their speed limits, but some did not, leaving them at 65 mph rural, 55 urban; a few states were excluded from the study because they made only minor changes.

A counter to this expectation comes from the German Autobahn, the fastest highway system in the world, with sections that have no speed limit at all. German safety records show that there are far fewer accidents per km on the Autobahn than on other stretches of highway, and that the fatality rate per km is about 1/3 as high. This is about 1/2 the rate on US highways (see safety comparison). For a more conservative comparison, we can turn to the US experience of 1995. Before November 1995, the US federal government limited urban highway speeds to 55 mph, with 65 mph allowed only on rural stretches. When these limits were removed, several states left the old speed limits in place, but many others raised their urban limits to 65 mph and their rural limits to 70 mph. Some western states went further and raised rural speed limits to 75 mph. The effect of these changes is seen in the graph above, copied from the Traffic Operations safety laboratory report. Depending on how you analyze the data, there was either a 2% jump in highway deaths (Institute of Highway Safety) or perhaps a 5% jump. These numbers translate to a 3 or 6% effect, because the states that did not raise speeds saw a 1% drop in death rates over the same period. Based on a 6% increase, I’d expect higher highway speed limits to cost some 2,400 additional lives. To me, even this seems worthwhile when balanced against the 8,000 lifetimes lost to the life-sucking destruction of slow driving.


Texas has begun raising speed limits. So far, Texans seem happy.

There are several new technologies that could reduce automotive deaths at high speeds. One thought is to allow high-speed driving only for people who pass a high-speed test, or only for certified cars whose passengers wear a 5-point harness, or only on certain roads. More relevant to my opinion: only on roads with adequate walk-paths, since many deaths involve pedestrians. Yet another thought: self-driving cars (with hydrogen power?). Computer-aided drivers can have split-second reaction times, and can be fitted with infra-red “eyes” that see through fog, or sense the motion of a warm object (a pedestrian) behind an obstruction. The ability of computer systems to use this data is limited currently, but it is sure to improve.

I thought some math might be in order. The automotive current carried by a highway, in cars/hour, can be shown to equal the speed of the average vehicle multiplied by the number of lanes, divided by the average distance between vehicles: C = vL/d.

At low congestion, the average driving speed, v, remains constant as cars enter and leave the highway. Adding cars only affects the average distance between cars, d. At some point, around rush hour, so many vehicles enter the highway that d shrinks to a distance where drivers become uncomfortable; that’s about d = 3 car lengths, I’d guess. People begin to slow down, and pretty soon you get a traffic jam — a slow-moving parking lot where you get less flow with more vehicles. This jam will last for the whole of rush hour. One of the nice things about auto-drive cars is that they don’t get nervous, even at 2 car lengths or less at 70 mph. The computer is confident that it will brake as soon as the car in front of it brakes, maintaining a safe speed and distance where people will not. This is a big safety advantage for all vehicles on the road.
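
A minimal sketch of the C = vL/d relation; the car length, speeds, and spacings below are my assumed illustration values:

```python
def traffic_current(speed_m_s, lanes, spacing_m):
    """Cars passing a point per second: C = v * L / d."""
    return speed_m_s * lanes / spacing_m

MPH = 0.447          # m/s per mph
CAR = 4.5            # assumed car length, m

# Free flow: 70 mph, 3 lanes, relaxed spacing of ~10 car lengths
free = traffic_current(70 * MPH, 3, 10 * CAR) * 3600
# Jammed: 10 mph, spacing down to ~2 car lengths: more cars on the road, less flow
jam = traffic_current(10 * MPH, 3, 2 * CAR) * 3600
print(f"free flow: {free:,.0f} cars/hr; jam: {jam:,.0f} cars/hr")
```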

I should mention that automobile death rates vary widely between states (see here), and even more widely between countries. Here is some data. If you think some country’s drivers are crazy, you should know that many countries with bad reputations (Italy, Ireland…) have highway death rates lower than ours. In other countries, in Africa and the mid-east, death rates per car or per mile driven are 10x, 100x, even 1000x higher than in the US. These countries have few cars and lots of people who walk down the road drunk or stoned. Related to this, I’ve noticed that old people are not bad drivers, but they tend to drive on narrow country roads where people walk, and where accidents are common.

Robert Buxbaum, June 6, 2018.

What drives the jet stream

Having written on controversial, opinion-laden things, I thought I’d take a break and write about earth science: the jet stream. For those who are unfamiliar, the main jet stream is a high-altitude wind blowing at about 40,000 feet (10 km) at about 50°N latitude. It blows west to east at about 100 km/hr (60 mph), about 12% of the cruising speed of a typical jet airplane. A simple way to understand the source of the jet stream is to note that the earth’s surface moves slower (in km/hr) at the poles than at lower latitudes, while the temperature difference between the poles and the equator guarantees that high-altitude air is always traveling toward the poles from the lower latitudes.

Consider that the earth is about 40,000 km in circumference and turns once every 24 hours. This gives a rotation speed of 1667 km/hr at the equator. At any higher latitude, the surface speed is 1667 km/hr × cos(latitude): thus 1667 km/hr × cos 50° = 1070 km/hr at 50° latitude, and 0 km/hr at the north pole.
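
Here’s that calculation in a few lines of Python; the 40°–50° difference anticipates the jet stream speed estimate below:

```python
import math

def surface_speed(latitude_deg):
    """East-west speed of the earth's surface, km/hr: 1667 * cos(latitude)."""
    return (40000 / 24) * math.cos(math.radians(latitude_deg))

for lat in (0, 40, 50, 90):
    print(f"{lat:2d}°: {surface_speed(lat):7.0f} km/hr")
print(f"40°N - 50°N difference: {surface_speed(40) - surface_speed(50):.0f} km/hr")
```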


Idealized north-south circulation of air around our globe.

It’s generally colder at the poles than at lower latitudes — that is, nearer the equator (here’s why). This creates a north-south circulation: air becomes more compact as it cools in northern climates (50° latitude and further north), creating a partial vacuum at high altitude and a high-pressure zone at low altitude. The result is a high-altitude flow of air toward the north, and a flow of low-altitude air to the south, a process described by the idealized drawing at right.

At low altitudes in Detroit (where I am), we experience winds mostly from the north and from the east. Winds come from the east — or appear to — because of the rotation of the earth. The air that flows down from Canada is moving west to east more slowly than Detroit is, and we experience the difference as an easterly wind. At higher altitudes, the pattern is reversed. At 9 to 12 km altitude, an airplane here experiences winds mostly from the south-west. Warm air from lower latitudes moves eastward at 1200 or more km/hr, because that’s the speed of the earth’s surface there. As it moves north, it finds the land below moving eastward at a much slower speed, and the result is the jet stream. The maximum speed of the jet stream is about 200 km/hr, the difference in the earth’s east-speed between 40°N and 50°N; the typical speed is about half of that, 100 km/hr. I’d attribute the slower typical speed to friction and air mixing.

One significance of the jet stream is that it speeds west-to-east air traffic, e.g. flights from Japan to the US, or from the US to Europe. Airlines flying west to east try to fly at the latitude and altitude of the jet stream to pick up speed; planes flying the other way go closer to the pole and/or at different altitudes, to avoid having the jet stream slow them down, or to benefit from other prevailing winds.

I note that hurricanes are driven by the same forces as the jet stream, just more localized; tornadoes the same, more localized still. A localized flow of this sort can pick stuff up; here’s how they pick stuff up. Robert Buxbaum, May 22, 2018.

Alkaline batteries have second lives

Most people assume that alkaline batteries are one-time-only, throwaway items. Some have used rechargeable cells, but these are Ni-metal hydride or Ni-Cads, expensive variants that have lower power densities than normal alkaline batteries and are almost impossible to find in stores. It would be nice to be able to recharge ordinary alkaline batteries, e.g. when a smoke alarm goes off in the middle of the night and you find you’re out. People assume this is impossible. People assume incorrectly.

Modern alkaline batteries are highly efficient, more efficient than even a few years ago, and that always suggests reversibility. Unlike the acid batteries you learned about in high school chemistry class (basic chemistry due to Volta), the chemistry of modern alkaline batteries is based on Edison’s alkaline car batteries. They have been tweaked to the extent that even the non-rechargeable versions can be recharged. I’ve found I can reliably recharge an ordinary 9V alkaline cell at least once, using the crude means of a standard 12 V car battery charger and watching the amperage closely. It only took 10 minutes. I suspect I can get nine lives out of these batteries, but have not tried.

To do this experiment, I took a 9 V alkaline that had recently died and, finding I had no replacement, attached it to a 6 Amp, 12 V car battery charger that I had on hand. I would have preferred a 2 Amp charger, ideally one designed to output 9-10 V, but a 12 V charger is what I had available, and it worked. I let it charge for only 10 minutes because, at that amperage, I calculated I would have restored the full 1 Amp-hr capacity. Since new alkaline batteries only claim 1 Amp-hr, I figured more charge would likely do bad things, perhaps even cause the thing to blow up. After 5 minutes, I found that the voltage had returned to normal and the battery worked fine with no bad effects, but I went for the full 10 minutes. Perhaps stopping at 5 would have been safer.

I charged for 10 minutes (1/6 hour) because the battery claimed a capacity of 1 Amp-hour when new. My thought: 1 Amp-hour = 1 Amp for 1 hour = 6 Amps for 1/6 hour, or ten minutes. That’s engineering math for you, the reason engineers earn so much. I figured that watching the recharge for ten minutes was less work and quicker than running to the store (20 minutes). I used this battery in my fire alarm, and have tested it twice since then to see that it works. After a few days in the fire alarm, I took the battery out and checked that the voltage was still 9 V, just as when it was new. Confirming experiments like this are a good idea. Another confirmation occurred when I overcooked some eggs and the alarm went off from the smoke.
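
That math, as a two-line Python function; the 2 Amp case is there just for comparison:

```python
def charge_minutes(capacity_amp_hr, charger_amps):
    """Minutes to replace a battery's rated charge at a given current."""
    return capacity_amp_hr / charger_amps * 60

print(charge_minutes(1.0, 6.0))   # 6 Amp car charger: 10 minutes
print(charge_minutes(1.0, 2.0))   # a gentler 2 Amp charger would need 30 minutes
```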

If you want to experiment, you can try a 9V as I did, or try putting a 1.5 volt AA or AAA battery in a charger designed for rechargeables. Another thought is to see what happens when you overcharge. Keep safe: do this in a wood box, outside, at a distance, but I’d like to know how close I got to having an exploding Energizer. It would also be worthwhile to try several charge/discharge cycles to see how the energy content degrades. I expect you can get ~9 recharges from a “non-rechargeable” alkaline battery because the label says “9 lives,” but even getting a second life from each battery is a significant savings. One last experiment: if you’ve got a cell phone charger that works on a car battery, and you get the polarity right, you’ll find you can use a 9V alkaline to recharge your iPhone or Android. How do I know? I judged a science fair not long ago, and a 4th grader did this for her science fair project.

Robert Buxbaum, April 19, 2018. For more: semi-dangerous electrochemistry and biology experiments.

What drives the gulf stream?

I’m not much of a fan of today’s kids’ science books, because they don’t teach science, IMHO. They have nice pictures and a few numbers, but almost no equations and lots of words. You can’t do science that way. On the odd occasion that they give the right answer to some problem, the lack of math means the kid has no way of understanding the reasoning, and no reason to believe the answer. Professional science articles on the web are bad in the opposite direction: too many numbers, and for math they rely on supercomputers, so no human can understand the outcome. I like to use my blog to offer science with insight, the type you’d get in an old “everyman science” book.

In previous posts, I gave answers to why the sky is blue, why it’s cold at the poles, why it’s cold on mountains, how tornadoes pick stuff up, and why hurricanes blow the way they do. In this post, we’ll try to figure out what drives the gulf-stream. The main argument will be deduction — disproving things that are not driving the gulf stream to leave us with one or two that could. Deduction is a classic method of science, well presented by Sherlock Holmes.


The gulf stream. The speed in the white area is ≥ 0.5 m/s (1.1 mph.).

For those who don’t know, the Gulf Stream is a massive river of water that runs within the Atlantic Ocean. As shown at right, it starts roughly at the tip of Florida, runs north to the Carolinas, and then turns dramatically east toward Spain. Flowing east, it’s about 150 miles wide, but only about 62 miles (100 km) wide when flowing along the US coast. According to some of the science books of my youth, this massive flow was driven by temperature; according to others, by salinity (whatever that means); and according to yet others, by wind. My conclusion: they had no clue.

As a start to doing the science here, it’s important to fill in the numerical information the science books left out. The Gulf Stream is roughly 1000 meters deep, with a typical speed of 1 m/s (2.2 mph). The maximum speed is at the surface, where the stream flows along the US coast: about 2.5 meters per second (5.6 mph); see the map above.

From the size and speed of the Gulf Stream, we can conclude that land rivers do not drive the flow. The Mississippi is a big river with an outflow near the headwaters of the Gulf Stream, but its volume of flow is vastly too small. The volume flow of the Gulf Stream is roughly

Q = w·d·v = 100,000 × 1000 × 0.5 = 50 million m³/s ≈ 1.8 billion cubic feet/s.

This is nearly 3000 times the volume flow of the Mississippi, 18,000 m³/s. The great difference in flow suggests the Mississippi could not be the driving force. The map of flow speeds (above) also suggests rivers do not drive the flow: the Gulf Stream does not flow at its maximum speed near the mouth of any river. We now look for another driver.
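
The comparison in Python, using the numbers above:

```python
width, depth, speed = 100_000, 1000, 0.5      # m, m, m/s
Q = width * depth * speed                     # 50 million m³/s
mississippi = 18_000                          # Mississippi outflow, m³/s
print(f"Gulf Stream: {Q:.2g} m³/s = {Q * 35.3:.2g} ft³/s")
print(f"ratio to the Mississippi: {Q / mississippi:.0f}x")   # ~2800x
```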

Moving on to temperature. Temperature drives the whirl of hurricanes, and the logic for temperature driving the Gulf Stream is as follows: it’s warm at the equator and cold at the poles; warm things expand; and since water flows downhill, the poles will always be downhill from the equator. Let’s put some math in here, or my explanation will be lacking. First, let’s consider how much height difference we might expect to see. The thermal expansivity of water is about 2×10⁻⁴ m/m·°C (0.0002/°C) in the relevant temperature range. To calculate the amount of expansion, we multiply this by the depth of the stream, 1000 m, and by the temperature difference between two points, e.g. the tip of Florida and the Carolina coast; this difference is 5°C (9°F), I estimate. I calculate the temperature-induced seawater height as:

∆h (thermal) ≈ 5° × 0.0002/° × 1000 m = 1 m (3.3 feet).

This is a fair amount of height. It’s only about 1/100 the height driving the Mississippi River, but it’s something. To see if 1 m is enough to drive the Gulf flow, I’ll compare it to the velocity head. Velocity head is a concept that’s useful in plumbing (I ran for water commissioner). It’s the potential-energy height equivalent of a flow’s kinetic energy. The kinetic energy for any velocity v and mass of water m is ½mv²; the potential energy equivalent is mgh. Combine the two, cancel the mass terms, and we have:

∆h (velocity) = v²/2g,

where g is the acceleration of gravity. For v = 1 m/s and g = 9.8 m/s², ∆h (velocity) = 1/(2 × 9.8) ≈ 0.05 m, about 2 inches. This is far less than the driving force calculated above; we have some 20 times more driving force than we appear to need. But there is a problem: why isn’t the flow faster? And why does the Mississippi move so slowly when it has 100 times more head?

To answer the above questions, and to check whether heat could really drive the Gulf Stream, we’ll check whether the flow is turbulent — it is. The measure of turbulence is the Reynolds number, Re, the ratio of kinetic energy to viscous loss in a fluid flow. Flows are turbulent when this ratio is more than 3000 or so:

Re = vdρ/µ.

In the above, v is velocity, 1 m/s; d is depth, 1000 m; ρ is density, 1000 kg/m³ for water; and µ = 0.00133 Pa∙s is the viscosity of water. Plug in these numbers, and we find Re = 750 million: this flow is highly turbulent. Assuming a friction factor of 1/20 (0.05), we expect complete mixing every 20 depths, or 20 km; that is, we need the 0.05 m of velocity head above to drive every 20 km of flow up the US coast. If the distance to the Carolina coast is 1000 km, we need (1000/20) × 0.05 m = 2.5 m of head, the same order as the 1 m that the temperature difference provides. Temperature is thus a plausible driving force for the 0.5 m/s flow, though not likely for the faster 2.5 m/s flow seen in the center of the stream. Turbulent flow is a big part of figuring the mpg of an automobile, by the way; it becomes rapidly more important at high speeds.
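
Here’s the whole back-of-envelope collected in one Python sketch, using the numbers above:

```python
# Thermal head from expansion of a 1000 m deep column warmed 5°C
h_thermal = 0.0002 * 5 * 1000            # ≈ 1 m

# Velocity head at 1 m/s
v, g = 1.0, 9.8
h_velocity = v**2 / (2 * g)              # ≈ 0.05 m

# Reynolds number: Re = v * d * rho / mu
Re = v * 1000 * 1000 / 0.00133           # ≈ 7.5e8, highly turbulent

# Head needed to push turbulent flow 1000 km, one velocity head per 20 depths
h_needed = h_velocity * 1_000_000 / (20 * 1000)   # ≈ 2.5 m
print(h_thermal, h_velocity, f"{Re:.2g}", h_needed)
```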


World sea salinity. The maximum and minimum are in the wrong places.

What about salinity? For salinity to drive the flow, the salinity would have to be higher at the end of the flow. As a model, we might imagine that arctic seawater freezes, concentrating salt in the water just below the ice. This heavy, saline water would sink to the bottom of the sea and flow south toward an area of low salinity and low pressure; somewhere in the south, the salinity would be reduced by rain. (If evaporation were to exceed the rains, the flow would go in the other direction.) Sorry to say, I see no evidence of any of this. For one, the end of the Gulf Stream is not that far north; there is no freezing. Two other problems: there are major rains in the Caribbean, and rains too in the North Atlantic. Finally, the salinity head is too small. Each per-mil (‰) of salinity adds about 0.0001 g/cc to the density, and the salinity difference here is less than 1 ‰; let’s say 0.5 ‰:

∆h (salinity) = 0.0001 × 0.5 × 1000 m = 0.05 m.

I don’t see a case for northern-driven Gulf-stream flow caused by salinity.


Surface level winds in the Atlantic. Trade winds in purple, 15-20 mph.

Now consider winds. The wind velocities are certainly enough to produce 5+ mph flows, and the path of the flows is appropriate. Consider, for example, the trade winds. In the southern Caribbean, they blow steadily from east to west, slightly above the equator, at 15-20 mph. This could certainly drive a 4.5 mph circulation flow to the north. Out of the Caribbean basin and along the eastern US coast, the winds blow north and east at 15-50 mph. This too would easily drive a 4.5 mph flow. I conclude that a combination of winds and temperature are the most likely drivers of the Gulf Stream flow. To quote Holmes: once you’ve eliminated the impossible, whatever remains, however improbable, must be the truth.

Robert E. Buxbaum, March 25, 2018. I used the thermal argument above to figure out how cold it had to be to freeze the balls off of a brass monkey.

Beyond oil lies … more oil + price volatility


One of many best-selling books by Kenneth Deffeyes

While I was at Princeton, one of the most popular courses was Geology 101, taught by Dr. Kenneth S. Deffeyes. It was a sort of “Rocks for Jocks,” but had an unusual bite, since Dr. Deffeyes focused particularly on the geology of oil. Deffeyes had an impressive understanding of oil and oil production, and one outcome of this understanding was his certainty that US oil production had peaked in 1970, and that world oil was about to run out too. The prediction that US oil production had peaked was not original to him: it was called Hubbert’s peak, after King Hubbert, who correctly predicted (rationalized?) the date. What Deffeyes added to Hubbert’s analysis was a simplified mathematical justification and a new prediction: that world oil production would peak in the 1980s, or 2000, and then run out fast. By 2005, the peak date had been fixed to November 24 of that same year: Thanksgiving Day 2005, ± 3 weeks.

As with any prediction of global doom, I was skeptical, but I generally trusted the experts, and virtually every expert was on board, predicting gloom in the near future. A British group, The Institute for Peak Oil, picked 2007 for the oil to run out, and several movies expanded the theme, e.g. Mad Max. I was convinced enough to direct my PhD research to nuclear fusion engineering, fusion being presented as the essential salvation if our civilization was to survive beyond 2050 or so. I’m happy to report that the dire predictions of this mathematics did not come to pass, at least not yet. To quote Yogi Berra, “In theory, theory is just like reality.” Still, I think it’s worthwhile to review the mathematical thinking, see what went wrong, and see if some value might be retained from the rubble.

Deffeyes’s Malthusian proof went like this: take a year-by-year history of the rate of production, P, and divide it by the amount of oil known to be recoverable in that year, Q. Plot this P/Q data against Q, and you find the data follow a reasonably straight line: P/Q = b - mQ. This holds between 1962 and 1983, or between 1983 and 2005; for whichever straight line you pick, m and b are positive. Once you find values for m and b that you trust, you can rearrange the equation to read:

P = -mQ² + bQ

You then calculate the peak of production as the point where dP/dQ = 0. With a little calculus, you’ll see this occurs at Q = b/2m, that is, at P/Q = b/2, the half-way point on the P/Q vs Q line. If you extrapolate the line to zero production, P = 0, you predict the total possible oil production, QT = b/m; according to this model, that is always double the total discovered by the peak. In 1983, QT was calculated to be 1 trillion barrels. By May of 2005, again predicted to be a peak year, QT had grown to 2 trillion barrels.
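
Here’s the linearization as a short Python sketch. The production history is made up for illustration; it is not Deffeyes’s data:

```python
import numpy as np

# Hubbert linearization: fit P/Q = b - m*Q, then peak at Q = b/2m, total Q_T = b/m.
# Illustrative, made-up numbers, not Deffeyes's data.
Q = np.array([200.0, 400.0, 600.0, 800.0])   # cumulative production, billion bbl
P = np.array([18.0, 26.0, 29.0, 27.0])       # annual production, billion bbl/yr

slope, b = np.polyfit(Q, P / Q, 1)           # fitted slope is -m
m = -slope
print(f"peak at Q = {b / (2 * m):.0f}; total Q_T = {b / m:.0f} billion bbl")
```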

I suppose Deffeyes might have suspected a mistake somewhere in the calculation from the way QT had doubled, but he did not. See him lecture here in May 2005; he predicts war, famine, and pestilence, with no real chance of salvation. It’s a depressing conclusion, confidently presented by someone enamored of his own theories. In retrospect, I’d say he did not realize he was over-enamored of his own theory, and blind to the possibility that the P/Q vs Q line might curve upward, i.e., have a positive second derivative.

Aside from his theory of peak oil, Deffeyes also had a theory of oil price, one that was not all that popular. It’s not presented in the YouTube video, nor in his popular books, but it’s one I still find valuable, and plausibly true. Deffeyes claimed the wildly varying prices of the time were the result of an inherent queuing imbalance between a varying supply and an inelastic demand. If this were the cause, we’d expect the price of oil to jump up and down the way the wait-line at a barber shop gets longer and shorter. Assume supply varies because discoveries come in random packets, while demand rises steadily, and it all makes sense: after each new discovery, the price falls, then rises slowly until the next discovery. Price is seen as a symptom of supply unpredictability rather than as a useful corrective to supply needs. This view is the opposite of Adam Smith’s, but I think Deffeyes was not wrong here, at least in the short term, with a necessary commodity like oil.

Academics accepted the peak oil prediction, I suspect, in part because it supported a Marxian remedy. If oil was running out and the market was broken, then our only recourse was government management of energy production and use. By the late 70s, Jimmy Carter was telling us to turn our thermostats to 65. This went with price controls, gas rationing, a 55 mph speed limit, and a strong message of population management – birth control. We were running out of energy, we were told, because we had too many people and they (we) were using too much. America’s growth days were behind us, and only the best and the brightest could be trusted to manage our decline into the abyss. I half believed these scary predictions, in part because everyone did, and in part because they made my research at Princeton seem particularly important. The science fiction of the day told tales of bold energy leaders, and I was ready to step up and lead, or so I thought.


By 2009, Dr. Deffeyes was being regarded as Chicken Little, as world oil production continued to expand.

I’m happy to report that none of the dire predictions of the 70s to 90s came to pass. Some of my colleagues became world leaders; the rest became stockbrokers with their own private planes and SUVs. As of my writing in 2018, world oil production has been rising, and even King Hubbert’s original prediction of US production has been overturned. Deffeyes’s reputation suffered for a few years, and then politicians moved on to other dire dangers that require world-class management. Among the major dangers of today: school shootings, Ebola, and Al Gore’s claim that the ice caps would melt by 2014, flooding New York. Sooner or later, one of these predictions will come true, but the lesson I take is that it’s hard to predict change accurately.


Just when you thought US oil was depleted, production began rising. We now produce more than in 1970.

Much of the new oil production you see in the chart above comes from tar sands, oil that Deffeyes considered unrecoverable, even while it was being recovered. We also discovered new ways to extract leftover oil, and got better at using nuclear electricity and natural gas. In the long run, I expect nuclear electricity and hydrogen will replace oil. Trees have a value, as does solar. As for nuclear fusion, it has not turned out practical; see my analysis of why.

Robert Buxbaum, March 15, 2018. Happy Ides of March, a most republican holiday.

Yogurt making for kids

Yogurt making is easy, and is a fun science project for kids and adults alike. It’s cheap, quick, easy, reasonably safe, and fairly useful. Like any real science, it requires mathematical thinking if you want to go anywhere really, but unlike most science, you can get somewhere even without math, and you can eat the experiments. Yogurt making has been done for centuries, and involves nothing more than adding some yogurt culture to a glass of milk and waiting. To do this the traditional way, you wait with the glass sitting outside of any refrigeration (they didn’t have refrigeration in the olden days). After a few days, you’ll have tasty yogurt. You can get tastier yogurt if you add flavors. In one of my most successful attempts at flavoring, I added 1/2 ounce of “skinny syrup” (toffee flavor) to a glass of milk. The results were most satisfactory, IMHO.


My latest batch of home-made flavored yogurt, made in a warm spot behind this coffee urn.

Now to turn yogurt-making into a science project. We’ll begin with a hypothesis. I generally tell people not to start with a hypothesis (it biases your thinking), but here I’ll make an exception, as I have a peculiarly non-biased hypothesis to suggest. Besides, most school kids are told they need one. My hypothesis: there must be better ways to make yogurt, and worse ways. A hypothesis should be avoided if it contains unfounded assumptions, or if it points to a particular answer — especially an answer no one would care about.

As with all science, you’ll want to take numerical data of cause and effect. I’d suggest that temperature data is worth taking. The yogurt-making bacteria (Lactobacillus bulgaricus and Streptococcus thermophilus) have names that suggest warm temperatures will be good (lact = milk in Latin; thermophilic = heat-loving). Making things interesting is the suspicion that if you make things too warm, you’ll cook your organisms and get no yogurt. I’ve had this happen, with both overheating and underheating. My first attempt was to grow yogurt in the refrigerator: no results. I then tried the kitchen counter and got yogurt; I then heated things a bit more by growing it next to a coffee urn and got better yogurt; with yet more heat, nothing.

For a science project, you might want to make a few batches of yogurt, at least 5, made at 2-3 different temperatures. If temperature is a cause of the yogurt coming out better or worse, you’ll need to be able to measure how much “better” it is. You may choose to study taste, and that’s important, but it’s hard to quantify, so it should not be the whole experiment. I would begin by testing thickness, or the time to reach some fixed degree of thickness; I’d measure thickness by seeing whether a small weight sinks. A penny is a cheap, small weight, and I know it sinks in milk but not in yogurt. You’ll want to wash your penny first, or no one will eat the yogurt; I used hot water from the urn to clean and sterilize my pennies.

Another thing worth testing is the effect of different milks: whole milk, 2%, 1%, or skim; goat milk; or almond milk. You can also try adding things, or starting with different starter cultures, or different amounts of culture. Keep numerical records of these choices, then keep track of how they affect how long it takes for the gel to form, and how the result looks and tastes to you. Before you know it, you’ll have some very good product at half the price of the stuff in the store. If you really want to move forward fast, you might apply semi-random statistics to your experimental choices. Good luck.

Robert Buxbaum, March 2, 2018. My latest observation: what happens if you leave the yogurt to culture too long? It doesn’t get moldy (perhaps the lactic acid formed kills germs?), but the yogurt separates into curds and whey. I poured off the whey, the unappealing, bitter yellow liquid. The thick white remainder is called “Greek” yogurt. I’m not convinced it tastes better, or is healthier, BTW.

Keeping your car batteries alive.

Lithium-battery cost and performance have improved so much that no one uses Ni-Cad or metal hydride batteries any more. Lithium is now the choice for tools, phones, and computers, while lead-acid batteries remain the choice for car starting and emergency lights. I thought I’d write about the care and trade-offs of these two remaining options.

As things currently stand, you can buy a 12 V, 40 Amp-hr lead-acid car battery for about $95. That’s 0.48 kWh of storage, suggesting a cost of about $200/kWh, or $400/kWh if you only discharge half way (good practice). This is cheaper than the per-energy cost of lithium batteries, about $500/kWh, or $1000/kWh if you only discharge half way (also good practice), but people pick lithium because (1) it’s lighter, and (2) it’s generally longer-lasting: lithium typically survives about 2000 half-discharge cycles vs 500 for lead-acid.
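
The cost-per-cycle arithmetic as a Python sketch; the 1 kWh lithium pack size is a nominal assumption to keep the units straight:

```python
def cost_per_kwh_cycle(price_usd, capacity_kwh, cycles, usable=0.5):
    """Battery cost per usable kWh delivered, spread over the cycle life."""
    return price_usd / (capacity_kwh * usable * cycles)

# Lead-acid: $95 for 12 V x 40 Ah = 0.48 kWh, ~500 half-discharge cycles
print(cost_per_kwh_cycle(95, 0.48, 500))      # ~$0.79 per kWh-cycle
# Lithium: ~$500/kWh of capacity, ~2000 half-discharge cycles, nominal 1 kWh pack
print(cost_per_kwh_cycle(500, 1.0, 2000))     # ~$0.50 per kWh-cycle
```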

On a cost-per-cycle basis, lead-acid batteries would have been replaced completely, except that they are more tolerant of cold and heat, and they easily output the 400-800 Amps needed to start a car. Lithium batteries have problems at these currents, especially when it’s hot or cold. Lithium batteries also deteriorate fast in the heat (over 40°C, 104°F), and you cannot charge a lithium car battery at more than 3-4 Amps at temperatures below about 0°C (32°F). At higher currents, a coat of lithium metal forms on the anode. This lithium can react with water, 2Li + H₂O → Li₂O + H₂, or it can form dendrites that puncture the cell separators, leading to fire and explosion. If you charge a lead-acid battery too fast, some hydrogen can form, but that’s much less of a problem. If you are worried about hydrogen, we sell hydrogen getters and catalysts that remove it. Here’s a description of the mechanisms.

The best thing you can do to keep a lead-acid battery alive is to keep it near-fully charged. This can be done by taking long drives, by idling the car (warming it up), or by using an external trickle charger. I recommend a trickle charger in the winter because it’s non-polluting. A lead-acid battery kept near full charge will give you enough charge for 3000 to 5000 starts. If you let the battery discharge completely, you get only 50 or so deep cycles, or 1000 starts. And beware: full discharge can creep up on you. A new car battery holds 40 Ampere-hours of charge, or 72,000 Ampere-seconds if you only half discharge. Starting the car takes about 5 seconds of 600 Amps, using 3,000 Amp-s, roughly 4% of that budget. The battery recharges as you drive, but not that fast: at a net charging rate of about 6 Amps, you’d have to drive for 500 seconds (8 minutes) to replace the energy used in one start. In the winter it is common for drives to be shorter than that, and for much of your alternator power to go to the defrosters, lights, and seat heaters. As a result, your lead-acid battery will not fully charge, even on a 10-minute drive. With every week of short trips, the battery drains a little, and sooner or later you’ll find your battery is dead. Beware and recharge, ideally before 50% discharge.
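
The amp-second bookkeeping in Python; the 6 Amp net charging rate is my assumption, roughly what a trickle charger delivers:

```python
capacity_As = 40 * 3600            # 40 Amp-hr battery = 144,000 Amp-seconds
usable_As = capacity_As / 2        # half-discharge budget: 72,000 Amp-s

start_As = 600 * 5                 # one start: 600 Amps for 5 s = 3,000 Amp-s
print(f"one start = {start_As / usable_As:.0%} of the half-charge budget")

charge_amps = 6                    # assumed net charge current while driving
print(f"payback drive: {start_As / charge_amps:.0f} s")   # ~500 s, about 8 min
```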

A little chemistry will help explain why full discharge is bad for battery life (for a different version, see Wikipedia). For the first half-discharge of a lead-acid battery, the reaction is:

Pb + 2PbO₂ + 2H₂SO₄ → PbSO₄ + Pb₂O₂SO₄ + 2H₂O.

This reaction involves 2 electrons and has a -∆G° of more than 394 kJ, implying a reversible voltage of more than 2.04 V per cell, with the voltage decreasing as H₂SO₄ is used up. Any discharge forms PbSO₄ on the negative plate (the lead anode) and converts lead oxide on the positive plate (the cathode) to Pb₂O₂SO₄. Discharging beyond 50% converts this Pb₂O₂SO₄ on the cathode to PbSO₄:

Pb + Pb₂O₂SO₄ + 2H₂SO₄ → 3PbSO₄ + 2H₂O.

This also involves two electrons, but -∆G < 394 kJ, and the voltage is less than 2.04 V. Not only is the voltage less, the maximum current is less too. As it happens, Pb₂O₂SO₄ is amorphous, adherent, and conductive, while PbSO₄ is crystalline, poorly adherent, and not very conductive. Discharging beyond 50% thus gives less voltage, increased internal resistance, decreased H₂SO₄ concentration, and lead sulfate flaking off the electrode. Even letting a battery sit at low charge contributes to PbSO₄ flaking off. And if the weather is cold enough, the low-concentration H₂SO₄ freezes and the battery case cracks. My advice: get out your battery charger and top up your battery. Don’t worry about overcharging; your battery charger will sense when the charge is complete. A lead-acid battery operated at near full charge, between 67 and 100%, will provide about 1500 cycles, nearly as many as lithium.
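
For the curious, the 2.04 V figure comes from E = -∆G/(nF); a quick check in Python:

```python
F = 96485                     # Faraday constant, coulombs per mol of electrons

def cell_voltage(minus_dG_joules, n_electrons):
    """Reversible cell voltage: E = -dG / (n * F)."""
    return minus_dG_joules / (n_electrons * F)

print(cell_voltage(394_000, 2))   # ≈ 2.04 V per cell; six cells ≈ 12.2 V
```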


Trickle charging my wife’s car: good for battery life. At 6 Amps, expect a full charge to take 6 hours or more. You might want to recharge the battery in your emergency lights too. 

Lithium batteries are the choice for tools and electric vehicles, but the chemistry is different. For the longest life, lithium batteries should not be charged fully. If you charge them fully, they deteriorate and self-discharge, especially when warm (40°C, 104°F). Operated at 20°C between 75% and 25% charge, a lithium-ion battery will last some 2000 cycles; at 100% to 0%, expect only 200 cycles or so.

Tesla cars use a special type of lithium battery, lithium cobalt. Such batteries have been known to explode, but Tesla adds sophisticated electronics and cooling systems to prevent this. The Chevy Volt and Bolt use lithium batteries too, but less energy-dense ones. In either case, assuming $1000 per usable kWh and a 2000-cycle life, the battery cost of an EV is about 50¢/kWh-cycle. Add the cost of electricity, 15¢/kWh including the over-potential needed to charge, and I find a total operating cost of 65¢/kWh. EVs get about 3 miles per kWh, suggesting an energy cost of about 22¢/mile. By comparison, for a 23 mpg car using gasoline at $2.80/gal, the energy cost is 12¢/mile, about half that of the EV. For now, I stick to gasoline for normal driving; for long trips, I suggest buses, trains, and flying.
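
The per-mile arithmetic in Python, using the numbers above:

```python
battery = 0.50            # $/kWh-cycle, from the cycle-life numbers above
electricity = 0.15        # $/kWh, including charging over-potential
ev = (battery + electricity) / 3.0       # 3 miles per kWh -> ~$0.22/mile
gas = 2.80 / 23                          # $2.80/gal at 23 mpg -> ~$0.12/mile
print(f"EV: ${ev:.2f}/mile; gasoline: ${gas:.2f}/mile")
```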

Robert Buxbaum, January 4, 2018.

Why is it hot at the equator, cold at the poles?

Here’s a somewhat mathematical look at why it is hotter at the equator than at the poles. This is high school or basic college level science, using trigonometry (pre-calculus), one slight step beyond the basic statement that the sun hits more directly at the equator than at the poles. That’s the kid’s explanation; we can understand better if we add a little math.


Solar radiation hits Detroit, or any other non-equator point, at an angle. As a result, less radiation power lands per square meter.

Let’s use the diagram at right, and trigonometry, to compare the amount of sun-energy that falls on a square meter of land at the equator (0° latitude) and in a city at 42.5°N (Detroit, Boston, and Rome are all near this latitude). In each case, consider high noon on March 21 or September 20. These are the two equinox days, the only days when day and night are of equal length, and the only times when the sun’s angle from the vertical at high noon exactly equals the latitude, making the calculation easy.

More specifically, the equator is at zero latitude, so on the equator at high noon on the equinox, the sun shines from directly overhead, 0° from the vertical. Since the sun’s power in space is 1050 W/m², every square meter of equator can expect to receive 1050 W of sun-energy, less the amount reflected off clouds and dust, or scattered off air molecules (air scattering is what makes the sky blue). Further north, Detroit, Boston, and Rome sit at 42.5° latitude. At noon on March 21, the sun strikes the earth there at 42.5° from the vertical, as shown in the lower figure above. From trigonometry, you can see that each square meter of these cities receives cos 42.5° as much power as a square meter at the equator, apart from any difference in clouds, dust, etc. Without clouds, that would be 1050 × cos 42.5° = 774 W. Less sun power lands per square meter because each square meter of ground is tilted relative to the sun. Earlier and later in the day, each spot gets less sunlight than at noon, but the proportion between latitudes stays the same, at least on the equinox days.

To calculate the likely temperature in Detroit, Boston, or Rome, I will use a simple energy balance. Ignoring heat storage in the earth for now, we say that the heat in equals the heat out. We also ignore heat transfer by winds and rain, and approximate by saying that heat leaves by black-body radiation alone, radiating into the extreme cold of space. This is not a bad approximation, since black-body radiation is the main heat-removal mechanism in most situations where large distances are involved. I’ve discussed black-body radiation previously; the amount of energy radiated is proportional to emissivity, and to T⁴, where T is temperature measured on an absolute scale, Kelvin or Rankine. Based on this, and assuming the emissivity of the earth is the same in Detroit as at the equator:

TDetroit / Tequator = ⁴√(cos 42.5°) = 0.927

I’ll now calculate the actual temperatures. For American convenience, I’ll calculate in the Rankine temperature scale, the absolute Fahrenheit scale. In this scale, 100°F = 560°R, 0°F = 460°R, and the temperature of space is 0°R to a good approximation. If the average temperature of the equator is 100°F = 38°C = 560°R, we calculate that the average temperature of Detroit, Boston, or Rome should be about 0.927 × 560 = 519°R = 59°F (15°C). This is not a bad prediction, given the assumptions. We can expect the temperature to be somewhat lower at night, as there is no sunlight, but it will not fall to zero, because of heat retained from the day. The same retained heat explains why these cities are warmer on September 20 than on March 21.
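
Here’s the scaling as a small Python function; the 65° case anticipates the polar estimate below:

```python
import math

def surface_temp_F(sun_angle_deg, t_equator_R=560):
    """Radiative-balance scaling: T = T_equator * (cos angle)**0.25, returned in °F."""
    return t_equator_R * math.cos(math.radians(sun_angle_deg)) ** 0.25 - 460

print(surface_temp_F(42.5))   # Detroit, Boston, Rome: ~59°F
print(surface_temp_F(65))     # effective polar-summer angle, used below: ~-8°F
```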

In the summer, these cities are warmer because they are in the northern hemisphere and the earth’s axis is tilted 23°. At the height of summer (June 21), at high noon, the sun shines on Detroit at an angle of 42.5° - 23° = 19.5° from the vertical. This difference in angle is why these cities are warmer on that day than on March 21. The equator is actually cooler on June 21 than on March 21, since the sun’s rays strike the equator at 23° from the vertical that day. These temperature differences are behind the formation of tornadoes and hurricanes, with tornado season in the US centering on May to July.

Looking at the poles, we find a curious problem in guessing the average temperature. At noon on the equinox, the sun comes in horizontally, 90° from the vertical. We thus expect no warming power at all that day, and none for the six months of winter either. At first glance, you’d think the temperature at the poles would be zero, at least for six months of the year. It isn’t zero, because of heat retained from the summer, but that retention makes for a more difficult calculation.

To figure an average temperature for the poles, recall that during the 6-month polar summer the sun shines 24 hours per day, at an angle as high as 23° above the horizon, that is, 67° from the vertical. Let’s assume that retained heat from the summer is what keeps the temperature from falling too low in winter, and calculate the temperature from an average summer sun angle.

Let’s take 25° above the horizon (65° from the vertical) as the equivalent average angle of the sun during the 6-month “day” of the polar summer. Here I don’t look at the equinox, but at the whole solar day, noting that the heating angle stays fixed through each 24-hour day in summer; it does not decrease in the morning or as the afternoon wears on. Based on this angle, we expect that:

TPole / Tequator = ⁴√(cos 65°) = 0.806

TPole = 0.806 × 560°R = 452°R = -8°F (-22°C).

This, as it happens, is 4° colder than the measured average at the north pole, -4°F: not bad, given the assumptions. Maybe winds and water currents account for the difference. Of course there is a large temperature swing at the pole between the fall equinox and the spring equinox, but that’s to be expected. The average, -4°F, is about the winter nighttime temperature in Detroit.

One last thing, one that might be unexpected: the temperature at the south pole is lower than at the north pole, on average -44°F. The main reason is that the snow at the south pole is quite deep — more than 1 1/2 miles deep, with some rock underneath. As I showed elsewhere, we expect temperatures to be lower at high altitude. Data collected from cores drilled through this 1 1/2 miles of snow suggest (to me) chaotic temperature change, with long ice ages and brief (6,000-year) warm periods. The ice ages seem far worse than global warming.

Dr. Robert Buxbaum, December 30, 2017

Penicillin, cheese allergy, and stomach cancer


The penicillin molecule is a product of the penicillin mold

Many people believe they are allergic to penicillin — it’s the most commonly reported drug allergy — but several studies have shown that most folks who think they are allergic are not. Perhaps they once were, but when people who thought they were allergic were tested, virtually none showed an allergic reaction. In a test of 146 presumably allergic patients at McMaster University, only two had their penicillin allergy confirmed; 98.6% of the patients tested negative. A similar study at the Mayo Clinic tested 384 pre-surgical patients with a history of penicillin allergy; 94% tested negative and were given clearance to receive penicillin antibiotics before, during, and after surgery. Read a summary here.


Orange showing three different strains of the penicillin mold; some of these are toxic.

This is very good news. Penicillin is a low-cost, low side-effect antibiotic, effective against many diseases including salmonella, botulism, gonorrhea, and scarlet fever. The penicillin molecule is a common product of nature, produced by a variety of molds, e.g. on the orange at right, and in cheese. It is thus something people have been exposed to, whether they realize it or not.

Penicillin allergy is a deadly danger for the few who really are allergic, and it’s worthwhile to find out if that means you. The good news: that penicillin is found in common cheeses suggests, to me, a simple test for penicillin allergy. Anyone who suspects penicillin allergy and does not have a general dairy allergy can try eating an appropriate cheese: brie, blue, camembert, or Stilton. That is, any of the cheeses made with Penicillium molds. If you don’t break out in a rash or suffer stomach cramps, you’re very likely not allergic to penicillin.

There are differences between cheeses, so if you have problems with Roquefort but not with brie or camembert, there’s still a good chance you’re not allergic to penicillin. Brie and camembert have a white, fuzzy mold coat of Penicillium camemberti. This mold exudes penicillin — not in enough quantity to cure gonorrhea, but enough to give taste, avoid spoilage, and test for allergy. Danish blue and Roquefort, shown below, have a different look and a sharper flavor. They’re made with the blue-green Penicillium roqueforti. This mold produces penicillin, but also a small amount of a neurotoxin, roquefortine C. It’s not enough to harm most people, but it could cause a reaction in folks who are not allergic to penicillin. Don’t eat a moldy orange, by the way; some forms of that mold produce a lot of neurotoxin.

For people who are not allergic, a thought I had: one could perhaps treat heartburn or ulcers with cheese; perhaps even cancer? H. pylori, the bacterium associated with heartburn, is effectively treated by amoxicillin, a penicillin variant. If a penicillin variant kills the bacterium, it seems plausible that penicillin cheese might too. And since amoxicillin is found to reduce the risk of gastric cancer, it’s reasonable to expect that penicillin, or penicillin cheese, might be cancer-protective. To my knowledge, this has never been studied, but it seems worth considering. The other, standard treatment for heartburn, pantoprazole/Protonix, is known to cause osteoporosis and to increase the risk of cancer, and it doesn’t taste as good as cheese.


The blue in blue cheese is Penicillium roqueforti. Most people are not allergic.

Penicillin was discovered by Alexander Fleming, who noticed that a single spore of the mold killed the bacteria near it on a Petri dish. He tried to produce significant quantities of the drug from the mold, with limited success, but was able to halt disease in patients, and to interest others who had more skill in large-scale fungus growing. Kids looking for a good science fair project might consider penicillin growing, penicillin allergy, treatment of stomach ailments using cheese, or anything else related to the drug. Three Swedish journals declared penicillin the most important discovery of the last 1000 years. It would be cool if the dilute form, the one available in your supermarket, could be shown to treat heartburn and/or cancer. Another drug you could study is lysozyme, a chemical found in tears, in saliva, and in human milk (but not in cow milk). Alexander Fleming found that tears kill bacteria, as penicillin does; lysozyme, the active ingredient, is currently used to treat animals, but not humans.

Robert Buxbaum, November 9, 2017. Since starting work on this essay I’ve been eating blue cheese. It tastes good and seems to cure heartburn. As a personal note: my first science fair project (4th grade) involved growing molds on moistened bread. For an incubator, I used the underside of our home radiator. The location kept my mom from finding the experiment and throwing it out.

Magnetic separation of air

As some of you will know, oxygen is paramagnetic: it is attracted slightly by a magnet. Oxygen’s paramagnetism is due to the two unpaired electrons in every O₂ molecule. Oxygen has a triple-bond structure, as discussed here (much of the chemistry you were taught is wrong). Virtually every other common gas is diamagnetic, repelled by a magnet: nitrogen, water, CO₂, and argon are all diamagnetic. As a result, you can do a reasonable job of extracting oxygen from air with a magnet. This is awfully cool, and could make for a good science fair project, if anyone is of a mind.

But first some math, or physics if you like. To a good approximation, the magnetization of a material is M = CH/T, where M is magnetization, H is magnetic field strength, C is the Curie constant for the material, and T is absolute temperature.

Ignoring for now the difference between entropy and internal energy, and thinking only in terms of the work derived by lowering a magnet toward a volume of gas, we can say that the work extracted, and thus the decrease in energy of the magnetic gas, is ∫H dM = MH/2. At constant temperature and pressure, then, ∆G = -CH²/2T.

The maximum magnetization you’re likely to get with any permanent magnet (not achieved to date) is about 50 Tesla, or 40,000 ampere-meters. At 20°C (293 K), the per-mol magnetic susceptibility of oxygen is 1.34×10⁻⁶, suggesting a Curie constant of C = 1.34×10⁻⁶ × 293 = 3.93×10⁻⁴. Applying these values to oxygen in a 50 Tesla magnet at 20°C, we find an energy difference ∆G of 1072 J/mol = RT ln ß, where ß is the concentration-ratio factor between the O₂ content of the magnetized and un-magnetized gas, ß = C₁/C₂.

At 20°C (293 K), this gives ß = 1.6, and thus the maximum oxygen concentration you’re likely to get is about 1.6 × 21% = 33%. It’s slightly more than this thanks to nitrogen’s diamagnetism, but that effect is too small to matter. What does matter is that 33% O₂ is a good concentration for a variety of medical uses.
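
Here’s the calculation in Python, following the numbers and unit conventions above; treat it as a consistency check of the arithmetic rather than as rigorous SI:

```python
import math

R = 8.314                 # gas constant, J/mol-K
T = 293                   # 20°C in Kelvin
C = 1.34e-6 * T           # Curie constant from the susceptibility, ~3.93e-4
H = 40_000                # field figure used above, in the text's units

dG = C * H**2 / (2 * T)                  # ≈ 1072 J/mol
beta = math.exp(dG / (R * T))            # concentration ratio, ≈ 1.6
print(f"∆G = {dG:.0f} J/mol; ß = {beta:.2f}; max O2 ≈ {0.21 * beta:.0%}")
```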

I show below my simple design for a magnetic O₂ concentrator. The dotted line is a permeable membrane of no particular selectivity; with a little O₂ permeability, the design will work better. All you need besides is a blower or pump; a coffee filter could serve as the membrane.

This design is as simple as the standard O₂ concentrators based on semi-permeable membranes, but it should require less pressure differential — just enough to overcome the magnet. Less pressure means the blower can be smaller, less noisy, and less energy-hungry. I figure this could be really convenient for people who need portable oxygen. With current magnets, it would take 4-5 stages, or low temperatures, to reach the 33% concentration, but even so this design could have commercial use, I’d think.

On the theoretical end, an interesting thing concerns the entropy of the magnetized oxygen. (Please ignore this paragraph if you have not learned statistical thermodynamics.) While you might imagine that magnetization decreases entropy, other things being equal, because the molecules are somewhat aligned with the field, I’ve come to realize that, at fixed temperature and pressure, the entropy is likely higher: a sea of semi-aligned molecules will have a slightly higher heat capacity than non-aligned molecules because the vibrational Cp is higher. Thus, unless I’m wrong, the gas will be slightly cooler in the magnetic area than outside it. Temperature and pressure are not the same within the separator as outside, by the way; the blower is something of a compressor, though a much less energy-intense one than is used for most air separators. Because of the blower, both the magnetized and the non-magnetized air will be slightly warmer than the surroundings (∆T = blower work/Cp). This heat is mostly lost when the gas leaves the system and flows to lower pressure; both gas streams exit essentially at room temperature. Again, this is not the case with the classic membrane-based oxygen concentrators — there the nitrogen-rich stream is notably warm.

Robert E. Buxbaum, October 11, 2017. I find thermodynamics wonderful, both as science and as an analog for society.