Category Archives: Science: Physics, Astronomy, etc.

Zombie invasion model for surviving plagues

Imagine a highly infectious, people-borne plague for which there is no immunization or ready cure, e.g. leprosy or smallpox in the 1800s, or bubonic plague in the 1500s assuming that the carrier was fleas on people (there is a good argument that people-fleas were the carrier, not rat-fleas). We’ll call these plagues zombie invasions to highlight that there is no way to cure these diseases or protect against them aside from quarantining the infected or killing them. Classical leprosy was treated by quarantine.

I propose to model the progress of these plagues to know how to survive one, if it should arise. I will follow a recent paper out of Cornell that highlighted a fact, perhaps forgotten in the 21st century, that population density makes a tremendous difference in the rate of plague-spread. In medieval Europe plagues spread fastest in the cities because a city dweller interacted with far more people per day. I’ll attempt to simplify the mathematics of that paper without losing any of the key insights. As often happens when I try this, I’ve found a new insight.

Assume that the density of zombies per square mile is Z, and the density of susceptible people is S in the same units, susceptible population per square mile. We define a bite transmission likelihood, ß, so that dS/dt = -ßSZ. The total rate of susceptibles becoming zombies is proportional to the product of the density of zombies and of susceptibles. Assume, for now, that the plague moves fast enough that we can ignore natural death, immunity, or the birth rate of new susceptibles. I’ll relax this assumption at the end of the essay.

The rate of zombie increase will be less than the rate of susceptible population decrease because some zombies will be killed or rounded up. Classically, zombies are killed by shotgun fire to the head, by flame-throwers, or are removed to leper colonies. However zombies are removed, the process requires people. We can say that dR/dt = kSZ, where R is the density per square mile of removed zombies, and k is the rate factor for killing or quarantining them. From the above, dZ/dt = (ß-k)SZ.

We now have three coupled, non-linear differential equations. As a first step to solving them, we set the derivatives to zero and calculate the end result of the plague: what happens as t –> ∞. Using just equation 1 and setting dS/dt = 0, we see that, since ß≠0, the end result is SZ = 0. Thus, there are only two possible end-outcomes: either S = 0 and we’ve all become zombies, or Z = 0 and all the zombies are dead or rounded up. Zombie plagues can never end in mixed live-and-let-live situations. Worse yet, rounded-up zombies are dangerous.

If you start with a small fraction of infected people, Z0/S0 << 1, the equations above suggest that the outcome depends entirely on k/ß. If zombies are killed or rounded up faster than they infect and bite, all is well. Otherwise, all is zombies. A situation like this is shown in the diagram below for a population of 200 and k/ß = 0.6.
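For readers who’d like to check these dynamics numerically, here is a minimal sketch of the SZR model in normalized form (s = S/N, z = Z/N, τ = tNß). The function name and step size are my own choices, not from the Cornell paper:

```python
# Forward-Euler integration of the normalized SZR equations:
#   ds/dτ = -s·z,  dz/dτ = (1 - k/β)·s·z,  dr/dτ = (k/β)·s·z
# where s = S/N, z = Z/N, r = R/N, and τ = t·N·β is dimensionless time.

def simulate_szr(s0, z0, k_over_beta, tau_max=50.0, dtau=0.001):
    s, z, r = s0, z0, 0.0
    for _ in range(int(tau_max / dtau)):
        flux = s * z * dtau              # bites in this time step
        s -= flux                        # susceptibles bitten
        z += (1.0 - k_over_beta) * flux  # new zombies, less those removed
        r += k_over_beta * flux          # zombies killed or rounded up
    return s, z, r

# 199 susceptibles and 1 zombie, k/β = 0.6, as in the figure
s, z, r = simulate_szr(199/200, 1/200, 0.6)
print(f"s = {s:.4f}, z = {z:.4f}, r = {r:.4f}")
```

By τ of about 25 the susceptible fraction is essentially gone, matching the figure, and the quantity z + (1 – k/ß)s stays fixed throughout the run, as the conservation argument below requires.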


Fig. 1, Dynamics of a normal plague (light lines) and a zombie apocalypse (dark) for 199 uninfected and 1 infected. The S and R populations are shown in blue and black respectively. Zombie and infected populations, Z and I , are shown in red; k/ß = 0.6 and τ = tNß. With zombies, the S population disappears. With normal infection, the infected die and some S survive.

Sorry to say, things get worse for higher initial ratios, Z0/S0 >> 0. For these cases, you can kill zombies faster than they infect you, and the last susceptible person will still be infected before the last zombie is killed. To analyze this, we create a new parameter, P = Z + (1 – k/ß)S, and note that dP/dt = 0 for all S and Z; the path of possible outcomes will always lie along a path of constant P. We already know that, for any zombies to survive, S = 0. We now use algebra to show that the final concentration of zombies will be Z = Z0 + (1 – k/ß)S0. Free zombies survive so long as the following quantity is greater than zero: Z0/S0 + 1 – k/ß. If Z0/S0 = 1, a situation that could arise if a small army of zombies breaks out of quarantine, you’ll need a high kill ratio, k/ß > 2, or the zombies take over. It is thus harder to stop a zombie outbreak than to stop the original plague. This is a strong motivation to kill any infected people you’ve rounded up, a moral dilemma that appears in some plague literature.
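The algebra above is easy to check with a couple of one-line functions (the names are mine):

```python
# From the conserved quantity P = Z + (1 - k/β)·S:
# if the zombies win, S → 0, so the final density is Z0 + (1 - k/β)·S0.
# That value is positive (zombies survive) unless k/β ≥ 1 + Z0/S0.

def final_zombies(z0, s0, k_over_beta):
    return z0 + (1.0 - k_over_beta) * s0

def kill_ratio_needed(z0, s0):
    return 1.0 + z0 / s0

print(kill_ratio_needed(1.0, 1.0))      # equal numbers: need k/β > 2
print(final_zombies(1.0, 199.0, 0.6))   # one zombie in 200, k/β = 0.6
```

Note that for the one-in-200 outbreak with k/ß = 0.6, the final free-zombie density is 1 + 0.4 × 199 = 80.6 in the same per-square-mile units, while raising k/ß above 1 + 1/199 would wipe the zombies out.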

Figure 1, from the Cornell paper, gives a sense of the time necessary to reach the final state of S = 0 or Z = 0. For k/ß of 0.6, we see that it takes a dimensionless time τ of about 25 to reach this final, steady state of all zombies. Here, τ = tNß and N is the total population; it takes more real time to reach τ = 25 if N is low than if N is high. We find that the best course in a zombie invasion is to head for the country hoping to find a place where N is vanishingly low, or (better yet) where Z0 is zero. This was the main conclusion of the Cornell paper.

Figure 1 also shows the progress of a more normal disease, one where a significant fraction of the infected die on their own or develop a natural immunity and recover. As before, S is the density of the susceptible and R the density of the removed + recovered, but here I is the density of those infected by the non-zombie disease. The time-scales are the same, but the outcome is different. As before, the end state is reached near τ = 25, but now the infected are entirely killed off or isolated, I = 0, even though ß > k. Some non-infected, susceptible individuals survive as well.

From this observation, I now add a new conclusion, not from the Cornell paper. It seems clear that more immune people will be in the cities. I’ve also noted that τ = 25 will be reached faster in the cities, where N is large, than in the country where N is small. I conclude that, while you will be worse off in the city at the beginning of a plague, you’re likely better off there at the end. You may need to get through an intermediate zombie zone, and you will want to get the infected to bury their own, but my new insight is that you’ll want to return to the city at the end of the plague and look for the immune remnant. This is a typical zombie story-line; it should be the winning strategy if a plague strikes too. Good luck.

Robert Buxbaum, April 21, 2015. While everything I presented above was done with differential calculus, the original paper showed a more-complete, stochastic solution. I’ve noted before that difference calculus is better. Stochastic calculus shows that, if you start with only one or two zombies, there is still a chance to survive even if ß/k is high and there is no immunity. You’ve just got to kill all the zombies early on (gun ownership can help). Here’s my statistical way to look at this. James Sethna, lead author of the Cornell paper, was one of the brightest of my Princeton PhD chums.

Much of the chemistry you learned is wrong

When you were in school, you probably learned that understanding chemistry involved understanding the bonds between atoms. That all the things of the world were made of molecules, and that these molecules were fixed-proportion combinations of the chemical elements held together by one of the 2 or 3 types of electron-sharing bonds. You were taught that water was H2O, that table salt was NaCl, that glass was SiO2, and that rust was Fe2O3, and perhaps that the bonds involved an electron transferring from an electron-giver: H, Na, Si, or Fe… to an electron-acceptor: O or Cl above.

Sorry to say, none of that is true. These are fictions perpetrated by well-meaning, and sometimes ignorant, teachers. All of the materials mentioned above are grand polymers. Any of them can have extra or fewer atoms of any species, and as a result the stoichiometry isn’t quite fixed. They are not molecules at all in the sense you knew them. Also, ionic bonds hardly exist. Not in any chemical you’re familiar with. There are no common electron-transfer compounds. The world works almost entirely on covalent, shared bonds. If bonds were ionic you could separate most materials by direct electrolysis of the pure compound, but you cannot. You cannot, for example, make iron by electrolysis of rust, nor can you make silicon by electrolysis of pure SiO2, or titanium by electrolysis of pure TiO. If you could, you’d make a lot of money and titanium would be very cheap. On the other hand, the fact that stoichiometry is rarely fixed allows you to make many useful devices, e.g. solid oxide fuel cells — things that should not work based on the chemistry you were taught.


Iron -zinc forms compounds, but they don’t have fixed stoichiometry. As an example the compound at 68-80 atom% Zn is, I guess Zn7Fe3 with many substituted atoms, especially at temperatures near 665°C.

Because most bonds are covalent, many compounds form that you would not expect. Most metal pairs form compounds with unusual stoichiometric compositions. Here, for example, is the phase diagram for zinc and iron, the materials behind galvanized sheet metal: iron that does not rust readily. The delta phase has a composition between 85 and 92 atom% Zn (8 and 15 a% iron). Perhaps the main compound is Zn5Fe2, not the sort of compound you’d expect, and it has quite variable composition.

You may now ask why your teachers didn’t tell you this sort of stuff, but instead told you a pack of lies and half-truths. In part it’s because we don’t quite understand this ourselves. We don’t like to admit that. And besides, the lies serve a useful purpose: they give us something to test you on. That is, a way to tell if you are a good student. The good students are those who memorize well and spit our lies back without asking too many questions of the wrong sort. We give students who do this good grades. I’m going to guess you were a good student (congratulations, so was I). The dullards got confused by our explanations. They asked too many questions, and asked, “can you explain that again?” or “why?” We get mad at these dullards and give them low grades. Eventually, the dullards feel bad enough about themselves to allow themselves to be ruled by us. We graduates who are confident in our ignorance rule the world, but inventions come from the dullards who don’t feel bad about their ignorance. They survive despite our best efforts. A few more of these folks survive in the west, and especially in America, than survive elsewhere. If you’re one, be happy you live here. In most countries you’d be beheaded.

Back to chemistry. It’s very difficult to know where to start to un-teach someone. Let’s start with EMF and ionic bonds. While it is generally easier to remove an electron from a free metal atom than from a free non-metal atom, e.g. from a sodium atom rather than an oxygen atom, removing an electron is always energetically unfavored, for all atoms. Similarly, while oxygen takes an extra electron more easily than iron would, adding an electron is also energetically unfavored. The figure below shows the classic ionic bond (left) and two electron-sharing options (center, right): one is a bonding option, the other anti-bonding. Nature prefers electron sharing to ionic bonds, even with blatantly ionic elements like sodium and chlorine.

Bond options in NaCl. Note that covalent is the stronger bond option though it requires less ionization.


There is a very small degree of ionic bonding in NaCl (left picture), but in virtually every case, covalent bonds (center) are easier to form and stronger when formed. And then there is the key anti-bonding state (right picture). The anti-bond is hardly ever mentioned in high school or college chemistry, but it is critical: it’s this bond that keeps all matter from shrinking into nothingness.

I’ve discussed hydrogen bonds before. I find them fascinating since they make water wet and make life possible. I’d mentioned that they are just like regular bonds except that the quantum hydrogen atom (proton) plays the role that the electron plays. I now have to add that this is not a transfer, but covalent sharing. The H atom (proton) divides up like the electron did in the NaCl above. Thus, two water molecules are attracted by having partial bits of a proton halfway between the two oxygen atoms. The proton does not stay put at the center there, but bobs between them as a quantum cloud. I should also mention that the hydrogen bond has an anti-bond state just like the electron bond above. We were never “taught” the hydrogen bond in high school or college — fortunately — and that’s how I came to understand them. My professors at Princeton saw hydrogen atoms as solid. It was their ignorance that allowed me to discover new things and get a PhD. One must be thankful for the folly of others: without it, no talented person could succeed.

And now I get to really weird bonds: entropy bonds. Have you ever noticed that meat gets softer when it’s aged in the freezer? That’s because most of the chemicals of life are held together by a sort of anti-bond called entropy, or randomness. The molecules in meat are unstable energetically, but actually increase the entropy of the water around them by their formation. When you lower the temperature, that entropic gain shrinks, and the inherent energetic instability of the bonds causes them to let go. Unfortunately, this happens only slowly at low temperatures, so you’ve got to age meat to tenderize it.

A nice thing about the entropy bond is that it is not particularly specific. A consequence of this is that all protein bonds are more-or-less the same strength. This allows proteins to form in a wide variety of compositions, but also means that deuterium oxide (heavy water) is toxic — it has a different entropic profile than regular water.

Robert Buxbaum, March 19, 2015. Unlearning false facts one lie at a time.

Brass monkey cold

In case it should ever come up in conversation, only the picture at left shows a brass monkey. The other is a bronze statue of some sort of a primate. A brass monkey is a rack used to stack cannon balls into a face-centered pyramid. A cannon crew could fire about once per minute, and an engagement could last 5 hours, so you could hope to go through a lot of cannon balls during an engagement (assuming you survived).


Small brass monkey. The classic monkey might have 9 x 9 or 10×10 cannon balls on the lower level.


Bronze sculpture of a primate playing with balls — but look what the balls are sitting on: it’s a dada art joke.

But brass monkeys typically show up in conversation in the phrase “cold enough to freeze the balls off of a brass monkey,” and if you imagine an ornamental statue, you’d never guess how cold that could be. Well, for a cannonball holder, the answer has to do with the thermal expansion of metals. Cannon balls were made of iron and the classic brass monkey was made of brass, an alloy with a much greater thermal expansion than iron. As the temperature drops, the brass monkey contracts more than the iron balls. When the drop is enough, the balls will fall off and roll around.

The thermal expansion coefficient of brass is 18.9 x 10^-6/°C while the thermal expansion coefficient of iron is 11.7 x 10^-6/°C. The difference is 7.2 x 10^-6/°C; this will determine the key temperature. Now consider a large brass monkey, one with 400 x 400 holes on the lower level, 399 x 399 at the second, and so on. Though it doesn’t affect the result, we’ll consider a monkey that holds 12 lb cannon balls, a typical size of 1750-1830. Each 12 lb ball is 4.4″ in diameter at room temperature, 20°C in those days. At 20°C, this monkey is about 1760″ wide. The balls will fall off when the monkey shrinks more than the balls by about 1/3 of a diameter, 1.5″.

We can calculate ∆T, the temperature change, °C, that is required to lower the width-difference by 1.5″ as follows:


-1.5″ = ∆T x 1760″ x 7.2 x 10^-6/°C

We find that ∆T = -118°C. The temperature where this happens is 118 degrees cooler than 20°C, or -98°C. That’s a temperature you could perhaps reach at the South Pole or maybe in deepest Russia. It’s not likely to be a problem, especially with a smaller brass monkey.
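In code, the same calculation (all coefficients are the ones quoted above; the variable names are mine):

```python
# Temperature drop at which the 400-ball-wide brass monkey sheds its balls:
# the brass base must shrink 1.5" more than the iron balls do.
alpha_brass = 18.9e-6    # thermal expansion of brass, per °C
alpha_iron  = 11.7e-6    # thermal expansion of iron, per °C
width       = 1760.0     # monkey width in inches at 20°C (400 x 4.4")
slack       = 1.5        # inches, about 1/3 of a ball diameter

dT = -slack / (width * (alpha_brass - alpha_iron))
print(f"ΔT = {dT:.0f}°C; balls fall off at {20 + dT:.0f}°C")
```

Run it and you get the ∆T of about -118°C quoted above, i.e. balls rolling loose near -98°C.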

Robert E. Buxbaum, February 21, 2015 (modified Apr. 28, 2021). Some fun thoughts: Convince yourself that the key temperature is independent of the size of the cannon balls; that is, that I didn’t need to choose 12 pounders. A bit more advanced: what is the equation for the number of balls on any particular base-size monkey? Show that the packing density is no more efficient if the bottom layer were an equilateral triangle, and not a square. If you liked this, you might want to know how much wood a woodchuck chucks if a woodchuck could chuck wood, or on the relationship between mustaches and WWII diplomacy.
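As a hint for the ball-count question: a square-pyramid stack with an n x n base holds the sum of squares 1² + 2² + … + n² = n(n+1)(2n+1)/6 balls. A one-function sketch:

```python
# Cannon balls in a square-pyramid stack with an n x n base:
# one ball on top, 2x2 below it, down to n x n on the bottom layer.
def balls(n):
    return n * (n + 1) * (2 * n + 1) // 6

print(balls(10))    # the classic 10 x 10 monkey: 385 balls
print(balls(400))   # the giant monkey of the example above
```

The closed form agrees with simply summing the squares layer by layer, and shows the classic 10 x 10 monkey holds 385 balls.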

You are what you eat?

The simplest understanding of this phrase is that you should eat good, healthy foods to be healthy, and that this will make you healthy in body and mind.


The author of this book against unhealthy foods faked his analysis to support the book.

Clearly there is some truth to this. Crazy people look crazy and often eat crazy. Even ‘normal’ people, if they eat too much, are likely to become fat, lazy, and sick. There is a socio-economic effect (fat people earn less), and there is physiological evidence that gut bacteria affect anxiety and depression (at least in rats). My sense, though, is that these effects appear only at the dietary extremes. There is little or no evidence to suggest you can make yourself more intelligent (or kind, or good) by eating more of the right stuff, or just the right foods in just the right amounts. A better diet can make you look better, but there is a core lie at work when you extend this to imply that the real you is your body, or so tied to your body that a healthy mind can not be found in a sickly body. Most evidence is that the mind is the real you, and (following Socrates) that beautiful minds are often found in sickly bodies. I’ve seen few (basically, no) healthy poets, writers, or great artists. Neither can I recall scientists of note who lived without smoking, drinking, or other bad habits. Many creative people did drugs. George Orwell smoked cigarettes and died of TB, but wrote well to the end. There is no evidence that bad writing or thinking can be improved by health foods. Stupid is as stupid does, and many healthy people are clearly dolts.

Not that it’s always clear what constitutes good health, or what constitutes good food for health, or what constitutes a good mind. Skinny people may be admired and may earn more, but it is not clear they are healthy. Euell Gibbons, the natural food guru, died young of stomach cancer. Adelle Davis, another ‘eat right to be healthy’ author, died of brain cancer. And Jim Fixx, “the running doctor,” died young of a heart attack while running. It may be that their health foods killed them, and that unhealthy foods, like chocolate and coffee, can be good for you. It’s likely a question of balance. While a person who dresses well will feel better, the extreme is probably no good. Very often, a person is drawn after his self-image to become the person he pretends to be. Show me a man who eats only vegetarian, and I’ll show you someone who sees himself as spiritual, or wants to be seen as spiritual. And that man is likely to be drawn to acting spiritual. Among the vegetarians you find Einstein, George B. Shaw, and Gandhi, people who may have been spiritual from the start, but may also have been kept spiritual by their diets. You also find Hitler: spirituality can take all sorts of forms.


Ward Sullivan in the New Yorker. People eat, drink, and dress like who they are. And people become like those they eat drink and dress like.

Choice of diet also helps select the people you run into. If you eat vegetarian, you’re likely to associate with other vegetarians, and you will likely behave like them. If you eat Chinese, Greek, or Mexican food, you’re likely to associate with these communities and behave like them. Similarly, an orthodox Jew or Moslem is tied to his community with every dinner and every purchase from the kosher or halal store.

And now we come to the bizarre science of bio-systems. Each person is a complex bio-system, with more non-human DNA than human, and more non-human cells than human. A person has a vast army of bugs on him, and a similarly vast pool of bugs within him. Recent research suggests that what we eat affects this bio-system, and through it our mental state. Whatever the mechanism, show me someone who drinks only 30-year Scotch or 40-year-old French wine, and I’ll show you a food snob. By contrast, show me someone who eats good, cheap food, and drinks good, cheap wine or Scotch (Lauder’s or Dewar’s), and I’ll show you a decent person very much like myself, a clever man who either is a man of the people or who wants to be known as one. “Dis-moi ce que tu manges, je te dirai ce que tu es.” [Tell me what you eat and I will tell you what you are.]

Robert E. Buxbaum, February, 2015. My 16-year-old daughter asked me to write on this topic. Perhaps she didn’t know what it meant, or how true I thought it was, or perhaps she liked my challenges of being 16.

Is college worth no cost?

While a college degree gives most graduates a salary benefit over high school graduates, a study by the Bureau of Labor Statistics indicates that the benefits disappear if you graduate in the bottom 25% of your class. Worse yet, if you don’t graduate at all you can end up losing salary money, especially if you go into low-paying fields like child development or physical sciences.


The average college graduate earns significantly more than a high school grad, but not if you attend a pricy school, or graduate in the bottom 1/4 of your class, or have the wrong major.

Most people realize that earnings differ greatly depending on your field of study. Graduates in engineering and medicine do fairly well financially, while even top graduates in child development or athletic sciences can barely justify the college and opportunity costs (worse if they go to an expensive college). What isn’t always realized is that not all those who enter these fields graduate. For them, there is a steep loss when the four (or more) years of lost income are considered.


If you don’t graduate, or get only an AA or 2-year degree, the increase in wages is minimal, and you lose years of working time plus whatever your education cost. The loss is particularly high if you study social science fields at an expensive college and don’t graduate, or if you graduate at the bottom of your class.

A report from the New York Federal Reserve finds that the highest-paying major is petroleum engineering, mid-career salary $176,300/yr, and the bottom is child development, mid-career salary $36,400/yr (click to check on your major). I’m not sure most students or advisors are aware of the steep salary difference, or that college can have a salary down-side if one picks the wrong major or does not complete the degree. In terms of earnings, you might be better off avoiding even a free college degree in these areas unless you’re fairly sure you’ll complete the degree, or you really want to work in these fields.


Top earning majors: Majors that pay.

Of course college can provide more than money: knowledge, for instance, and learning: the ability to reason better. But these benefits are likely lost if you don’t work at it, or don’t go into a field you love. They can also come through hard, self-taught reading. In either case, it is the work habits that will make you grow as a person and leave you more employable. Tough colleges add a lot by exposure to new people and new ways of thinking about great books, and by forced experience in writing essays — but these benefits too are work-dependent and college-dependent. If you work hard at understanding a great book, it will show. If you didn’t work at it, or only exposed yourself to easier fare, that too will show.

As students don’t like criticism, and as good criticism is hard to give — and harder to give well — many less-demanding colleges give little or no critical feedback, especially to disadvantaged students. This disadvantages them even more, as criticism is an important part of learning. If all you get is a positive experience, a nice campus, and a dramatic graduation, this is not learning. Nor is it necessarily worth 4-5 years of your life.

As a comic take on the high time-cost of a liberal arts education, “Father” Guido Sarducci of Saturday Night Live describes his “5 minute college experience.” To a surprising extent, it provides everything you’ll remember of a 4-year college experience in 5 minutes, including math, history, political science, and language (Spanish). For those who are not sure they will complete a liberal arts education, Father Sarducci’s 5 minutes may be a better investment than a free 4 years in community college.

Robert. E. Buxbaum. January 21-22, 2015. My sense is that the better part of education is what you get when you don’t get what you want.

Can you spot the man-made climate change?

As best I can tell, the only constant in climate is change. As an example, the record of northern temperatures for the last 10,000 years, below, shows nothing but major ups and downs following the end of the last ice age 9,500 years ago. The only pattern, if you call it a pattern, is fractal chaos. Politicos like to concentrate on the recent 110 years, from 1890 to 2000. This is the small up-line at the right, but they ignore the previous 10,000 or more, ignore the fact that the last 17 years show no change, and ignore the variation within those 110 years (they call it weather). I find I can not spot the part of the change that’s man-made.


10,000 years of northern climate temperatures based on Greenland ice cores. Dr. Ole Humlum, Dept. of Geosciences, University of Oslo. Can you spot the part of the climate change that’s man-made?


Steven Colbert makes his case for belief: If you don’t believe it you’re stupid.

Steven Colbert makes the claim that man-made climate change is so absolutely apparent that all the experts agree, and that anyone who doubts it is crazy, stupid, or politically motivated (he, of course, is not). Freeman Dyson, one of the doubters, is not normally considered crazy or stupid. The approach reminds me of “The Emperor’s New Clothes”: only the good, smart people see it. The same people used to call it “global warming,” based on a model prediction of man-made warming. The name was changed to “climate change” since the planet isn’t warming. The model predicted strong warming in the upper atmosphere, but that isn’t happening either; ski areas are about as cold as ever (we’ve got good data from ski areas).

I note that the climate on Jupiter has changed too in the last 100 years. A visible sign of this is that the great red spot has nearly disappeared. But it’s hard to claim that’s man-made. There’s a joke here, somewhere.


Jupiter’s red spot has shrunk significantly. Here it is now. NASA

As a side issue, it seems to me that some global warming could be a good thing. The periods that were warm had peace and relative plenty, while periods of cold, like the little ice age 500 years ago, were times of mass starvation and plague. Similarly, things were a lot better during the medieval warm period (1000 AD) than during the dark ages, 500-900 AD. The Roman warm period (100 BC-50 AD) was again warm and (relatively) civilized. Perhaps we owe some of the good food production of today to the warming shown on the chart above. Civilization is good. Robert E. Buxbaum, January 14, 2015. (Corrected January 19; I’d originally labeled Steven Colbert as Jon Stewart.)

 

Our expanding, black hole universe

In a previous post I showed a classical derivation of the mass-to-size relationship for black holes and gave evidence to suggest that our universe (all the galaxies together) constitutes a single, large black hole. Everything is inside the black hole and nothing is outside but empty space. This is consistent because you can see outside from inside a black hole; it’s only others, outside, who can not see in (Finkelstein, Phys. Rev. 1958). Not that there appear to be others outside the universe, but if there were, they would not be able to see us.

In several ways, having a private, black-hole universe is a gratifying thought. It provides privacy and a nice answer to an easily proved conundrum: that the universe is not infinitely big. The black-hole universe ends as the math requires, but not with a brick wall, as in the Hitchhiker’s Guide (one of badly-laid brick). There are one or two problems with this nice, tidy solution. One is that the universe appears to be expanding, and black holes are not supposed to expand. Further, the universe appears to be bigger than it should be, suggesting that it expanded faster than the speed of light at some point. Its radius now appears to be 40-46 billion light years despite the universe appearing to have started as a point some 14 billion years ago. That these are deeply disturbing questions does not stop NASA and Nova from publishing the picture below for use by teachers. This picture makes little sense, but it’s found in Wikipedia and most newer books.


Standard picture of the big bang theory: A period of faster than light expansion (inflation) then light-speed, accelerating expansion. NASA, and Wikipedia.

We think the creation event occurred some 14 billion years ago because we observe that the majority of galaxies are receding from us at a rate proportional to their distance from us. From this proportionality between the rate of motion and the distance, we conclude that we were all in one spot some 14 billion years ago. Unfortunately, some of the most distant galaxies are really dim — dimmer than they would be if they were only 14 billion light years away. The model “explains” this by a period of inflation, during which the universe expanded faster than the speed of light. The expansion then slowed, but is now accelerating again, not slowing as would be expected if it were held back by the gravity of the galaxies. Why hasn’t the speed of the galaxies slowed, and how does the faster-than-light part work? No one knows. Like Dr. Who’s Tardis, our universe is bigger on the inside than seems possible.
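The 14-billion-year figure follows directly from the proportionality: if velocity = H x distance, every galaxy was at our position a time 1/H ago. A quick check, using the commonly quoted H of about 70 km/s per megaparsec (the constants below are standard values, not from this post):

```python
# Age of the universe from Hubble's law: v = H·d, so t = d/v = 1/H.
H          = 70.0       # Hubble constant, km/s per megaparsec
km_per_Mpc = 3.086e19   # kilometers in one megaparsec
sec_per_yr = 3.156e7    # seconds in one year

t_years = km_per_Mpc / H / sec_per_yr
print(f"1/H ≈ {t_years / 1e9:.1f} billion years")
```

This "Hubble time" comes out near 14 billion years, which is why the dim, seemingly more-distant galaxies are such a problem for the simple picture.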

Einstein’s oscillating universe: it expands and contracts at some (large) frequency. Oscillations would explain why the universe is near-uniform, but not why it’s so big or moving outward so fast.

Einstein’s preferred view was of an infinite-space universe where the mass within expands and contracts. He joked that two things were infinite, the universe and stupidity… see my explanation... In theory, gravity could drive the regular contractions to an extent that would turn entropy backward. Einstein’s oscillating model would explain how the universe is reasonably stable and near-uniform in temperature, but it’s not clear how his universe could be bigger than 14 billion light years across, or how it could continue to expand as fast as it does. A new view, published this month, suggests that there are two universes, one going forward in time, the other backward. The backward-in-time part of the universe could be antimatter, or regular matter running entropy backward (that’s how I understand it — if it’s antimatter, we’d run into it all the time). Random other ideas float through the physics literature: that we’re connected to other space through a black hole/worm hole, perhaps to many other universes by many worm holes in fractal chaos; see, for example, Physics Reports, 1992.

The forward-in-time expansion part of the two universes model. This drawing, like the first, is from NASA.

For all I know, there are these many black-hole tunnels to parallel universes. Perhaps the universal constant and all these black-hole tunnels are windows on quantum mechanics. At some point the logic of the universe seems as perverse as in the Hitchhiker’s Guide.

Something I didn’t mention yet is the Higgs boson, the so-called God particle. As in the joke, it’s supposed to be responsible for mass. The idea is that all particles have mass only by interaction with these near-invisible Higgs particles. Strong interactions with the Higgs are what make particles heavier, while weaker-interacting particles are perceived to have less gravity and inertia. But this seems to me to be the sort of theory that Einstein’s relativity and the 1919 eclipse put to rest. There is no easy way for a particle model like this to explain the relativistic warping of space-time. Without mass being able to warp space-time, you’d see various degrees of light bending around the sun, and preferential gravity in the direction of our planet’s motion: things we do not see. We’re back in 1900, looking for some plausible explanation for the uniform speed of light and the Lorentz contraction of space. As likely an explanation as any: The Hitchhiker’s Guide to the Galaxy.

Dr. r µ ßuxbaum. December 20, 2014. The meaning of the universe could be 42 for all I know, or just pickles down the worm hole. No religion seems to accept the 14 billion year old universe, and for all I know the God of creation has a wicked sense of humor. Carry a towel and don’t think too much.

Statistics of death and taxes — death on tax day

Strange as it seems, Americans tend to die in road accidents on tax day. This deadly day is April 15 most years, but some years April 15th falls on a weekend and the fatal tax day shifts to April 16 or 17. Whatever weekday it is, about 8% more people die on the road on tax day than on the same weekday a week earlier or a week later; data courtesy of the US highway safety bureau and two statisticians, Redelmeier and Yarnell, 2014.

Forest plot of individuals in fatal road crashes for the 30 years to 2008 on US highways (Redelmeier and Yarnell, 2014). X-axis shows relative increase in risk on tax days compared to control days, expressed as an odds ratio. Y-axis denotes subgroup (results for the full cohort in the final row). Column data are counts of individuals in crashes (there are twice as many control days as tax days). Analytic results are 95% confidence intervals with control days as referent. Results show increased risk on tax day for the full cohort, a similar increase for 25 of 27 subgroups, and all confidence intervals overlapping the main analysis. Recall that odds ratios are reliable estimates of relative risk when event rates are low from an individual driver’s perspective. Dividing the experimental subjects into groups is a key trick of experimental design.

To confirm that the relation isn’t a fluke, the result of well-timed ice storms or football games, the traffic death data was broken down into subgroups by time, age, region, etc. (see figure). Each group showed more deaths than the average of the same day a week before and after.

The cause appears unrelated to paying the tax bill, as such. The increase is near equal for men and women, with alcohol and without, and for those over 18 and under (presumably those under 18 don’t pay taxes). The death increase isn’t concentrated at midnight either, as might be expected if the cause were people rushing to the post office. The consistency through all groups suggests this is not a quirk of non-normal data, nor a fluke, but a direct result of tax day itself. Redelmeier and Yarnell suggest that stress — the stress of thinking of taxes — is the cause.
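An 8% excess corresponds to an odds ratio of roughly 1.08. For the curious, here is a minimal sketch of how such an odds ratio and its 95% confidence interval are computed from crash counts; the counts below are hypothetical, illustrative numbers, not Redelmeier and Yarnell’s data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio (a/b)/(c/d) with a Woolf-method 95% confidence interval.
    a, c: crash deaths on tax days and on control days;
    b, d: an exposure base (e.g. uneventful driver-days) for each."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # std. error of ln(odds ratio)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se)
    return odds_ratio, lower, upper

# Hypothetical counts: twice as many control days as tax days, so the
# control exposure base is doubled to match.
print(odds_ratio_ci(6800, 1_000_000, 12600, 2_000_000))
```

With these made-up counts the odds ratio comes out near 1.08, with a confidence interval that excludes 1.0, which is the shape of result the paper reports.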

Though stress seems a plausible explanation, I’d like to see if other stress-related deaths are more common on tax day — heart attack or stroke. I have not done this, I’m sorry to say, and neither have they. General US death data is not tabulated day by day. I’ve done a quick study of Canadian tax-day deaths though (unpublished) and I’ve found that, for Canadians, Canadian tax day is even more deadly than US tax day is for Americans. Perhaps heart attack and stroke data is available day by day in Canada (?).

Robert Buxbaum, December 12, 2014. I write about all sorts of stuff. Here’s my suggested, low-stress income tax structure, and a way to reduce or eliminate income taxes: tariffs; they worked until the Civil War. Here’s my thought on why old people have more fatal car accidents per mile driven.

Seniors cause accidents, but need to get places too

Seniors are often made fun of for confusion and speeding, but it’s not clear they speed, and it is clear they need to get places. Would reduced speed limits help them arrive alive?

Seniors have more accidents per mile traveled than middle-aged drivers. As shown on the chart below, older Canadians, 75+, get into seven times more fatal accidents per mile than 35 to 55 year olds. At first glance, this would suggest they are bad drivers who should be kept from the road, or at least made to drive slower. But I’m not so sure they are bad drivers, and am pretty certain that lower speed limits should not be generally imposed. I suspect that a lot of the problem comes from comparing seniors, on a per-mile basis, with folks who drive long distances on the same superhighways; seniors instead take longer, leisurely drives on country roads. I suspect that, on a per-hour basis, the seniors would look a lot safer, and on a per-highway-mile basis they might look identical to younger drivers.

Deaths per billion km. Canadian Vehicle Survey, 2001, Statistics Canada; includes drivers of light duty vehicles.

Another source of misunderstanding, I find, is that comparisons tend to overlook how very low the accident rates are. The fatal accident rate for 75+ year old drivers sounds high when you report it as 20 deaths per billion km. But that’s 50,000,000 km between fatalities, or roughly one fatality for each 1,250 drives around the earth. In absolute terms it’s nothing to worry about. Old folks driving produces far fewer deaths per km than 12-29 year olds walking, and fewer deaths per km than 16-19 year olds driving.
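The conversion behind that claim is easy to check; a quick sketch, taking the earth’s circumference as 40,075 km:

```python
deaths_per_billion_km = 20                        # fatal-crash rate, 75+ drivers
km_per_fatality = 1e9 / deaths_per_billion_km     # = 50,000,000 km
earth_circumference_km = 40_075
drives_around_earth = km_per_fatality / earth_circumference_km
print(f"{km_per_fatality:,.0f} km per fatality, "
      f"about {drives_around_earth:,.0f} trips around the earth")
```

This comes out to about 1,250 trips around the earth between fatalities.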

When starting to research this essay, I thought I’d find that the high death rates were the result of bad reaction times for the elderly. I half expected to find that reduced speed limits for them helped. I’ve not found any data directly related to reduced speeds, but now think that lowered speed limits would not help them any more than anyone else. I note that seniors drive for pleasure more than younger folks and do a lot more short errand drives too — to the stores, for example. These are places where accidents are more common. By contrast, 40 to 70 year olds drive more miles on roads that are relatively safe.

Don’t walk, especially if you’re old. Netherlands data, 2001-2005, fatalities per billion km.

The Netherlands data above suggest that any proposed solution should not involve getting seniors out of their cars. Not only do seniors find walking difficult, statistics suggest walking is 8 to 10 times more dangerous than driving, and bicycling is little better. A far better solution, I suspect, is reduced speeds for everyone on rural roads. If you’re zipping along a one-lane road at the posted 40, 55, or 60 mph and someone backs out of a driveway, you’re toast. The high posted speeds on these roads pose a particular danger to bicyclists and motorcyclists of all ages – and these are folks who I suspect drive a lot on the rural roads. I suspect that a 5 mph reduction would do quite a lot.

For automobiles on super-highways, it may be worthwhile to increase the speed limits. As things are now, the accident fatality rates are near zero, and the main problem may be the time wasted behind the wheel – driving from place to place. I suspect that an automobile speed limit raise to 80 mph would make sense on most US and Canadian superhighways; it’s already higher on the Autobahn in Germany.

Robert Buxbaum, November 24, 2014. Expect an essay about death on tax-day, coming soon. I’ve also written about marijuana, and about ADHD.

A simple, classical view of and into black holes

Black holes are regions of the universe where gravity is so strong that light cannot emerge. And, since the motion of light is related to the fundamental structure of space and time, they must also be regions where space curves in on itself, and where time appears to stop — at least as seen by us, from outside the black hole. But what does space-time look like inside the black hole?

NASA’s semi-useless depiction of a black hole — one they created for educators. Though it’s sort of true, I’m not sure what you’re supposed to understand from this. I hope to present a better version.

From our outside perspective, an object tossed into a black hole will appear to move slower as it approaches the hole, and at the hole horizon it will appear to have stopped. From the inside of the hole, the object appears to just fall right in. Some claim that tidal force will rip it apart, but I think that’s a mistake. Here’s a simple, classical way to calculate the size of a black hole, and to understand why things look like they do and do what they do.

Let’s begin with light and accept, for now, that light travels in particle form. We call these particles photons; they have both an energy and a mass, and mostly move in straight lines. The energy of a photon is related to its frequency by way of Planck’s constant: E = hν, where E is the photon energy, h is Planck’s constant, and ν is frequency. The photon mass is related to its energy by way of the formula m = E/c², a formula that is surprisingly easy to derive, and more often shown as E = mc². The version that’s relevant to photons and black holes is:

m = hν/c².
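For a sense of scale, the effective mass of a single photon of visible light is tiny. A quick sketch of the two formulas above, assuming green light of 550 nm wavelength (an illustrative choice):

```python
h = 6.626e-34          # Planck's constant, J s
c = 2.998e8            # speed of light, m/s

wavelength = 550e-9    # green light, m
nu = c / wavelength    # frequency nu, Hz
E = h * nu             # photon energy E = h*nu, J
m = E / c**2           # photon mass m = h*nu/c^2, kg
print(E, m)            # roughly 3.6e-19 J and 4e-36 kg
```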

Now consider that gravity affects ν by affecting the energy of the photon. As a photon goes up, its energy and frequency go down as energy is lost to gravity. The gravitational force between a star, mass M, and this photon, mass m, is described as follows:

F = -GMm/r²

where F is force, G is the gravitational constant, r is the distance of the photon from the center of the star, and M is the mass of the star. The amount of photon energy lost to gravity as the photon rises from the surface is the integral of the force:

∆E = ∫F dr = -∫GMm/r² dr = -GMm/r°, integrating from the starting radius, r°, out to infinity.

Let’s consider a photon of original energy E° and original mass m° = E°/c². If ∆E = -m°c², all the energy of the original photon is lost and the photon disappears. Now, let’s figure out the radius, r°, such that all of the original energy, E°, is lost in rising away from the center of a star, mass M. That is, let’s calculate the r° for which ∆E = -E°. We’ll assume, for now, that the photon mass remains constant at m°.

E° = GMm°/r° = GME°/c²r°.

We now eliminate E° from the equation and solve for this special radius, r°:

r° = GM/c².

This would be the radius of a black hole if space didn’t curve and if the mass of the photon didn’t decrease as it rose. While neither of these assumptions is true, the errors nearly cancel, and the true value for r° is double the size calculated this way.

r° = 2GM/c²

r° = 2.95 km (M/Msun).
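Plugging numbers into r° = 2GM/c² is a one-line calculation; a quick check of the 2.95 km figure, assuming standard values for G, c, and the solar mass:

```python
G = 6.674e-11          # gravitational constant, m^3/(kg s^2)
c = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # mass of the sun, kg

def schwarzschild_radius_km(mass_kg):
    """Schwarzschild radius r = 2GM/c^2, returned in kilometers."""
    return 2 * G * mass_kg / c**2 / 1000

print(f"{schwarzschild_radius_km(M_SUN):.2f} km")   # ~2.95 km
```

Since the formula is linear in M, the same function handles any mass, from asteroids to galaxies.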

Karl Schwarzschild 1873-1916.

The first person to do this calculation was Karl Schwarzschild, and r° is called the Schwarzschild radius. This is the minimal radius for a star of mass M to produce closed space-time: a black hole. Msun is the mass of our sun, Sol, 2 × 10³⁰ kg. To make a black hole, one would have to compress the mass of our sun into a ball of 2.95 km radius, about the size of a small asteroid. Space-time would close around it, and light starting from the surface would not be able to escape.

As it happens, our sun is far bigger than an asteroid and is not a black hole: we can see light from the sun’s surface with minimal space-time deformation (there is some, seen in the orbit of Mercury). Still, if the mass were a lot bigger, the radius would be a lot bigger and the density would be less. Consider a black hole with the mass of our galaxy, about 1 × 10¹² solar masses, or 2 × 10⁴² kg. This number is ten times what you might expect, since our galaxy is 90% dark matter. The Schwarzschild radius for the mass of our galaxy would be 3 × 10¹² km, or 0.3 light years. That’s far bigger than our solar system, and about 1/20 the distance to the nearest star, Alpha Centauri. This is a very big black hole, though it is far smaller than our galaxy, 5 × 10¹⁷ km, or 50,000 light years, across. The density, though, is not all that high.

Now let’s consider a black hole comprising 15 billion galaxies, the mass of the known universe. The folks at Cornell estimate the sum of dark and luminous matter in the universe to be 3 × 10⁵² kg, about 15 billion times the mass of our galaxy. This does not include the mass hidden in the form of dark energy, but no one’s sure what dark energy is, or even if it really exists. A black hole encompassing this known mass would have a Schwarzschild radius of about 4.5 billion light years, or about 1/3 the actual size of the universe when size is calculated from its Hubble-constant age, 14 billion years. The universe may be 2-3 times bigger than this on the inside because space is curved and, rather like Dr. Who’s Tardis, it’s bigger on the inside; but in astronomical terms a factor of 3 or 10 is nothing: the actual size of the known universe is remarkably similar to its Schwarzschild radius, and this is without considering the mass its dark energy must have, if it exists.
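The galaxy-mass and universe-mass figures can be checked the same way, since r° scales linearly with mass. A quick sketch, using the masses quoted above and taking a light year as 9.461 × 10¹⁵ m:

```python
G = 6.674e-11          # gravitational constant, m^3/(kg s^2)
c = 2.998e8            # speed of light, m/s
LY_M = 9.461e15        # meters per light year

def schwarzschild_radius_ly(mass_kg):
    """Schwarzschild radius r = 2GM/c^2, in light years."""
    return 2 * G * mass_kg / c**2 / LY_M

print(schwarzschild_radius_ly(2e42))   # galaxy mass: ~0.3 light years
print(schwarzschild_radius_ly(3e52))   # universe mass: ~4.7 billion light years
```

With these constants the universe-mass result lands near 4.7 billion light years; given that the input masses are only one-digit estimates, that is the same ballpark as the 4.5 billion quoted above.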

Standard picture of the big bang theory. Dark energy causes the latter-stage expansion.

The evidence for dark energy is that the universe is expanding faster and faster instead of slowing (see figure). There is no visible reason for the acceleration, but it’s there. The source of the energy might be some zero-point effect but, wherever it comes from, the significant amount of energy must have significant mass, E = mc². If the mass of this energy is 3 to 10 times the physical mass, as seems possible, we are living inside a large black hole, something many physicists, including Einstein, considered extremely likely and aesthetically pleasing. Einstein originally didn’t consider the possibility that the hole could be expanding, but a reviewer of one of his articles convinced him it was possible.

Based on the above, we now know how to calculate the size of a black hole of any mass, and we know what a black hole the size of the universe would look like from the inside. It looks just like home. Wait for further posts on curved space-time. For some reason, no religion seems to embrace science’s 14-billion-year-old, black-hole universe (expanding or not). As for the tidal forces around black holes, they are horrific only for the small black holes that most people write about. If the black hole is big, the tidal forces are small.

Dr. µß Buxbaum, Nov 17, 2014. The idea for this post came from an essay by Isaac Asimov that I read in a collection called “Buy Jupiter.” You can drink to the Schwarzschild radius with my new R° cocktail.