Category Archives: Science: Physics, Astronomy, etc.

Let’s visit an earth-like planet: Trappist-1d

According to Star Trek, Vulcans and Humans meet for the first time on April 5, 2063, near the town of Bozeman, Montana. It seems that Vulcan is a relatively nearby, earth-like planet with strongly humanoid inhabitants. It’s worthwhile to speculate why they are humanoid (alternatively, how likely is it that they are), and also worthwhile to figure out which planets we’d like to visit assuming we’re the ones who do the visiting.

First things first: It’s always assumed that life evolved on earth from scratch, as it were, but it is reasonably plausible that life was seeded here by some space-traveling species. Perhaps they came, looked around and left behind (intentionally or not) some blue-green algae, or perhaps some more advanced cells, or an insect or two. A billion or so years later, we’ve evolved into something that is reasonably similar to the visiting life-form. Alternately, perhaps we’d like to do the exploring, and even perhaps the settling. The Israelis are in the process of showing that low-cost space travel is a thing. Where do we want to go this century?

As it happens, we know there are thousands of stars with planets nearby, but only one that we know of with reasonably earth-like planets reasonably near. This one planet-circling star is Trappist-1, or more properly Trappist-1A. We don't know which of the seven planets that orbit Trappist-1A is most earth-like, but we do know that there are at least seven planets, that they are all roughly earth-size, that several have earth-like temperatures, and that all of these have water. We know all of this because the planetary orbits of this star are aligned so that all seven planets cross the face of the star as seen from earth. We know their distances from their orbital times, and we know the latter from the shadows made as the planets transit. The radiation spectrum tells us there is water.

Trappist-1A is smaller than the sun, colder than the sun, and about 1 billion years older. It's what is known as an ultra-cool dwarf. I'd be an ultra-cool dwarf too, but I'm too tall. We can estimate the mass of the star and can measure its brightness. We then can calculate the temperatures on the planets based on their distance from the star, something we determine as follows:

The gravitational force of a star, mass M, on a planet of mass m, is MmG/r², where G is the gravitational constant, and r is the distance from the star to the planet. Since force = mass times acceleration, and the acceleration of a circular orbit is v²/r, we can say that, for these orbits (they look circular),

MmG/r² = mv²/r = mω²r.

Here, v is the velocity of the planet and ω is its rotational velocity, ω = v/r. Eliminating m, we find that

r³ = MG/ω².

Since we know G and ω, and we can estimate M (it's about 0.09 solar masses, we think), we can make good estimates of the distances of all seven planets from their various rotation speeds around the star, ω. We find that all of these planets are much closer to their star than we are to ours, so their years are only a few days or weeks long.
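
As a check on the formula, here is a minimal Python sketch of the orbital-radius calculation. The 4.05-day orbital period I plug in for Trappist-1d is my own assumed input, not a number from the text; the stellar mass is the rough 0.09 solar masses mentioned above.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
AU = 1.496e11          # m

M = 0.09 * M_sun       # rough mass of Trappist-1A
period_days = 4.05     # assumed orbital period of Trappist-1d
omega = 2 * math.pi / (period_days * 24 * 3600)      # rotational velocity, rad/s

r = (G * M / omega**2) ** (1.0 / 3.0)                # r^3 = MG/omega^2
print(f"orbital radius: {r:.3e} m = {r/AU:.4f} AU")  # ~0.022 AU, far closer than earth's 1 AU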

We know that three planets have temperatures reasonably close to earth's, and we know that these three also have water, based on observation of the absorption of light by their atmospheres as they pass in front of their star. To tell the temperature, we use our knowledge of how bright the star is (about 0.00055 times Sol), and our knowledge of the distance. As best we can tell, the following three of the Trappist-1 planets should have liquid surface water: Trappist-1c, d, and e, the 2nd, 3rd, and 4th planets from the star. With three planets to choose from, we can be fairly sure that at least one will be habitable by man somewhere on the planet.

The seven orbital times are in small-number ratios, suggesting that the orbits are linked into a so-called Laplace resonance-chain. For every two orbits of the outermost planet, the next one in completes three orbits, the next one completes four, followed by 6, 9, 15, and 24. The simple whole-number relationships between the periods are similar to the ratios between musical notes that produce pleasant, harmonic sounds, as I discussed here. In the case of planets, resonant ratios keep the system stable. The most earth-like of the Trappist-1 planets is likely Trappist-1d, the third planet from the star. It has an iron core, like earth, with water and a radius 1.043 times earth's. It has an estimated average temperature of 19°C (66°F). If there is oxygen, and there could well be if there is life, this planet will be very, very earth-like.

The temperature of the planet one in from this, Trappist-1c, is much warmer: on average 62°C (143°F), we think. Still, this is cool enough to have liquid water, and some plants live in volcanic pools on earth that are warmer than this. Besides, this is an average, and we might find the planet quite comfortable at the poles. The average temperature of the planet one out from this, Trappist-1e, is ice cold, -27°C (-17°F); an ice planet, it seems. Still, life can find a way. There is life at the poles of earth, and perhaps the planet was once warmer. Thus, any of these three might be home to life, even humanoid life, or three-eyed, green men.

Visiting Trappist-1A won't be easy, but it won't be out-of-hand impossible. The system is located about 39 light years away, which is far, but we already have a space ship heading out of the solar system, and we are developing better and cheaper options all the time. The Israelis have a low-cost rocket heading to the moon. That is part of the minimal technology we'd want to visit a nearby star. You'd want to add enough rocket power to reach relativistic speeds. For a typical rocket this requires a fuel whose latent energy is on the order of mc². That turns out to be about 1 GeV per atomic mass unit. The only fuel that has such high power density is matter-antimatter annihilation, a propulsion system that might have time-reversal issues. A better option, I'd suggest, is ion-propulsion with hydrogen atoms taken in during the journey and ejected behind the rocket at 100 MeV energies by a cyclotron or bevatron. This system should work if the energy for the cyclotron comes from solar power. Perhaps this is the ion-drive of Star Trek fame. To meet Star Trek's made-up history, we'd have to meet up by April, 2063: forty-four years from now. If we leave today and reach near light speed by constant acceleration for a few years, we could get there by then, but only as time is measured on the space-ship. At high speeds, time moves slower and space shrinks.
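
To put a rough number on that time-dilation claim, here is a minimal sketch assuming a ship that accelerates at a constant 1 g for the first half of the 39 light-year trip and decelerates at 1 g for the second half. The 1 g figure is my assumption, not something stated in the text; the relations are the standard constant-proper-acceleration formulas.

import math

c  = 2.998e8                    # speed of light, m/s
g  = 9.81                       # assumed constant acceleration, m/s^2
ly = 9.461e15                   # one light year, m
yr = 3.156e7                    # one year, s

d = 39 * ly                     # distance to Trappist-1
x = d / 2                       # accelerate for half the trip, decelerate for the rest

# relativistic-rocket relations for constant proper acceleration
tau_half = (c / g) * math.acosh(1 + g * x / c**2)     # ship (proper) time for one half
t_half   = (c / g) * math.sinh(g * tau_half / c)      # earth (coordinate) time for one half

print(f"ship time : {2*tau_half/yr:5.1f} years")      # about 7 years
print(f"earth time: {2*t_half/yr:5.1f} years")        # about 41 years

Roughly seven years pass for the crew, while about 41 years pass on earth.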

This planetary system is named Trappist-1 after the telescope used to discover it. It was the first system discovered by the 24-inch (60 cm) aperture TRAnsiting Planets and PlanetesImals Small Telescope. This telescope is operated by the University of Liège, Belgium, and is located in Morocco. The reason most people have not heard of this work, I think, has to do with it being European science. Our news media does an awful job covering science, in my opinion, and a worse job covering Europe, or most anything outside the US. Finally, like the Israeli moon shot, this is a low-budget project; the work to date cost less than €2 million, or about US $2.3 million. Our media seems committed to the idea that only billions of dollars (or trillions) will do anything, and that the only people worth discussing are politicians. NASA's budget today is about $6 billion, and its existence is barely mentioned.

The Trappist system appears to be about 1 billion years older than ours, by the way, so life there might be more advanced than ours, or it might have died out. And, for all we know, we'll discover that the Trappist folks discovered space travel, went on to colonize earth, and then died out. The star is located just about exactly on the ecliptic, in the constellation Aquarius. This is an astrological sign associated with an expansion of human consciousness, and a revelation of truths. Let us hope that, in visiting Trappist, "peace will guide the planets and love will steer the stars."

Robert Buxbaum, April 3, 2019. Science sources are: http://www.trappist.one. I was alerted to this star’s existence by an article in the Irish Times.

Why concrete cracks and why sealing is worthwhile

The oil tanker Palo Alto is one of several major ships made with concrete hulls.

Modern concrete is a wonderful construction material. Major buildings are constructed of it, and major dams, and even some ships. But under the wrong circumstances, concrete has a surprising tendency to crack and fail. I thought I'd explain why that happens and what you can do about it. Concrete does not have to crack easily; ancient concrete didn't, and military or ship concrete doesn't today. A lot of the fault lies in the use of cheap concrete — concrete with lots of filler — and with the cheap way that concrete is laid. First off, the major components of modern concrete are pretty uniform: sand and rock, Portland cement powder (made from cooked limestone, mostly), water, air, and sometimes ash. The cement component is what holds it all together — cements it together, as it were — but it is not the majority of even the strongest concretes. The formula of cement has changed too, but the cement is not generally the problem. It doesn't necessarily stick well to the rock or sand components of concrete (it sticks far better to itself), but it sticks well enough that spalling isn't usually a problem by itself.

What causes problems is that the strength of concrete is strongly affected (decreased) by having lots of sand, aggregate, and water. The concrete used in sidewalks is as cheap as possible, with lots of sand and aggregate. Highway and wall concrete has less sand and aggregate, and is stronger. Military and ship concrete has little sand, and is quite a lot stronger. The lowest grade, used in sidewalks, is M5, a term that refers to its compressive strength: 5 Mega Pascals. Pascals are European (Standard International) units of pressure and of strength. One Pascal is one Newton per square meter (here's a joke about Pascal units). In US (English) units, 5 MPa is about 50 atm or 750 psi.

Ratios for concrete mixes of different strength; the numbers I use are double these because these numbers don’t include water; that’s my “1”.

The ratio of dry ingredients in various concretes is shown at right. For M5, and including water, the ratio is 1 : 2 : 10 : 20. That is to say, there is one part water, two parts cement, 10 parts sand, and 20 parts stone-aggregate (all of these by weight). Added to this is 2-3% air, by volume, or nearly as much air as water. At least these are the target ratios; it sometimes happens that extra air and water are added to a concrete mix by greedy or rushed contractors. It's sometimes done to save money, but more often because the job ran late. The more the mixer turns, the more air gets added; if it turns too long, there is extra air. If the job runs late, workers will have to add extra water too, because the concrete starts hardening. If you see workers hosing down wet concrete as it comes from the truck, this is why. As you might expect, extra air and water decrease the strength of the product. M-10 and M-20 concrete have less sand, stone, and water in proportion to cement. The result is 10 MPa or 20 MPa strength, respectively.

A good on-site inspector is needed to keep the crew from adding too much water. Some water is needed for the polymerization (setting) of the concrete. The rest is excess, and when it evaporates, it leaves voids that are similar to the voids created by having air mixed in. It is not uncommon to find 6% voids in commercial concrete. This is to say that, after the water evaporates, the concrete contains about as much void as cement by volume. To get a sense of how much void space is in the normal concrete outside your house, go out to a piece of old concrete (10 years old at least) on a hot, dry day, and pour out a cup of water. You will hear a hiss as the water absorbs, and you will see bubbles come out as the water goes in. It used to be common for cities to send inspectors to measure the void content of the wet (and dry) concrete by a technique called "pycnometry" (that's Greek for density measurement). I've not seen a local city do this in years, but don't know why. An industrial pycnometer is shown below.

Pycnometer used for concrete. I don’t see these in use much any more.

One of the main reasons that concrete fails has to do with differential expansion and thermal stress, a concept I dealt with some years ago when figuring out how cold it had to be to freeze the balls off of a brass monkey. As an example of the temperature change needed to destroy M5, consider that the thermal expansion of cement is roughly 1×10⁻⁵/°F, or 1.8×10⁻⁵/°C. This is to say that a 1 meter slab of cement that is heated or cooled by 100°F will expand or shrink by 10⁻³ m; 100 × 1×10⁻⁵ = 10⁻³. This is a fairly large thermal expansion coefficient, as these things go. It would not cause stress-failure except that sand and rock have a smaller thermal expansion coefficient, about 0.6×10⁻⁵ — barely more than half the value for cement. Consider now what happens to concrete that is poured in the summer when it is 80°F out, and where the concrete heats up 100°F on setting (cement setting releases heat). Now let's come back in winter when it's 0°F, a temperature change of 100°F or more from setting. The differential expansion is 0.4×10⁻⁵/°F × 100°F = 4×10⁻⁴ meter/meter = 4×10⁻⁴ inch/inch.

The stress created by this differential expansion is the elastic modulus of the cement times the relative difference in expansion. The elastic modulus for typical cement is 20 GPa or, in English units, about 3 million psi. This is to say that, if you had a column of cement (not concrete), one psi of stress would compress it by 1/3,000,000. The differential expansion we calculated, cement vs sand and stone, is 4×10⁻⁴; this much expansion times the elastic modulus, 3,000,000, gives 1200 psi. Now look at the strength of M-5 concrete; it's only 750 psi. When M-5 concrete is exposed to these conditions it will not survive; it will fail on its own, from the temperature change, without any help needed from heavy traffic. You'd really like to see cities check the concrete, but I've seen little evidence that they do.
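
Here is that arithmetic as a small Python sketch, using the same round numbers quoted above (the coefficients and modulus are the text's values, nothing new):

# differential thermal stress between cement and its sand/stone aggregate
alpha_cement    = 1.0e-5    # thermal expansion of cement, per °F
alpha_aggregate = 0.6e-5    # thermal expansion of sand and rock, per °F
delta_T         = 100.0     # temperature swing, °F
E_cement        = 3.0e6     # elastic modulus of cement, psi

strain = (alpha_cement - alpha_aggregate) * delta_T   # 4e-4 inch/inch
stress = strain * E_cement                            # ~1200 psi

print(f"differential strain: {strain:.1e}")
print(f"thermal stress: {stress:.0f} psi vs M5 strength of 750 psi")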

Water makes things worse, and not only because it creates voids when it evaporates. Water also messes up the polymerization reaction of the cement. Basic, fast-setting cement is mostly Ca₃SiO₅, which reacts with water roughly as follows:

2 Ca₃SiO₅ + 6 H₂O → 3 CaO·2SiO₂·3H₂O + 3 Ca(OH)₂.

The former of these, 3CaO·2SiO₂·3H₂O, forms something of a polymer. Monomer units of SiO₄ are linked directly or by partially hydrated CaO linkages. Add too much water and the polymeric linkages are weakened or do not form at all. Over time, the Ca(OH)₂ can drain away or react with CO₂ in the air to form chalk.

Portland limestone cement strength versus curing time. Slow curing and damp helps; fast dry hurts. Carbonate formation adds little or no strength. Jehan Elsamni 2011.

Ca(OH)₂ + CO₂ → CaCO₃ + H₂O

Sorry to say, the chalk adds little or no strength, as the graph at right shows. Concrete made with too much water isn't very strong at all, and it gets no stronger when dried in air. Hardening goes on for some weeks after pouring, and this is the reason you don't drive on 1 to 2 day old concrete. Driving on weak concrete can cause cracks that would not form if you waited.

You might think to make better concrete by pouring it in the cold, but pouring in the cold makes things worse. Cold-poured cement will expand in the summer and detach from the sand and stone. Ideally, pouring should be done in spring or fall, when the temperature is moderate, 40-60°F. Any crack that develops grows by a mechanism called Rayleigh crack growth, described here. Basically, once a crack starts, it concentrates the fracture forces, and any wiggling of the concrete makes the crack grow faster.

Based on the above, I've come to suspect that putting on a surface coat can (could) help strengthen old concrete, even long after it's hardened. Mostly this would happen by filling in voids and cracks, but also by extending the polymer chains. I imagine it would be especially helpful to apply the surface coat somewhat watery on a dry day in the summer. In that case, I imagine that Ca₃SiO₅ and Ca(OH)₂ from the surface coat will penetrate and fill the pores of the concrete below — the same pores that hiss when you pour water on them. I imagine this would fill cracks and voids, and extend the existing calcium-silicate-hydrate chains. The coat should add strength, and should be attractive as well. At least that was my thought.

I should note that, while Portland cement is mostly Ca₃SiO₅, there is also a fair amount (25%) of Ca₂SiO₄. This component reacts with water to form the same calcium-silicate polymer as above, but does so at a slower rate, using less water per gram. My hope was that this component would be the main one to diffuse into the deep pores of the concrete, reacting there to strengthen the concrete long after surface drying had occurred.

Trump tower: 664′, concrete and glass. What grade of concrete would you use?

As it happened, I had a chance to test my ideas this summer and also about 3 years ago. The city inspector came by to say the concrete flags outside my house were rough and thus needed replacing, and that I was to pay or do it myself. Not that I quite understand the need for smooth concrete, but that's our fair city. I applied for a building permit to apply a surface coat, and applied it watery. I used "Quickrete" brand concrete patch, and so far it's sticking OK. Pock-holes in the old concrete have been filled in, and so far the surface is smooth. We'll have to see if my patch lasts 10-20 years like fresh cement. Otherwise, no matter how strong the concrete becomes underneath, the city will be upset, and I'll have to fix it. I've noticed that there is already some crumbling at the sides of the flags, something I attribute to the extra water. It's not a problem yet, but I hope this is not the beginning of something worse. If I'm wrong here, and the whole seal-coat flakes off, I'll be stuck replacing the flags, or continuing to re-coat just to preserve my reputation. But that's the cost of experimentation. I tried something new, and am blogging about it in the hope that you and I benefit. "Education is what you get when you don't get what you want." (It's one of my wise sayings.) At the worst, I'll have spent 90 lb of patching cement to get an education. And, I'm happy to say that some of the relatively new concrete flags that the city put in are already cracked. I attribute this to too much sand, air, or water (they don't look like they have much rock), and poor oversight.

Dr. Robert E. Buxbaum. March 5, 2019. As an aside, the 664-foot Trump Tower, NY, is virtually the only skyscraper in the city to be built of concrete and glass. The others are mostly steel and glass. Concrete and glass is supposed to be stiffer and quieter. The engineer overseeing the project was Barbara Res, the first woman to oversee a major NY building project. Thought question: if you built the Trump Tower, which quality of concrete would you use, and why?

Great waves, small circles, and the spread of ideas.

Simplified wave motion. GIF by Dan Russell (maybe? I think?).

The scientific method involves looking closely at things. Sometimes we look closely for a purpose — to make a better mouse-trap, say. But sometimes it’s just to understand what’s happening: to satisfy curiosity, to understand the way the world works, or to answer a child. Both motivations bring positive results, but there is a difference in how people honor the product of these motivations. Scientific knowledge developed for curiosity is considered better; it tends to become the model for social understanding, and for art and literature. Meanwhile, science developed for a purpose is considered suspect, and often that suspicion is valid. A surprising amount of our knowledge was developed for war: for the purpose of killing people, destroying things, and occupying lands.

Waves provide a wonderful example of science exploration that was developed mostly for curiosity, and so they have become models of social understanding and culture — far more so than the atom bomb and plague work discussed previously.

Waves appear magical: you poke a pond surface with a stick, and the influence of that poke travels, as if by magic, to all corners of the pond. Apparently the initial poke sets off something, and that sets off something else, and we've come to use this as a model for cultural ideas. Any major change in music, art, or cultural thought is described as a wave (and not as a disease). The sense of a wave is that a small push occurs, and the impact travels across a continent and across an ocean. The GIFs above and below show how this happens for the ordinary wave — the one with a peaked top. As shown, the bits of water do not move with the wave. Instead they just circulate in a small circle. The powerful waves that cross an ocean are composed of many small circles of water rolling in the general direction of the wave. With ideas too, I think, one person can push a second, and that second a third, each acting in his or her own circle, and a powerful transmission of ideas results. Of course, for a big wave, you need a big circle, but maybe not in cases of reflection (reflected waves can add, sometimes very destructively).

simplified wave movement

In the figures I've shown, you will notice that the top of the circle always moves in the same direction as the top of the wave. If the wave moves to the right, the circle is clockwise. There are also Rayleigh waves. In these, the top of the wave is not peaked, but broad, with little indents between ripples. For Rayleigh waves the motion is not circular, but elliptical, and the top of the ellipse moves in the opposite direction to that of the wave. These waves go slower than the normal waves, but they are more destructive. Most of the damage of earthquakes is done by the late-arriving Rayleigh waves.

If regular waves are related to fast-moving ideas, like rock n roll, Rayleigh waves might be related to slower-traveling, counter-intuitive ideas, paradigm shifts: religions, chaos, entropy, feminism, or communism. Rayleigh waves are mostly seen in solids, and the destructive power of counter-intuitive ideas is mostly seen in rigid societies.

Then there are also pressure waves, like sound, and wiggle waves (transverse waves). Pressure waves travel the fastest, and work in both solids and liquids. Wiggle waves travel slower (and don't travel in liquids). Both of these involve no circles at all, just one bit of material pushing on its neighbor. I think the economy works this way: bouncing springs, for the most part. Life is made up of all of these, and life is good. The alternative to vibration, I should mention, is stasis. Stasis is a form of death. There is a certain sort of person who longs for nothing more than an unchanging, no-conflict world: one government and one leadership. Avoid such people.

Robert Buxbaum, February 10, 2019

Why the earth is magnetic with the north pole heading south.

The magnetic north pole, also known as true north, has begun moving south. It had been moving toward the geographic north pole through the last century. It moved out of Canadian waters about 15 years ago, heading toward Russia. This year it passed as close to the North Pole as it is likely to, and has begun heading south (do svidaniya, old friend). So this might be a good time to ask "why is it moving?" or, better yet, "why does it exist at all?" Sorry to say, the Wikipedia page is little help here; what little it says looks very wrong. So I thought I'd do my thing and write an essay.

Migration of the magnetic (true) north pole over the last century; it's at 86°N and just passed the North Pole.

Your first assumption about the cause of the earth's magnetic field would involve ferromagnetism: the earth's core is largely iron and nickel, two metals that make permanent magnets. Although the earth's core is very hot, far above the "Curie Temperature" where permanent magnets form, you might imagine that some small degree of magnetizability remains. You'd be sort of right here and sort of wrong; to see why, let's take a diversion into the Curie Temperature (Pierre Curie in this case) before presenting a better explanation.

The reason there is no magnetism above the Curie temperature is similar to the reason that you can't have a plague outbreak or an atom bomb if R-naught is less than one. Imagine a magnet inside a pot of iron. The surrounding iron will dissipate some of the field because magnets are dipoles and the iron occupies space. Fixed dipole effects dissipate with a distance relation of r⁻⁴; induced dipoles with a relation of r⁻⁶. The iron surrounding the magnet will also be magnetized to an extent that augments the original, but the degree of magnetization decreases with temperature. Above some critical temperature, the surrounding dissipates more than it adds, and the effect is that the original magnetic effect will die out if the original magnet is removed. It's the same way that plagues die out if enough people are immunized, as discussed earlier.

The earth rotates, and the earth’s surface is negatively charged. There is thus some room for internal currents.

It seems that the earth's magnetic field is electromagnetic; that is, it's caused by a current of some sort. According to Wikipedia, the magnetic field of the earth is caused by electric currents in the molten iron and nickel of the earth's core. While there is likely a current within the core, I suspect that the effect is small. Wikipedia provides no mechanism for this current, but the obvious one is based on the negative charge of the earth's surface. If the charge on the surface is non-uniform, it is possible that the outer part of the earth's core could become positively charged, rather the way a capacitor charges. You'd expect some internal circulation of the liquid metal of the core, as shown above — it's similar to the induced flow of tornadoes — and that flow could induce a magnetic field. But internal circulation of the metallic core does not seem to be a likely mechanism for the earth's field. One problem: the magnitude of the field created this way would be smaller than the one caused by rotation of the negatively charged surface of the earth, and it would be in the opposite direction. Besides, it is not clear that the interior of the planet has any charge at all: the normal expectation is for charge to distribute fairly uniformly on a spherical surface.

The TV series NOVA presents a yet more unlikely mechanism: that motion of the liquid metal interior against the magnetic field of the earth increases the magnetic field. The motion of a metal in a magnetic field does indeed produce a field but, sorry to say, it's in the opposing direction, something that should be obvious from conservation of energy.

The true cause of the earth's magnetic field, in my opinion, is the negative charge of the earth and its rotation. There is a near-equal and opposite charge in the atmosphere, and its rotation should produce a near-opposite magnetic field, but there appears to be enough difference to provide for the field we see. The cause of the charge on the planet might be the solar wind or the ionization of cosmic rays. I notice that the average speed of parts of the atmosphere (the jet-stream) exceeds that of the surface, but it seems clear to me that the magnetic field is not due to rotation of the jet stream because, if that were the cause, magnetic north would be magnetic south. (When positive charges rotate from west to east, as in the jet stream, the magnetic field created is a north magnetic pole at the North Pole. But in fact the north magnetic pole is the south pole of a magnet — that's why the N-side of compasses are attracted to it.) So the cause must be negative charge rotation, or so it seems to me. Supporting this view, I note that the magnetic pole sometimes flips, north for south, but this only follows a slow decline in magnetic strength, and it never points toward a spot on the equator. I'm going to speculate that the flip occurs when the net charge reverses, though it could also come when the speed or charge of the jet stream picks up. I note that the magnetic field of the earth varies through the 24-hour day, as shown below.

The earth’s magnetic strength varies regularly through the day.

Although magnetic north is now heading south, I don’t expect it to flip any time soon. The magnetic strength has been decreasing by about 6.3% per century. If it continues at that rate (unlikely) it will be some 1600 years to the flip, and I expect that the decrease will probably slow. It would probably take a massive change in climate to change the charge or speed of the jet stream enough to reverse the magnetic poles. Interestingly though, the frequency of magnetic strength variation is 41,000 years, the same frequency as the changes in the planet’s tilt. And the 41,000 year cycle of changes in the planet’s tilt, as I’ve described, is related to ice ages.

Now for a little math. Assume there is 1 mol of excess electrons spread over the sphere of the earth. That's 96,500 Coulombs of electrons, and the effective current caused by the earth's rotation equals 96,500/(24 × 3600) = 1.1 Amp = i. The magnetic field strength is H = i N µ/L, where H is the magnetizing field in oersteds, N is the number of turns, in this case 1, and µ is the magnetizability. The magnetizability of air is 0.0125 meter-oersteds per ampere-turn, and that of a system with an iron core is about 200 times more, 2.5 meter-tesla per ampere-turn. L is a characteristic length of the electromagnet, and I'll say that's 10,000 km or 10⁷ meters. As a net result, I calculate a magnetic strength of 2.75×10⁻⁷ Tesla, or 0.00275 Gauss. The magnetic field of the earth is about 0.3 gauss, suggesting that about 100 mols of excess charge are involved in the earth's field, assuming that my explanation and my math are correct.
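
For what it's worth, here is the same back-of-the-envelope estimate as a minimal Python sketch. It follows the text's steps and numbers (the 200× iron-core factor, the 10,000 km length, and the unit bookkeeping are the estimates given above, not anything more rigorous):

F = 96500.0            # Faraday's constant: charge of one mol of electrons, Coulombs
day = 24 * 3600        # seconds in a day

i = F / day                        # effective current from one mol of rotating charge, ~1.1 A
N = 1                              # one "turn"
mu = 200 * 0.0125                  # magnetizability with an iron core, per the text
L = 1.0e7                          # characteristic length, 10,000 km in meters

B = i * N * mu / L                 # field estimate, ~2.75e-7 tesla = 0.00275 gauss
earth_field = 0.3                  # approximate field of the earth, gauss
print(f"field per mol of charge: {B:.2e} T = {B*1e4:.5f} gauss")
print(f"mols of charge needed  : {earth_field / (B*1e4):.0f}")   # on the order of 100 mols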

At this point, I should mention that Venus has about 1/100 the magnetic field of the earth despite having a molten metallic core like the earth's. Its rotation time is 243 days. Jupiter, Saturn, and Uranus have greater magnetic fields despite having no metallic cores — certainly no molten metallic cores (some theorize a core of solid, metallic hydrogen). The rotation times of all of these are faster than the earth's.

Robert E. Buxbaum, February 3, 2019. I have two pet peeves here. One is that none of the popular science articles on the earth's magnetic field bother to show math to back their claims. This is a growing problem in the literature; it robs science of science, and makes it into a political-correctness exercise where you are made to appreciate the political fashion of the writer. The other peeve, related to the above, concerns the units game: it's thoroughly confusing, and politically ego-driven. The gauss is the cgs unit of magnetic flux density; this unit is called G in Europe but B in the US or England. In the US we like to use the tesla, T, as the SI (mks) unit. One tesla equals 10⁴ gauss. The oersted, H, is the unit of magnetizing field. The unit is H and not O because the English call this unit the henry, because Henry did important work in magnetism. One ampere-turn per meter is equal to 4π×10⁻³ oersted, a number I approximated to 0.0125 above. But the above only refers to flux density; what about flux itself? The unit for magnetic flux is the weber, Wb, in SI, or the maxwell, Mx, in cgs. Of course, magnetic flux is nothing more than the integral of flux density over an area, so why not describe flux in ampere-meters or gauss-acres? It's because Ampere was French and Gauss was German, I think.

Disease, atom bombs, and R-naught

A key indicator of the speed and likelihood of a major disease outbreak is the number of people that each infected person is likely to infect. This infection number is called R-naught, or Ro; it is shown in the table below for several major plague diseases.

R-naught – infect-ability for several contagious diseases, CDC.

Of the diseases shown, measles is the most communicable, with an Ro of 12 to 18. In an unvaccinated population, one measles-infected person will infect 12-18 others: his/her whole family and/or most of his/her friends. After two weeks or so of incubation, each of the newly infected will infect another 12-18. Traveling this way, measles wiped out swaths of the American Indian population in just a few months. It was one of the major plagues that made America white.

While measles is virtually gone today, Ebola, SARS, HIV, and leprosy remain. They are far less communicable and, at the population level, far less deadly, but there is no vaccine. Because they have a low Ro, these diseases move only slowly through a population, with outbreaks that can last for years or decades.

To estimate of the total number of people infected, you can use R-naught and the incubation-transmission time as follows:

Ni = Ro^(w/wt)

where Ni is the total number of people infected at any time after the initial outbreak, w is the number of weeks since the outbreak began, and wt is the average infection to transmission time in weeks.

For measles, wt is approximately 2 weeks. In the days before vaccine, Ro was about 15, as on the table, and

Ni = 15^(w/2).

In 2 weeks, there will be 15 measles-infected people; in 4 weeks there will be 15², or 225; and in 6 generations, or 12 weeks, you'd expect to have 11.39 million. This is a real plague. The spread of measles would slow somewhat after a few weeks, as the infected more and more run into folks who are already infected or already immune. But even when the measles slowed, it still infected quite a lot faster than HIV, leprosy, or SARS (SARS is caused by a coronavirus). Leprosy is particularly slow, having a low R-naught and an infection-transmission time of about 20 years (10 years without symptoms!).
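
The growth formula is easy to check numerically. Here is a minimal Python sketch using the measles numbers above, plus the roughly 93%-vaccinated case discussed in the next paragraph (effective Ro of about 1.05):

def infected(Ro, weeks, wt=2.0):
    """Total infected after a given number of weeks: Ni = Ro**(weeks/wt)."""
    return Ro ** (weeks / wt)

# unvaccinated population, Ro = 15, two-week transmission time
for w in (2, 4, 12):
    print(f"week {w:2d}: {infected(15, w):,.0f} infected")
# week 2: 15; week 4: 225; week 12: 11,390,625 -- a real plague

# with ~93% vaccination the effective Ro drops to about 15 * 0.07 = 1.05
print(f"vaccinated, week 12: {infected(1.05, 12):.1f} infected")   # barely grows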

In America, more or less everyone is vaccinated for measles. Measles vaccine works, even if the benefits are oversold, mainly by reducing the effective value of Ro. The measles vaccine is claimed to be 93% effective, suggesting that only 7% of the people that an infected person meets are not immune. If the original value of Ro is 15, as above, the effect of immunization is to reduce the value of Ro in the US today to effectively 15 × 0.07 = 1.05. We can still have measles outbreaks, but only on a small scale, with slow-moving outbreaks going through pockets of the less-immunized. The average measles-infected person will infect only one other person, if that. The expectation is that an outbreak will be captured by the CDC before it can do much harm.

Short of a vaccine, the best we can do to stop droplet-spread diseases like SARS, leprosy, or Ebola is a face mask. Masks are worn in Hong Kong and Singapore, but have yet to become acceptable in the USA. They are a low-tech way to reduce Ro to a value below 1.0 — and if R-naught is below 1.0, the disease dies out on its own. With HIV, the main way the spread was stopped was by condoms: the same low-tech solution, applied to a sexually transmitted disease.

Progress of an Atom bomb going off. Image from VCE Physics, visit here

As it happens, the explosion of an atom bomb follows the same path as the spread of disease. One neutron appears out of somewhere, and splits a uranium or plutonium atom. Each atom produces two or three more neutrons, so that we might think that R-naught = 2.5, approximately. For a bomb, Ro is found to be a bit lower because we are only interested in fast-released neutrons, and because some neutrons are lost. For a well-designed bomb, it’s OK to say that Ro is about 2.

The progress of a bomb going off will follow the same math as above:

Nn = Ro^(t/nt)

where Nn is the total number of neutrons at any time, t is the number of nanoseconds since the first neutron hit, and nt is the transmission time — the time between when a neutron is given off and when it is absorbed, in nanoseconds.

Assuming an average neutron speed of 13 million m/s, and an average travel distance for neutrons of about 0.1 m, the time between interactions comes out to about 8 billionths of a second — 8 ns. From this, we find the number of neutrons is:

Nn = 2^(t/8), where t is time measured in nanoseconds (billionths of a second). Since 1 kg of uranium contains about 2×10²⁴ atoms, a well-designed A-bomb that contains 1 kg should take about 83 generations (2⁸³ ≈ 10²⁵) to consume 100% of the fuel. If each generation is 8 ns, as above, the explosion should take about 0.66 microseconds. The fission energy of each uranium atom is about 210 MeV, suggesting that this 1 kg bomb could release 16 billion kcal, or as much explosive energy as 16 kilotons of TNT, about the explosive power of the Nagasaki bomb (there are about 38×10⁻²⁴ kcal/eV).
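
Here is that chain-reaction arithmetic as a minimal Python sketch, using the same round numbers (Ro = 2, 8 ns per generation, 2×10²⁴ atoms per kg, 210 MeV per fission). It lands on about 81 generations and 0.65 microseconds, close to the figures above:

import math

atoms           = 2e24      # atoms in about 1 kg of uranium
gen_time        = 8e-9      # seconds per fission generation (~0.1 m at ~1.3e7 m/s)
Ro              = 2         # neutrons per fission that go on to cause another fission
MeV_per_fission = 210
kcal_per_eV     = 38e-24
kcal_per_kton   = 1e9       # one kiloton of TNT is about a billion kcal

generations = math.log(atoms, Ro)          # generations needed to split every atom
burn_time   = generations * gen_time       # a bit under a microsecond
energy_kcal = atoms * MeV_per_fission * 1e6 * kcal_per_eV

print(f"generations: {generations:.0f}")
print(f"time to consume the fuel: {burn_time*1e6:.2f} microseconds")
print(f"yield: {energy_kcal/kcal_per_kton:.0f} kilotons of TNT equivalent")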

As with disease, this calculation is a bit misleading about the ease of designing a working atomic bomb. Ro starts to get lower after a significant fraction of the atoms are split. The atoms begin to move away from each other, and some of the atoms become immune: once split, the daughter nuclei continue to absorb neutrons without giving off either neutrons or energy. The net result is that an increasing fraction of neutrons is lost to space, and the explosion dies off long before the full power is released.

Computers are very helpful in the analysis of bombs and plagues, as are smart people. The Manhattan project scientists got it right on the first try. They had only rudimentary computers but lots of smart people. Even so, they seem to have gotten an efficiency of about 15%. The North Koreans, with better computers and fewer smart people took 5 tries to reach this level of competence (analyzed here). They are now in the process of developing germ-warfare — directed plagues. As a warning to them, just as it’s very hard to get things right with A-bombs, it’s very hard to get it right with disease; people might start wearing masks, or drinking bottled water, or the CDC could develop a vaccine. The danger, if you get it wrong is the same as with atom bombs: the US will not take this sort of attack lying down.

Robert Buxbaum, January 18, 2019. One of my favorite authors, Isaac Asimov, died of AIDS, a slow-moving plague that he contracted from a transfusion. I benefitted vastly from Isaac Asimov's science and science fiction; he wrote on virtually every topic. My aim is essays that are sort-of like his, but more mathematical.

Measles, anti-vaxers, and the pious lies of the CDC.

Measles is a horrible disease, one that contributed to the downfall of the American Indian nations. It had been declared dead in the US, wiped out by immunization, but it has reappeared. A lot of the blame goes to folks who refuse to vaccinate: anti-vaxers in the popular press. The Center for Disease Control is doing its best to stop the anti-vaxers and promote vaccination for all, but in doing so, I find they present the risks of measles as worse than they are. While I'm sympathetic to the goal, I'm not a fan of bending the truth. Lies hurt the people who speak them and the ones who believe them, and they can hurt the health of immune-compromised children who are pushed to vaccinate. You will see my arguments below.

The CDC’s most-used value for the mortality rate for measles is 0.3%. It appears, for example, in line two of the following table from Orenstein et al., 2004. This table also includes measles-caused complications, broken down by type and patient age; read the full article here.

Measles complications, death rates, US, 1987-2000, CDC, Orenstein et. al. 2004.

The 0.3% average mortality rate seems more in tune with the 1800s than today. Similarly, note that the risk of measles-associated encephalitis is given as 10.1%, higher than the risk of measles-diarrhea, 8.2%. Do 10.1% of measles cases today really produce encephalitis, a horrible, brain-swelling disease that often causes death? Basically everyone in the 1950s and early 60s got measles (I got it twice), but there were only 1000 cases of encephalitis per year. None of my classmates got encephalitis, and none died. How is this possible? It was the era before antibiotics. Even Orenstein et al. comment that the measles mortality rates appear to be far higher today than in the 1940s and 50s. The article explains that the increase to 3 per thousand "is most likely due to more complete reporting of measles as a cause of death, HIV infections, and a higher proportion of cases among preschool-aged children and adults."

A far more likely explanation is that the CDC value is wrong; that the measles cases that were reported and certified as such were the most severe ones. There were about 450 measles deaths per year in the 1940s and 1950s, and 408 in 1962, the last year before the measles vaccine developed by Dr. Hilleman of Merck (a great man of science, forgotten). In the last two decades there were some 2000 US measles cases reported, but only one measles death. A significant decline in cases, but the ratio does not support the CDC's death rate. For a better estimate, I propose to divide the total number of measles deaths in 1962 by the average birth rate in the late 1950s. That is to say, I propose to divide 408 by 4.3 million births per year. From this, I calculate a mortality rate just under 0.01% in 1962. That's 1/30th the CDC number, and medicine has improved since 1962.

I suspect that the CDC inflates the mortality numbers, in part by cherry-picking its years. It inflates them further by treating "reported measles cases" as if they were all measles cases. I suspect that the reported cases in these years were mainly the very severe ones; mild-case measles clears up before being reported or certified as measles. This seems the only normal explanation for why 10.1% of cases include encephalitis, and only 8.2% diarrhea. It's also why the CDC's mortality numbers suggest that, despite antibiotics, our death rate has gone up by a factor of 30 since 1962.

Consider the experience of people who lived in the early 60s. Most children of my era went to public elementary schools with some 1000 other students, all of whom got measles. By the CDC’s mortality number, we should have seen three measles deaths per school, and 101 cases of encephalitis. In reality, if there had been one death in my school it would have been big news, and it’s impossible that 10% of my classmates got encephalitis. Instead, in those years, only 48,000 people were hospitalized per year for measles, and 1,000 of these suffered encephalitis (CDC numbers, reported here).

To see if vaccination is a good idea, let's now consider the risk of vaccination. The CDC reports that their vaccine "is virtually risk free", but what does risk-free mean? A British study finds vaccination-caused neurological damage in 1 per 365,000 MMR vaccinations, a rate of 0.00027%, with a small fraction leading to death. These problems are mostly found in immunocompromised patients. I will now estimate the neurological risk from actual measles based on the ratio of encephalitis cases to births, as before using the average birth rate as my estimate for measles cases: 1000/4,300,000 = 0.023%. This is far lower than the risk the CDC reports, and more in line with experience.
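
The back-of-the-envelope rates above are easy to reproduce. Here is a minimal sketch using only the figures quoted in the text; it lands near the 86-to-1 ratio discussed in the next paragraph.

births_per_year    = 4.3e6      # late-1950s US births, used as a proxy for annual measles cases
measles_deaths_62  = 408        # US measles deaths in 1962
encephalitis_year  = 1000       # US measles encephalitis cases per year, pre-vaccine
vaccine_neuro_risk = 1 / 365_000   # neurological damage per MMR vaccination (British study)

mortality     = measles_deaths_62 / births_per_year      # ~0.01%, vs the CDC's 0.3%
measles_neuro = encephalitis_year / births_per_year      # ~0.023%

print(f"estimated measles mortality : {mortality:.4%}")
print(f"measles neurological risk   : {measles_neuro:.4%}")
print(f"risk ratio, measles/vaccine : {measles_neuro / vaccine_neuro_risk:.0f}")   # ~85x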

The risk of neurological damage from measles that I calculate is 86 times higher than the neurological risk from vaccination, suggesting vaccination is a very good thing, on average: the vast majority of people should get vaccinated. But for people with a weakened immune system, my calculations suggest it is worthwhile to push off the 12-month immunization that doctors recommend. The main cause of vaccination death is encephalitis, and this mainly happens in patients with weakened immune systems. If your child's immune system is weakened, even by a cold, I'd suggest you wait 1-3 months, and I would hope that your doctor would concur. If your child has AIDS, ALS, Lupus, or any other long-term immune problem, you should not vaccinate at all. Not vaccinating your immune-weakened child will weaken the herd immunity, but will protect your child.

We live in a country with significant herd immunity: Even if there were a measles outbreak, it is unlikely there would be 500 cases at one time, and your child’s chance of running into one of them in the next month is very small assuming that you don’t take your child to Disneyland, or to visit relatives from abroad. Also, don’t hang out with anti-vaxers if you are not vaccinated. Associating with anti-vaxers will dramatically increase your child’s risk of infection.

As for autism: there appears to be no autism advantage to pushing off vaccination. Signs of autism typically appear around 12 months, the same age that most children receive their first-stage MMR shot, so some people came to associate the two. Parents who push-off vaccination do not push-off the child’s chance of developing autism, they just increase the chance their child will get measles, and that their child will infect others. Schools are right to bar such children, IMHO.

I've noticed that, with health care in particular, there is a tendency for researchers to mangle statistics so that good things seem better than they are. Health food is not necessarily as healthy as they say; nor is weight loss. Bicycle helmets: ditto. Sometimes this bleeds over to outright lies. Genetically modified grains were branded as cancer-causing based on outright lies and missionary zeal. I feel that I help a bit, in part by countering individual white lies; in part by teaching folks how to better read statistical arguments. If you are a researcher, I strongly suggest you do not set up your research with a hypothesis such that only one outcome will be publishable or acceptable. Here's how.

Robert E. Buxbaum, December 9, 2018.

James Croll, janitor scientist; man didn’t cause warming or ice age

When politicians say that 98% of published scientists agree that man is the cause of global warming, you may wonder who the other scientists are. It's been known at least since the mid 1800s that the world was getting warmer; that came up in talking about the president's "Resolute" desk, and the assumption was that the cause was coal. The first scientist to present an alternate theory was James Croll, a scientist who learned algebra only at 22, and got to mix with high-level scientists as the janitor at Anderson College in Glasgow. I think he is probably right, though he got some details wrong, in my opinion.

James Croll was born in 1821 to a poor farming family in Scotland. He had an intense interest in science, but no opportunity for higher schooling. Instead he worked on the farm and at various jobs that allowed him to read, but he lacked a mathematics background and had no one to discuss science with. To learn formal algebra, he sat in the back of a class of younger students. Things would have pretty well ended there, but he got a job as janitor for Anderson College (Scotland), and had access to the library. As janitor, he could read journals, he could talk to scientists, and he came up with a theory of climate change that got a lot of novel things right. His idea was that there were regular ice ages and warming periods that follow in cycles. In his view these were a product of the precession of the equinox and the fact that the earth's orbit is not round, but elliptical, with an eccentricity of 1.7%. We are 3.4% closer to the sun on January 3 than we are on July 4, but the precise dates change slowly because of the precession of the earth's axis, otherwise known as the precession of the equinox.

Currently, at the spring equinox, the sun is in "the house of Pisces." This is to say that a person who looks at the stars all the night of the spring equinox will be able to see all of the constellations of the zodiac except for the stars that represent Pisces (two fish). But the earth's axis turns slowly, about 1 day's worth of turn every 70 years, one rotation every 25,770 years. Some 1800 years ago, the sun would have been in the house of Aries, and 300 years from now, we will be "in the age of Aquarius." In case you wondered what the song "Age of Aquarius" was about, it's about the precession of the equinox.

Our current spot in the precession, according to Croll, is favorable to warmth. Because we are close to the sun on January 3, our northern summers are less warm than they would be otherwise, but longer; in the southern hemisphere, summers are warmer but shorter (southern summers are short because of conservation of angular momentum). The net result, according to Croll, should be a loss of ice at both poles, and slow warming of the earth. Cooling occurs, according to Croll, when the earth's axis tilt is 90° off the major axis of the orbital ellipse, 6300 years before or after today. Similar to this, a decrease in the tilt of the earth would cause an ice age (see here for why). Earth's tilt varies over a 42,000-year cycle, and it is now in the middle of a decrease. Croll's argument is that it takes a real summer to melt the ice at the poles; if you don't have much of a tilt, or if the tilt comes at the wrong time, ice builds, making the earth more reflective, and thus a little colder and icier each year; ice extends south of Paris and Boston. Eventually precession and tilt reverse the cooling, producing alternating warm periods and ice ages. We are currently in a warm period.

Global temperatures measured from the antarctic ice showing stable, cyclic chaos and self-similarity.

At the time Croll was coming up with this, it looked like numerology. Besides, most scientists doubted that ice ages happened in any regular pattern. We now know that ice ages do happen periodically, and we think that Croll must have been on to something. See the figure: the earth's temperature shows both a 42,000-year cycle and a 23,000-year cycle, with ice ages coming every 100,000 years.

In the 1920s a Serbian mathematician, geologist, and astronomer, Milutin Milanković, proposed a new version of Croll's theory that justified a longer spacing between ice ages based on the beat frequency between the 23,000-year period of axis precession and the 42,000-year period of axis-tilt variation. Milanković used this revised precession time because the ellipse itself precesses, and thus the weather-related precession of the axis is 23,000 years instead of 25,770 years. The beat period is found as follows:

51,000 ≈ 23,000 × 42,000 / (42,000 - 23,000).
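
For what it's worth, here is that beat-period arithmetic as a quick check in Python:

precession, tilt = 23_000, 42_000                 # years per cycle
beat = precession * tilt / (tilt - precession)    # beat period of the two cycles
print(f"beat period: {beat:,.0f} years")          # about 51,000 years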

As it happens, neither Croll's nor Milanković's theory was accepted in their own lifetimes. Despite mounting evidence that there were regular ice ages, it was hard to believe that these small causes could produce such large effects. Then, in 1976, a study (Hays, Imbrie, and Shackleton) demonstrated clear climate variations based on mud composition from New York and Arizona. The variations followed all four of the Milanković cycles.

Southern hemisphere ice is growing, something that confounds CO2-centric experts

Further confirmation came from studying the antarctic ice, above. You can clearly see the 23,000-year cycle of precession, the 41,000-year cycle of tilt, the 51,000-year beat cycle, and also a 100,000-year cycle that appears to correspond to changes in the degree of ellipticity of the orbit. Our orbit goes from near circular to quite elliptic (6.8%) with a cycle time of effectively 100,000 years. It is currently 1.7% elliptic and decreasing fast. This, along with the decrease in earth tilt, suggests that we are heading to an ice age. According to Croll, a highly eccentric orbit leads to warming because the minor axis of the ellipse is reduced when the orbit is lengthened. We are now heading to a less-eccentric orbit; for more details go here; also for why the orbit changes and why there is precession.

We are currently near the end of a 7,000 year warm period. The one major thing that keeps maintaining this period seems to be that our precession is such that we are closest to the sun at nearly the winter solstice. In a few thousand years all the factors should point towards global cooling, and we should begin to see the glaciers advance. Already the antarctic ice is advancing year after year. We may come to appreciate the CO2 produced by cows and Chinese coal-burning as these may be all that hold off the coming ice age.

Robert Buxbaum, November 16, 2018.

Of God and gauge blocks

Most scientists are religious on some level. There's clear evidence for a big bang, and thus for a God of Creation. But the creation event is so distant and huge that no personal God is implied. I'd like to suggest that the God of creation is close by, and as a beginning to this, I'd like to discuss Johansson gauge blocks, the standard tool used to measure machine parts accurately.

A pair of Johansson blocks supporting 100 kg in a 1917 demonstration. This is 33 times atmospheric pressure, about 470 psi.

Let's say you're making a complicated piece of commercial machinery, a car engine for example. Generally you'll need to make many parts in several different shops using several different machines. If you want to be sure the parts will fit together, a representative number of each part must be checked for dimensional accuracy in several places. An accuracy requirement of 0.01 mm is not uncommon. How would you do this? The way it's been done, at least since the days of Henry Ford, is to mount the parts to a flat surface and use a feeler gauge to compare the heights of the parts to the heights of stacks of precisely manufactured gauge blocks. Called Johansson gauge blocks after the inventor and original manufacturer, Carl Edvard Johansson, the blocks are typically made of steel, 1.35″ wide by 0.35″ thick (0.47 in² surface), and of various heights. Different height blocks can be stacked to produce any desired height in multiples of 0.01 mm. To give accuracy to the measurements, the blocks must be manufactured flat to within 1/10,000 of a millimeter. This is 0.1 µ, or about 1/5 the wavelength of visible light. At this degree of flatness an amazing thing is seen to happen: Jo blocks stick together when stacked, with a force of 100 kg (220 pounds) or more, an effect called "wringing." See the picture at right, from a 1917 advertising demonstration.

This 220 lbs of force measured in the picture suggests an invisible pressure of at least 470 psi holding the blocks together (220 lbs / 0.47 in² = 470 psi). This is 32 times the pressure of the atmosphere. It is independent of air, or temperature, or the metal used to make the blocks. Since pressure times volume equals energy, this pressure can be thought of as a vacuum energy density arising "out of the nothingness." We find that each cubic inch of space between the blocks contains 470 inch-pounds of energy. This is the equivalent of 0.9 kWh per cubic meter, energy you can not see, but you can feel. That is a lot of energy in the nothingness, and the energy (and the pressure) get larger the flatter you make the surfaces, or the closer together you bring them. This is an odd observation since, generally, things do not get more dense the smaller you divide them. Clean metal surfaces that are flat enough will weld together without the need for heat, a trick we have used in the manufacture of purifiers.

A standard way to think of quantum scattering of an atom (solid line) is that it is scattered by invisible bits of light, virtual photons (the wavy lines). In this view, the force that pushes two blocks together comes from a slight deficiency in the number of virtual photons in the small space between the blocks.

The empty space between two flat surfaces also has the power to scatter light or atoms that pass between them. This scattering is seen even in vacuum at zero kelvin, absolute zero. Somehow the light or atoms pick up energy, "out of the nothingness," and shoot up or down. It's a "quantum effect," and after a while physics students forget how odd it is for energy to come out of nothing. Not only do students stop wondering where the energy comes from, they stop wondering why the scattering energy gets bigger the closer you bring the surfaces. With Johansson-block sticking, as with quantum scattering, the energy density gets higher the closer the surfaces come, and this is accepted as normal: just Heisenberg's uncertainty in two contexts. You can calculate the force from the zero-point energy of vacuum, but you must add a relativistic wrinkle: according to relativity, the distance between two surfaces shrinks the faster you move, but the measurable force should not. A calculation of the force that includes both quantum mechanics and relativity was derived by Hendrik Casimir:

Energy per volume = P = F/A = πhc/(480 L⁴),

where P is pressure, F is force, A is area, h is Planck's constant, 6.63×10⁻³⁴ J·s, c is the speed of light, 3×10⁸ m/s, and L is the distance between the plates, in meters. Experiments have been found to match the above prediction to within 2%, the experimental error, but the energy density this implies is huge, especially when L is small; the equation should apply down to the Planck length, 1.6×10⁻³⁵ m. Even at the size of an atom, 1×10⁻¹⁰ m, the energy density works out to 3.6 GWh/m³. 3.6 gigawatt-hours is one hour's energy output of three to four large nuclear plants. We see only a tiny portion of this vacuum energy when we stick Johansson gauge blocks together, but the rest is there, near-invisible, in every bit of empty space. The implication of this enormous energy remains baffling in any analysis. I see it as an indication that God is everywhere, exceedingly powerful, filling the universe, and holding everything together. Take a look, and come to your own conclusions.
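
As a sanity check, here is a minimal Python sketch of the Casimir formula above, evaluated at L = 1 Å, the atomic-size separation used in the text:

    import math
    # Casimir vacuum energy density (= pressure): P = pi*h*c / (480 * L^4)
    h = 6.63e-34        # Planck's constant, J*s
    c = 3.0e8           # speed of light, m/s
    L = 1.0e-10         # plate separation, m (about one atom diameter)
    P = math.pi * h * c / (480 * L**4)        # J/m^3, same units as Pa
    print(P, "J/m^3")                         # ~1.3e13 J/m^3
    print(P / 3.6e12, "GWh per cubic meter")  # ~3.6 GWh/m^3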

As a homiletic, it seems to me that God likes friendship, but does not desire shamans, folks who stand between man and Him. Why do I say that? The huge force-energy between the plates brings them together, but scatters anything that goes between them. And now you know something about nothing.

Robert Buxbaum, November 7, 2018. Physics references: H. B. G. Casimir and D. Polder, "The Influence of Retardation on the London-van der Waals Forces," Phys. Rev. 73, 360 (1948).
S. K. Lamoreaux, "Demonstration of the Casimir Force in the 0.6 to 6 µm Range," Phys. Rev. Lett. 78, 5 (1997).

Of God and Hubble

Edwin Hubble and Andromeda Photograph

Perhaps my favorite proof of God is that, as best we can tell using the best science we have, everything we see today, popped into existence some 14 billion years ago. The event is called “the big bang,” and before that, it appears, there was nothing. After that, there was everything, and as best we can tell, not an atom has popped into existence since. I see this as the miracle of creation: Ex nihilo, Genesis, Something from nothing.

The fellow who saw this miracle first was an American, Edwin P. Hubble, born 1889. Hubble got a law degree and then a PhD in astronomy, studying photographs of faint nebulae. That is, he studied the small, glowing, fuzzy areas of the night sky, producing a PhD thesis titled "Photographic Investigations of Faint Nebulae." Hubble served in the army (WWI) and continued his photographic work at the Mount Wilson Observatory, home to the world's largest telescope at the time. He concluded that many of these fuzzy nebulae were complete galaxies outside of our own. Most of the stars we see unaided are located relatively near us, in our own local area, within our own "Milky Way" galaxy, a swirling star blob roughly 100,000 to 200,000 light years across. Through study of photographs of the Andromeda "nebula," Hubble concluded it was another swirling galaxy quite like ours, but some 900,000 light years away. (A light year is about 5,900,000,000,000 miles, the distance light travels in a year.) Finding another galaxy was a wonderful discovery; better yet, there were more swirling galaxies besides Andromeda, about 100 billion of them, we now think. Each galaxy contains about 100 billion stars; there is plenty of room for intelligent life.

Emission spectrum from Galaxy NGC 5181. The bright hydrogen β line should be at 4861.3 Å, but it's at about 4900 Å. This difference tells you the speed of the galaxy.

But the discovery of galaxies beyond our own is not what Hubble is most famous for. Hubble was able to measure the distance to some of these galaxies, mostly from their apparent brightness, and was able to measure the speed of the galaxies relative to us by use of the Doppler shift, the same phenomenon that causes a train whistle to sound different when the train is coming towards you than when it is going away. In this case, he used the frequency spectrum of the galaxy's light, for example that shown at right for NGC 5181. The spectral lines of light from the galaxy are shifted towards the red, long-wavelength end of the spectrum. Hubble picked some recognizable spectral line, like the hydrogen β emission line, and determined the galactic velocity by the formula,

V = c (λ – λ*)/λ*.

In this equation, V is the velocity of the galaxy relative to us, c is the speed of light, 300,000,000 m/s, λ is the observed wavelength of the particular spectral line, and λ* is the wavelength observed for non-moving sources. Hubble found that all the distant galaxies were moving away from us, and some were moving quite fast. What's more, the speed of a galaxy away from us was roughly proportional to its distance. How odd. There were only two explanations for this: (1) all other galaxies were propelled away from us by some earth-based anti-gravity that became more powerful with distance, or (2) the whole universe is expanding at a constant rate, so that every galaxy sees itself moving away from every other galaxy at a speed proportional to the distance between them.
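
Here is the same calculation as a few lines of Python, using the NGC 5181 numbers from the spectrum caption above; the 4900 Å figure is a rough read of the plot, so treat the result as approximate:

    # Recession velocity from the redshift of the hydrogen-beta line.
    c = 3.0e8            # speed of light, m/s
    lam_rest = 4861.3    # H-beta wavelength for a non-moving source, Angstroms
    lam_obs = 4900.0     # observed wavelength, Angstroms (approximate)
    v = c * (lam_obs - lam_rest) / lam_rest
    print(v / 1000, "km/s")   # roughly 2,400 km/s, moving away from us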

This second explanation seems a lot more likely than the first, but it suggests something very interesting. If the speed is proportional to the distance, and you carry the motion backwards in time, there must have been a time, some 14 billion years ago, when all matter was in one small bit of space. It seems there was one origin spot for everything, and one origin time when everything popped into existence. This is evidence for creation, even for God. The term "Big Bang" comes from a rival astronomer, Fred Hoyle, who found the whole creation idea silly. With each new observation of a galaxy moving away from us, the idea became that much less silly. Besides, it's long been known that the universe can't be uniform, static and endless; if it were, the night sky would glow as bright as the surface of a star (Olbers' paradox).

Whatever we call the creation event, we can’t say it was an accident: a lot of stuff popped out at one time, and nothing at all similar has happened since. Nor can we call it a random fluctuation since there are just too many stars and too many galaxies in close proximity to us for it to be the result of random atoms moving. If it were all random, we’d expect to see only one star and our one planet. That so much stuff popped out in so little time suggests a God of creation. We’d have to go to other areas of science to suggest it’s a personal God, one nearby who might listen to prayer, but this is a start. 

If you want to go through the Hubble calculations yourself, you can find pictures and spectra of the 24 or so galaxies originally studied by Hubble here: http://astro.wku.edu/astr106/Hubble_intro.html. Based on your analysis, you'll likely calculate a slightly different time for creation from the standard 14 billion years, but you'll find you calculate something close to what Hubble did. To do better, you'll need to look deeper into space, and that would take a better telescope, e.g. the "Hubble Space Telescope."

Robert E. Buxbaum, October 28, 2018.

Isotopic effects in hydrogen diffusion in metals

For most people, there is a fundamental difference between solids and fluids. Solids have long-term permanence with no apparent diffusion; liquids diffuse and lack permanence. Put a penny on top of a dime, and 20 years later the two coins are as distinct as ever. Put a layer of colored water on top of plain water, and within a few minutes you'll see that the coloring diffuses into the plain water, or (if you think of it the other way) that the plain water diffuses into the colored.

Now consider the transport of hydrogen in metals, the technology behind REB Research's metallic membranes and getters. The metals are clearly solid, keeping their shapes and properties for centuries. Still, hydrogen flows into and through the metals at the rate of a light breeze, about 40 cm/minute. Another way of saying this is that we transfer 30 to 50 cc/min of hydrogen through each cm² of membrane at 200 psi and 400°C; divide the volume by the area, and you'll see that the hydrogen really moves through the metal at a nice clip. It's like a normal filter, but it's 100% selective to hydrogen. No other gas goes through.

To explain why hydrogen passes through the solid metal membrane this way, we have to start talking about quantum behavior. It was the quantum behavior of hydrogen that first interested me in hydrogen, some 42 years ago. I used it to explain why water was wet. Below, you will find something a bit more mathematical, a quantum explanation of hydrogen motion in metals. At REB we recently put these ideas towards building a membrane system for concentration of heavy hydrogen isotopes. If you like what follows, you might want to look up my thesis. This is from my 3rd appendix.

Although no one quite understands why nature should work this way, it seems that nature works by quantum mechanics (and entropy). The basic idea of quantum mechanics, as you may know, is that confined atoms can only occupy specific, quantized energy levels, as shown below. The energy difference between the lowest energy state and the next level is typically high. Thus, most of the hydrogen atoms in a metal will occupy only the lower state, the so-called zero-point-energy (ZPE) state.

A hydrogen atom, shown occupying an interstitial position between metal atoms (above), also occupies quantum states (below). The lowest state, the ZPE, lies above the bottom of the well. Higher energy states are degenerate: they appear in pairs. The rate of diffusive motion is related to ∆E* and this degeneracy.

The fraction occupying a higher energy state is calculated as c*/c = exp(-∆E*/RT), where ∆E* is the molar energy difference between the higher energy state and the ground state, R is the gas constant and T is temperature. When thinking about diffusion it is worthwhile to note that this energy is likely temperature dependent. Thus ∆E* = ∆G* = ∆H* – T∆S*, where the asterisk indicates the key energy level where diffusion takes place, the activated state. If ∆E* is mostly elastic strain energy, we can assume that ∆S* is related to the temperature dependence of the elastic strain.

Thus,

∆S* = -(∆E*/Y) dY/dT,

where Y is the Young's modulus of elasticity of the metal. For hydrogen diffusion in metals, I find that ∆S* is typically small, while it is often significant for the diffusion of other atoms: carbon, nitrogen, oxygen, sulfur…
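
To get a feel for the size of the Boltzmann fraction, c*/c = exp(-∆E*/RT), here is a small Python sketch; the activation energy and temperatures are illustrative values of my choosing, not measured data:

    import math
    # Fraction of hydrogen atoms in the activated state: c*/c = exp(-dE*/RT)
    R = 8.314           # gas constant, J/(mol*K)
    dE_star = 25000.0   # illustrative activation energy, J/mol (~25 kJ/mol)
    for T in (300.0, 673.0):              # room temperature and about 400 C
        frac = math.exp(-dE_star / (R * T))
        print(T, "K:", frac)              # only a tiny fraction is activated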

The rate of diffusion is now calculated assuming a three-dimensional drunkard's walk where the step length is a constant, a. Rayleigh showed that, for a simple cubic lattice, this becomes:

D = a²/6τ,

where a is the distance between interstitial sites and τ is the average time between crossings. For hydrogen in a BCC metal like niobium or iron, D = a²/9τ; for an FCC metal, like palladium or copper, it's D = a²/3τ. A nice way to think about τ is to note that it is only at high energy that a hydrogen atom can cross from one interstitial site to another, and, as we noted, most hydrogen atoms will be at lower energies. Thus,

1/τ = ω c*/c = ω exp(-∆E*/RT),

where ω is the approach frequency, that is, the rate at which the hydrogen atom approaches the barrier between the left interstitial position and the right one. When I was doing my PhD (and still, likely, today) the standard approach of physics writers was to use a classical formulation for this frequency based on the average speed of the interstitial. Thus, ω = (1/2a)√(kT/m), and

1/τ = (1/2a)√(kT/m) exp(-∆E*/RT).

In the above, m is the mass of the hydrogen atom, 1.66 × 10⁻²⁴ g for protium and twice that for deuterium, etc., a is the distance between interstitial sites, measured in cm, T is the temperature in kelvin, and k is the Boltzmann constant, 1.38 × 10⁻¹⁶ erg/K. This formulation correctly predicts that heavier isotopes will diffuse more slowly than light isotopes, but it predicts, incorrectly, that at all temperatures the diffusivity of deuterium is 1/√2 that of protium, and that the diffusivity of tritium is 1/√3 that of protium. It also suggests that the activation energy of diffusion will not depend on isotope mass. I noticed that neither of these predictions is borne out by experiment, and came to wonder if it would not be more correct to assume that ω represents the motion of the lattice, breathing, and not the motion of a highly activated hydrogen atom breaking through an immobile lattice. This thought is borne out by experimental diffusion data, where you describe hydrogen diffusion as D = D° exp(-∆E*/RT).
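
Combining the equations above, the classical formulation gives D = (a/12)√(kT/m) exp(-∆E*/RT). Below is a minimal Python sketch of it, evaluated for protium and deuterium; the lattice spacing and activation energy are illustrative values I chose, not measurements, but the 1/√2 ratio of diffusivities it predicts does not depend on those choices:

    import math
    # Classical estimate: D = a^2/(6*tau), with 1/tau = (1/2a)*sqrt(kT/m)*exp(-dE*/RT),
    # so D = (a/12)*sqrt(kT/m)*exp(-dE*/RT)
    k = 1.38e-16          # Boltzmann constant, erg/K
    R = 8.314             # gas constant, J/(mol*K)
    a = 2.0e-8            # illustrative interstitial spacing, cm
    dE_star = 25000.0     # illustrative activation energy, J/mol
    T = 673.0             # about 400 C, in kelvin

    def D_classical(m):   # m = atom mass in grams
        return (a / 12.0) * math.sqrt(k * T / m) * math.exp(-dE_star / (R * T))

    D_H = D_classical(1.66e-24)        # protium
    D_D = D_classical(2 * 1.66e-24)    # deuterium
    print(D_H, D_D, "cm^2/s")
    print("ratio D_D/D_H =", D_D / D_H)   # = 1/sqrt(2), at every temperature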

[Table A3-2 from my thesis: measured values of D° and ∆E* for diffusion of the different hydrogen isotopes.]

You'll notice from the table above that D° hardly changes with isotope mass, in complete contradiction to the classical model above. Also note that ∆E* is very isotope dependent. This too is in contradiction to the classical formulation. Further, to the extent that D° does change with isotope mass, D° gets larger for heavier hydrogen isotopes. I assume that small difference is the entropy effect, ∆S*, mentioned above. There is no simple square-root-of-mass behavior, in contrast to what most of the books we had in grad school claimed.

As for why ∆E* varies with isotope mass, I found that I could get a decent explanation of my observations if I assumed that the isotope dependence arose from the zero point energy. Heavier isotopes of hydrogen will have lower zero-point energies, and thus ∆E* will be higher for heavier isotopes of hydrogen. This seems like a far better approach than the semi-classical one, where ∆E* is isotope independent.

I will now go a bit further than I did in my PhD thesis. I'll make the general assumption that the energy well is sinusoidal, or rather that it consists of two parabolas, one opposite the other. The ZPE is easily calculated for parabolic energy surfaces (harmonic oscillators). I find that ZPE = (h/aπ)√(∆E/m), where m is the mass of the particular hydrogen atom, h is Planck's constant, 6.63 × 10⁻²⁷ erg·s, and ∆E is ∆E* + ZPE. For my PhD thesis, I didn't think to calculate the ZPE and thus the isotope effect on the activation energy. I now see how I could have done it relatively easily, e.g. by trial and error, and a quick estimate shows it would have worked nicely. Instead, for my PhD, Appendix 3, I only looked at D°, and found that the values of D° were consistent with the idea that ω is about 0.55 times the Debye frequency, ω ≈ 0.55 ωD. The slight tendency for D° to be larger for heavier isotopes was explained by the temperature dependence of the metal's elasticity.
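
Here is a rough Python sketch of that ZPE formula for protium and deuterium; the jump distance and well depth are illustrative values chosen for the example, not fitted numbers from my thesis:

    import math
    # ZPE for a parabolic well: ZPE = (h/(a*pi)) * sqrt(dE/m), in cgs units
    h = 6.63e-27          # Planck's constant, erg*s
    a = 2.0e-8            # illustrative jump distance, cm
    dE = 4.0e-13          # illustrative well depth, erg per atom (~25 kJ/mol)
    m_H = 1.66e-24        # protium mass, g
    m_D = 2 * m_H         # deuterium mass, g

    zpe_H = (h / (a * math.pi)) * math.sqrt(dE / m_H)
    zpe_D = (h / (a * math.pi)) * math.sqrt(dE / m_D)
    print(zpe_H, zpe_D, "erg")
    # The heavier isotope sits lower in the well (smaller ZPE), so it has
    # farther to climb: its effective activation energy, dE*, is larger.
    print("extra barrier for deuterium:", zpe_H - zpe_D, "erg per atom")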

Two more comments based on the diagram I presented above. First, notice that there is a middle, split level of energies. This was an explanation I'd put forward for the quantum-tunneling atomic migration that some people had claimed to see at energies below the activation energy. I don't know if this observation was real or an optical illusion, but I present the energy picture so that you'll have the beginnings of a description. The other thing I'd like to address is a question you may have had: why is there no zero-point-energy effect at the activated state? Such a zero-point energy difference would cancel the one at the ground state and leave you with no isotope effect on activation energy. The simple answer is that all the data showing the isotope effect on activation energy, table A3-2, was for BCC metals. BCC metals have an activation energy barrier, but it is not caused by physical squeezing between atoms, as it is for an FCC metal, but by a lack of electrons. In a BCC metal there is no physical squeezing at the activated state, so you'd expect to have no ZPE there. This is not the case for FCC metals, like palladium, copper, or most stainless steels. For these metals there is a much smaller, or non-existent, isotope effect on ∆E*.

Robert Buxbaum, June 21, 2018. I should probably try to answer the original question about solids and fluids, too: why solids appear solid, and fluids not. My answer has to do with quantum mechanics: energies are quantized, and there is always a ∆E* for motion. Solid materials are those where the time between diffusive jumps, 1/[ω exp(-∆E*/RT)], is measured in centuries. Thus, our ability to understand the world is based on the least understandable bit of physics.
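
A small Python sketch of that last claim, with an attempt frequency near a typical lattice (Debye) frequency and two illustrative activation energies; the numbers are assumptions chosen to show the idea, not measured values:

    import math
    # Time per diffusive jump: t = 1 / (omega * exp(-dE*/RT))
    R = 8.314            # gas constant, J/(mol*K)
    T = 298.0            # room temperature, K
    omega = 1.0e13       # illustrative lattice "breathing" frequency, 1/s
    for dE_star in (25e3, 130e3):    # J/mol: hydrogen-like vs. a "solid" atom
        t = 1.0 / (omega * math.exp(-dE_star / (R * T)))
        print(dE_star / 1000, "kJ/mol:", t / 3.15e7, "years per jump")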