Tag Archives: physics

Sailors, boaters, and motor sailing at the hull speed.

I’ve gone sailing a few times this summer, and once again was struck by the great difference between sailing and boating, as well as by the mystery of the hull speed.

Sailors are distinct from boaters in that they power their boats by sails in the wind. Sailing turns out to be a fairly pleasant way to spend an afternoon. At least as I did it, it was social, pleasant, and not much work, but the speeds were depressingly slow. I went on two boats (neither were my own), each roughly 20 feet long, with winds running about 10-15 knots (about 13 mph). We travelled at about 3 knots, about 3.5 mph. That’s walking speed. At that speed it would take about 7 hours to cross Lake St. Clair (25 miles wide). To go across and back would take a full day.

Based on the length of the boats, they should have been able to go a lot faster, at about 5.8 knots (6.7 mph). This target speed is called the hull speed; it’s the speed where the wave caused by the bow provides a resonance at the back of the boat, giving it a slight surfing action; see drawing.

This speed can be calculated from the relationship between wave speed and wavelength: Vhull = √(gλ/2π), where g is the acceleration of gravity and λ is the waterline length of the boat. For Vhull in knots, it’s calculated as the square root of the waterline length in feet, multiplied by 1.34. For a 20 foot boat, then,

Hull speed, 20′ = 1.34 √20 = 1.34 x 4.47 = 6.0 knots.
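If you want to check such numbers yourself, here is a minimal Python sketch of the hull-speed formula in both its SI form and the knots-and-feet rule of thumb; the 20-foot waterline is just the example length used above.

```python
import math

def hull_speed_si(waterline_m, g=9.81):
    """Hull speed in m/s from V = sqrt(g * wavelength / 2 pi)."""
    return math.sqrt(g * waterline_m / (2 * math.pi))

def hull_speed_knots(waterline_ft):
    """Rule-of-thumb form: 1.34 times the square root of the waterline length in feet."""
    return 1.34 * math.sqrt(waterline_ft)

waterline_ft = 20.0
waterline_m = waterline_ft * 0.3048
v_ms = hull_speed_si(waterline_m)
print(f"SI form:   {v_ms:.2f} m/s = {v_ms / 0.5144:.2f} knots")
print(f"Rule form: {hull_speed_knots(waterline_ft):.2f} knots")
```

Both forms give about 6 knots for a 20-foot waterline.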

While power boats routinely go much faster than this, as do racing sculls and America’s Cup sailboats, most normal sailboats are designed for this speed. One advantage is that it leads to a relatively comfortable ride. There is just enough ballast and sail so that the boat runs out of wind at this speed while tipping no more than 15°. Sailors claim there is a big increase in drag at this speed, but a look at the drag profile of some ocean kayaks (12 to 18 feet, see below) shows only a very slight increase around this magical speed. More important is weight; the lowest drag in the figure below is found for the shortest kayak, which is also the lightest. I suspect that the sailboats I was on could have gone at 6 knots or faster, even with the wind we had, if we’d unrolled the spinnaker, used a ‘screecher’ (a very large jib), and hung over the edge to keep the boat upright. But the owner chose to travel in relative comfort, and the result is that we had a pleasant afternoon going nowhere.

Data from Vaclav Stejskal of “oneoceankayaks.com”

And this brings me to my problem with power boating. The boats are about the same length as the sailboats I was in, and the weight is similar too. You travel a lot faster, 20 to 25 knots, and you get somewhere, but the boats smell, provide a jarring ride, and, I felt, burn gas too fast for my comfort. The boats exceed hull speed and hydroplane, somewhat. That is, they ride up one wave, fly a bit, and crash down the other side, sending annoying wakes to the sailboaters. We crossed Lake St. Clair and rode a ways down the Detroit River. This was nice, but it left me thinking there was room for power-assisted sailing at an intermediate speed: power sailing.

Both sailboats I was on had outboard motors, 3 hp as it happened, and both moved nicely at 1 hp into and out of the harbor, even without the sail up. Some simple calculations suggest that I could power a 15 to 20 foot sailboat or canoe at a decent speed – hull speed – by use of a small sail and an electric motor drawing less than 1 hp, ~400 W, powered by one or two car batteries.

Consider the drag for the largest, heaviest kayak in the chart above, the Cape Ann Double, going at about 6 knots. At this speed, the resistance is seen to be 15 lbs. To calculate the power demand, convert this speed to 10 fps and multiply by the force:

Power for 6 knot cruising = 10 fps x 15 lbs = 150 ft lbs/s = 202 W or 0.27 hp.

Outboard motors are not 100% efficient, so let’s assume that you need to draw more like 250 W at the motor, and you will need to add power by a sail. How big a battery is needed for the 250 W? I’ll aim for powering a 4 hour trip, and find the battery size by multiplying the 250 W by 4 hours: that’s 1,000 Whr, or 1 kWh. A regular, lithium car battery is all that’s needed. In terms of the sail, I’m inclined to get really innovative, and use a Flettner sail, as discussed here.
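Here is a rough Python sketch of that sizing calculation. The 15 lb drag at 6 knots is read off the kayak chart above; the 80% motor-and-propeller efficiency is an assumed number, not a measurement.

```python
# Rough sizing of an electric "power sailing" rig, using the numbers above.
KNOT_MS = 0.5144       # m/s per knot
LBF_N = 4.448          # newtons per pound-force

speed_knots = 6.0
drag_lbf = 15.0        # hull resistance at 6 knots, from the kayak chart
efficiency = 0.80      # assumed motor + propeller efficiency

power_at_hull_w = speed_knots * KNOT_MS * drag_lbf * LBF_N
power_at_battery_w = power_at_hull_w / efficiency
trip_hours = 4.0
battery_kwh = power_at_battery_w * trip_hours / 1000.0

print(f"Power delivered to the water: {power_at_hull_w:.0f} W "
      f"({power_at_hull_w / 745.7:.2f} hp)")
print(f"Draw at the battery:          {power_at_battery_w:.0f} W")
print(f"Energy for a {trip_hours:.0f} hour trip:    {battery_kwh:.2f} kWh")
```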

It seems to me that adding this would be a really fun way to sail. I’d expect to be able to go somewhere, without the smell, or the cost, or being jarred too badly. Now, all I need is a good outboard motor, and a willing companion to try this with.

Robert Buxbaum, Sept. 9, 2024

Einstein’s theory of diffusion in liquids, and my extension.

In 1905 and 1908, Einstein developed two formulations for the diffusion of a small particle in a liquid. As a side-benefit of the first derivation, he demonstrated the visible existence of molecules, a remarkable piece of work. In the second formulation, he derived the same result using non-equilibrium thermodynamics, something he seems to have developed on the spot. I’ll give a brief version of the second derivation, and then I’ll show off my own extension. It’s one of my proudest intellectual achievements.

But first a little background to the problem. In 1827, a plant biologist, Robert Brown examined pollen under a microscope and noticed that it moved in a jerky manner. He gave this “Brownian motion” the obvious explanation: that the pollen was alive and swimming. Later, it was observed that the pollen moved faster in acetone. The obvious explanation: pollen doesn’t like acetone, and thus swims faster. But the pollen never stopped, and it was noticed that cigar smoke also swam. Was cigar smoke alive too?

Einstein’s first version of an answer, 1905, was to consider that the liquid was composed of atoms whose energies follow a Boltzmann distribution, with an average kinetic energy of kT/2 in every direction, where k is the Boltzmann constant, and k = R/N. That is, Boltzmann’s constant equals the gas constant, R, divided by Avogadro’s number, N. He was able to show that the many interactions with the molecules should cause the pollen to take a random, jerky walk as seen, and that the velocity should be faster the less viscous the solvent, or the smaller the length-scale of observation. Einstein applied the Stokes drag equation to the solute; the drag force per particle was f = -6πrvη, where r is the radius of the solute particle, v is the velocity, and η is the solution viscosity. Using some math, he was able to show that the diffusivity of the solute should be D = kT/6πrη. This is called the Stokes-Einstein equation.

In 1908 a French physicist, Jean Baptiste Perrin confirmed Einstein’s predictions, winning the Nobel prize for his work. I will now show the 1908 Einstein derivation and will hope to get to my extension by the end of this post.

Consider the molar Gibbs free energy of a solvent, water say. The mole fraction of water is x and that of a very dilute solute is y, with y << 1. For this nearly pure water, you can show that µ = µ° + RT ln x = µ° + RT ln (1-y) ≈ µ° - RTy.

Now, take a derivative with respect to some linear direction, z. Normally this is considered illegal, since thermodynamics is normally understood to apply to equilibrium systems only. Still Einstein took the derivative, and claimed it was legitimate at nearly equilibrium, pseudo-equilibrium. You can calculate the force on the solvent, the force on the water generated by a concentration gradient, Fw = dµ/dz = -RT dy/dz.

Now the force on each atom of water equals -RT/N dy/dz = -kT dy/dz.

Now, let’s call f the force on each atom of solute. For dilute solutions, this force is far higher than the above, f = -kT/y dy/dz. That is, for a given concentration gradient, dy/dz, the force on each solute atom is higher than on each solvent atom in inverse proportion to the molar concentration.

For small spheres, and low velocities, the flow is laminar and the drag force, f = 6πrvη.

Now calculate the speed of each solute atom. It is proportional to the force on the atom by the same relationship as appeared above: f = 6πrvη or v = f/6πrη. Inserting our equation for f= -kT/y dy/dz, we find that the velocity of the average solute molecule,

v = -kT/(6πrηy) dy/dz.

Let’s say that the molar concentration of solvent is C, so that, for water, C will equal about 1/18 mols/cc. The molar concentration of the dilute solute will then equal Cy. We find that the molar flux of material, the diffusive flux, equals Cyv, or that

Molar flux (mols/cm²/s) = Cy (-kT/(6πrηy) dy/dz) = -kTC/(6πrη) dy/dz = -kT/(6πrη) d(Cy)/dz,

where Cy is the molar concentration of solute per volume.

Classical engineering comes to a similar equation with a property called diffusivity, so that

Molar flux of y (mols y/cm²/s) = -D d(Cy)/dz, and D is an experimentally determined constant. We thus now have a prediction for D:

D = kT/6πrη.

This again is the Stokes-Einstein equation, the same as above but derived with far less math. I was fascinated, but felt sure there was something wrong here. Macroscopic viscosity was not the same as microscopic. I just could not think of a case where there was much difference until I realized that, in polymer solutions, there is a big difference.
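Before getting to polymers, here is a quick numerical check of the Stokes-Einstein result for a small solute in plain water; the 0.5 nm solute radius is just an assumed, typical small-molecule size.

```python
import math

k_B = 1.380649e-23      # J/K, Boltzmann constant

def stokes_einstein(radius_m, viscosity_pa_s, temp_k=298.15):
    """Diffusivity from D = kT / (6 pi r eta)."""
    return k_B * temp_k / (6 * math.pi * radius_m * viscosity_pa_s)

# Assumed example: a 0.5 nm radius solute in water (eta ~ 0.89 mPa.s at 25 C)
D = stokes_einstein(0.5e-9, 8.9e-4)
print(f"D = {D:.2e} m^2/s")   # about 5e-10 m^2/s, typical for small molecules
```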

Polymer solutions, I reasoned, had large viscosities, but a diffusing solute probably didn’t feel the liquid as anywhere near as viscous. The viscometer measured at a larger distance, more similar to the polymer coil entanglement length, while a small solute might dart between the polymer chains like a rabbit among trees. I applied an equation for heat transfer in a dispersion that J. C. Maxwell had derived.
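In its standard form (the form that reduces to the first-order approximation given near the end of this post), Maxwell’s equation for the effective conductivity of a dispersion is:

κeff = κl (2κl + κp + 2φ(κp – κl)) / (2κl + κp – φ(κp – κl)),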

where κeff is the modified effective thermal conductivity (or diffusivity in my case), κl and κp are the thermal conductivity of the liquid and the particles respectively, and φ is the volume fraction of particles. 

To convert this to diffusion, I replaced κl by Dl, and κp by Dp where

Dl = kT/6πrηl

and Dp = kT/6πrη.

In the above, ηl is the viscosity of the pure, liquid solvent, and η is the macroscopic viscosity of the polymer solution.

The chair of the department, Don Anderson, didn’t believe my equation, but agreed to help test it. A student named Kit Yam ran experiments on a variety of polymer solutions, and it turned out that the equation worked really well up to high polymer concentrations and high viscosities.

As a simple, first approximation to the above, you can take Dp = 0, since it’s much smaller than Dl, and take Dl = kT/6πrηl as above. The new, first order approximation is:

D = kT/6πrηl (1 – 3φ/2).
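Here is a short Python sketch of both forms. The Maxwell-style combining rule below is the standard dispersion formula applied to diffusivities, and the solute radius, viscosities, and polymer volume fraction are assumed example values, not data from the paper.

```python
import math

k_B = 1.380649e-23   # J/K

def d_stokes(radius_m, eta_pa_s, temp_k):
    return k_B * temp_k / (6 * math.pi * radius_m * eta_pa_s)

def d_polymer_maxwell(radius_m, eta_solvent, eta_solution, phi, temp_k=298.15):
    """Maxwell-style combination of D_l (pure solvent) and D_p (bulk solution viscosity)."""
    D_l = d_stokes(radius_m, eta_solvent, temp_k)
    D_p = d_stokes(radius_m, eta_solution, temp_k)
    num = 2 * D_l + D_p + 2 * phi * (D_p - D_l)
    den = 2 * D_l + D_p - phi * (D_p - D_l)
    return D_l * num / den

def d_polymer_first_order(radius_m, eta_solvent, phi, temp_k=298.15):
    """First-order approximation, D = kT/(6 pi r eta_l) * (1 - 3 phi / 2)."""
    return d_stokes(radius_m, eta_solvent, temp_k) * (1 - 1.5 * phi)

# Assumed example: a 0.5 nm solute in a 5 vol% polymer solution whose
# macroscopic viscosity is 100x that of water.
r, eta_l, eta, phi = 0.5e-9, 8.9e-4, 8.9e-2, 0.05
print(f"Maxwell form:     {d_polymer_maxwell(r, eta_l, eta, phi):.2e} m^2/s")
print(f"First order form: {d_polymer_first_order(r, eta_l, phi):.2e} m^2/s")
```

At low polymer fractions the two forms agree to within a percent or so.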

We published in Science. That is, I published along with the two colleagues who tested the idea and proved the theory right, or at least useful. The reference is Yam, K., Anderson, D., Buxbaum, R. E., Science 240 (1988) p. 330 ff., “Diffusion of Small Solutes in Polymer-Containing Solutions.” This result is one of my proudest achievements.

R.E. Buxbaum, March 20, 2024

Relativity’s twin paradox explained, and why time is at right angles to space.

One of the most famous paradoxes of physics is explained wrong — always. It makes people feel good to think they understand it, but the explanation is wrong and confusing, and it drives young physicists in a wrong direction. The basic paradox is an outgrowth of the special relativity prediction that time moves slower if you move faster.

Thus, if you entered a spaceship and were to travel to a distant star at 99.5% the speed of light, turn around, and get back here 30 years later, you would have aged far less than 30 years. You and everyone else on the spaceship would have aged three years, 1/10 as much as someone on earth.

The paradox part, not that the above isn’t weird enough by itself, is that the person in the spaceship will imagine that he (or she) is standing still, and that everyone on earth is moving away at 99.5% the speed of light. Thus, the person on the spaceship should expect to find that the people on earth will age slower. That is, the person on the spaceship should return from his (or her) three year journey, expecting to find that the people on earth have only aged 0.3 years. Obviously, only one of these expectations can be right, but it’s not clear which (it’s the first one), nor is it clear why.

The wrong explanation appears in an early popular book, “Mr Tompkins in Wonderland,” by physicist George Gamow. The book was written in the early days of relativity, and involves a Mr Tompkins who falls asleep in a physics lecture. Mr. Tompkins dreams he’s riding on a train going near the speed of light, and finds things are shorter and time is going slower. He then asks the paradox question to the conductor, who admits he doesn’t quite know how it works (perhaps Gamow didn’t), but that “it has something to do with the brakeman.” That sounds like Gamow is saying the explanation has to do with deceleration at the turn around, or with general relativity, implying gravity could have a similarly large effect. It doesn’t work that way, and the effect of 1G gravity is small, but everyone seems content to explain the paradox this way. This is particularly unfortunate because those explaining it this way include physicists, clouding an already cloudy issue.

In the early days of physics, physicists tried to explain things with a little legitimate math for the lay audience. Gamow did this, as did Einstein, Planck, Feynman, and most others. I try to do this too. Nowadays, physicists have removed the math, and added gobbledygook. The one exception here is the cinematographers of Star Wars. They alone show the explanation correctly.

The explanation does not have to do with general relativity or the acceleration at the end of the journey (the brakeman). Instead of working through some acceleration or general relativity effect, the twin paradox works with simple, special relativity: all space contracts for the duration of the trip, and everything in it gets shorter. The person in this spaceship will see the distance to the star shrink by 90%. Traveling there thus takes 1/10th the time because the distance is 1/10th. There and back at 99.5% the speed of light takes just 3 years.

The equation for time contraction is: t’ = (x°/v) √(1-(v/c)²) = t° √(1-(v/c)²), where t’ is the time in the spaceship, v is the speed, x° is the distance traveled (as measured from earth), and c is the speed of light. For v/c = .995, we find that √(1-(v/c)²) is 0.1. We thus find that t’ = 0.1 t°. When dealing with the twin paradox, it’s better to say that x’ = 0.1 x°, where x’ is the distance to the star as seen from the spaceship. In either case, when the people on the spaceship accelerate, they see the distance in front of them shrink, as shown in Star Wars, below.
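A minimal Python check of these numbers (the 99.5% speed and 30 earth-years are the example values from above):

```python
import math

def contraction_factor(v_over_c):
    """sqrt(1 - (v/c)^2): the factor by which distances and on-board time shrink."""
    return math.sqrt(1.0 - v_over_c ** 2)

v_over_c = 0.995
earth_years = 30.0                 # round-trip time as measured on earth
f = contraction_factor(v_over_c)
print(f"Contraction factor: {f:.3f}")            # about 0.1
print(f"Years aged on the ship: {earth_years * f:.1f}")
```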

Star Wars. The millennium falcon jumps to light speed, and beyond.

That time was at right angles to space was a comment in one of Einstein’s popular articles and books; he wrote several, all with some minimal mathematics. Current popular science writing has no math, and a lot of politics, IMHO, and thus is not science.

He showed that time and space are at right angles by analogy from Pythagoras. Pythagoras showed that the distance d along a diagonal between two points separated by x and y at right angles is d = √(x² + y²). Another way of saying this is d² = x² + y². The relationship is similar for relativistic distances. To explain the twin paradox, we find that the square of the effective distance, x’² = x°² (1 – (v/c)²) = x°² – (x°v/c)². Here, x°² is the square of the original distance, and the subtracted term behaves like the square of an imaginary distance that is at right angles to it. This is the same structure as the relativistic interval, s² = x² – (ct)² = x² + (ict)²: time behaves as if it were a distance, with a scale factor of ic.

For some reason people today read books on science by non-scientist ‘explainers.’ These books have no math, and I guess they sell. Publishers think they are helping democratize science, perhaps. You are better off reading the original thinkers, IMHO.

Robert Buxbaum, July 16, 2023. In his autobiography, Einstein claimed to be a fan of scientist-philosopher Ernst Mach. Mach derived the speed of sound from a mathematical analysis of thermodynamics. Einstein followed, considering that it must be equally true to consider an empty box traveling in space to be one that carries its emptiness with it, as to assume that fresh emptiness comes in at one end and leaves by the other. If you set the two to be equal mathematically, you conclude that both time and space vary with velocity. Similar analysis will show that atoms are real, and that energy must travel in packets, quanta. Einstein also did fun work on the curvature of rivers, and was a fan of this sail ship design. Here is some more on the scientific method.

Dark matter: why our galaxy still has its arms

Our galaxy may have two arms, or perhaps four. It was thought to be four until 2008, when it was reduced to two. Then, in 2015, it was expanded again to four arms, but recent research suggests it’s only two again. About 70% of galaxies have arms, easily counted from the outside, as in the picture below. Apparently it’s hard to get a good view from the inside.

Four armed, spiral galaxy, NGC 2008. There is a debate over whether our galaxy looks like this, or if there are only two arms. Over 70% of all galaxies are spiral galaxies. 

Logically speaking, we should not expect a galaxy to have arms at all. For a galaxy to have arms, it must rotate as a unit. Otherwise, even if the galaxy had arms when it formed, it would lose them by the time the outer rim rotated even once. As it happens we know the speed of rotation and age of galaxies; they’ve all rotated 10 to 50 times since they formed.

For stable rotation, the rotational acceleration must match the force of gravity, and this should decrease with distance from the massive center. Thus, we’d expect that the stars should circle much faster the closer they are to the center of the galaxy. We see that Mercury circles the sun much faster than we do, and that we circle much faster than the outer planets. If stars circled the galactic core this way, any arm structure would be long gone. We see that the galactic arms are stable, and to explain it, we’ve proposed the existence of lots of unseen, dark matter. This matter has to have some peculiar properties, behaving as a light gas that doesn’t spin with the rest of the galaxy, or absorb light, or reflect. Some years ago, I came to believe that there was only one gas distribution that fit, and challenged folks to figure out the distribution.

The mass of the particles that made up this gas had to be very light, about 10⁻⁷ eV, about 2 x 10¹² times lighter than an electron, and very slippery. Some researchers had posited large, dark rocks, but I preferred to imagine a particle called the axion, and I expected it would be found soon. The particle mass had to be about this or the gas would shrink down to the center of the galaxy, or start to spin, or fill the universe. In any of these cases, galaxies would not be stable. The problem is, we’ve been looking for years, and we have not seen any particle like this. What’s more, continued work on the structure of matter suggests that no such particle should exist. At this point, galactic stability is a bigger mystery than it was 40 years ago.

So how does one explain galactic stability if there is no axion? One thought, from Mordechai Milgrom, is that gravity does not work as we thought. This is an annoying explanation: it involves a complex revision of General Relativity, a beautiful theory that seems to be generally valid. Another, more recent explanation is that the dark matter is regular matter that somehow became an entangled, super fluid despite the low density and relatively warm temperatures of interstellar space. This has been proposed by Justin Khoury, here. Either theory would explain the slipperiness, and the fact that the gas does not interact with light, but the details don’t quite work. For one, I’d still think that the entangled particle mass would have to be quite light; maybe a neutrino would fit (entangled neutrinos?). Super fluids don’t usually exist at space temperatures and pressures, long distances (light years) should preclude entanglements, and neutrinos don’t seem to interact at all.

Sabine Hossenfelder suggests a combination of modified gravity and superfluidity. Some version of this might fit observations better, but it doubles the amount of new physics required. Sabine does a good science video blog, BTW, with humor and less math. She doesn’t believe in free will or religion, or entropy. By her, the Big Bang was caused by a mystery particle called an inflaton that creates mass and energy from nothing. She claims that the worst thing you can do in terms of resource depletion is have children, and seems to believe religious education is child abuse. Some of her views I agree with; with many, I do not. I think entropy is fundamental, and think people are good. Also, I see no advantage in saying “In the beginning an inflaton created the heavens and the earth,” but there you go. It’s not like I know what dark matter is any better than she does.

There are some 200 billion galaxies, generally with 100 billion stars each. Our galaxy is about 150,000 light years across, 1.5 x 10¹⁸ km. It appears to behave, more or less, as a solid disk, having rotated about 15 full turns since its formation, 10 billion years ago. The speed at the edge is thus about π x 1.5 x 10¹⁸ km / 3 x 10¹⁶ s = 160 km/s. That’s not relativistic, but is 16 times the speed of our fastest rockets. The vast majority of the mass of our galaxy would have to be dark matter, with relatively little between galaxies. Go figure.

Robert Buxbaum, May 24, 2023. I’m a chemical engineer, PhD, but studied some physics and philosophy.

Rotating sail ships and why your curve ball doesn’t curve.

The Flettner-sail ship, Barbara, 1926.

Sailing ships are wonderfully economical and non-polluting. They have unlimited range because they use virtually no fuel, but they tend to be slow, about 5-12 knots, about half as fast as Diesel-powered ships, and they can be stranded for weeks if the wind dies. Classic sailing ships also require a lot of manpower: many skilled sailors to adjust the sails. What’s wanted is an easily manned, economical, hybrid ship: one that’s powered by Diesel when the wind is light, and by a simple sail system when the wind blows. Anton Flettner invented an easily manned sail and built two ships with it. The Barbara above used a 530 hp Diesel and got additional thrust, about an additional 500 hp worth, from three rotating, cylindrical sails. The rotating sails produced thrust via the same Magnus force that makes a curve ball curve. Barbara went at 9 knots without the wind, or about 12.5 knots when the wind blew. Einstein thought it one of the most brilliant ideas he’d seen.

Force diagram of Flettner rotor (Lele & Rao, 2017)

The source of the force can be understood with help of the figure at left and the graph below. When a simple cylinder sits in the wind, with no spin, α = 0, the wind force is essentially drag: 1/2 the wind speed squared, times the cross-sectional area of the cylinder, D x h, times the density of air. Multiply this by a drag coefficient, CD, that is about 1 for a non-spinning cylinder, and about 2 for a fast spinning cylinder: FD = CD D h ρ v²/2.

A spinning cylinder has lift force too: FL = CL D h ρ v²/2.
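As a rough numerical illustration, here is a Python sketch of the two force equations. The rotor diameter, height, and wind speed are assumed example values, not the Barbara’s actual dimensions; CL = 6 and CD = 2 are the fast-spin values quoted in the text.

```python
RHO_AIR = 1.2   # kg/m^3, density of air

def rotor_forces(diameter_m, height_m, wind_ms, c_l, c_d, rho=RHO_AIR):
    """Lift and drag on a spinning cylinder: F = C * D * h * rho * v^2 / 2."""
    q = 0.5 * rho * wind_ms ** 2 * diameter_m * height_m
    return c_l * q, c_d * q   # (lift, drag) in newtons

# Assumed example rotor: 3 m diameter, 15 m tall, in an 8 m/s (~15 knot) wind,
# spinning fast (alpha > 2.1) so C_L ~ 6 and C_D ~ 2.
lift, drag = rotor_forces(3.0, 15.0, 8.0, c_l=6.0, c_d=2.0)
print(f"Lift: {lift/1000:.1f} kN, Drag: {drag/1000:.1f} kN")
```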

Numerical lift coefficients versus time (seconds) for different ratios of surface speed to wind speed, α. (Mittal & Kumar 2003), Journal of Fluid Mechanics.

As graphed in the figure at right, CL is effectively zero, with sustained vibrations, at zero spin, α = 0. Vibrations are useless for propulsion, and can be damaging to the sail, though they are helpful in baseball pitching, producing the erratic flight of knuckle balls. If you spin a cylindrical mast at more than α = 2.1 the vibrations disappear, and you get significant lift, CL = 6. At this rotation speed the fast surface moves in the direction of the wind at 2.1 times the wind speed, that is, 1.1 times the wind speed faster than the wind itself; the other side of the rotor moves opposite the wind. The coefficient of lift, CL = 6, is more than twice that found with a typical, triangular, non-rotating sail. Rotation increases the drag too, but not as much. The lift is about 4 times the drag, far better than in a typical sail. Another plus is that the ship can be propelled forward or backward – just reverse the spin direction. This is very good for close-in sailing.

The sail lift, and the lift to drag ratio, increase with rotation speed, reaching values of 10 to 18 at α values of 3 to 4. Flettner considered α = 3.5 optimal. At this α you get far more thrust than with a normal sail, and you can go faster than the wind, and far closer to the wind than with any normal sail. You don’t want α values above 4.2 because you start seeing vibrations again. Also, more rotation power is needed (rotation power goes as ω²); unless the wind is strong, you might as well use a normal propeller.

The driving force is always at right angles to the perceived wind, the “apparent wind,” and the apparent wind moves towards the front as the ship speed increases. Controlling the rotation speed is somewhat difficult but important. Flettner sails were no longer used by the 1930s because fuel became cheaper and control was difficult. Normal sails weren’t being used either, for the same reasons.

In the early 1980s, there was a return to the romantic. The famous underwater explorer, Jacques Cousteau, revived a version of the Flettner sail for his exploratory ship, the Alcyone. He used aluminum sails, and an electric motor for rotation. He claimed that the ship drew more than half of its power from the wind, and claimed that, because of computer control, it could sail with no crew. This claim was likely bragging, but he bragged a lot. Even with today’s computer systems, people are needed to steer and manage things in case something goes wrong. The energy savings were impressive, though, enough so that some have begun to put Flettner sails on cargo ships as a retrofit. This is an ideal use since cargo ships go about as fast as a typical wind, 10-20 knots. It’s reported that Flettner-powered cargo ships get about 20% of their propulsion from wind power, not an insignificant amount.

And this gets us to the reason your curve ball does not curve: it’s likely you’re not spinning it fast enough. To get a good curve, you want the ball to spin at α = 3, or about 1.5 times the rate you’d get by rolling the ball off your fingers. You have to snap your wrist hard to get it to spin this fast. As another approach, you can aim for α = 0, a knuckle ball, achieved with zero rotation. At α = 0, the ball will oscillate. It’s hard to do, but your pitch will be nearly impossible to hit or catch. Good luck.

Robert Buxbaum, March 22, 2023. There are also Flettner airplane designs where horizontal, cylindrical “wings” rotate to provide high lift with short wings and a relatively low power draw. So far, these planes are less efficient and slower than a normal helicopter. The idea could bear more development work, IMHO. Einstein had an eye for good ideas.

Of covalent bonds and muon catalyzed cold fusion.

A hydrogen molecule consists of two protons held together by a covalent bond. One way to think of such bonds is to imagine that only one electron is directly involved, as shown below. The bonding electron spends only 1/7 of its time between the protons, making the bond; the other 6/7 of the time the electron shields the two protons by 3/7 e each, reducing the effective charge of each proton to 4/7 e+.

We see that the two shielded protons will repel each other with a force of FR = Ke (16/49) e²/r², where e is the charge of an electron or proton, r is the distance between the protons (r = 0.74 Å = 0.74×10⁻¹⁰ m), and Ke is Coulomb’s electrical constant, Ke ≈ 8.988×10⁹ N⋅m²⋅C⁻². The attractive force is calculated similarly, as each proton attracts the central electron by FA = – Ke (4/49) e²/(r/2)². The forces are seen to be in balance; the net force is zero.
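Here is a minimal Python check of that force balance, using the 1/7-electron picture and the numbers above.

```python
K_E = 8.988e9          # N m^2 / C^2, Coulomb's constant
E_CHARGE = 1.602e-19   # C, elementary charge
R_HH = 0.74e-10        # m, H-H bond length

# Repulsion of the two shielded protons (each 4/7 e) at distance r:
F_repulsion = K_E * (16.0 / 49.0) * E_CHARGE ** 2 / R_HH ** 2
# Attraction of each 4/7 e proton to the 1/7 e of electron at the midpoint (r/2):
F_attraction = K_E * (4.0 / 49.0) * E_CHARGE ** 2 / (R_HH / 2.0) ** 2

print(f"Repulsion:  {F_repulsion:.2e} N")
print(f"Attraction: {F_attraction:.2e} N")   # the two come out equal
```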

It is because of quantum mechanics that the bond is the length that it is. If the atoms were to move closer than r = 0.74 Å, the central electron would be confined to less space and would get more energy, causing it to spend less time between the two protons. With less of an electron between them, FR would be greater than FA and the protons would repel. If the atoms moved further apart than 0.74 Å, a greater fraction of the electron would move to the center, FA would increase, and the atoms would attract. This is a fairly pleasant way to understand why the hydrogen side of every hydrogen covalent bond is the same length. It’s also a nice introduction to muon-catalyzed cold fusion.

Most fusion takes place only at high temperatures, at 100 million °C in a tokamak fusion reactor, or at about 15 million °C in the high pressure interior of the sun. Muon catalyzed fusion creates the equivalent of a much higher pressure, so that fusion occurs at room temperature. The trick to muon catalyzed fusion is to replace one of the electrons with a muon, an unstable, heavy electron-like particle discovered in 1936. The muon, designated µ-, behaves just like an electron but it has about 207 times the mass. As a result, when it replaces an electron in hydrogen, it forms a covalent bond that is about 1/207th the length of a normal bond. This is the equivalent of extreme pressure. At this closer distance, hydrogen nuclei fuse even at room temperature.

In normal hydrogen, the nuclei are just protons. When they fuse, one of them becomes a neutron. You get a deuteron (a proton-neutron pair), plus an anti-electron, and 1.44 MeV of energy after the anti-electron has annihilated (for more on antimatter see here). The muon is released most of the time, and can catalyze many more fusion reactions. See figure at right.

While 1.44MeV per reaction is a lot by ordinary standards — roughly one million times more energy than is released per atom when hydrogen is burnt — it’s very little compared to the energy it takes to make a muon. Making a muon takes a minimum of 1000 MeV, and more typically 4000 MeV using current technology. You need to get a lot more energy per muon if this process is to be useful.

You get quite a lot more energy when a muon catalyzes deuterium-deuterium fusion. With these reactions, you get 3.3 to 4 MeV worth of energy per fusion, and the muon will be ejected with enough force to support about eight D-D fusions before it decays or sticks to a helium atom. That’s better than before, but still not enough to justify the cost of making the muon.

The next reactions to consider are D-T fusion and Li-D fusion. Tritium is an even heavier isotope of hydrogen. It undergoes muon catalyzed fusion with deuterium via the reaction, D + T –> ⁴He + n + 17.6 MeV. Because of the higher energy of the reaction, the muons are even less likely to stick to a helium atom, and you get about 100 fusions per muon. 100 x 17.6 MeV = 1.76 GeV, barely break-even for the high energy cost to make the muon, but there is no reason to stop there. You can use the high energy fusion neutrons to catalyze LiD fusion. For example, 2 LiD + n –> 3 ⁴He + T + D + n, producing 19.9 MeV and a tritium atom.
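A quick Python tally of that energy bookkeeping, using the numbers quoted above (the 100 fusions per muon and the 1000 to 4000 MeV muon cost are the estimates from the text):

```python
DT_MEV = 17.6           # D + T -> 4He + n
LID_BONUS_MEV = 19.9    # follow-on LiD reaction driven by the fusion neutron
FUSIONS_PER_MUON = 100  # before the muon decays or sticks to helium
MUON_COST_MEV = (1000, 4000)   # minimum and typical cost to make one muon

dt_only = FUSIONS_PER_MUON * DT_MEV
with_lid = FUSIONS_PER_MUON * (DT_MEV + LID_BONUS_MEV)
print(f"D-T alone:      {dt_only / 1000:.2f} GeV per muon")
print(f"With LiD bonus: {with_lid / 1000:.2f} GeV per muon")
print(f"Muon cost:      {MUON_COST_MEV[0] / 1000:.1f} to {MUON_COST_MEV[1] / 1000:.1f} GeV")
```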

With this additional 19.9 MeV per DT fusion, the system can start to produce usable energy for sale. It is also important that tritium is made in the process. You need tritium for the fusion reactions, and there are not many other supplies. The spare neutron is interesting too. It can be used to make additional tritium or for other purposes. It’s a direction I’d like to explore further. I worked on making tritium for my PhD, and in my opinion, this sort of hybrid operation is the most attractive route to clean nuclear fusion power.

Robert Buxbaum, September 8, 2022. For my appraisal of hot fusion, see here.

Ladder on table, safe till it’s not.

via GIFER

Two years ago I wrote about how to climb a ladder safely, without fear. This fellow has no fear, and has done the opposite: he’s chosen to put a ladder on a table to reach higher than he could otherwise. That table is on another table. At first things go pretty well, but somewhere about ten steps up the ladder there is disaster. A ladder that held steadily slips to the edge of the table, and then the table tips over. It’s just physics: the higher he climbs on the ladder, the more the horizontal force. Eventually, the force is enough to move the table. He could have got up safely if he had moved the tables closer to the wall, or if he had moved the ladder bottom further to the right on the top table. Either change would have decreased the slip force, and thus the tendency for the table to tip.

Perhaps the following analysis will help. Let’s assume that the ladder is 12.5′ long and sits against a ten foot ledge, with a base 7.5′ away from the wall. Now let’s consider the torque and force balance at the bottom of the ladder. Torque is measured in foot-pounds, that is, by the rotational product of force and distance. As the fellow climbs the ladder, his weight moves further to the right. This would increase the tendency for the ladder to rotate, but any rotation tendency is matched by force from the ledge. The force of the ledge gets higher the further up the ladder he goes. Let’s assume the ladder weighs 60 lbs and the fellow weighs 240 pounds. When the fellow has gone up ten feet, he has moved over to the right by 7.5 feet, as the diagram shows. The weight of the man and the ladder produces a rotation torque on the bottom of 60 x 3.75 + 240 x 7.5 = 2025 foot-pounds. This torque is countered by a torque of 2025 foot-pounds provided by the ledge. Since the ladder is 12.5 feet long, the force of the ledge is 2025/12.5 = 162 pounds, normal to the ladder. The effect of this 162 lbs of normal force is to push the ladder to the left by 129.6 lbs and to lift the ladder by 97.2 lbs. It is this 129.6 pounds of sideways push force that will cause the ladder to slip.

The slip resistance at the bottom of the ladder equals the net weight times a coefficient of friction. The net weight here equals 60 + 240 - 97.2 = 202.8 lbs. Now let’s assume that the coefficient of friction is 0.5. We’d find that the maximum friction force, the force available to stop a slip, is 202.8 x 0.5 = 101.4 lbs. This is less than the horizontal push needed to prevent rotation, 129.6 lbs. The net result, depending on how you look at things, is either that the ladder rotates to the right, or that the ladder slips to the left. It keeps slipping till, somewhere near the end of the table, the table tips over.
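Here is the same force balance as a small Python sketch, using the numbers above (240 lb climber, 60 lb ladder, 12.5 ft ladder on a 3-4-5 geometry, friction coefficient 0.5):

```python
LADDER_FT = 12.5
BASE_FT = 7.5            # horizontal run; ledge height is 10 ft (a 3-4-5 triangle x 2.5)
HEIGHT_FT = 10.0
W_LADDER = 60.0          # lbs, acting at the ladder's midpoint
W_MAN = 240.0            # lbs
MU = 0.5                 # friction coefficient at the ladder's feet

# Climber at the top: his horizontal lever arm is the full 7.5 ft.
torque = W_LADDER * BASE_FT / 2 + W_MAN * BASE_FT          # ft-lbs about the feet
ledge_normal = torque / LADDER_FT                           # lbs, normal to the ladder
push = ledge_normal * HEIGHT_FT / LADDER_FT                 # horizontal component
lift = ledge_normal * BASE_FT / LADDER_FT                   # vertical component

friction_max = MU * (W_LADDER + W_MAN - lift)
print(f"Ledge reaction: {ledge_normal:.0f} lbs")
print(f"Horizontal push: {push:.1f} lbs vs. available friction {friction_max:.1f} lbs")
print("Slips!" if push > friction_max else "Holds.")
```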

Force balance of man on ladder. Based on this, I will go through the slippage math in gruesome detail.

I occasionally do this sort of detailed physics; you might as well understand what you see in enough detail to be able to calculate what will happen. One take home from here is that it pays to have a ladder with rubber feet (my ladders do). That adds to the coefficient of friction at the bottom.

Robert Buxbaum, November 6, 2019.

Why the earth is magnetic with the north pole heading south.

The magnetic north pole, also known as true north, has begun moving south. It had been moving toward the north pole through the last century. It moved out of Canadian waters about 15 years ago, heading toward Russia. This year it passed as close to the North Pole as it is likely to, and began heading south (do svidaniya, old friend). So this might be a good time to ask “why is it moving?” or better yet, “Why does it exist at all?” Sorry to say, the Wikipedia page is little help here; what little they say looks very wrong. So I thought I’d do my thing and write an essay.

Migration of the magnetic (true) north pole over the last century; it’s at about 86°N and just passed the North Pole.

Your first thought on the cause of the earth’s magnetic field might involve ferromagnetism: the earth’s core is largely iron and nickel, two metals that form permanent magnets. Although the earth’s core is very hot, far above the “Curie Temperature” below which permanent magnets form, you might imagine that some small degree of magnetizability remains. You’d be sort of right here and sort of wrong; to see why, let’s take a diversion into the Curie Temperature (Pierre Curie in this case) before presenting a better explanation.

The reason there is no magnetism above the Curie temperature is similar to the reason that you can’t have a plague outbreak or an atom bomb if R-naught is less than one. Imagine a magnet inside a pot of iron. The surrounding iron will dissipate some of the field because magnets are dipoles and the iron occupies space. Fixed dipole effects dissipate with a distance relation of r⁻⁴; induced dipoles with a relation of r⁻⁶. The iron surrounding the magnet will also be magnetized to an extent that augments the original, but the degree of magnetization decreases with temperature. Above some critical temperature, the surroundings dissipate more than they add, and the effect is that the original magnetic effect will die out if the original magnet is removed. It’s the same way that plagues die out if enough people are immunized, as discussed earlier.

The earth rotates, and the earth’s surface is negatively charged. There is thus some room for internal currents.

It seems that the earth’s magnetic field is electromagnetic; that is, it’s caused by a current of some sort. According to Wikipedia, the magnetic field of the earth is caused by electric currents in the molten iron and nickel of the earth’s core. While there is likely a current within the core, I suspect that the effect is small. Wikipedia provides no mechanism for this current, but the obvious one is based on the negative charge of the earth’s surface. If the charge on the surface is non-uniform, it is possible that the outer part of the earth’s core could become positively charged, rather the way a capacitor charges. You’d expect some internal circulation of the liquid metal of the core, as shown above – it’s similar to the induced flow of tornadoes – and that flow could induce a magnetic field. But internal circulation of the metallic core does not seem to be a likely mechanism for the earth’s field. One problem: the magnitude of the field created this way would be smaller than the one caused by rotation of the negatively charged surface of the earth, and it would be in the opposite direction. Besides, it is not clear that the interior of the planet has any charge at all: the normal expectation is for charge to distribute fairly uniformly on a spherical surface.

The TV series NOVA presents a yet more unlikely mechanism: that motion of the liquid metal interior against the magnetic field of the earth increases the magnetic field. The motion of a metal in a magnetic field does indeed produce a field, but sorry to say, it’s in the opposing direction, something that should be obvious from conservation of energy.

The true cause of the earth’s magnetic field, in my opinion, is the negative charge of the earth and its rotation. There is a near-equal and opposite charge in the atmosphere, and its rotation should produce a near-opposite magnetic field, but there appears to be enough difference to provide for the field we see. The cause of the charge on the planet might be solar wind or the ionization of cosmic rays. And I notice that the average speed of parts of the atmosphere – the jet stream – exceeds that of the surface, but it seems clear to me that the magnetic field is not due to rotation of the jet stream because, if that were the cause, magnetic north would be magnetic south. (When positive charges rotate from west to east, as in the jet stream, the magnetic field created is a north magnetic pole at the North Pole. But in fact the north magnetic pole is the south pole of a magnet – that’s why the N-side of compasses are attracted to it.) So the cause must be negative charge rotation. Or so it seems to me. Supporting this view, I note that the magnetic pole sometimes flips, north for south, but this only follows a slow decline in magnetic strength, and it never points toward a spot on the equator. I’m going to speculate that the flip occurs when the net charge reverses, though it could also come when the speed or charge of the jet stream picks up. I note that the magnetic field of the earth varies through the 24 hour day, below.

The earth’s magnetic strength varies regularly through the day.

Although magnetic north is now heading south, I don’t expect it to flip any time soon. The magnetic strength has been decreasing by about 6.3% per century. If it continues at that rate (unlikely) it will be some 1600 years to the flip, and I expect that the decrease will probably slow. It would probably take a massive change in climate to change the charge or speed of the jet stream enough to reverse the magnetic poles. Interestingly though, the period of magnetic strength variation is 41,000 years, the same period as the changes in the planet’s tilt. And the 41,000 year cycle of changes in the planet’s tilt, as I’ve described, is related to ice ages.

Now for a little math. Assume there is 1 mol of excess electrons on a large sphere of the earth. That’s 96,500 Coulombs of electrons, and the effective current caused by the earth’s rotation equals 96,500/(24 x 3600) = 1.1 Amp = i. The magnetic field strength, H = i N µ/L, where H is the magnetizing field in oersteds, N is the number of turns, in this case 1, and µ is the magnetizability. The magnetizability of air is 0.0125 meter-oersteds per ampere-turn, and that of a system with an iron core is about 200 times more, 2.5 meter-oersteds per ampere-turn. L is a characteristic length of the electromagnet, and I’ll say that’s 10,000 km or 10⁷ meters. As a net result, I calculate a magnetic strength of 2.75×10⁻⁷ Tesla, or .00275 Gauss. The magnetic field of the earth is about 0.3 gauss, suggesting that about 100 mols of excess charge are involved in the earth’s field, assuming that my explanation and my math are correct.

At this point, I should mention that Venus has about 1/100 the magnetic field of the earth despite having a molten metallic core like the earth. Its rotation time is 243 days. Jupiter, Saturn and Uranus have greater magnetic fields despite having no metallic cores – certainly no molten metallic cores (some theorize a core of solid, metallic hydrogen). The rotation times of all of these are faster than the earth’s.

Robert E. Buxbaum, February 3, 2019. I have two pet peeves here. One is that none of the popular science articles on the earth’s magnetic field bother to show math to back their claims. This is a growing problem in the literature; it robs science of science, and makes it into a political-correctness exercise where you are made to appreciate the political fashion of the writer. The other peeve, related to the above, concerns the units game: it’s thoroughly confusing, and politically ego-driven. The gauss is the cgs unit of magnetic flux density; this unit is called G in Europe but B in the US or England. In the US we like to use the tesla, T, as an SI-mks unit. One tesla equals 10⁴ gauss. The oersted, H, is the unit of magnetizing field. The unit is H and not O because the English call this unit the henry, because Henry did important work in magnetism. One ampere-turn per meter is equal to 4π x 10⁻³ oersted, a number I approximated to 0.0125 above. But the above only refers to flux density; what about flux itself? The unit for magnetic flux is the weber, Wb, in SI, or the maxwell, Mx, in cgs. Of course, magnetic flux is nothing more than the integral of flux density over an area, so why not describe flux in ampere-meters or gauss-acres? It’s because Ampère was French and Gauss was German, I think.

Of God and gauge blocks

Most scientists are religious on some level. There’s clear evidence for a big bang, and thus for a God-of-Creation. But the creation event is so distant and huge that no personal God is implied. I’d like to suggest that the God of creation is close by, and as a beginning to this, I’d like to discuss Johansson gauge blocks, the standard tool used to measure machine parts accurately.


A pair of Johansson blocks supporting 100 kg in a 1917 demonstration. This is 32 times atmospheric pressure, about 470 psi.

Let’s say you’re making a complicated piece of commercial machinery, a car engine for example. Generally you’ll need to make many parts in several different shops using several different machines. If you want to be sure the parts will fit together, a representative number of each part must be checked for dimensional accuracy in several places. An accuracy requirement of 0.01 mm is not uncommon. How would you do this? The way it’s been done, at least since the days of Henry Ford, is to mount the parts to a flat surface and use a feeler gauge to compare the heights of the parts to the height of stacks of precisely manufactured gauge blocks. Called Johansson gauge blocks after the inventor and original manufacturer, Carl Edvard Johansson, the blocks are typically made of steel, 1.35″ wide by .35″ thick (0.47 in² surface), and of various heights. Different height blocks can be stacked to produce any desired height in multiples of 0.01 mm. To give accuracy to the measurements, the blocks must be manufactured flat to within 1/10,000 of a millimeter. This is 0.1 µm, or about 1/5 the wavelength of visible light. At this degree of flatness an amazing thing is seen to happen: Jo blocks stick together when stacked, with a force of 100 kg (220 pounds) or more, an effect called “wringing.” See the picture at right, from a 1917 advertising demonstration.

This 220 lbs of force measured in the picture suggests an invisible pressure of at least 470 psi holding the blocks together (220 lbs/0.47 in² = 470 psi). This is 32 times the pressure of the atmosphere. It is independent of air, or temperature, or the metal used to make the blocks. Since pressure times volume equals energy, this pressure can be thought of as a vacuum energy density arising “out of the nothingness.” We find that each cubic inch of space between the blocks contains 470 inch-lbs of energy. This is the equivalent of 0.9 kWh per cubic meter, energy you can not see, but you can feel. That is a lot of energy in the nothingness, but the energy (and the pressure) get larger the flatter you make the surfaces, or the closer together you bring them. This is an odd observation, since energies do not generally get more concentrated the smaller you divide the space. Clean metal surfaces that are flat enough will weld together without the need for heat, a trick we have used in the manufacture of purifiers.

A standard way to think of quantum scattering of an atom (solid line) is that it is scattered by invisible bits of light, virtual photons (the wavy lines). In this view, the force that pushes two blocks together comes from a slight deficiency in the number of virtual photons in the small space between the blocks.

The empty space between two flat surfaces also has the power to scatter light or atoms that pass between them. This scattering is seen even in vacuum at zero degrees Kelvin, absolute zero. Somehow the light or atoms pick up energy, “out of the nothingness,” and shoot up or down. It’s a “quantum effect,” and after a while physics students forget how odd it is for energy to come out of nothing. Not only do students stop wondering about where the energy comes from, they stop wondering why it is that the scattering energy gets bigger the closer you bring the surfaces. With Johansson block sticking and with quantum scattering, the energy density gets higher the closer the surfaces, and this is accepted as normal, just Heisenberg’s uncertainty in two contexts. You can calculate the force from the zero-point energy of vacuum, but you must add a relativistic wrinkle: the distance between two surfaces shrinks the faster you move, according to relativity, but the measurable force should not. A calculation of the force that includes both quantum mechanics and relativity was derived by Hendrik Casimir:

Energy per volume = P = F/A = πhc/480L⁴,

where P is pressure, F is force, A is area, h is Planck’s quantum constant, 6.63×10⁻³⁴ Js, c is the speed of light, 3×10⁸ m/s, and L is the distance between the plates, m. Experiments have been found to match the above prediction to within 2%, experimental error, but the energy density this implies is huge, especially when L is small; the equation must apply down to Planck lengths, 1.6×10⁻³⁵ m. Even at the size of an atom, 1×10⁻¹⁰ m, the energy density is 3.6 GWhr/m³, 3.6 gigawatt-hours per cubic meter. 3.6 gigawatt-hours is one hour’s energy output of three to four large nuclear plants. We see only a tiny portion of the Planck-length vacuum energy when we stick Johansson gauge blocks together, but the rest is there, near invisible, in every bit of empty space. The implication of this enormous energy remains baffling in any analysis. I see it as an indication that God is everywhere, exceedingly powerful, filling the universe, and holding everything together. Take a look, and come to your own conclusions.
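For those who want to check the numbers, here is a short Python version of Casimir’s formula; the 1 Å plate separation is the atom-size example used above.

```python
import math

H_PLANCK = 6.626e-34   # J s
C_LIGHT = 2.998e8      # m/s

def casimir_energy_density(gap_m):
    """Casimir pressure / energy density, P = pi h c / (480 L^4), in J/m^3."""
    return math.pi * H_PLANCK * C_LIGHT / (480.0 * gap_m ** 4)

L = 1e-10                                  # about one atomic diameter
u = casimir_energy_density(L)
print(f"{u:.2e} J/m^3 = {u / 3.6e12:.1f} GWh/m^3")   # about 3.6 GWh per cubic meter
```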

As a homiletic, it seems to me that God likes friendship, but does not desire shamans, folks who stand between man and Him. Why do I say that? The huge force-energy between plates brings them together, but scatters anything that goes between. And now you know something about nothing.

Robert Buxbaum, November 7, 2018. Physics references: H. B. G. Casimir and D. Polder. The Influence of Retardation on the London-van der Waals Forces. Phys. Rev. 73, 360 (1948).
S. Lamoreaux, Phys. Rev. Lett. 78, 5 (1997).

Isotopic effects in hydrogen diffusion in metals

For most people, there is a fundamental difference between solids and fluids. Solids have long-term permanence with no apparent diffusion; liquids diffuse and lack permanence. Put a penny on top of a dime, and 20 years later the two coins are as distinct as ever. Put a layer of colored water on top of plain water, and within a few minutes you’ll see the coloring diffuse into the plain water, or (if you think of it the other way) you’ll see the plain water diffuse into the colored.

Now consider the transport of hydrogen in metals, the technology behind REB Research’s metallic  membranes and getters. The metals are clearly solid, keeping their shapes and properties for centuries. Still, hydrogen flows into and through the metals at a rate of a light breeze, about 40 cm/minute. Another way of saying this is we transfer 30 to 50 cc/min of hydrogen through each cm2 of membrane at 200 psi and 400°C; divide the volume by the area, and you’ll see that the hydrogen really moves through the metal at a nice clip. It’s like a normal filter, but it’s 100% selective to hydrogen. No other gas goes through.

To explain why hydrogen passes through the solid metal membrane this way, we have to start talking about quantum behavior. It was the quantum behavior of hydrogen that first interested me in hydrogen, some 42 years ago. I used it to explain why water was wet. Below, you will find something a bit more mathematical, a quantum explanation of hydrogen motion in metals. At REB we recently put these ideas towards building a membrane system for concentration of heavy hydrogen isotopes. If you like what follows, you might want to look up my thesis. This is from my 3rd appendix.

Although no one quite understands why nature should work this way, it seems that nature works by quantum mechanics (and entropy). The basic idea of quantum mechanics, you will know, is that confined atoms can only occupy specific, quantized energy levels, as shown below. The energy difference between the lowest energy state and the next level is typically high. Thus, most of the hydrogen atoms in a metal will occupy only the lower state, the so-called zero-point-energy state.

A hydrogen atom, shown occupying an interstitial position between metal atoms (above), is also occupying quantum states (below). The lowest state, ZPE is above the bottom of the well. Higher energy states are degenerate: they appear in pairs. The rate of diffusive motion is related to ∆E* and this degeneracy.

The fraction occupying a higher energy state is calculated as c*/c = exp (-∆E*/RT), where ∆E* is the molar energy difference between the higher energy state and the ground state, R is the gas constant and T is temperature. When thinking about diffusion it is worthwhile to note that this energy is likely temperature dependent. Thus ∆E* = ∆G* = ∆H* – T∆S*, where the asterisk indicates the key energy level where diffusion takes place – the activated state. If ∆E* is mostly elastic strain energy, we can assume that ∆S* is related to the temperature dependence of the elastic strain.

Thus,

∆S* = -∆E*/Y dY/dT

where Y is the Young’s modulus of elasticity of the metal. For hydrogen diffusion in metals, I find that ∆S* is typically small, while it is often significant for the diffusion of other atoms: carbon, nitrogen, oxygen, sulfur…

The rate of diffusion is now calculated assuming a three-dimensional drunkard’s walk where the step lengths are constant, = a. Rayleigh showed that, for a simple cubic lattice, this becomes:

D = a²/6τ

where a is the distance between interstitial sites and τ is the average time between crossings. For hydrogen in a BCC metal like niobium or iron, D = a²/9τ; for an FCC metal, like palladium or copper, it’s D = a²/3τ. A nice way to think about τ is to note that it is only at high energy that a hydrogen atom can cross from one interstitial site to another, and, as we noted, most hydrogen atoms will be at lower energies. Thus,

τ = (1/ω) c/c* = (1/ω) exp (∆E*/RT)

where ω is the approach frequency, the rate at which the atom attempts to move from the left interstitial position to the right one. When I was doing my PhD (and still likely today) the standard approach of physics writers was to use a classical formulation for this frequency based on the average thermal speed of the interstitial. Thus, ω = (1/2a)√(kT/m), and

τ = (2a/√(kT/m)) exp (∆E*/RT).

In the above, m is the mass of the hydrogen atom, 1.66 x 10⁻²⁴ g for protium, and twice that for deuterium, etc., a is the distance between interstitial sites, measured in cm, T is temperature, Kelvin, and k is the Boltzmann constant, 1.38 x 10⁻¹⁶ erg/K. This formulation correctly predicts that heavier isotopes will diffuse slower than light isotopes, but it predicts incorrectly that, at all temperatures, the diffusivity of deuterium is 1/√2 that of protium, and that the diffusivity of tritium is 1/√3 that of protium. It also suggests that the activation energy of diffusion will not depend on isotope mass. I noticed that neither of these predictions is borne out by experiment, and came to wonder if it would not be more correct to assume that ω represents the motion of the lattice, breathing, and not the motion of a highly activated hydrogen atom breaking through an immobile lattice. This thought is borne out by experimental diffusion data, where you describe hydrogen diffusion as D = D° exp (-∆E*/RT).
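Here is a minimal Python sketch of that classical prediction. The lattice spacing, temperature, and activation energy are assumed example values, chosen only to show the fixed 1/√2 isotope ratio that the data below contradict.

```python
import math

K_ERG = 1.38e-16          # erg/K, Boltzmann constant
M_PROTIUM = 1.66e-24      # g

def d_classical(a_cm, temp_k, e_act_erg, mass_g, jump_factor=9.0):
    """Classical estimate: omega = sqrt(kT/m)/(2a), tau = exp(E*/kT)/omega,
    D = a^2/(9 tau) for a BCC lattice. e_act_erg is the per-atom activation energy."""
    omega = math.sqrt(K_ERG * temp_k / mass_g) / (2.0 * a_cm)
    tau = math.exp(e_act_erg / (K_ERG * temp_k)) / omega
    return a_cm ** 2 / (jump_factor * tau)

# Assumed example: a = 1.2 Angstrom jump, 500 K, E* = 0.1 eV = 1.6e-13 erg per atom
a, T, E = 1.2e-8, 500.0, 1.6e-13
D_H = d_classical(a, T, E, M_PROTIUM)
D_D = d_classical(a, T, E, 2 * M_PROTIUM)
print(f"D_H = {D_H:.2e} cm^2/s, D_D = {D_D:.2e} cm^2/s, ratio = {D_D / D_H:.3f}")
# The ratio is always 1/sqrt(2), independent of temperature, in this classical picture.
```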

(Table A3-2 from my thesis: measured values of D° and ∆E* for hydrogen isotopes diffusing in several metals.)

You’ll notice from the above that D° hardly changes with isotope mass, in complete contradiction to the above classical model. Also note that ∆E* is very isotope dependent. This too is in contradiction to the classical formulation above. Further, to the extent that D° does change with isotope mass, D° gets larger for heavier mass hydrogen isotopes. I assume that small difference is the entropy effect of ∆E* mentioned above. There is no simple square-root of mass behavior in contrast to most of the books we had in grad school.

As for why ∆E* varies with isotope mass, I found that I could get a decent explanation of my observations if I assumed that the isotope dependence arose from the zero point energy. Heavier isotopes of hydrogen will have lower zero-point energies, and thus ∆E* will be higher for heavier isotopes of hydrogen. This seems like a far better approach than the semi-classical one, where ∆E* is isotope independent.

I will now go a bit further than I did in my PhD thesis. I’ll make the general assumption that the energy well is sinusoidal, or rather that it consists of two parabolas, one opposite the other. The ZPE is easily calculated for parabolic energy surfaces (harmonic oscillators). I find that ZPE = h/aπ √(∆E/m), where m is the mass of the particular hydrogen atom, h is Planck’s constant, 6.63 x 10⁻²⁷ erg-sec, and ∆E is ∆E* + ZPE, the activation energy plus the zero point energy. For my PhD thesis, I didn’t think to calculate ZPE and thus the isotope effect on the activation energy. I now see how I could have done it relatively easily, e.g. by trial and error, and a quick estimate shows it would have worked nicely. Instead, for my PhD, Appendix 3, I only looked at D°, and found that the values of D° were consistent with the idea that ω is about 0.55 times the Debye frequency, ω ≈ .55 ωD. The slight tendency for D° to be larger for heavier isotopes was explained by the temperature dependence of the metal’s elasticity.
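As an illustration of that trial-and-error approach, here is a Python sketch using the parabolic-well ZPE formula above. The jump distance and the protium activation energy are assumed example values, not fitted data; the point is only that the heavier isotopes come out with higher ∆E*.

```python
import math

H_ERG_S = 6.63e-27     # erg s, Planck's constant
M_H = 1.66e-24         # g, protium

def zpe(a_cm, well_depth_erg, mass_g):
    """Zero-point energy of the parabolic well: ZPE = (h / a pi) sqrt(dE / m)."""
    return (H_ERG_S / (a_cm * math.pi)) * math.sqrt(well_depth_erg / mass_g)

def well_depth_from_protium(a_cm, e_act_protium_erg, iters=50):
    """Trial and error: find the well depth dE satisfying dE = E*_H + ZPE_H(dE)."""
    dE = e_act_protium_erg
    for _ in range(iters):
        dE = e_act_protium_erg + zpe(a_cm, dE, M_H)
    return dE

a = 1.2e-8                   # cm, assumed jump distance
E_star_H = 1.6e-13           # erg (~0.1 eV), assumed protium activation energy
dE = well_depth_from_protium(a, E_star_H)
for name, mass in (("H", M_H), ("D", 2 * M_H), ("T", 3 * M_H)):
    print(f"{name}: E* = {(dE - zpe(a, dE, mass)) / 1.602e-12:.3f} eV")
```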

Two more comments based on the diagram I presented above. First, notice that there is a middle, split level of energies. This was an explanation I’d put forward for quantum-tunneling atomic migration that some people had seen at energies below the activation energy. I don’t know if this observation was a reality or an optical illusion, but I present it in the energy picture so that you’ll have the beginnings of a description. The other thing I’d like to address is a question you may have had: why is there no zero-point energy effect at the activated energy state? Such a zero-point energy difference would cancel the one at the ground state and leave you with no isotope effect on activation energy. The simple answer is that all the data showing the isotope effect on activation energy, table A3-2, was for BCC metals. BCC metals have an activation energy barrier, but it is not caused by physical squeezing between atoms, as for an FCC metal, but by a lack of electrons. In a BCC metal there is no physical squeezing at the activated state, so you’d expect to have no ZPE there. This is not the case for FCC metals, like palladium, copper, or most stainless steels. For these metals there is a much smaller, or non-existent, isotope effect on ∆E*.

Robert Buxbaum, June 21, 2018. I should probably try to answer the original question about solids and fluids, too: why solids appear solid, and fluids not. My answer has to do with quantum mechanics: energies are quantized, and there is always a ∆E* for motion. Solid materials are those where the time between diffusive jumps, exp(∆E*/RT)/ω, is measured in centuries. Thus, our ability to understand the world is based on the least understandable bit of physics.