Category Archives: Science: Physics, Astronomy, etc.

Fusion advance: LLNL’s small H-bomb, 1.5 lb TNT didn’t destroy the lab.

There was a major advance in nuclear fusion this month at the National Ignition Facility of Lawrence Livermore National Laboratory (LLNL), but the press could not quite figure out what it was. They claimed ignition, and it was not. They claimed that it opened the door to limitless power. It did not. Some heat-energy was produced, but not much: 2.5 MJ was reported. Translated to the English system, that’s 600 kCal, about as much heat as in a “Big Mac”. That’s far less energy than went into the lasers that set the reaction off. The importance wasn’t the amount of energy produced, in my opinion; it’s that the folks at LLNL fired off a small hydrogen bomb, in house, and survived the explosion. 600 kCal is about the explosive power of 1.5 lb of TNT.

Many laser beams converge on a droplet of deuterium-tritium, setting off the explosion of a small fraction of the fuel. The explosion had about the power of 1.5 lb (0.7 kg) of TNT. Drawing from IEEE Spectrum.

The process, as reported in the Financial Times, involved a BB-sized droplet of holmium-enclosed deuterium and tritium. The folks at LLNL fast-cooked this droplet using 192 lasers (see figure), 2.1 MJ total output, converging on one spot simultaneously. As I understand it, 4.6 MJ came out, 2.5 MJ more than went in. The impressive part is that the delicate lasers survived the event. By comparison, the blast that brought down Pan Am flight 103 over Lockerbie took only 2-3 ounces of explosive, about 70 g. The folks at LLNL say they can do this once per day, something I find impressive.

The New York Times seemed to think this was ignition. It was not. Given the size of a BB and the density of liquid deuterium-tritium, the weight of the drop would be about 0.022 g. This is not much, but if it were all fused, it would release 12 GJ, the equivalent of about 3 tons of TNT. That the energy released was only 2.5 MJ suggests that only about 0.02% of the droplet was fused. It is possible, though unlikely, that the folks at LLNL could have ignited the entire droplet. If they had, the damage from 3 tons of TNT equivalent would certainly have wrecked the facility. And that’s part of the problem: to make practical energy, you need to ignite the whole droplet and do it every second or so. That’s to say, you have to burn the equivalent of about 5,000 Big Macs per second.
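The arithmetic above is easy to check. A minimal sketch, using only the figures quoted in the text (2.5 MJ released, 12 GJ for a full burn) plus the standard TNT equivalence of 4.184 MJ/kg and 4184 J/kcal:

```python
# Back-of-envelope check of the fusion-shot numbers quoted above.
TNT_J_PER_KG = 4.184e6   # standard TNT equivalence, J per kg
KCAL = 4184.0            # J per kcal

released_J = 2.5e6       # energy reported out of the shot
full_burn_J = 12e9       # quoted yield if the whole droplet fused

print(released_J / KCAL)                    # ~600 kcal, one "Big Mac"
print(released_J / TNT_J_PER_KG * 2.205)    # ~1.3 lb TNT equivalent
print(released_J / full_burn_J)             # ~0.0002, i.e. ~0.02% of fuel burned
print(full_burn_J / TNT_J_PER_KG / 1000)    # ~2.9 metric tons TNT for a full burn
print(full_burn_J / released_J)             # ~4800 "Big Macs" per full droplet
```

The numbers come out consistent with the text: roughly 600 kcal, about 1.3 lb of TNT, a burn fraction of 0.02%, and about 3 tons of TNT equivalent for a full burn.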

You also need the droplets to be a lot cheaper than they are. Today, these holmium capsules cost about $100,000 each. We will need to make them at one per second, for a cost of around $1 each, for this to make any sort of sense. Not to say that the experiments are useless. This is a great way to test H-bomb designs without destroying the environment. But it’s not a practical energy-production method. Even ignoring the energy input to the lasers, it is impossible to deal with energy when it comes in the form of huge explosions. In a sense, we got unlimited power. Unfortunately, it’s in the form of H-bombs.

Robert Buxbaum, January 5, 2023

Almost no one over 50 has normal blood pressure now.

Four years ago, when the average lifespan of American men was 3.1 years longer than today, the American Heart Association and the American College of Cardiology dropped the standard for normal, acceptable blood pressure for those 50 and older from 140/90 to 120/80. The new standard of normal was for everyone, regardless of age or gender, despite the fact that virtually no one over 50 now reaches it. Normal is now quite uncommon.

By the new definition, virtually everyone over 50 is now diagnosed with high blood pressure, or hypertension. Almost all require one or two medications — no more baby aspirin. Though the evidence for aspirin’s benefit is strong, it doesn’t lower blood pressure. AHA guidance is to lower a patient’s blood pressure to <140/90 mmHg, or at least treat him/her with 2–3 antihypertensive medications.

Average systolic blood pressures for long-lived populations of men and women without drugs.

The graph shows the average blood pressures, without drugs, in a 2008 study of the longest-lived, Scandinavian populations. These were the source of the previous targets: the natural pressures for the healthiest populations at the time, based on the study of 1304 men (50-79 years old) and 1246 women (38-79 years old) observed for up to 12 years. In this healthy population, the average untreated systolic pressure rises until age 70, reaching 154 for men, and over 160 for women. By the new standards, these individuals would be considered highly unhealthy, though they live a lot longer than we do. The most common blood-pressure drug prescribed in the US today is atenolol, a beta blocker. See my essay on atenolol. It’s good at lowering blood pressure, but does not decrease mortality.

The plot at left shows the relationship between systolic blood pressure and death. There is a relationship, but it is not clear that the one is the cause of the other, especially for individuals with systolic pressure below 160. Those with pressures of 170 and above have significantly higher mortality, and perhaps should take atenolol, but even here it might be that high cholesterol, or something else, is causing both the high blood pressure and the elevated death risk.

The death-risk difference between 160 and 100 mmHg is small, and likely insignificant. The minimum at 110 is rather suspect too. I suspect it’s an artifact of a plot that ignores age: only young people have this low a number, and young people have fewer heart attacks. Artificially lowering a person’s blood pressure, even to this level, does not make him young [2][3], and brings some problems. Among the older-old, 85 and above, a systolic blood pressure of 180 mmHg is associated with resilience to physical and cognitive decline, though it is also associated with a higher death rate.

The AHA used a smoothed version of the life risk graph above to justify their new standards, see below. In this version, any blood pressure looks like it’s bad. The ideal systolic pressure seems to be 100 or below. This is vastly too low a target, especially for a 60 year old. Based on the original graph, I would think that anything below 155 is OK.

Smoothed chart of deaths per 1,000 vs. blood pressure. According to this chart, any blood pressure is bad; there is no optimum.

Light exercise seems to do some good, especially for the overweight. Walking helps, as does biking, and aerobics. Weight loss without exercise seems to hurt health. Aspirin is known to do some good, with minimal cost and side effects. Ablation seems to help those with atrial fibrillation. Eliquis (a common blood thinner) seems to have value too, for those with atrial fibrillation — not necessarily for those without. Low sodium helps some. So does coffee, which reduces gout, dementia and Parkinson’s, and so does alcohol: some 2-3 drinks per day (red wine?) is found to improve heart health.

I suspect that the Scandinavians live longer because they drink mildly, exercise mildly, have good healthcare (but not too good), and have a low crime rate. They seem to have dodged the COVID problem too, even Sweden, which did next to nothing. It’s postulated that the US problem is over-medication, including heart medication.

Robert Buxbaum, January 4, 2023. The low US lifespan is startling. Despite spending more than any other developed country on health treatments, we have horribly low lifespans, and they are falling fast. A black man in the US has the same expected lifespan as one in Rwanda. Causes include heart attacks and strokes, accidents, suicide, drugs, and disease. Opioids too, especially since the COVID lockdowns.

The main building block of Alzheimer’s research was faked. Now what?

Much of health research is a search for simple, bio-molecular causes for our medical problems. These can result in pill solutions. Diseases tend to be more complex, but Alzheimer’s seemed to work that way, until this summer, when it turned out that the data supporting the simple theory was faked. Alzheimer’s is a devastating cognitive disease that is accompanied by a degenerating brain, with sticky beta-amyloid plaques and tangles. About 16 years ago, this report, published in Nature, seemed to show that a beta-amyloid, Aβ*56, caused the plaques and caused cognitive decline independent of any other Alzheimer’s indicators.

The visual difference between an Alzheimer’s brain and a normal brain is that the former has shrunk. Maybe fat is relevant: a fat body leads to a fat brain, and less AZ, maybe?

We were on the way to a cure, or so it seemed. Several studies by this group backed the initial results, and much of Alzheimer’s research was directed into an effort to fill in the story, and to find ways to reduce the amount and bonding of this amyloid and others like it. Several other groups claimed they could not find the amyloid at all, or could not show that amyloids caused the symptoms described. But most negative results went unpublished. The theory was so satisfying, and the evidence from a few groups so strong, that the NIH poured billions into this approach, over $1B in this year alone. The FDA approved aducanumab, a drug from Biogen, on the assumption that it should work, even though it showed little to no benefit, and had some deadly side effects. Other firms followed, asking for approval of related anti-amyloid drugs that should work.

When news of the fraud came out, detected by Matthew Schrag and a few lone curmudgeons, drug-company stock prices plummeted. It now appears that the original work was made up, presented to journals and to the NIH using photoshopped images. For the group that did the fake work, it may mean jail time; for most other groups, the claim is that their work is still relevant. Doctors still prescribe the medications, as they have nothing better to offer (aducanumab therapy costs $50,000 per year). Maybe it’s time to start looking at the alternative approaches and theories sidelined over the last 16 years.

Some alternative theories posit that another molecule is responsible, particularly tau, associated with the tangles. Another sidelined theory is that amyloids are good. For example, that it’s the loss of soluble amyloids that causes Alzheimer’s. Alternately, that inflammation is the root cause, and that the amyloid plaques and tangles are a response to the inflammation, a bandage, perhaps. These theories could explain why the anti-amyloid drugs so often resulted in patient death.

It could be that high BMI protects from dementia. Either that, or the diseases that cause weight loss cause dementia. It’s debated here.

It’s also possible that the inability of nerve cells to dispose of waste is the cause of AZ. In healthy people, waste is removed through acidic enzymes within lysosomes. Patients with decreased acid activity have a buildup of waste that includes amyloids. Perhaps the cure is to restore the acid enzymes.

My favorite theory is based on statistical data showing that fat people are less likely to develop Alzheimer’s. This might lead to a junk-food cure. The fitness industry is very much against this theory; it’s debated here. They tend to support the inflammation model, claiming that diseases cause Alzheimer’s and cause patients to lose weight first. Could be. I note that Henry Kissinger is the only active politician of my era, the early 70s, still alive and writing intelligently.

Robert Buxbaum, November 17-19, 2022. I hope that Matthew Schrag comes out OK, by the way. As the saying goes, “No good deed goes unpunished.”

Eliquis, over-prescribed but better than Coumadin.

Eliquis (apixaban) is a blood thinner shown to prevent stroke with fewer side effects than warfarin (Coumadin). Aspirin does the same, but not as effectively for people over 75. My problem with Eliquis is that it’s over-prescribed. The studies favoring it over aspirin found benefits for those over 75, and for those with A-Fib. And even in this cohort, the advantage over aspirin is small or non-existent, because Eliquis has a far more serious side effect: hemorrhage, or internal bleeding.

Statistically, the AVERROES study (Apixaban Versus Acetylsalicylic Acid to Prevent Stroke in AF Patients Who Have Failed or Are Unsuitable for Vitamin K Antagonist Treatment) found that apixaban is substantially better than aspirin at preventing stroke in atrial fibrillation patients, but worse at preventing heart attack.

Taking 5 mg of Eliquis twice a day reduces the risk of stroke in people with A-Fib by more than 50%, and reduces the rate of heart attack by about 15%. By comparison, taking 1/2 tablet of aspirin, 178 mg, reduces the risk of stroke by 17% and of heart attack by 42%. The benefits were higher in the elderly, those over 75, and non-existent in those with A-Fib under 75, see here, and the figure. Despite this, doctors prescribe Eliquis over aspirin, even to those without A-Fib and those under 75. I suspect the reason is advertising by the drug companies, as I’ve claimed earlier with atenolol.

The major deadly side effect is hemorrhage: brain hemorrhage and GI (stomach) hemorrhage. Here apixaban is far worse than aspirin (but better than warfarin). The net result is that, in the AVERROES randomized double-blind study, there was no difference in all-cause mortality between apixaban and aspirin for those with A-Fib who were under 75, see here. Or here.

To reduce your chance of GI hemorrhage with Eliquis, it is a very good idea to take a proton-pump inhibitor like pantoprazole. If you have A-Fib, the combination of Eliquis and pantoprazole seems better than aspirin alone, even for those under 75. If you have no A-Fib and are under 75, I see no benefit to Eliquis. Especially if you find you have headaches, stomach aches, back pain, or other signs of internal bleeding, you might switch to aspirin or choose a reduced dose.

A Japanese study found that half the normal dose of Eliquis was approximately as effective as the full dose, 5 mg twice a day. I was prescribed the full dose, twice a day, though I’m under 70 and have had no A-Fib since my ablation.

Life expectancy has dropped in the US to undeveloped world levels. Biden blames COVID and racism. I think it’s too much drugs, and too few opportunities.

I’m struck by the fact that US life expectancy is uncommonly low: lower than in most developed countries, and lower too than in many semi-developed countries. Our life expectancy is decreasing while other countries are not seeing the same. It dropped by about 3 years over the last 2 years, as shown. I wonder why the US has suffered more than other countries, and suspect we are over-prescribed. Too much of a good thing typically isn’t good.

Robert Buxbaum, September 16, 2022. As a side issue, low-dose aspirin may forestall Alzheimer’s and other dementias. See a current article here. Also another study here.

Of covalent bonds and muon catalyzed cold fusion.

A hydrogen molecule consists of two protons held together by a covalent bond. One way to think of such bonds is to imagine that only one electron is directly involved, as shown below. The bonding electron spends only 1/7 of its time between the protons, making the bond; the other 6/7 of the time, the electron shields the two protons, by 3/7 e each, reducing the effective charge of each proton to 4/7 e+.

We see that the two shielded protons will repel each other with the force FR = Ke (16/49)e²/r², where e is the charge of an electron or proton, r is the distance between the protons (r = 0.74 Å = 0.74×10⁻¹⁰ m), and Ke is Coulomb’s electrical constant, Ke ≈ 8.988×10⁹ N·m²·C⁻². The attractive force is calculated similarly, as each proton attracts the central electron by FA = –Ke (4/49)e²/(r/2)². The forces are seen to be in balance; the net force is zero.
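The claimed balance is easy to verify numerically. A short sketch of the model above: each proton carries an effective charge of (4/7)e and repels the other at distance r, while each attracts the (1/7)e bonding electron sitting at r/2:

```python
# Numerical check that FR = FA in the simple bonding picture above.
Ke = 8.988e9     # Coulomb constant, N*m^2/C^2
e = 1.602e-19    # elementary charge, C
r = 0.74e-10     # H-H bond length, m

# Repulsion: two (4/7)e effective charges at distance r -> (16/49) Ke e^2/r^2
F_repulsion = Ke * (4/7)**2 * e**2 / r**2

# Attraction: a (4/7)e proton and the (1/7)e electron at r/2 -> (4/49) Ke e^2/(r/2)^2
F_attraction = Ke * (4/7) * (1/7) * e**2 / (r / 2)**2

print(F_repulsion, F_attraction)   # the two forces come out equal
```

The factor of 4 from the halved distance turns (4/49) into (16/49), so the two forces match exactly, as the text states.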

It is because of quantum mechanics that the bond is the length it is. If the atoms were to move closer than r = 0.74 Å, the central electron would be confined to less space, would gain energy, and would spend less time between the two protons. With less of an electron between them, FR would be greater than FA, and the protons would repel. If the atoms moved further apart than 0.74 Å, a greater fraction of the electron would move to the center, FA would increase, and the atoms would attract. This is a fairly pleasant way to understand why the hydrogen side of all hydrogen covalent bonds is the same length. It’s also a nice introduction to muon-catalyzed cold fusion.

Most fusion takes place only at high temperatures: at 100 million °C in a tokamak fusion reactor, or at about 15 million °C in the high-pressure interior of the sun. Muon-catalyzed fusion creates the equivalent of a much higher pressure, so that fusion occurs at room temperature. The trick is to replace one of the electrons with a muon, an unstable, heavy electron-like particle discovered in 1936. The muon, designated µ⁻, behaves just like an electron, but it has about 207 times the mass. As a result, when it replaces an electron in hydrogen, it forms a covalent bond that is about 1/207th the length of a normal bond. This is the equivalent of extreme pressure. At this closer distance, hydrogen nuclei fuse even at room temperature.
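The 1/207 scaling follows because the size of a Bohr-like orbit varies inversely with the orbiting particle’s mass. A quick sketch of the resulting muonic bond length, using the article’s approximate mass ratio of 207:

```python
# Muonic-hydrogen bond length from the inverse-mass scaling described above.
bond_H2_m = 0.74e-10        # normal H2 covalent bond length, m
mass_ratio = 207            # approximate muon-to-electron mass ratio

bond_muonic_m = bond_H2_m / mass_ratio
print(bond_muonic_m)        # ~3.6e-13 m: the nuclei sit ~200x closer together
```

At roughly 3.6×10⁻¹³ m, the nuclei are close enough that quantum tunneling through the Coulomb barrier becomes likely, which is why fusion proceeds at room temperature.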

In normal hydrogen, the nuclei are just protons. When they fuse, one of them becomes a neutron. You get a deuteron (a proton-neutron pair), plus an anti-electron (positron), and 1.44 MeV of energy after the anti-electron has annihilated (for more on antimatter, see here). The muon is released most of the time, and can catalyze many more fusion reactions. See the figure at right.

While 1.44MeV per reaction is a lot by ordinary standards — roughly one million times more energy than is released per atom when hydrogen is burnt — it’s very little compared to the energy it takes to make a muon. Making a muon takes a minimum of 1000 MeV, and more typically 4000 MeV using current technology. You need to get a lot more energy per muon if this process is to be useful.

You get quite a lot more energy when a muon catalyzes deuterium-deuterium (D-D) fusion. With these reactions, you get 3.3 to 4 MeV of energy per fusion, and the muon will be ejected with enough force to support about eight D-D fusions before it decays or sticks to a helium atom. That’s better than before, but still not enough to justify the cost of making the muon.

The next reactions to consider are D-T fusion and Li-D fusion. Tritium is an even heavier isotope of hydrogen. It undergoes muon-catalyzed fusion with deuterium via the reaction D + T → ⁴He + n + 17.6 MeV. Because of the higher energy of this reaction, the muons are even less likely to stick to a helium atom, and you get about 100 fusions per muon. 100 × 17.6 MeV = 1.76 GeV, barely break-even against the high energy cost of making the muon, but there is no reason to stop there. You can use the high-energy fusion neutrons to catalyze LiD fusion, for example 2 LiD + n → 3 ⁴He + T + D + n, producing 19.9 MeV and a tritium atom.
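The energy bookkeeping above can be laid out in a few lines. All figures are the ones quoted in the text (100 D-T fusions per muon, 17.6 MeV per D-T fusion, 19.9 MeV per follow-on LiD reaction, and 1000–4000 MeV to make a muon):

```python
# Per-muon energy accounting for the muon-catalyzed cycle described above.
muon_cost_min_MeV = 1000    # minimum quoted energy to make a muon
muon_cost_typ_MeV = 4000    # more typical cost with current technology

fusions_per_muon = 100      # D-T fusions before the muon decays or sticks
dt_MeV = 17.6               # D + T -> 4He + n
lid_MeV = 19.9              # follow-on LiD reaction driven by the D-T neutron

dt_only_MeV = fusions_per_muon * dt_MeV
with_lid_MeV = fusions_per_muon * (dt_MeV + lid_MeV)

print(dt_only_MeV)    # 1760 MeV: above the 1000 MeV floor, below the 4000 MeV typical cost
print(with_lid_MeV)   # 3750 MeV: roughly doubles the yield per muon
```

D-T alone barely clears the 1000 MeV minimum muon cost; adding the LiD step more than doubles the return per muon, which is the article’s case for the hybrid scheme.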

With this additional 19.9 MeV per DT fusion, the system can start to produce usable energy for sale. It is also important that tritium is made in the process. You need tritium for the fusion reactions, and there are not many other supplies. The spare neutron is interesting too. It can be used to make additional tritium or for other purposes. It’s a direction I’d like to explore further. I worked on making tritium for my PhD, and in my opinion, this sort of hybrid operation is the most attractive route to clean nuclear fusion power.

Robert Buxbaum, September 8, 2022. For my appraisal of hot fusion, see here.

Arctic Ice has shrunk 1.5% since ’99 and Gore’s inconvenient truth. Is this bad?

At the 1999 Copenhagen Climate Change Summit, Al Gore announced an inconvenient truth: “There is a 75 per cent chance that the entire north polar ice cap, during the summer months, could be completely ice-free within five to seven years.” It was a bold prediction, part of a campaign that got Mr. Gore a Nobel Prize and motivated the US to devote billions to stopping global warming. Supposedly, 98% of scientists agreed with Mr. Gore and his remedies. Prince Charles and Bill Gates too. Twenty-three years later, there is still arctic ice, 98.5% as much as in 1999. Two questions arise: 1. Is the ice loss bad? and 2. Why were those 98% of scientists so wrong?

Arctic sea ice extent 1999-2021
Arctic sea ice extent when Al Gore spoke (1999) and since. Not much change, nor clearly for the worse

The second question is far easier than the first: the 98% number was bogus, a lie, like many other climate lies that followed. It was effective at stopping argument, and could not be checked immediately. It bullied scientists who argued that global warming wasn’t bad, or wasn’t man-made, and it gave do-gooders the ability to label their opponents “liars” and “science deniers”. The claim of 98% was used to silence scientists with long, prominent careers. Deniers lost their funding and were no longer published. Other scientists learned to keep quiet. Twenty years later, when the arctic ice wasn’t gone and antarctic ice hit a record extent, the deniers’ careers were largely gone.

Scientists are not stupid, nor independently rich, for the most part. They are dependent on government funding, and their employers, the universities, are too. As a group, they (we) are incapable of stemming the tide of public opinion. This week Biden signed a nearly 1 trillion dollar bill to stop climate change. Every scientist with a chance to get the money will go for it. Whether or not they think a colder earth is good, they will claim it is in their proposals, and imply that their work can stop the natural chaos that is climate. They will ask for their share of the $1T to study the appropriate things: solar cells, corn-based power, and wind turbines. The proposals will not mention the huge costs in mining or land use. Scientists already know they cannot get funded for nuclear power, though it works and produces no CO2. Nor can scientists benefit by criticizing China, the largest source of CO2; that is seen as undermining the green effort at home. When we stop manufacturing at home, BTW, we end up buying the same materials manufactured in China, where they really generate lots of pollution. When asked about this, Biden’s climate chief said not to worry about it: we had to do our part, and Biden would speak to the Chinese. The result is the biggest buildup of coal-fired power plants in the world, with more coming on line.

The first question is at least as important: is less arctic ice bad? Or, asking more generally, is a warm earth bad? It’s an opinion question; it’s in no way science, impossible to answer definitively. Cold weather is bad for food production, and that’s bad for people in general. Most people prefer to live where it’s warm, I find. Supposedly polar bears prefer it cold, but I don’t know for sure. I’m not keen to go back to the climate of the ice ages, 10,000-100,000 years ago, when ice covered Canada and you could walk from France to England. I’m not convinced that life was better when the world was 1°C colder. The sea was lower in 1900, but had been higher in the year zero. Less arctic ice means easier shipping; for all I know, we may want a Northwest Passage. More food and easier shipping are the convenient truths about global warming.

Robert Buxbaum, August 19, 2022. If you believe any of what I said about Gore/Biden’s green energy, you may like a movie by Michael Moore, Planet of the Humans, see it here. The political greens are not saving energy or cooling the planet, and they know it. It’s a money maker.

Atenolol, not good for the heart, maybe good for the doctor.

Atenolol and related beta blockers have been found effective at reducing blood pressure and heart rate. Since high blood pressure is a warning sign for heart problems, doctors have been prescribing atenolol and related beta blockers for all sorts of heart problems, even problems not caused by high blood pressure. I was prescribed metoprolol and then atenolol for atrial fibrillation, A-Fib, beginning 2 years ago, even though I have low-to-moderate blood pressure. For someone like me, it might have been deadly. Even for patients with moderately high blood pressure (hypertension), studies suggest there is no heart benefit to atenolol and related ß-blockers, and only minimal stroke and renal benefit. As early as 1985 (37 years ago), the Medical Research Council trial found that “ß blockers are relatively ineffective for primary treatment of hypertensive outcomes.”

Table: relative risk (with 95% CI) by end point: all-cause mortality, cardiovascular mortality, MI, and stroke. From Carlberg B et al., Lancet 2004; 364:1684–1689.

There are lots of adverse side effects to atenolol, as listed at the end of this post. More recent studies (e.g., Carlberg et al., at right) continue to find no positive effects on the heart, but lots of negatives. A review in Lancet (2004) 364, 1684–9 was titled “Review: atenolol may be ineffective for reducing cardiovascular morbidity or all cause mortality in hypertension” (link here): “In patients with essential hypertension, atenolol is not better than placebo or no treatment for reducing cardiovascular morbidity or all cause mortality.” It further concluded that, “compared to other antihypertensive drugs, it [atenolol] may increase the risk of stroke or death.” I showed this and related studies to my doctor, and pointed out that I have average-to-low blood pressure, but he persisted in pushing this drug, something that seems common among medical men. My guess is that the advertising or doctor subsidies are spectacular. By contrast, aspirin has long been known to be effective for heart problems; my doctor said to go off aspirin.

The graph at right is from “Trial of Secondary Prevention with Atenolol after Transient Ischemic Attack or Nondisabling Ischemic Stroke”, published in Stroke, 24 (4), 1993 (see link here). The study involved 1473 at-risk patients, randomly prescribed atenolol or placebo. It found no outcome benefit from atenolol, and several negatives. After 3 years, in two equal-size randomized groups, there were 64 deaths among the atenolol group versus 58 among the placebo group; there were 11 fatal strokes with atenolol, versus 8 with placebo. There were somewhat fewer non-fatal strokes with atenolol, but the sum total of fatal and non-fatal strokes was equal: 81 in each group.

“Trial of Secondary Prevention with Atenolol after transient Ischemic Attack or Nondisabling Ischemic Stroke”, published in Stroke, 24 4 (1993).

Newer beta blockers seem marginally better, as in “Effect of nebivolol or atenolol vs. placebo on cardiovascular health in subjects with borderline blood pressure: the EVIDENCE study”: “Nebivolol (NEB) in contrast to atenolol (ATE) may have a beneficial effect on endothelial function…. there was no significant change in the ATE and PLAC groups.” My question: why not use one of these, or better yet, aspirin? Aspirin is shown to be beneficial, and relatively side-effect free. If you tolerate aspirin, and most people do, beneficial has to be better than maybe-beneficial.

Among atenolol’s ugly side effects, as listed by the Mayo Clinic, are: tiredness, sweating, shortness of breath, confusion, loss of sex drive, cold fingers and toes, diarrhea, and nausea. I had some of these. There was no increase in heart stability (no decrease in A-Fib). My heart rate went as low as 32 bpm at night. My doctor was unconcerned, but I was; I suspected the low heart rate put me at extreme risk. Eventually, the same doctor gave me ablation therapy, and that seemed to cure the A-Fib.

Following my ablation, I was told I could get off atenolol. I then discovered another negative effect of atenolol: you have to ease off it, or your heart will race. If you have A-Fib or modest hypertension, consider aspirin, Eliquis, ablation, or exercise. If you are prescribed atenolol for heart issues and don’t have symptoms of very high blood pressure, consider other options and/or changing doctors.

Robert Buxbaum, August 14, 2022

Three identical strangers, and the genetics of personality

Inheritability of traits is one of the greatest of insights; it’s so significant and apparent that one who does not accept it may safely be called a dullard. Personal variation exists, but most everyone accepts that if your parents are tall, you are likely to be tall; if they are dark, you too will likely be dark, etc. But when it comes to intelligence, or proclivities, or psychological leanings, it is more than a little impolite to acknowledge that genetics holds sway. This unwillingness is glaringly apparent in the voice-over narration of a popular movie about triplets who were raised separately, without knowing of one another. The movie is “Three Identical Strangers”, and it recounts their meeting, and their life afterwards.

Triplets, raised separately, came out near identical.

As one might expect, given my introduction, though raised separately, the three showed near-identical intelligence and near-identical proclivities: two of them picked the same out-of-the-way college. All of them liked the same sort of clothes and had the same taste in women. There were differences as well: one was more outgoing, one was depressed. But in many ways, they were identical. Meanwhile, the voice-over kept saying things like, “isn’t it a shame that we never saw any results on nature/nurture from this study.” Let me clear this up: genetics applies to psychology too. It’s not all genetics, but it is at least as influential as upbringing/nurture.

This movie also included pairs of identical twins raised separately; they too showed strong personality similarities. It’s a finding that is well replicated in broader studies involving siblings raised separately, and unrelated adoptees raised together. Blood, it seems, is stronger than nurture. See, for example, the research survey paper “Genetic Influence on Human Psychological Traits”, Journal of the American Psychological Society 13 (4), pp. 148-151 (2004). A table from that paper appears below. Genetics plays a fairly strong role in all personal traits, including intelligence, personality, self-control, mental illness, criminality, and political views (even mobile phone use). The role is age-dependent, though, so that intelligence (test-determined) is strongly environment-dependent in 5-year-olds, and almost entirely genetic in 25-50 year olds. One area that is not strongly genetic, it seems, is religion.

In a sense, the only thing surprising about this result is that anyone is surprised. Genetics is accepted as crucial for all things physical, so why not mental and social? As an example of the genetic influence on games and sports, consider the Jewish chess genius Laszlo Polgar: he set out to prove that anyone could be great at chess by training his three daughters. He got two grandmasters and an international master. By comparison, there are only 2 chess grandmasters in all of Finland. Then consider that there are five all-star baseball players named Alou, all from the same household, including the three brothers below. The household produced seven pro baseball players in all.

Most people are uncomfortable with such evidence of genetic proclivity. The movie has been called “deeply disturbing”, as any evidence of proclivity contradicts the promise of education: that all men are equal, blank slates at birth that can be fashioned into whatever you want through education. What we claim we want is leaders — lots of them. We expect that education will produce equal ratios of women and men, black, white, Hispanic, etc., and we expect to get there without testing for skills, especially without blind testing. I notice that the great universities have made testing optional, relying instead on interviews and related measures of leadership. I think this is nonsense, but then, I don’t run Harvard. As a professor, I’ve found that some kids have an aptitude and a burning interest, and others do not. You can tell a lot by testing, but the folks who run the universities disagree.

The All-Star Alou brothers share an outfield.

University heads claim that blind testing is racist. They find that some races score poorly on spatial sense, for example, or vocabulary, suggesting that the tests are to blame. There is some truth to these concerns, but I find that the lack of blind testing is more racist. Once the test is eliminated, academia finds a way to elevate its friends and the progeny of the powerful.

The variety of proclivities plays into an observation that you can be super intelligent in one area and super stupid in others. That was the humor of some TV shows: "The Big Bang Theory" and "Frasier." That was also the tragedy of Bobby Fischer. He was brilliant in chess (and the child of brilliant parents), but was a blithering idiot in all other areas of life. Finland should not feel bad about its lack of great chess players. The country has produced two phone companies, two strong operating systems, and the all-time top sniper.

Robert Buxbaum, May 15, 2022

Induction

Most of science is induction. Scientists measure correlation, for example, that fat people don't run as much as thin people. They then use logic to differentiate cause from effect: that is, do they not run because they are fat, or are they fat because they don't run, or is everything based on some third factor, like genetics? At every step this is inductive logic, but that's how science is done.

The lack of certainty shows up especially commonly in health work. Many of our cancer cures are found not to work when studied under slightly different conditions. Similarly with weight loss, or heart health. I'd mentioned previously that CPAPs reduce heart fibrillation, and heart fibrillation is correlated with shortened life, but then we find that CPAP use does not lengthen life, but seems to shorten it (see a reason here). That's the problem with induction; correlation isn't typically predictive in a useful way.
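The third-factor trap is easy to demonstrate with a toy simulation (hypothetical numbers throughout, not real health data): let a hidden factor drive both weight and exercise, with no direct causal link between them, and the two still come out strongly correlated.

```python
import random

random.seed(42)

# Hypothetical model: a hidden factor (call it "genetics") raises weight
# and lowers running. Weight and running never influence each other directly.
n = 10_000
genetics = [random.gauss(0, 1) for _ in range(n)]
weight   = [g + random.gauss(0, 1) for g in genetics]   # higher g -> heavier
running  = [-g + random.gauss(0, 1) for g in genetics]  # higher g -> runs less

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(weight, running)
print(r)  # strongly negative, despite no direct causal link
```

With these assumed variances the expected correlation is -0.5: induction alone cannot tell this case apart from "fat people choose not to run."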

Despite these problems, this is how science works. You look for patterns, use induction, find an explanation, and try to check your results. I have an essay on the scientific method, with quotes from Sherlock Holmes. His mysteries are a wonderful guide, and his inductive leaps are almost always true. Meanwhile, the inductive leaps of Watson and Lestrade are almost always false.

Robert Buxbaum, May 9, 2022

A more accurate permeation tester

There are two ASTM-approved methods for measuring the gas permeability of a material. The equipment is very similar, and REB Research makes equipment for either. In one of these methods (described in detail here) you measure the rate of pressure rise in a small volume. This method is ideal for high permeation rate materials. It's fast, reliable, and as a bonus, allows you to infer diffusivity and solubility as well, based on the permeation and breakthrough time.

Exploded view of the permeation cell.

For slower permeation materials, I’ve found you are better off with the other method: using a flow of sampling gas (helium typically, though argon can be used as well) and a gas-sampling gas chromatograph. We sell the cells for this, though not the gas chromatograph. For my own work, I use helium as the carrier gas and sampling gas, along with a GC with a 1 cc sampling loop (a coil of stainless steel tube), and an automatic, gas-operated valve, called a sampling valve. I use a VECO ionization detector since it provides the greatest sensitivity differentiating hydrogen from helium.

When doing an experiment, the permeate gas is put into the upper chamber. That's typically hydrogen for my experiments. The sampling gas (helium in my setup) is made to flow past the lower chamber at a fixed flow rate, 20 sccm or less. The sampling gas then flows to the sampling loop of the GC, and from there up the hood. Every 20 minutes or so, the sampling valve switches, sending the sampling gas directly out the hood. When the valve switches, the carrier gas (helium) now passes through the sampling loop on its way to the column. This sends the 1 cc of sample directly to the GC column as a single "injection". The GC column separates the various gases in the sample and determines the components and the concentration of each. From the helium flow rate, and the hydrogen concentration in it, I determine the permeation rate and, from that, the permeability of the material.

As an example, let’s assume that the sample gas flow is 20 sccm, as in the diagram above, and that the GC determines the H2 concentration to be 1 ppm. The permeation rate is thus 20 × 10⁻⁶ std cc/minute, or 3.33 × 10⁻⁷ std cc/s. The permeability is now calculated from the permeation area (12.56 cm² for the cells I make), from the material thickness, and from the upstream pressure. Typically, one measures the thickness in cm, and the pressure in cm of Hg, so that 1 atm is 76 cm Hg. The result is that permeability comes out in a unit called the barrer. Continuing the example above, if the upstream hydrogen is 15 psig, that’s 2 atmospheres absolute, or 152 cm Hg. Let’s say that the material is a polymer of thickness 0.3 cm; we thus conclude that the permeability is 0.524 × 10⁻¹⁰ scc·cm/(s·cm²·cmHg) = 0.524 barrer.
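The arithmetic above can be sketched as a short script, using the example numbers from the text (the flow, concentration, area, thickness, and pressure are the assumed values of this worked example, not universal constants):

```python
# Worked example from the text: permeability in barrer from GC data.
flow_sccm  = 20.0        # sampling-gas flow, std cc/min (assumed)
conc_ppm   = 1.0         # H2 concentration reported by the GC (assumed)
area_cm2   = 12.56       # permeation area of the cell
thick_cm   = 0.3         # membrane thickness (assumed polymer sample)
press_cmhg = 2 * 76.0    # 15 psig = 2 atm absolute = 152 cm Hg

# Permeation rate: flow times mole fraction, converted to std cc per second.
perm_rate = flow_sccm * conc_ppm * 1e-6 / 60.0   # ~3.33e-7 std cc/s

# Permeability = rate * thickness / (area * pressure); 1 barrer = 1e-10
# scc·cm/(s·cm²·cmHg).
permeability = perm_rate * thick_cm / (area_cm2 * press_cmhg)
barrer = permeability / 1e-10
print(round(barrer, 3))  # 0.524
```

Note that the permeation rate scales linearly with the GC concentration reading, so halving the sampling flow doubles the concentration and leaves the computed permeability unchanged.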

This method can measure permeabilities lower than the previous method, easily below 1 barrer, because the results are not fogged by small air leaks or degassing from the membrane material. Leaks of oxygen and nitrogen show up on the GC output as peaks that are distinct from the permeate peak (hydrogen, or whatever you're studying as a permeate gas). Another plus of this method is that you can measure the permeability of multiple gas species simultaneously, a useful feature when evaluating gas separation polymers. If this type of approach seems attractive, you can build a cell like this yourself, or buy one from us. Send us an email at reb@rebresearch.com, or give us a call at 248-545-0155.

Robert Buxbaum, April 27, 2022.