Arctic and Antarctic Ice Increases; Antarctic at record levels

Good news if you like ice. I’m happy to report a continued increase in the extent of both the Antarctic and Arctic ice sheets, the Antarctic in particular. Shown below is a plot of Antarctic ice extent along with the 1981-2010 average (the black line), the extent for 2012 (dotted line), and the extent for 2013 so far. This year (2013) it’s broken new records. Hooray for the ice.

Antarctic ice at record size in 2013, after breaking records in 2012

The Arctic ice has grown too, and though it’s not at record levels, the Arctic growth is more visually dramatic; see the photo below. It’s also more welcome, to polar bears at least. It’s not so welcome if you are a yachtsman, or a shipping magnate trying to use the Northwest Passage to get your products to market cheaply.

Arctic Ice August 2012-2013

The recent (October 2013) global warming report from NASA repeats the Arctic melt warnings from previous reports, but supports the assertion with an older satellite picture, the one from 2006. That was a year when the Arctic had even less ice than in 2012, but the date should be a warning. From the picture, you’d think it’s an easy sail through the Northwest Passage; some 50 yachts tried it this summer, and none got through, though some got halfway. It’s a good bet you can buy those ships cheap.

I should mention that only the Antarctic data is relevant to Al Gore’s 1996 prediction of a 20-foot rise in sea level by 2100. Floating ice, as in the Arctic, displaces its own mass of water, so it has the same effect on sea level as if it were melted; only land-based ice affects sea level. While there is some growth in land ice visible in the Arctic photos above (compare Greenland and Canada in the two photos), there is also a lot of glacier ice loss in Norway (upper left corners). The ocean levels are rising, but I don’t think this is the cause, and they’re not rising anywhere near as fast as Al Gore said: more like 1.7 mm/year, or 6.7 inches per century. I don’t know what the cause is, BTW. Perhaps I’ll post on this when I have a good speculation.
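For those who want to check the arithmetic, here is a quick sketch converting the observed rate to inches per century and comparing it to the 20-foot prediction:

```python
# Sanity-check the sea-level numbers: 1.7 mm/year, in inches per century,
# compared with a 20-foot (240 inch) rise by 2100.
MM_PER_INCH = 25.4

rise_mm_per_century = 1.7 * 100                       # 170 mm in 100 years
rise_inches_per_century = rise_mm_per_century / MM_PER_INCH
print(round(rise_inches_per_century, 1))              # → 6.7

gore_inches = 20 * 12                                 # the 20-foot prediction
print(round(gore_inches / rise_inches_per_century))   # → 36 times the observed rate
```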

Other good news: for the past 15 years global warming appears to have taken a break. And the ozone hole shrank in 2012 to near-record smallness. Yay, ozone. The most likely model for all this, in my opinion, is to view weather as chaotic and fractal, that is, self-similar. Calculus works on this, just not the calculus that’s typically taught in school. Whatever the cause, it’s good news, and welcome.

Robert E. Buxbaum, October 21, 2013. Here are some thoughts about how to do calculus right, and how to do science right; that is, look at the data first; don’t come in with a hypothesis.

Calculus is taught wrong, and is often wrong

The high point of most people’s college math is The Calculus. Typically this is a weeder course that separates the science-minded students from the rest. It determines which students are admitted to medical and engineering courses, and which will be directed to English or communications, majors from which they can hope to become lawyers, bankers, politicians, and spokespeople (the generally distrusted). While calculus is very useful to know, my sense is that it is taught poorly: it is built up on a year of unnecessary pre-calculus and on several shady assumptions that were not necessary for its development, and that are not generally true in the physical world. The material is presented in a way that confuses and turns off many of the top students, often the ones most attached to the reality of life.

The most untenable assumption in calculus teaching, in my opinion, is that the world involves continuous functions. That is, for example, that at every instant in time an object has one position only, and that its motion from point to point is continuous, defining a slow-changing quantity called velocity. That is, every x value defines one and only one y value, and there is never more than a small change in y in the limit of a small change in x. Does the world work this way? Some parts do, others do not. Commodity prices are not really defined except at the moment of sale, and can jump significantly between two sales a microsecond apart. Objects do not really have one position, in the quantum sense, at any time, but spread out, sometimes occupying several positions, and sometimes jumping between positions without ever occupying the space in between.

These are annoying facts, but calculus works just fine in a discontinuous world, and I believe that a discontinuous calculus is easier to teach and understand too. Consider the fundamental law of calculus. This states that, for a continuous function, the integral of the derivative of a function equals the function itself (nearly incomprehensible, no?). Now consider the same law taught for a discontinuous group of changes: the sum of the changes that take place over a period equals the total change. This statement is more general, since it applies to discrete and continuous functions, and it’s easier to teach. Any idiot can see that this is true. By contrast, it takes weeks of hard thinking to see that the integral of all the derivatives equals the function, and then it takes more years to be exposed to delta functions and realize that the statement is still true for discrete change. Why don’t we teach so that people will understand? Teach discrete first and then smooth as a special case where the discrete changes happen at a slow rate. Is calculus taught this way to make us look smart, or because we want this to be a weeder course?
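In code, the discrete version of the fundamental law is a one-liner, and any sequence of values will do; it needn’t be smooth or continuous:

```python
# Discrete "fundamental theorem": the sum of the changes equals the total change.
f = [3, 7, 2, 2, 10, -1]   # any sequence of values, however jumpy

changes = [f[i + 1] - f[i] for i in range(len(f) - 1)]  # the discrete "derivative"
total_change = sum(changes)                              # the discrete "integral"

print(changes)       # → [4, -5, 0, 8, -11]
print(total_change)  # → -4, which is exactly f[-1] - f[0]
```

The sum telescopes, so the result is always the last value minus the first, no limits required.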

Because most students are not introduced to discrete change, they are in a very poor position to understand, or model, activities that are discrete, like climate change or heart rate. Climate only makes sense year to year, as day-to-day behavior is mostly affected by seasons, weather, and day vs night. We really want to model the big picture and leave out the noise by considering each day or year as a whole, keeping track of the average temperature for noon on September 21, for example. Similarly with heart rate: the rate has no meaning if measured every microsecond; its only meaning is as a measure of the time between beats. If we taught calculus in terms of discrete functions, our students would be in a better place to deal with these things, and in a better place to deal with totally discontinuous behaviors, like chaos and fractals, important phenomena when dealing with economics, for example.
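The heart-rate point can be sketched in a few lines. The beat timestamps below are hypothetical; the point is that the rate is only defined from the intervals between beats, a discrete quantity:

```python
# Heart rate only makes sense as time between beats, not as an instantaneous value.
beat_times = [0.0, 0.8, 1.7, 2.5, 3.2, 4.0]   # hypothetical beat timestamps, seconds

intervals = [t2 - t1 for t1, t2 in zip(beat_times, beat_times[1:])]
avg_interval = sum(intervals) / len(intervals)  # average seconds per beat
bpm = 60.0 / avg_interval                       # beats per minute

print(round(bpm))  # → 75
```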

A fundamental truth of quantum mechanics is that there is no defined speed and position of an object at any given time. Students accept this, but (because they are used to continuous change) they come to wonder how it is that over time energy is conserved. It’s simple: quantum motion involves gross discrete changes in position that leave energy conserved by the end, but where an item goes from here to there without ever having to be in the middle. This helps explain the old joke about Heisenberg and his car.

Calculus-based physics is taught in terms of limits and the mean value theorem: that if x is the position of a thing at any time t, then the derivative of these positions, the velocity, will approach ∆x/∆t more and more closely as ∆x and ∆t become more tightly defined. When this is found to be untrue in the quantum sense, the remnant of the belief in it hinders students when they try to solve real-world problems. Normal physics is the limit of quantum physics because velocity is really a macroscopic ratio: a difference in position divided by a macroscopic difference in time. Because of this, it is obvious that the sum of these differences is the total distance traveled, even when summed over many simultaneous paths. Green’s theorem, a feature of electromagnetism, becomes similarly obvious: the sum effect of a field of changes is the total change. It’s only confusing if you try to take limits to find the exact values of these change rates in some infinitesimal space.
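The limit idea is easy to see numerically. For x(t) = t², the macroscopic ratio ∆x/∆t at t = 1 works out to exactly 2 + ∆t, so it tends to the derivative, 2, as the step shrinks:

```python
# Velocity as a macroscopic ratio: delta-x over delta-t for x(t) = t**2 at t = 1.
def x(t):
    return t * t

for dt in (1.0, 0.1, 0.01, 0.001):
    ratio = (x(1.0 + dt) - x(1.0)) / dt   # algebraically equals 2 + dt
    print(dt, ratio)                       # ratios approach 2, the derivative
```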

This idea is also helpful in finance, likely a chaotic and fractal system. Finance is not continuous: just because a stock price moved from $1 to $2 per share in one day does not mean that the price was ever $1.50 per share. While there is probably no small change in sales rate caused by a 1¢ change in sales price at any given time, this does not mean you won’t find it useful to consider the relation between the price and the sales of a product. Though the details may be untrue, the price-demand curve is still a very useful (if unjustified) abstraction.

This is not to say that there are not some real-world things that are functions and continuous, but believing that they are, just because calculus is useful in describing them, can blind you to some important insights, e.g. of phenomena where the butterfly effect predominates. That is, where an insignificant change in one place (a butterfly wing in China) seems to result in a major change elsewhere (e.g. a hurricane in New York). Recognizing that some conclusions follow from non-continuous math may help students recognize places where some parts of basic calculus apply, while others do not.

Dr. Robert Buxbaum (my thanks to Dr. John Klein for showing me discrete calculus).

Improving Bankrupt Detroit

Detroit is bankrupt in more ways than one. Besides having too few assets to cover its $18 billion in debts, and besides running operational deficits for years, Detroit is bankrupt in the sense that most everyone who can afford to leave does. The population has shrunk from 2,000,000 in 1950 to about 680,000 today, an exodus that shows no sign of slowing.

The murder rate in Detroit is 25 times the state average: 400/year in 2012 (58/100,000), as compared to 250 in the rest of the state (2.3/100,000). In 2009 the school system scored the lowest math scores ever recorded for any major city in the 21-year history of the tests. And mayor Kwame Kilpatrick, currently in prison, was called “a walking crime wave” by the mayor of Washington DC. The situation is not pretty. Here are a few simple thoughts, though.
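The rates quoted are easy to verify from the raw numbers; a quick check:

```python
# Checking the Detroit figures: 400 homicides among about 680,000 residents,
# versus 2.3 per 100,000 in the rest of the state.
detroit_rate = 400 / 680_000 * 100_000    # homicides per 100,000 residents
print(round(detroit_rate))                # → 59, close to the 58 quoted
print(round(detroit_rate / 2.3))          # → 26, roughly the "25 times" claim
```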

(1) Reorganize the city to make it smaller. The population density of Detroit is small, generally about 7,000/square mile, and some of the outlying districts might be carved off and made into townships. Most of Michigan started as townships. If they return to that status, each could contract its children’s education as it saw fit, perhaps agreeing to let the outlying cities use its school buildings and teachers, or perhaps closing failed schools as the local area sees fit.

This could work well for outlying areas like the southern peninsula of Detroit, Mexicantown and south: a narrow strip of land lying along Route 75 that’s further from the center of Detroit than it is from the centers of 5 surrounding cities: River Rouge, Ecorse, Dearborn, Melvindale, and Lincoln Park. This area was Stillwell township before being added to Detroit in 1922. If removed from Detroit control, property values would likely rise. The people could easily contract education or police with any of the 5 surrounding cities that were previously parts of Stillwell township. Alternatively, this newly created township might elect to join one of the surrounding communities entirely. All the surrounding communities offer lower crime and better services than Detroit. Most manage to do it with lower tax rates too.

Another community worth removing from Detroit is the western suburb previously known as Greenfield. This community was absorbed into Detroit in 1925. Like the Mexicantown area, this part of Detroit still has a majority of its houses occupied, and the majority of its businesses are viable enough that the area could reasonably stand on its own. Operating as a township, residents could bring back whatever services they consider more suitable to their population. They would be in control of their own destiny.

 

How to make fine lemonade

As part of discussing a comment by H.L. Mencken (that a philosopher is a man in a dark room looking for a black cat that isn’t there), I alluded to the idea that a good person should make something or do something, perhaps make lemonade, but I gave no recipe. Here is the recipe for lemonade: something you can do with your life that benefits everyone around.

The key is to use lots of water, and not too much lemon. Start with a fresh lemon and two 16 oz glasses. Cut the lemon in half and squeeze half into each glass, squeezing out all of the juice by hand (you can use a squeezer). Ideally, you should pass the juice through a screen to catch the pits, but if you don’t have one it’s OK; pits sink to the bottom. Add 8 oz of water and 2 tbs (1/8 cup) of sugar to each. Stir well until the sugar dissolves, add the lemon rind (I like to cut this into thirds); stir again and add a handful of ice. This should get you to within 3/4″ of the top, but if not, add more water. Enjoy.

For a more-adult version, use less water and sugar, but add a shot of Cognac and a shot of Cointreau. It’s called a side-car, one of the greatest of all drinks.

Robert E. Buxbaum *82

How to make a simple time machine

I’d been in science fairs from elementary school until 9th grade, and usually did quite well. One trick: I always liked to do cool, unexpected things. I didn’t have money, but tried for the gee-whiz factor. Sorry to say, the winning ideas of my youth are probably old hat by now, but here’s a project that I never got to do, yet is simple and cheap and good enough to win today. It’s a basic time machine, or rather a quantum eraser: it lets you go back in time and erase something.

The first thing you should know is that the whole concept of time rests on rather shaky footing in modern science. It is possible, therefore, that antimatter (positrons, say) is just regular matter moving backwards in time.

The trick behind this machine is the creation of entangled states, an idea that Einstein and Rosen proposed in the 1930s (they thought it could not work, and that this disproved quantum mechanics; it turns out the trick works). The original version of the trick was this: start with a particle that splits in half at a given, known energy. If you measure the energy of either of the halves, they are always the same, assuming the source particle starts at rest. The thing is, if you start with the original particle at absolute zero and were to measure the position of one half and the velocity of the other, you’d certainly know the position and velocity of the original particle. Actually, you should not need to measure the velocity, since that’s fixed by the energy of the split, but we’re doing it just to be sure. The problem is that quantum mechanics is based on the idea that you can not know both the velocity and the position, even just before the split. What happens? If you measure the position of one half, the velocity of the other changes; but if you measure the velocity of both halves, it is the same, and this even works backward in time. QM seems to know if you intend to measure the position, and you measure an odd velocity even before you do so. Weird. There is another trick to making time machines, one found in Einstein’s own relativity by Gödel. It involves black holes, and we’re not sure if it works, since we’ve never had a black hole to work with. With the QM time machine, you’re never able to go back in time before the creation of the time machine.

To make the mini-version of this time machine, we’re going to split a few photons and play with the halves. This is not as cool as splitting an elephant, or even a proton, but money doesn’t grow on trees, and costs go up fast as the mass of the thing being split increases. You’re not going back in time more than 10 attoseconds (that’s a hundredth of a femtosecond), but that’s good enough for the science fair judges (you’re a kid, and that’s your lunch money at work). You’ll need a piece of thick aluminum foil, a sharp knife or a pin, a bright lamp, superglue (or, in a pinch, Elmer’s), a polarizing sunglass lens, some colored Saran wrap or colored glass, a shoe-box worth of cardboard, and wood and nails to build some sort of frame to hold everything together. Make your fixture steady and hard to break; judges are clumsy. Use decent wood (judges don’t like splinters). Keep spares for the moving parts in case someone breaks them (not uncommon). Ideally, you’ll want to attach a focusing lens a few inches from the lamp (a small magnifier or reading-glass lens will do). You’ll want to lay the colored plastic smoothly over this lens, away from the lamp heat.

First make a point light source: take a 4″ square of shoe-box cardboard and put a quarter-inch hole in it near the center. Attach it in front of your strong electric light at 6″ if there is no lens, or at the focus if there is a lens. If you have no lens, you’ll want to put the Saran over this cardboard.

Take two strips of aluminum foil, about 6″ square, and in the center of each cut two slits, perhaps 4 mm long by 0.1 mm wide, 1 mm apart from each other, near the middle of both strips. Back both strips with cardboard with a 1″ hole in the middle (use glue to hold it there). Now take the sunglass lens and cut two strips, 2 mm x 10 mm, on opposite 45° diagonals to the vertical of the lens. Confirm that this is a polarized lens by rotating one piece against the other: at some rotation the pair should be opaque, and at 90° from that it should be fairly clear. If this is not so, get a different sunglass.
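The opaque/clear behavior you’re checking for is Malus’s law: a second polarizer transmits a fraction cos²θ of polarized light, where θ is the angle between the two polarization axes. A quick sketch of the expected brightnesses:

```python
import math

# Malus's law: intensity through a second polarizer is I0 * cos(theta)**2,
# where theta is the angle between the two polarization axes.
def transmitted(I0, theta_degrees):
    return I0 * math.cos(math.radians(theta_degrees)) ** 2

print(transmitted(1.0, 0))              # → 1.0 (axes aligned: clear)
print(round(transmitted(1.0, 45), 3))   # → 0.5 (half the light gets through)
print(round(transmitted(1.0, 90), 6))   # → 0.0 (axes crossed: opaque)
```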

Paste these two strips over the two slits on one of the aluminum foil sheets with a drop of super-glue. The polarization of the sunglasses is normally up and down, so when these strips are glued next to one another, the polarizations of the strips will be at opposing 45° angles. Look at the point light source through both of your aluminum foils (the one with the polarized filters and the one without); they should look different. One should look like two pin-points (or strips) of light. The other should look like a fog of dots or lines.

The reason for the difference is that, generally speaking, a photon passes through two nearby slits as two entangled halves, or its quantum equivalent. When you use the foil without the polarizers, the halves recombine to give an interference pattern. The result with the polarizers is different, though, since polarization means you can (in theory at least) tell the photons apart. The photons know this, and thus behave not as two entangled halves, but rather as if each passed through one slit or the other. Your device will go back in time after the light has gone through the slits and will erase this knowledge.
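For planning the geometry, the standard two-slit formula gives the fringe spacing as wavelength × screen distance / slit separation. The numbers below are assumptions for this build, not measurements: a green-ish filter around 550 nm, the 1 mm slit separation described above, and a viewing screen about 1 m away:

```python
# Expected spacing of the bright interference fringes: delta_y = wavelength * L / d.
wavelength = 550e-9   # meters; assumes a green-ish colored filter
d = 1e-3              # slit separation in meters, as in the build above
L = 1.0               # assumed distance from foil to screen, meters

fringe_spacing = wavelength * L / d
print(round(fringe_spacing * 1000, 2))  # → 0.55 mm between bright lines
```

Fine fringes like these are why the slits must be cut narrow and close together; double the slit spacing and the fringes get twice as hard to see.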

Now cut another 3″ x 3″ cardboard square and cut a 1/4″ hole in the center. Cut a bit of sunglass lens, 1/2″ square, and attach it over the hole of this 3″ x 3″ cardboard square. If you view the aluminum square through this cardboard, you should be able to make one slit or the other go black by rotating this polarized piece appropriately. If it does not, there is a problem.

Set up the lamp (with the lens) on one side so that a bright light shines on the slits. Look at the light from the other side of the aluminum foil. You will notice that the light that comes through the foil with the polarized film looks like two dots, while the light that comes through the other one shows a complex interference pattern. Putting the other polarizing lens in front of or behind the foil without the polarizing filters does not change its behavior; but, if done right, it will change things when put behind the other foil, the one with the filters.

Robert Buxbaum, of the future.

Self Esteem Cartoon

Having potential makes a fine breakfast, but a lousy dinner.

Barbara Smaller cartoon, from The New Yorker.

It’s funny because … it holds a mirror to the adulteration of adulthood: our young adults come out of college with knowledge, some skills, and lots of self-esteem, but with a lack of direction and a lack of focus in what they plan to do with their talents and education. One part of the problem is that kids enter college with no focused major or work background, beyond an expectation that they will be leaders when they graduate.

In a previous post I’d suggested that Detroit schools should teach shop as a way to build responsibility. On further reflection, most schools should require shop, or similar subjects where tangible products are produced and where the quality of the output is apparent and directly related to the student: e.g. classical music, representative art, automotive tuning. Responsibility is not well taught through creative writing or non-representative art, as there quality is in the eye of the beholder.

My sense is that it’s not enough to teach a skill; you have to teach an aesthetic about the skill (Is this a good job?), and a desire to put the skill to use. Two quotes of my own invention: “It’s not enough to teach a man how to fish; you have to teach him to actually do it, or he won’t even eat for a day.” Also, “Having potential makes a fine breakfast, but a lousy dinner.” (If you use my quotes, please quote me.) If you don’t like these, here’s one from Peter Cooper, the founder of my undergraduate college: “The problem with Harvard and Yale is that they teach everything about doing honest business except that you are supposed to do it.”

by R.E. Buxbaum,  Sept 22, 2013; Here’s another personal relationship cartoon, and a thought about engineering job-choice.

Murder rate in Finland, Japan higher than in US

The murder rate in Finland and Japan is higher than in the US if suicide is considered a type of murder. In the figure below, I’ve plotted total murder rates (homicide plus suicide) for several developed-world countries. The homicide component is in blue, with the suicide rate above it, in green. In terms of this total, the US is seen to be about average among the developed countries. Mexico has the highest homicide rate of those shown, Japan has the highest suicide rate, and Russia has the highest total murder rate shown (homicide + suicide): nearly double that of the US and Canada. In Russia and Japan, some 0.02% of the population commits suicide every year. The Scandinavian countries are quite similar to the US; Japan and Mexico are far worse. Italy, Greece, and the UK are better than the US, both in terms of low suicide rate and low homicide rate.
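The 0.02% figure is just a unit conversion from the per-100,000 rates in the chart:

```python
# Converting a rate per 100,000 to a percentage of the population per year.
# A suicide rate of 20 per 100,000 (roughly the Russian/Japanese level) is:
rate_per_100k = 20
percent = rate_per_100k * 100 / 100_000

print(percent)  # → 0.02, i.e. the 0.02% quoted above
```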

Homicide and suicide rates for selected countries, 2005. Source: Wikipedia.

In the US, pundits like Piers Morgan like to use our high murder rate as an indicator of the ills of American society: loose gun laws are to blame, they say, along with the lack of a social welfare safety net, a lack of support for the arts, and a lack of education and civility in general. Japan, Canada, and Scandinavia are presented as near idylls in these regards. When murder is considered to include suicide, though, the murder-rate difference disappears. Add to this that violent crime rates are higher in Europe, Canada, and the UK, suggesting that clean streets and education do not deter crime.

The interesting thing, though, is suicide, and what it suggests about happiness. According to my graphic, the happiest, safest countries appear to be Italy and Greece. Part of this is likely weather; people commit suicide more in cold countries. But another part may be that some people (malcontents?) are better served by dirty, noisy cafés and pubs where people meet and complain, and are not so well served by clean streets and civility. It’s bad enough to be a depressed outsider, but it’s really miserable if everything around you is clean, and everyone is polite but busy.

Yet another thought about the lower suicide rates in the US and Mexico is that some of the homicide in these countries is really suicide by proxy. In the US and Mexico, depressed people (particularly men) can go off to war or join gangs. They still die, but they die more heroically (they think), by homicide. They volunteer for dangerous army missions, or to attack a rival drug-lord outside a bar. Either they succeed in killing someone else, or they’re shot dead. If you’re really suicidal and can’t join the army, you could move to Detroit; the average house sold for $7,100 last year (it’s higher now, I think), and the homicide rate was over 56 per 100,000. As bad as that sounds, it’s half the murder rate of Greenland, if you take suicide to be murder.

R.E. Buxbaum, Sept 14, 2013

Why random experimental design is better

In a previous post I claimed that, to do good research, you want to arrange experiments so there is no pre-hypothesis of how the results will turn out. As the post was long, I said nothing direct about how such experiments should be organized, but only alluded to my preference: experiments should be organized at randomly chosen conditions within the area of interest. The alternatives, shown below, are that experiments should be done at the cardinal points in the space, or at corner extremes: the Wilson Box and Taguchi designs of experiments (DoE), respectively. Doing experiments at these points implies a sort of expectation of the outcome, generally that results will be linearly and orthogonally related to causes; in such cases, the extreme values are the most telling. Sorry to say, this usually isn’t how experimental data falls out.

First experimental test points according to a Wilson Box, a Taguchi, and a random experimental design. The Wilson box and Taguchi are OK choices if you know or suspect that there are no significant non-linear interactions, and where experiments can be done at these extreme points. Random is the way nature works; and I suspect that’s best — it’s certainly easiest.

The first test points for experiments according to the Wilson Box and Taguchi methods of experimental design are shown on the left and center of the figure above, along with a randomly chosen set of experimental conditions on the right. Taguchi experiments are the most popular choice nowadays, especially in Japan, but as Taguchi himself points out, this approach works best if there are “few interactions between variables, and if only a few variables contribute significantly.” Wilson Box experimental choices help if there is a parabolic effect from at least one parameter, but are fairly unsuited to cases with strong cross-interactions.

Perhaps the main problem with doing experiments at extreme or cardinal points is that these experiments are usually harder than those at random points, and that the results from these difficult tests generally tell you nothing you didn’t know or suspect from the start. The minimum concentration is usually zero, and the minimum temperature is usually one where reactions are too slow to matter. When you test at the minimum-minimum point, you expect to find nothing, and generally that’s what you find. In the data sets shown above, it would not be uncommon for the two minimum W-B data points, and the 3 minimum Taguchi data points, to show no measurable result at all.

Randomly selected experimental conditions are the experimental equivalent of Monte Carlo simulation, and are the method evolution uses. Set out the space of possible compositions, morphologies, and test conditions as with the other methods, and perhaps plot them on graph paper. Now toss darts at the paper to pick a few compositions and sets of conditions to test, and do a few experiments. Because nature is rarely linear, you are likely to find better results and more interesting phenomena than at any of the extremes. After the first few experiments, when you think you understand how things work, you can pick experimental points that target an optimum extreme point, or that visit a more interesting or representative survey of the possibilities. In any case, you’ll quickly get a sense of how things work, and how successful the experimental program will be. If nothing works at all, you may want to cancel the program early; if things work really well, you’ll want to expand it. With random experimental points you do fewer worthless experiments, and you can easily increase or decrease the number of experiments in the program as funding and time allow.
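The dart-tossing step is easy to automate. Here’s a minimal sketch; the variable names and ranges are illustrative, not from any real program:

```python
import random

# Random experimental design: instead of testing only corner or cardinal points,
# draw test conditions uniformly from the whole region of interest.
random.seed(42)  # reproducible "darts at the graph paper"

def random_design(bounds, n_experiments):
    """bounds: dict of variable -> (low, high). Returns n random test points."""
    return [
        {var: random.uniform(lo, hi) for var, (lo, hi) in bounds.items()}
        for _ in range(n_experiments)
    ]

# Hypothetical variables for a chemistry program:
bounds = {"temperature_C": (20, 200), "concentration_pct": (0, 50)}
for point in random_design(bounds, 5):
    print({k: round(v, 1) for k, v in point.items()})
```

One nice property: the list can be grown or truncated at will, so the program scales with funding, exactly as described above.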

Consider the simple case of choosing a composition for gunpowder. The composition itself involves only 3 or 4 components, but there is also morphology to consider, including the gross structure and the fine structure (degree of grinding). Instead of picking experiments at the extreme compositions (100% salt-peter, 0% salt-peter, grinding to sub-micron size, etc.), as with Taguchi, a random methodology is to pick random, easily do-able conditions: 20% S and 40% salt-peter, say. These compositions will be easier to ignite, and the results are likely to be more relevant to the project goals.

The advantages of random testing get bigger the more variables and levels you need to test. Testing 9 variables at 3 levels each takes 27 Taguchi points, but only 16 or so if the experimental points are randomly chosen. To test whether the behavior is linear, you can use the results from your first 7 or 8 randomly chosen experiments, derive the vector that gives the steepest improvement in n-dimensional space (a weighted sum of all the improvement vectors), and then do another experimental point that’s as far along in the direction of that vector as you think reasonable. If your result at this point is better than at any point you’ve visited, you’re well on your way to determining the conditions of optimal operation. That’s a lot faster than starting with 27 hard-to-do experiments. What’s more, if you don’t find an optimum, congratulate yourself: you’ve just discovered a non-linear behavior, something that would be easy to overlook with the Taguchi or Wilson Box methodologies.
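Here is one reading of that “weighted sum of improvement vectors” step (my own sketch, not a standard published algorithm, and the experimental points and yields below are made up for illustration): weight the direction from each tested point toward the best point by the improvement seen, sum those vectors, and extrapolate along the total.

```python
# Hypothetical results from 4 random experiments in a 2-variable space:
points  = [(0.2, 0.5), (0.6, 0.1), (0.4, 0.9), (0.8, 0.7)]   # test conditions
results = [1.0, 2.5, 1.8, 4.0]                                # measured yields

best_i = max(range(len(results)), key=lambda i: results[i])
best = points[best_i]

# Sum the directions toward the best point, each weighted by the improvement:
dx = sum((best[0] - p[0]) * (results[best_i] - r) for p, r in zip(points, results))
dy = sum((best[1] - p[1]) * (results[best_i] - r) for p, r in zip(points, results))

step = 0.5  # how far to extrapolate beyond the best point: a judgment call
next_point = (best[0] + step * dx, best[1] + step * dy)
print(tuple(round(c, 2) for c in next_point))  # the suggested next experiment
```

If the yield at `next_point` beats everything so far, keep stepping; if it doesn’t, you’ve found non-linearity, which, as argued above, is itself a discovery.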

The basic idea is one Sherlock Holmes pointed out: “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” (A Scandal in Bohemia). And: “Life is infinitely stranger than anything which the mind of man could invent.” (A Case of Identity).

Robert E. Buxbaum, September 11, 2013. A nice description of the Wilson Box method is presented in Perry’s Handbook (6th ed). Since I had trouble finding a free, on-line description, I linked to a paper by someone using it to test ingredient choices in baked bread. Here’s a link for more info about random experimental choice, from the University of Michigan, Chemical Engineering dept. Here’s a joke on the misuse of statistics, and a link regarding the Taguchi Methodology. Finally, here’s a pointless joke on irrational numbers, that I posted for pi-day.

The Scientific Method isn’t the method of scientists

A linchpin of middle school and high-school education is teaching ‘the scientific method.’ This is the method, students are led to believe, that scientists use to determine Truths, facts, and laws of nature. Scientists, students are told, start with a hypothesis of how things work or should work; they then devise a set of predictions based on deductive reasoning from these hypotheses, and perform some critical experiments to test the hypothesis and determine if it is true (the experimentum crucis, in Latin). Sorry to say, this is a path to error, and not the method that scientists use. The real method involves a few more steps, and follows a different order and path. It instead follows the path that Sherlock Holmes uses to crack a case.

The actual method of Holmes, and of science, is to avoid beginning with a hypothesis. Isaac Newton claimed: "I never make hypotheses." Instead, as best we can tell, Newton, like most scientists, first gathered as much experimental evidence on a subject as possible before trying to concoct any explanation. As Holmes says (A Study in Scarlet): "It is a capital mistake to theorize before you have all the evidence. It biases the judgment."

Holmes barely tolerates those who hypothesize before they have all the data: "It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts" (A Scandal in Bohemia).

Then there is the goal of science. It is not the goal of science to confirm some theory, model, or hypothesis; every theory probably has some limited area where it's true. The goal of any real-life scientific investigation is the desire to explain something specific and out of the ordinary, or to do something cool. With Sherlock Holmes, the start of the investigation is the arrival of a client with a specific, unusual need, one that seems a bit outside the normal routine. Similarly, the scientist wants to do something: build a bigger bridge; understand global warming, or how DNA directs genetics; make better gunpowder; cure a disease; or Rule the World (mad scientists favor this). Once there is a fixed goal, it is the goal that should direct the next steps: it directs the collection of data, and focuses the mind on the wide variety of possible solutions. As Holmes says: "it's wise to make one's self aware of the potential existence of multiple hypotheses, so that one eventually may choose one that fits most or all of the facts as they become known." It's only when there is no goal that any path will do.

In gathering experimental data (evidence), most scientists spend months in the less-fashionable sections of the library, looking at the experimental methods and observations of others, generally from many countries, collecting any scrap that seems reasonably related to the goal at hand. I used 3×5″ cards to catalog this data and the references. From many books and articles, one extracts enough diversity of data to be able to look for patterns and to begin to apply inductive logic. "The little things are infinitely the most important" (A Case of Identity). You have to look for patterns in the data you collect. Holmes does not explain how he looks for patterns, but this skill is innate in most people to a greater or lesser extent. A nice, systematic approach to inductive logic is the Baconian Method; it would be nice to see schools teach it. If an author is still alive, a scientist will try to contact him or her to clarify things. In every Sherlock Holmes mystery, Holmes does the same and is always rewarded. There is always some key fact or observation that this turns up: key information unknown to the original client.

Based on the facts collected, one begins to create the framework for a variety of mathematical models: mathematics is always involved, but these models should be pretty flexible. Often the result is a tree of related mathematical models, each highlighting some different issue, process, or problem. One then may begin to prune the tree, trying to fit the known data (facts and numbers collected) into a mathematical picture of relevant parts of this tree. There usually won't be quite enough for a full picture, but a fair amount of progress can usually be had with the application of statistics, calculus, physics, and chemistry. These are the key skills one learns in college, but the high schooler and middle schooler usually have not learned them very well at all. If they've learned math and physics, they've not learned it in a way that lets them apply it to something new quite yet (it helps to read the accounts of real scientists here, e.g. The Double Helix by J. Watson).

Usually one tries to do some experiments at this stage. Holmes might visit a ship or test a poison, and a scientist might go off to his equally smelly laboratory. The experiments done there are rarely experimenta crucis where one can say they've determined the truth of a single hypothesis. Rather, one wants to eliminate some hypotheses and collect data to be used to evaluate others. An answer generally requires that you have both a numerical expectation and that you've eliminated all reasonable explanations but one. As Holmes says often, e.g. in The Sign of Four, "when you have excluded the impossible, whatever remains, however improbable, must be the truth." The middle part of a scientific investigation generally involves these practical experiments to prune the tree of possibilities and determine the coefficients of relevant terms in the mathematical model: the weight or capacity of a bridge of a certain design, the likely effect of CO2 on global temperature, the dose response of a drug, or the temperature and burn rate of different gunpowder mixes. Though not mentioned by Holmes, it is critically important in science to aim for observations that have numbers attached.
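Determining such a coefficient often comes down to a least-squares fit. As a minimal sketch, assuming an invented dose-response data set and a one-coefficient model, response = a × dose:

```python
# Hypothetical dose-response data (dose in mg, response in arbitrary units);
# the numbers are invented for illustration.
doses     = [1.0, 2.0, 4.0, 8.0]
responses = [2.1, 3.9, 8.2, 15.8]

# Fit response = a * dose by ordinary least squares. With one coefficient
# and no intercept, the closed form is a = sum(x*y) / sum(x*x).
a = sum(d * r for d, r in zip(doses, responses)) / sum(d * d for d in doses)
print(round(a, 3))  # fitted coefficient, close to 2 for this data
```

The same idea extends to more coefficients (intercepts, quadratic terms) via standard linear regression; the point is that the experiment exists to pin down a number, not merely to confirm a yes/no hypothesis.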

The destruction of false aspects and models is a very important part of any study. Francis Bacon calls this the destruction of the idols of the mind, and it includes many parts: destroying commonly held presuppositions, avoiding personal preferences, avoiding the tendency to see a closer relationship than can be justified, etc.

In science, one eliminates the impossible through the use of numbers and math, generally based on laboratory observations. When you attempt to fit the numbers associated with your observations to the various possible models, some will take the data well, some poorly, and some will not fit the data at all. Apply the deductive reasoning that is taught in schools: logical, Boolean, step by step; if some aspect of a model does not fit, it is likely the model is wrong. If we have shown that all men are mortal, and we are comfortable that Socrates is a man, then it is far better to conclude that Socrates is mortal than to conclude that all men but Socrates are mortal (Occam's razor). This is the sort of reasoning that computers are really good at (better than humans, actually). It all rests on the inductive pattern search, the similarities and differences that we started with, and very often we find we are missing a piece, e.g. we still need to determine that all men are indeed mortal, or that Socrates is a man. It's back to the lab; this is why PhDs often take 5-6 years, and not the 3-4 that one hopes for at the start.
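The elimination-by-fit idea can be sketched as follows; the observations, the two candidate models, and the 10× error threshold are all invented for illustration:

```python
# Invented observations that happen to follow a roughly quadratic trend.
xs = [1, 2, 3, 4, 5]
ys = [1.1, 3.9, 9.2, 15.8, 25.1]

# Two candidate models of the data. In a real study these would come from
# the tree of mathematical models described in the text.
models = {
    "linear":    lambda x: 5.0 * x,
    "quadratic": lambda x: x * x,
}

def sum_sq_error(model):
    """Sum of squared residuals: how badly the model misses the data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys))

errors = {name: sum_sq_error(m) for name, m in models.items()}

# Eliminate any model whose error is, say, 10x worse than the best fit.
best = min(errors.values())
survivors = [name for name, e in errors.items() if e <= 10 * best]
print(survivors)  # the linear model is excluded; the quadratic survives
```

This is "excluding the impossible" with numbers: models are not so much proven as left standing after the poor fits are discarded.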

More often than not, we find we have a theory or two (or three), but not quite all the pieces in place to get to our goal (whatever that was), but at least there's a clearer path, and often more than one. Since science is goal oriented, we're likely to find a more efficient path than we first thought. E.g. instead of proving that all men are mortal, show it to be true of Greek men, that is, of all two-legged, fairly hairless beings who speak Greek. All we must show is that few Greeks live beyond 130 years, and that Socrates is one of them.

Putting numerical values on the mathematical relationship is a critical step in all science, as is the use of models, mathematical and otherwise. The path to measure the life expectancy of Greeks will generally involve looking at a sample population. A scientist calls this a model. He will analyze this model using statistical measures like the average and standard deviation, and will derive his or her conclusions from there. It is only now that you have a hypothesis, but it's still based on a model. In health experiments the model is typically a sample of animals (experiments on people are often illegal and take too long). For bridge experiments one uses small wood or metal models; and for chemical experiments, one uses small samples. Numbers and ratios are the key to making these models relevant in the real world. A hypothesis of this sort, backed by numbers, is publishable, and is as far as you can go when dealing with the past (e.g. why Germany lost WW2, or why the dinosaurs died off), but the gold standard of science is predictability. Thus, while we are confident that Socrates is definitely mortal, we're not 100% certain that global warming is real; in fact, it seems to have stopped though CO2 levels are rising. To be 100% sure you're right about global warming, we have to make predictions, e.g. that the temperature will have risen 7 degrees in the last 14 years (it has not), or Al Gore's prediction that the sea will rise 8 meters by 2106 (this seems unlikely at the current time). This is not to blame the scientists whose predictions don't pan out: "We balance probabilities and choose the most likely. It is the scientific use of the imagination" (The Hound of the Baskervilles). The hope is that everything matches; but sometimes we must look for an alternative; that's happened rarely in my research, but it's happened.
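The sample-population model above amounts to computing an average and a standard deviation, and asking how far 130 years lies above them. A minimal sketch, with invented life spans:

```python
import statistics

# Invented sample of Greek life spans (years), standing in for the
# sample-population "model" described in the text.
sample = [68, 75, 81, 59, 90, 72, 66, 84, 77, 70]

mean = statistics.mean(sample)
std = statistics.stdev(sample)   # sample standard deviation

# A crude check of "few Greeks live beyond 130 years": 130 sits many
# standard deviations above the sample mean.
z = (130 - mean) / std
print(round(mean, 1), round(std, 1), round(z, 1))
```

The conclusion is only as good as the model: a biased sample, or a long-tailed distribution, would undercut the z-score argument, which is exactly why the hypothesis remains model-based until it makes successful predictions.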

You are now at the conclusion of the scientific process. In fiction, this is where the criminal is led away in chains (or not, as with "The Woman," "The Adventure of the Yellow Face," or "The Blue Carbuncle," where Holmes lets the criminal go free: "It's Christmas"). For most research, the conclusion includes writing a good research paper: "Nothing clears up a case so much as stating it to another person" (Memoirs). For a PhD, this is followed by the search for a good job. For a commercial researcher, it's a new product or product improvement. For the mad scientist, that conclusion is the goal: taking over the world and enslaving the population (or not; typically the scientist is thwarted by some detail!). But for the professor or professional research scientist, the goal is never quite reached; it's a stepping stone to a grant application to do further work, and from there to tenure. In the case of the Socrates mortality work, the scientist might ask for money to go from country to country, measuring life spans to demonstrate that all philosophers are mortal. This isn't as pointless and self-serving as it seems. Follow-up work is easier than the first work since you've already got half of it done, and you sometimes find something interesting, e.g. about diet and life span, or diseases, etc. I did some 70 papers when I was a professor, some on diet and lifespan.

One should avoid making some horribly bad logical conclusion at the end, by the way. It always seems to happen that the mad scientist is thwarted at the end; the greatest criminal masterminds are tripped up by some last-minute flaw. Similarly, the scientist must not make that last misstep. "One should always look for a possible alternative, and provide against it" (The Adventure of Black Peter). Just because you've demonstrated that iodine kills germs, and you know that germs cause disease, please don't conclude that drinking iodine will cure your disease. That's the sort of science mistake that was common in the Middle Ages, and shows up far too often today. In the last steps, as in the first, follow the inductive and quantitative methods of Paracelsus to the end: look for numbers (not a Holmes quote), and check how quantity and location affect things. In the case of antiseptics, Paracelsus noticed that only external cleaning helped, and that the help was dose sensitive.

As an example from American history, don't just conclude that, because bullets kill, removing the bullets is a good idea. It is likely that the trauma and infection of removing the bullet is what killed Lincoln, Garfield, and McKinley. Theodore Roosevelt was shot too, but decided to leave his bullet where it was, noticing that many shot animals and soldiers lived for years with bullets in them; Roosevelt lived for 8 more years. Don't make these last-minute missteps: though it's logical to think that removing guns will reduce crime, the evidence does not support that. Don't let a leap of bad deduction at the end ruin a line of good science. "A few flies make the ointment rancid," said Solomon. Here's how to do statistics on data that's taken randomly.

Dr. Robert E. Buxbaum, scientist and Holmes fan, wrote this September 2, 2013. My thanks to Lou Manzione, a friend from college and grad school, who suggested I reread all of Holmes early in my PhD work, and to Wikiquote, a wonderful site where I found the Holmes quotes; the Solomon quote I knew, and the others I made up.

Ozone hole shrinks to near minimum recorded size

The hole in the ozone layer, prominently displayed in Al Gore's 2006 movie An Inconvenient Truth, has been oscillating in size and generally shrinking since 1996. It has currently reached its second-lowest size on record.

South pole ozone hole shrinks to 2nd smallest size on record. Credit: BIRA/IASB

South pole ozone hole (blue circle in photo) shrinks to its 2nd smallest size on record. Note the outline of Antarctica plus the ends of South America and Africa. Photo Credit: BIRA/IASB

The reason for the oscillation is unknown. The ozone hole is small this year, was large for the last few years, and was slightly smaller in 2002. My guess is that it will be big again in 2013. Ozone is an alternate form of oxygen containing three oxygen atoms instead of the usual two. It is an unstable compound formed when ions in the upper atmosphere act on regular oxygen. Though the ozone concentration in the atmosphere is low, ozone is important because it helps shield people from UV radiation, radiation that could otherwise cause cancer (it also has some positive effects on bones, etc.).

An atmospheric model of ozone chemistry implicated chlorofluorocarbons (freons) as a cause of the observed ozone depletion. In the 1980s, this led countries to restrict the use of freon refrigerants. Perhaps these laws are related to the shrinkage of the ozone hole, perhaps not. There has been no net decrease in the amount of chlorofluorocarbons in the atmosphere, and the models that led to banning them did not predict the ozone oscillations we now see are common, a fault also found with models of global warming and of stock market behavior. Our best computer models do not do well with oscillatory behaviors. As the economists' quip (usually attributed to Paul Samuelson) goes, our best models have successfully predicted nine of the last five recessions. Whatever the cause, the good news is that the ozone hole has shrunk, at least temporarily. Here's why the sky is blue, and some thoughts on sunlight, radiation and health.

by Dr. Robert E. Buxbaum, dedicated to bringing good news to the perpetually glum.