It’s funny because …. the normal distribution curve looks sort of like a ghost. It’s also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that: abnormal statistics. They’d find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.
Take the following example: you’re interested in buying a house near a river, and you’d like to analyze river flood data to know your risks. How high will the river rise in 100 years, or in 1,000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal distribution math in your college statistics book. This looks awfully daunting (it doesn’t have to be) and may be wrong, but it’s all you’ve got.
The normal distribution graph is considered normal, in part, because it’s fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variation derives from many small errors around a common norm, and not from some single, giant issue. It’s not clear this is a realistic assumption in most cases, but it is comforting. I’ll show you how to do the common math as it’s normally done, and then how to do it better and quicker with no math at all, and without those assumptions.
Let’s say you want to know the hundred-year maximum flood height of a river near your house. You don’t want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let’s say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.
The “normal” approach (pardon the pun) is to take a quick look at the data and see that it is sort of normal (many people don’t bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8-5.8)² + (6-5.8)² + (3-5.8)² + (5-5.8)² + (7-5.8)²]/4} = 1.92. The use of 4 in the denominator instead of 5 is called the Bessel correction; it reflects the fact that a standard deviation is meaningless if there is only one data point.
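If you’d rather let the computer do this arithmetic, here is a minimal sketch in Python (the variable names are mine, chosen just for illustration):

```python
# Maximum flood heights (feet) measured over five years
heights = [8, 6, 3, 5, 7]

n = len(heights)
mean = sum(heights) / n                        # 5.8 feet

# Sample variance divides by n - 1 (the Bessel correction)
variance = sum((h - mean) ** 2 for h in heights) / (n - 1)
std_dev = variance ** 0.5                      # about 1.92 feet

print(f"average = {mean:.2f} ft, sample standard deviation = {std_dev:.2f} ft")
```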
For normal data, the one-hundred-year maximum height of the river (the 1% maximum) is the average height plus about 2.33 times the standard deviation; in this case, 5.8 + 2.33 x 1.92 = 10.3 feet. If your house is any higher than this you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don’t want to go too far. How far is too far?
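If you trust the normality assumption, this estimate can be computed directly from the normal distribution instead of looking up the 1% multiplier in a table. A sketch using scipy (norm.ppf returns the value that a given fraction of the distribution falls below):

```python
from scipy.stats import norm

mean, std_dev = 5.8, 1.92

# z-value with only 1% of a normal distribution above it (about 2.33)
z_99 = norm.ppf(0.99)

flood_100yr = mean + z_99 * std_dev
print(f"z = {z_99:.2f}, estimated 100-year flood height = {flood_100yr:.1f} ft")
```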
So let’s do this better. We can, with less math, through the use of probability paper. As with any good science, we begin with the data, not with assumptions such as normality. Arrange the river height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is, on paper where likelihoods from 0.01% to 99.99% are arranged along the bottom (the x axis), and your other numbers, in this case the river heights, are the y values listed at the left. Graph paper of this sort is sold in university bookstores; you can also get jpeg versions online, but they don’t look as nice.
For the x axis values of the 5 data points above, I’ve taken each likelihood to be the middle of its percentile band. Since there are 5 data points, each point is taken to represent its own 20% band; the middles fall at 10%, 30%, 50%, and so on. I’ve plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 feet, at 50%; the fourth at 70%; and the drought-year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than normal. The points will also have to drop off at the right, since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding, since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we’d want to build our house higher than a normal distribution would suggest.
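If you’d rather let a computer do the plotting, here is a rough sketch of the same bookkeeping in Python, using scipy and matplotlib. Stretching the x axis by the normal quantile (the probit) is exactly what probability paper does for you:

```python
import matplotlib.pyplot as plt
from scipy.stats import norm

heights = sorted([8, 6, 3, 5, 7], reverse=True)   # highest flood first
n = len(heights)

# Midpoint plotting positions: 10%, 30%, 50%, 70%, 90% likelihood of exceedance
positions = [(i + 0.5) / n for i in range(n)]

# Probability paper stretches the x axis by the normal quantile (probit),
# so normally distributed data falls along a straight line
x = norm.ppf(positions)
plt.plot(x, heights, "o")
ticks = [0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99]
plt.xticks(norm.ppf(ticks), [f"{100 * t:g}%" for t in ticks])
plt.xlabel("likelihood of exceeding this height")
plt.ylabel("maximum flood height (feet)")
plt.show()
```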
You can now find the 100-year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that’s two major lines from the left in the graph above; you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and was done graphically, without the need for a spreadsheet or math. What’s more, our prediction is more accurate, since we were in a position to evaluate the normality of the data and thus able to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve some way. Most weather data is curved, e.g. normal against a fractal, I think, and this affects your predictions. You might expect to have an ice age in 10,000 years.
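If you want the computer to draw the line as well, a least-squares fit in those same probit coordinates does roughly what the eye does, though it ignores the judgment call about the drop-off at the left, so it can land a bit higher than a hand-drawn line. A sketch:

```python
import numpy as np
from scipy.stats import norm

heights = sorted([8, 6, 3, 5, 7], reverse=True)
positions = [(i + 0.5) / len(heights) for i in range(len(heights))]

# Fit a straight line to the points in probit (probability paper) coordinates
x = norm.ppf(positions)
slope, intercept = np.polyfit(x, heights, 1)

# Read off where the line crosses the 1% exceedance likelihood
flood_100yr = slope * norm.ppf(0.01) + intercept
print(f"straight-line estimate of the 100-year flood: {flood_100yr:.1f} ft")
```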
The standard deviation we calculated above is related to a quality standard called six sigma, something you may have heard of. If we had a lot of parts we were making, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is about 2.33 standard deviations from the norm, 99% of your product will be within spec; good, but not great. If your spec sits six sigmas out, there is only about a one-in-a-billion chance of any given part missing it, other things being equal. Some companies (like Starbucks) aim for this low variation, a six sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion. The average is rarely an optimum, and you want to have a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that’s subject matter for a further essay). If you want help with statistics, or a quality engineering project, contact us.
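The spec-compliance numbers above come straight from the normal tail area. A quick sketch, using scipy’s survival function (the fraction of a normal distribution lying above a given number of sigmas), taking the numbers at face value, other things being equal:

```python
from scipy.stats import norm

for sigmas in (2.33, 3, 4.5, 6):
    out_of_spec = norm.sf(sigmas)       # expected fraction above the upper spec
    print(f"spec at {sigmas} sigma: about {out_of_spec:.1e} out of spec "
          f"({1 - out_of_spec:.7%} within)")
```

At six sigmas the out-of-spec fraction works out to roughly 1 in a billion, matching the confidence quoted above.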
I’ve also been meaning to write about the phrase “other things being equal,” ceteris paribus in Latin. All this math only makes sense so long as the general parameters don’t change much. Your home won’t flood so long as they don’t build a new mall upriver from you with runoff going into the river, and so long as the dam doesn’t break. If these are concerns (and they should be), you still need to use statistics and probability paper, but you will now have to use other data, like the likelihood of malls going up, or of dams breaking. When you include this other data, you will find the probability curve is not normal, but typically has a long tail (when the dam breaks, the water goes up by a lot). That’s outside of standard statistical analysis, but it’s why those hundred-year floods come a lot more often than once in 100 years. I’ve noticed that, even at Starbucks, more than 1 in 1,000,000,000 cups of coffee comes out wrong. Even in analyzing a common snafu like this, you still use probability paper, though. It may be “situation normal,” but the distribution curve it describes has an abnormal tail.
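To see how a rare event like a dam break produces that long tail, you can simulate a mixture: ordinary years drawn from a roughly normal distribution, plus a small chance each year of a catastrophic surge. The numbers here (a 0.2% yearly dam-break chance, a 15-foot surge) are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
years = 100_000

# Ordinary flood heights: roughly normal (illustrative numbers, not real data)
ordinary = rng.normal(5.8, 1.92, years)

# Hypothetical dam break: 0.2% chance per year, adding a large surge
dam_break = rng.random(years) < 0.002
heights = ordinary + dam_break * rng.normal(15.0, 3.0, years)

# The 99th percentile looks normal, but the 99.9th jumps far above
# anything a normal fit would predict: that's the long tail
print("99th percentile:  ", round(np.percentile(heights, 99), 1), "ft")
print("99.9th percentile:", round(np.percentile(heights, 99.9), 1), "ft")
```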
by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/joke, by the way. The first one dealt with bombs on airplanes; take a look.