Category Archives: Business

Where does industrial CO2 come from? China mostly.

The US is in the process of imposing strict regulations on carbon dioxide as a way to stop global warming and climate change. We have also closed nearly new power plants, replacing them with cleaner options like the 2.2 billion dollar solar-electric generator at Ivanpah dry lake, and this January our president imposed a ban on incandescent lightbulbs of 40 W and higher. But it might help to know that China produced twice as much of the main climate change gas, carbon dioxide (CO2), as the US did in 2012, and the ratio seems to be growing. One reason China produces so much CO2 is that China generates electricity from dirty coal using inefficient turbines.

Where the CO2 is coming from: a fair amount from the US and Europe, but even more from China and India.

From EDGAR 4.2: as of 2012, twice as much carbon dioxide (CO2) was coming from China as from the US and Europe.

It strikes me that a good approach to reducing the world’s carbon-dioxide emissions is to stop manufacturing so much in China. Our US electric plants use more efficient generating technology and burn lower-carbon fuels than China’s do. We then add scrubbers and pollution-reduction equipment that are hardly used in China. US manufacture thus produces not only less carbon dioxide than Chinese manufacture, it also avoids other forms of air pollution, like NOx and SOx. Add to this the advantage of having fewer ships carrying products to and from China, and it’s clear that we could significantly reduce the world’s air problems by moving manufacture back to the USA.

I should also note that manufacture in the US helps the economy by keeping jobs and taxes here. A simple way to reduce purchases from China and collect some tax revenue would be to impose an import tariff on Chinese goods based, perhaps, on the difference in carbon emissions or other pollution involved in Chinese manufacture and transport. While I have noted a lack of global warming these past sixteen years, that doesn’t mean I like pollution. It’s worthwhile to clean the air, and if we collect tariffs from the Chinese and help the US economy too, all the better.

Robert E. Buxbaum, February 24, 2014. Nuclear power produces no air pollution and uses a lot less land area than solar and wind projects.

Hydrogen cars and buses are better than Tesla

Hydrogen fueled cars and buses are as clean to drive as battery vehicles and have better range and faster fueling times. Cost-wise, a hydrogen fuel tank is far cheaper and lighter than an equivalent battery and lasts far longer. Hydrogen is likely safer because the tanks do not carry their oxidant in them. And the price of hydrogen is relatively low, about that of gasoline on a per-mile basis: far lower than batteries when the cost of battery wear-out is included. Both Presidents Clinton and Bush preferred hydrogen over batteries, but the current administration favors batteries. Perhaps history will show them correct, but I think otherwise. Currently, there is not a hydrogen bus, car, or boat making runs at Disney’s Experimental Prototype Community of Tomorrow (EPCOT), nor is there an electric bus, car, or boat. I suspect it’s a mistake, at least concerning the lack of a hydrogen vehicle.

The best hydrogen vehicles on the road have more range than the best electric vehicle, and fuel faster. The hydrogen-powered Honda Clarity debuted in 2008. It has a 270 mile range and takes 3-5 minutes to fuel with hydrogen at 350 atm (5,150 psi). By contrast, the Tesla S sedan that debuted in 2012 claims only a 208 mile range for its standard 60 kWh configuration (the EPA rating is 190 miles) and requires three hours to charge using the fastest charger offered, 20 kW.

What limits the range of battery vehicles is that the battery packs are very heavy and expensive. Despite using modern lithium-ion technology, Tesla’s 60 kWh battery weighs 1050 lbs including internal cooling, and adds another 250 lbs to the car for extra structural support. The Clarity fuel system weighs a lot less. The hydrogen cylinders weigh 150 lb and require a fuel cell stack (30 lb) and a smaller lithium-ion battery for start-up (90 lb). The net effect is that the Clarity weighs 3582 lbs vs 4647 lbs for the Tesla S. This extra weight of the Tesla seems to hurt its mileage by about 10%. The Tesla gets about 3.3 mi/kWh, or 0.19 mile/lb of battery, versus 60 miles/kg of hydrogen for the Clarity, suggesting 3.6 mi/kWh at typical efficiencies.

High pressure hydrogen tanks are smaller than batteries and cheaper per unit range. The higher the pressure, the smaller the tank. The current Clarity fuels with 350 atm (5,150 psi) hydrogen, and the next generation (shown below) will use higher pressure to save space. But even with 340 atm hydrogen (5,000 psi) a Clarity could fuel a 270 mile range with four 8″ diameter (ID) tanks, 4′ long. I don’t know how Honda makes its hydrogen tanks, but suitable tanks might be made from 0.065″ maraging (aged) stainless steel (UTS = 350,000 psi, density 8 g/cc), surrounded by 0.1″ of aramid fiber (UTS = 250,000 psi, density = 1.6 g/cc). With this construction, each tank would weigh 14.0 kg (30.5 lbs) empty, and hold 11,400 standard liters, 1.14 kg (2.5 lb), of hydrogen at pressure. These tanks could cost $1500 total; the 270 mile range is 40% more than the Tesla S at about 1/10 the cost of current Tesla S batteries. The current price of a replacement Tesla battery pack is $12,000, subsidized by DoE; without the subsidy, the likely price would be $40,000.
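For readers who want to check the tank arithmetic, here is a rough Python sketch (my own back-of-envelope, not Honda’s design calculation). It ignores the end caps and assumes room temperature and a hydrogen compressibility factor of about 1.2 at 340 atm, so expect ballpark numbers only:

```python
import math

# Rough check of the hydrogen-tank numbers quoted above (a sketch, not a design).
ID = 8 * 0.0254           # 8" inner diameter, m
L = 4 * 0.3048            # 4' length, m
t_steel = 0.065 * 0.0254  # maraging steel wall thickness, m
t_aramid = 0.1 * 0.0254   # aramid overwrap thickness, m
rho_steel, rho_aramid = 8000.0, 1600.0     # kg/m^3

V_gas = math.pi * (ID / 2)**2 * L                                 # internal volume, m^3
m_steel = math.pi * (ID + t_steel) * t_steel * L * rho_steel      # cylindrical shell only
m_aramid = math.pi * (ID + 2*t_steel + t_aramid) * t_aramid * L * rho_aramid

P = 340 * 101325.0        # ~5,000 psi, in Pa
R, T, Z, M_H2 = 8.314, 298.0, 1.2, 0.002016                       # Z ~ 1.2 is an assumption
m_H2 = P * V_gas / (Z * R * T) * M_H2                             # kg of hydrogen per tank

print(f"internal volume: {V_gas*1000:.0f} L")
print(f"shell weight: {m_steel + m_aramid:.1f} kg (steel {m_steel:.1f} + aramid {m_aramid:.1f})")
print(f"hydrogen per tank: {m_H2:.2f} kg")
# Prints roughly 40 L, ~14 kg of shell, and ~1 kg of hydrogen per tank -- in the
# ballpark of the figures above; end caps and the exact compressibility shift the last digit.
```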


Next generation Honda fuel cell vehicle prototype at the 2014 Detroit Auto Show.

Currently hydrogen is more expensive than electricity per unit of energy, but my company has technology to make it cheaply and more cleanly than electricity. My company, REB Research, makes hydrogen generators that produce ultra pure hydrogen by steam reforming alcohol in a membrane reactor. A standard generator, suitable for a small fueling station, outputs 9.5 kg of hydrogen per day, consuming 69 gal of methanol-water. At 80¢/gal for methanol-water, and 12¢/kWh for electricity, the output hydrogen costs $2.50/kg. A car owner who drove 120,000 miles would spend $5,000 on hydrogen fuel. For that distance, a Tesla owner would spend only $4,400 on electricity, but would have to spend another $12,000 to replace the battery. Tesla batteries have a 120,000 mile life, and the range decreases with age.
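Here is the per-mile fuel-cost comparison as a short sketch; all the inputs ($2.50/kg hydrogen, 60 mi/kg, 3.3 mi/kWh, 12¢/kWh, and the $12,000 subsidized battery replacement at 120,000 miles) are the figures quoted above, not independent data:

```python
# Per-mile fuel cost over the life of a battery, using the figures quoted in the post.
miles = 120_000

h2_price, h2_mileage = 2.50, 60.0         # $/kg and mi/kg for the Clarity, per the text
elec_price, ev_mileage = 0.12, 3.3        # $/kWh and mi/kWh for the Tesla S, per the text
battery_replacement = 12_000.0            # subsidized replacement price at ~120,000 miles

h2_cost = miles / h2_mileage * h2_price
ev_cost = miles / ev_mileage * elec_price
print(f"hydrogen fuel for {miles:,} mi:   ${h2_cost:,.0f}")
print(f"electricity for {miles:,} mi:     ${ev_cost:,.0f}")
print(f"electricity + battery wear-out:  ${ev_cost + battery_replacement:,.0f}")
# about $5,000 vs $4,400, or ~$16,400 once battery wear-out is counted.
```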

For a bus or truck at EPCOT, the advantages of hydrogen grow fast. A typical bus is expected to travel much further than 120,000 miles, and is expected to operate for 18 hour shifts in stop-go operation getting perhaps 1/4 the miles/kWh of a sedan. The charge time and range advantages of hydrogen build up fast. it’s common to build a hydrogen bus with five 20 foot x 8″ tanks. Fueled at 5000 psi., such buses will have a range of 420 miles between fill-ups, and a total tank weight and cost of about 600 lbs and $4000 respectively. By comparison, the range for an electric bus is unlikely to exceed 300 miles, and even this will require a 6000 lb., 360 kWh lithium-ion battery that takes 4.5 hours to charge assuming an 80 kW charger (200 Amps at 400 V for example). That’s excessive compared to 10-20 minutes for fueling with hydrogen.
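The bus arithmetic is simple enough to check in a few lines; the figures are the ones quoted above:

```python
# Bus comparison sketch, using the figures quoted above as inputs.
battery_kwh, charger_kw = 360.0, 80.0
print(f"battery bus recharge: {battery_kwh/charger_kw:.1f} hours at {charger_kw:.0f} kW")
print(f"charger current at 400 V: {charger_kw*1000/400:.0f} A")

h2_range_mi, h2_fill_min = 420, 15        # hydrogen bus range and (roughly) fill time, per the text
print(f"hydrogen bus: ~{h2_range_mi} mi between fill-ups, ~{h2_fill_min} min to refill")
```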

While my hydrogen generators are not cheap (about $500,000 for the one above, including the cost of a compressor), the cost of an 80 kW DC charger is similar once you include the cost of running a 200 Amp, 400 V power line. Tesla has shown there are a lot of people who value clean, futuristic transport if it comes with comfort and style. A hydrogen car can meet that handily, and can provide the extra comforts of longer range and faster refueling.

Robert E. Buxbaum, February 12, 2014 (Lincoln’s birthday). Here’s an essay on Lincoln’s Gettysburg address, on the safety of batteries, and on battery cost vs hydrogen. My company, REB Research makes hydrogen generators and purifiers; we also consult.

Stoner’s prison and the crack mayor

With the release of a video of Rob Ford, the Mayor of Toronto, smoking crack while in office, and the admission that at least two US presidents smoked pot, as did the Beatles, the Stones, and most of Hollywood, it seems worthwhile to consider the costs and benefits of our war on drugs, especially pot. Drugs are typically bad for productivity and usually bad for health, so some regulation seems worthwhile, but most countries do not punish drug sale or use nearly as harshly as we do in the US.


The Freak Brothers by Gilbert Shelton. Clearly these boys were not improved by drugs, but perhaps we could do better than incarcerating them, and their fans, for years, or life.

While US penalties vary state by state, most states have high minimum penalties that a judge can not go below. In Michigan, where I live, medical marijuana is legalized, but all supply is still illegal. Marijuana cultivation, even for personal medical use, is a felony carrying a minimum punishment of 4 years in state prison and a $20,000 fine. For cultivation of more than 20 plants the minimum sentence is 7 years in prison and a $500,000 fine; cultivating 200 or more plants results in 15 years plus a $10,000,000 fine. These are first-time, minimum sentences where the judge can not consider mitigating circumstances, like a prescription, for a drug that was accepted for use in the US in the 70s, is legal in Holland, legalized in Colorado, and is near-legal in Belgium. While many pot smokers were not served by the herb, many went on to be productive, e.g. our current president and the Beatles.

In Michigan, the mandatory minimums get worse if you are a repeat offender, especially a 3 time offender. Possession of hard drugs, and sale or cultivation of marijuana, makes you a felon; a gun found on a felon adds 2 years and another felony. With three felonies you go to prison for life, effectively, so there is little difference between the sentence of a repeat violent mugger and a kid selling $10 rocks of crack in Detroit. Per capita, America has more people in prison than Russia, China, or almost any other industrialized nation, and the main cause is long minimum sentences.

In 2011, Michigan spent an average of $2,343 per month per prisoner, or $28,116/year: somewhat over 1.3 billion dollars per year in total. To this add the destruction of the criminal’s family, and the loss of whatever value he/she might have added to society. Reducing sentences by 10 or 20% would go a long way towards paying off Detroit’s bankruptcy, and would put a lot of useful people back into the work-force where they might do some good for themselves and the state. Some 60.8% of drug arrestees were employed before they were arrested for drugs, with an average income of $1050/month. That’s a lot of roofers, electricians, carpenters, and musicians — useful people. As best we can tell, the long sentences don’t help, but lead to higher rates of recidivism and increased violent behavior. If you spend years in jail, you are likely to become more violent, rather than less. Some 75% of drug convicts have no prior record of violent crime, so why does a first-time offense have to be a felony? If we need minimums, couldn’t they be 6 months and a $1000 fine, or only apply if there is violence?

Couldn’t we allow judges more leeway in sentencing, especially for drugs? Recall that Michiganders thought they’d legalized marijuana for medical use, and that even hard drugs were legal not that long ago. There was a time when Coca-Cola contained cocaine and when Pope Leo was a regular drinker of cocaine-laced wine. If two presidents smoked pot, and the Mayor of Toronto could do a decent job after cocaine, why should we incarcerate ordinary users for life? Let’s balance strict justice with mercy, so the fabric of society is not strained to breaking.

Robert Buxbaum, Jan 16, 2014. Here are some other thoughts on Detroit and crime.

Ocean levels down from 3000 years ago; up from 20,000 BC

In 2006 Al Gore claimed that industry was causing 2-5°C of global warming per century, and that this, in turn, would cause the oceans to rise by 8 m by 2100. Despite a record cold snap this week, and record ice levels in the Antarctic, the US has now banned all incandescent light bulbs of 40 W and over in an effort to stop the tragedy. This was a bad move, in my opinion, for a variety of reasons, not least because it seems the preferred replacement, compact fluorescents, produce more pollution than incandescents when you include disposal of the mercury and heavy metals they contain. And then there is the weak connection between US industry and global warming.

From the geologic record, we know that 2-5°C higher temperatures have been seen without major industrial outputs of pollution. These temperatures do produce the sea level rises that Al Gore warns about. Temperatures and sea levels were higher 3200 years ago (the Trojan war period), without any significant technology. Temperatures and sea levels were also higher 1900 years ago during the Roman warming. In those days Pevensey Castle (England), shown below, was surrounded by water.


During Roman times the world was warmer, and Pevensey Castle (right) was surrounded by water at high tide. If Al Gore is right about global warming, it will be surrounded by water again by 2100.

From a plot of sea level and global temperature, below, we see that during cooler periods the sea was much shallower than today: 140 m shallower 20,000 years ago at the end of the last ice age, for example. In those days, people could walk from Asia to Alaska. Climate, like weather, appears to be cyclically chaotic. I don’t think the last ice age ended because of industry, but it is possible that industry might help the earth to warm by 2-5°C by 2100, as Gore predicts. That would raise the sea levels, assuming there is no new ice age.


Global temperatures and ocean levels rise and sink together, and have changed by a lot over thousands of years.

While I doubt there is much we could do to stop the next ice age — it is very hard to change a chaotic cycle — trying to stop global cooling seems more worthwhile than trying to stop warming. We could survive a 2 m rise in the seas, e.g. by building dykes, but 2°C of cooling would be disastrous. It would come with a drastic reduction in crops, as during the famine year of 1816. And if the drop continued to a new ice age, that would be much worse. The last ice age included mile-high glaciers that extended over all of Canada and reached to New York. Only the polar bear and saber-toothed tiger did well (here’s a Canada joke, and my saber toothed tiger sculpture).

The good news is that the current global temperature models appear to be wrong, or highly over-estimated. Average global temperatures have not changed in the last 16 years, though the Chinese keep polluting the air (for some reason, Gore doesn’t mind Chinese pollution). It is true that Arctic ice extent is low, but then Antarctic ice is at record high levels. Perhaps it’s time to do nothing. While I don’t want more air pollution, I’d certainly re-allow US incandescent light bulbs. In cases where you don’t know otherwise, perhaps the wisest course is to do nothing.

Robert Buxbaum, January 8, 2014

Near-Poisson statistics: how many police and firemen for a small city?

In a previous post, I dealt with the nearly-normal statistics of common things, like river crests, and explained why 100 year floods come more often than once every hundred years. As is not uncommon, the data was sort-of like a normal distribution, but deviated at the tail (the fantastic tail of the abnormal distribution). But now I’d like to present my take on a sort of statistics that (I think) should be used for the common problem of uncommon events: car crashes, fires, epidemics, wars…

Normally the mathematics used for these processes is Poisson statistics, and occasionally exponential statistics. I think these approaches lead to incorrect conclusions when applied to real-world cases of interest, e.g. choosing the size of a police force or fire department of a small town that rarely sees any crime or fire. This is relevant to Oak Park, Michigan (where I live). I’ll show you how it’s treated by Poisson, and will then suggest a simpler way that’s more relevant.

First, consider an idealized version of Oak Park, Michigan (a semi-true version until the 1980s): the town had a small police department and a small fire department that saw only occasional crimes or fires, all of which required only 2 or 4 people respectively. Let’s imagine that the likelihood of having one small fire at a given time is x = 5%, and that of having a violent crime is y = 5% (it was 6% in 2011). A police department will need to have 2 policemen on call at all times, but will want 4 on the 0.25% chance that there are two simultaneous crimes (.05 x .05 = .0025); the fire department will want 8 souls on call at all times for the same reason. Either department will spend the other 95% of its time on training, paperwork, investigations of less-immediate cases, care of equipment, and visiting schools, but this number on call is needed for immediate response. As there are 8760 hours per year and the police and fire workers only work 2000 hours, you’ll need at least 4.4 times this many officers. We’ll add some more for administration and sick-day relief, and predict a total staff of 20 police and 40 firemen. This is, more or less, what it was in the 1980s.

If each fire or violent crime took 3 hours (1/8 of a day), you’d find that the entire on-call staff was busy 7.3 times per year (8 x 365 x .0025 = 7.3), or a bit more since there is likely a seasonal effect, and since fires and violent crimes don’t fall into neat time slots. Having 3 fires or violent crimes simultaneously was very rare — and for those rare times, you could call on nearby communities, or do triage.
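If you’d like to check this staffing arithmetic, here’s a minimal Python sketch of the idealized calculation above; the 5% event chance, 3-hour slots, and 2000 work-hours per year are the assumptions stated in the text:

```python
import math

# Sketch of the staffing arithmetic above (idealized 1980s Oak Park).
p_event = 0.05                 # chance of a small fire (and, separately, of a violent crime) per 3-hour slot
crew_fire, crew_crime = 4, 2   # responders needed per fire / per crime

p_two = p_event ** 2           # 0.0025: two simultaneous fires (or crimes); rare, but plan for it

hours_per_year, hours_per_worker = 8760, 2000
coverage = hours_per_year / hours_per_worker          # ~4.4 workers to fill one around-the-clock seat

police_staff = math.ceil(2 * crew_crime * coverage)   # ~18; call it 20 with admin and sick-day relief
fire_staff = math.ceil(2 * crew_fire * coverage)      # ~36; call it 40

slots_per_year = 8 * 365                              # 3-hour slots in a year
all_on_call_busy = slots_per_year * p_two             # ~7.3 times a year the double crew is out at once

print(police_staff, fire_staff, round(all_on_call_busy, 1))   # prints: 18 36 7.3
```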

In response to austerity (towns always overspend in the good times, and come up short later), Oak Park realized it could use fewer employees if it combined the police and fire departments into an entity renamed “Public Safety.” With 45-55 employees assigned to combined police/fire duty they’d still be able to handle the few violent crimes and fires. The sum of these events occurs 10% of the time, and we can apply the sort of statistics above to suggest that about 91% of the time there will be neither a fire nor a violent crime; about 9% of the time there will be one or more fires or violent crimes (there is a 5% chance for each, but also a chance that 2 happen simultaneously). At least two events will occur 0.9% of the time (2 fires, 2 crimes, or one of each), and there will be 3 or more events 0.09% of the time, or about twice per year. The combined force allowed fewer responders since it was only rarely that 4 events happened simultaneously, and some of those were 4 crimes or 3 crimes and a fire — events that needed fewer responders. Your only real worry was when you had 3 fires, something that should happen every 3 years or so, an acceptable risk at the time.
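Here’s a short sketch of the combined-department numbers; the 9% figure and the rule that each extra simultaneous event is about 1/10 as likely are taken from the discussion above (and anticipate the power-law treatment below):

```python
# Combined police/fire sketch: the sum of small fires and violent crimes keeps
# one or more crews busy ~9-10% of the time, and each extra simultaneous event is
# taken to be ~1/10 as likely as the last.
p_any = 0.09                    # chance a 3-hour slot has one or more events
decay = 0.1                     # each additional simultaneous event is 1/10 as likely
slots_per_year = 8 * 365

for k in range(1, 5):
    p_k_or_more = p_any * decay**(k - 1)
    print(f"{k}+ simultaneous events: {p_k_or_more:.4%} of slots, "
          f"~{p_k_or_more * slots_per_year:.1f} times/year")
# 3+ events works out to roughly twice a year; three simultaneous *fires* is about
# 1/8 of those, i.e. roughly once every three or four years, as estimated above.
```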

Before going to what caused this model of police and fire service to break down as Oak Park got bigger, I should explain Poisson statistics, exponential statistics, and power law/fractal statistics. The only type of statistics taught for dealing with crime like this is Poisson statistics, a type that works well when the events happen so suddenly and pass so briefly that we can claim to be interested only in how often we will see multiples of them in a period of time. The Poisson distribution formula is P = r^k e^-r / k!, where P is the probability of having some number of events, r is the total number of events divided by the total number of periods, and k is the number of events we are interested in.

Using the data above for a period-time of 3 hours, we can say that r = 0.1, and the likelihood of zero, one, or two events beginning in a 3 hour period is 90.4%, 9.04%, and 0.45% respectively. These numbers are reasonable in terms of when events happen, but they are irrelevant to the problem anyone is really interested in: what resources are needed to come to the aid of the victims. That’s the problem with Poisson statistics: it treats something that no one cares about (when the things start), and under-predicts the important things, like how often you’ll have multiple events in progress. For 4 events, Poisson statistics predicts it happens only .00037% of the time — true enough, but irrelevant in terms of how often multiple teams are needed out on the job. We need four teams no matter if the 4 events began in a single 3 hour period or in close succession in two adjoining periods. The events take time to deal with, and the time overlaps.
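For those who want to reproduce these Poisson numbers, a minimal sketch:

```python
from math import exp, factorial

def poisson(k, r):
    """Poisson probability of exactly k events when the average per period is r."""
    return r**k * exp(-r) / factorial(k)

r = 0.1   # average events (fires + violent crimes) per 3-hour period
for k in range(5):
    print(f"P({k} events) = {poisson(k, r):.6f}")
# P(0) ~ 0.9048, P(1) ~ 0.0905, P(2) ~ 0.0045, and P(4) ~ 0.0000038 (about 0.0004%),
# matching the percentages quoted above.
```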

The way I’d dealt with these events, above, suggests a power law approach. In this case, each likelihood was 1/10 the previous, and the probability P = 0.9 × 10^-k. This is called power law statistics. I’ve never seen it taught, though it appears very briefly in Wikipedia. Those who like math can re-write the above relation as log₁₀P = log₁₀(0.9) - k.

One can generalize the above so that, for example, the decay rate can be 1/8 and not 1/10 (that is, the chance of having k+1 events is 1/8 that of having k events). In this case, we could say that P = 7/8 × 8^-k, or more generally that log₁₀P = log₁₀A - kβ. Here k is the number of teams required at any time, β is a free variable, and A = 1 - 10^-β because the sum of all probabilities has to equal 100%.
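As a check on the normalization claim, here’s a small sketch of this distribution for β = 1 (the 1/10 decay) and for the 1/8-decay case:

```python
import math

# The distribution described above: P(k) = A * 10**(-beta*k), normalized so that
# the probabilities for k = 0, 1, 2, ... sum to 1, which forces A = 1 - 10**(-beta).
def p_teams(k, beta):
    A = 1 - 10**(-beta)
    return A * 10**(-beta * k)

# beta = 1 reproduces P = 0.9 * 10**(-k); beta = log10(8) gives the 1/8-decay case.
for beta in (1.0, math.log10(8)):
    probs = [p_teams(k, beta) for k in range(4)]
    print([round(p, 4) for p in probs], "  first four terms sum to", round(sum(probs), 4))
```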

In college math, when behaviors like this appear, they are incorrectly translated into differential form to create “exponential statistics.” One begins by saying ∂P/∂k = -βP, where β = 0.9 as before, or remains some free-floating term. Everything looks fine until we integrate and set the total to 100%. We find that P = (1/λ)e^-kλ for k ≥ 0. This looks the same as before except that the pre-exponential always comes out wrong; in the above, the chance of having 0 events turns out to be 111%. Exponential statistics has the advantage (or disadvantage) that we find a non-zero possibility of having 1/100 of a fire, or 3.14159 crimes at a given time. We assign excessive likelihoods to fractional events and end up predicting artificially low likelihoods for the discrete events we are interested in. There is no way around this except to move away from a calculus that assumes continuity in a world where there is none. Discrete math is better than calculus here.

I now wish to generalize the power law statistics to something similar but more robust. I’ll call my development fractal statistics (there’s already a section called fractal statistics on Wikipedia, but it’s really power-law statistics; mine will be different). Fractals were championed by Benoit B. Mandelbrot (whose middle initial, according to the old joke, stood for Benoit B. Mandelbrot). Many random processes look fractal, e.g. the stock market. Before going there, I’d like to recall that the motivation for all this is figuring out how many people to hire for a police/fire force; we are not interested in any other irrelevant factoid, like how many calls of a certain type come in during a period of time.

To choose the size of the force, let’s estimate how many times per year some number of people are needed simultaneously now that the city has bigger buildings and is seeing a few larger fires and crimes. Let’s assume that the larger fires and crimes occur only 0.5% of the time but might require 15 officers or more. Being prepared for even one event of this size will require expanding the force to about 80 men, 50% more than we have today, but we find that this expansion isn’t enough to cover the 0.0025% of the time when we will have two such major events simultaneously. That would require a 160 man fire-squad, and we still could not deal with two major fires and a simultaneous assault, or with a strike, or a lot of people who take sick at the same time.

To treat this situation mathematically, we’ll say that the number of times per year when a certain number of people are needed relates to the number of people through a simple modification of the power law statistics. Thus: log₁₀N = A - βθ, where A and β are constants, N is the number of times per year that some number of officers are needed, and θ is the number of officers needed. To solve for the constants, plot the experimental values on a semi-log scale, and find the best straight line: -β is the slope and A is the intercept. If the line is really straight, you are now done, and I would say that the fractal order is 1. But from the above discussion, I don’t expect this line to be straight. Rather I expect it to curve upward at high θ: there will be a tail where you require a high number of officers more often than the straight line predicts. One might be tempted to modify the above by adding a higher-order term in θ, but this will cause problems at very high θ. Thus, I’d suggest a fractal fix.

My fractal modification of the equation above is the following: log₁₀N = A - βθ^w, where A and β are similar to the power law coefficients and w is the fractal order of the decay, a coefficient that I expect to be slightly less than 1. To solve for the coefficients, pick a value of w, and find the best fits for A and β as before. The right value of w is the one that results in the straightest line fit. The equation above does not look quite like anything I’ve seen, or anything like the one shown in Wikipedia under the heading of fractal statistics, but I believe it to be correct — or at least useful.
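Here is a sketch of that fitting recipe in Python. The (θ, N) data below are made up purely for illustration, and scanning w with a least-squares line fit is just one way to pick the straightest line; I’m not claiming the city keeps records in this form:

```python
import numpy as np

# Sketch of the fractal-order fit described above: log10(N) = A - beta * theta**w.
# The (theta, N) pairs below are made-up illustration data, not Oak Park records.
theta = np.array([1, 2, 4, 6, 8, 12, 15], dtype=float)   # officers needed simultaneously
N = np.array([300, 60, 14, 5, 2.2, 0.8, 0.4])            # times per year that many are needed
logN = np.log10(N)

best = None
for w in np.arange(0.5, 1.001, 0.01):                     # scan trial fractal orders
    X = np.column_stack([np.ones_like(theta), theta**w])
    coef, *_ = np.linalg.lstsq(X, logN, rcond=None)       # straight-line fit of logN vs theta**w
    rss = float(np.sum((X @ coef - logN)**2))             # how far from straight the fit is
    if best is None or rss < best[0]:
        best = (rss, w, coef)

rss, w, (A, slope) = best
print(f"fractal order w ~ {w:.2f},  A ~ {A:.2f},  beta ~ {-slope:.3f},  residual {rss:.4f}")
```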

To treat this politically is more difficult than treating it mathematically. I suspect we will have to combine our police and fire departments with those of surrounding towns, and this will likely require our city to revert to a pure police department and a pure fire department. We can’t expect other cities’ specialists to work with our generalists particularly well. It may also mean payments to other cities, plus (perhaps) standardizing salaries and staffing. This should save money for Oak Park and should provide better service, as specialists tend to do their jobs better than generalists (they also tend to be safer). But the change goes against the desire (need) of our local politicians to hand out favors of money and jobs to their friends. Keeping a non-specialized force costs lives as well as money, but that doesn’t mean we’re likely to change soon.

Robert E. Buxbaum  December 6, 2013. My two previous posts are on how to climb a ladder safely, and on the relationship between mustaches in WWII: mustache men do things, and those with similar mustache styles get along best.

A Masculinist History of the Modern World, pt. 1: Beards

Most people who’ve been in university are familiar with feminist historical analysis: the history of the world as a long process of women’s empowerment. I thought there was a need for a masculinist history of the world, too, and as this was no-shave November, I thought it should focus on the importance of face hair in the modern world. I’d like to focus this post on the importance of beards, particularly in the rise of communism and of the Republican party. I note that all the early communists and Republicans were bearded. More-so, the only bearded US presidents have been Republicans, and their main enemies, from Boss Tweed to Castro to Ho Chi Minh, have all been bearded too. I note too that communism and the Republican party have flourished and stagnated along with the size of their beards, with a mustache interlude of the early to mid 20th century. I’ll shave that for my next post.

Marxism and the Republican Party started at about the same time, bearded. They then grew in parallel, with each presenting a face of bold, rugged machismo, fighting the smooth tongues and chins of the Democrats and of Victorian society, and both favoring extending the franchise to women and the oppressed through the 1800s against opposition from weak-wristed, feminine liberalism.


Marx and Engels (middle) wrote the Communist Manifesto in 1848, the same year that the anti-slavery Free Soil Party (forerunner of Lincoln’s Republicans) formed, and the same year that saw Louis Napoleon (right) elected in France. The communists both wear full beards, but there is something not-quite sincere in the face hair at right and left.

Karl Marx (above, center left; not Groucho, left) founded the Communist League with Friedrich Engels, center right, in 1847 and wrote the Communist Manifesto a year later, in 1848. That year, too, Louis Napoleon would be elected, and the anti-slavery Free Soil party formed, made up of Whigs and Democrats who opposed extending slavery to the free soil of the western US. By 1856 the Free Soil party had collapsed, along with the Communist League. The core of the Free Soilers formed the anti-slavery Republican party and chose as their candidate the bearded explorer John C. Fremont under the motto, “Free soil, free speech, free men.” For the next century, virtually all Republican presidential candidates would have face hair.


Lincoln, the Whig, had no beard — he was the western representative of the party of eastern elites. Lincoln, the Republican, grew whiskers. He was now a log-cabin frontiersman, rail-splitter.

In Europe, revolution was in the air: the battle of the barricades against clean-chinned Louis Napoleon. Marx (Karl) writes his first political-economic work, the Critique of Political Economy, in 1857, presenting a theory of freedom through work value. The political-economic solution to slavery: abolish property. Lincoln debates Douglas and begins a run for president while still clean-shaven. While Mr. Lincoln did not know about Karl Marx, Marx knew about Lincoln. In the 1850s and 60s he was employed as a correspondent for the New York Tribune, writing about American politics, in particular about the American struggle with slavery and inflation/deflation cycles.


William Jennings Bryan was three times the Democratic presidential candidate, more often than anyone else. He opposed alcohol, gambling, big banks, intervention abroad, monopoly business, teaching evolution, and gold — but he supported the KKK, and, unlike most Democrats, women’s suffrage.

As time passed, bearded frontier Republicans would fight against the corruption of Tammany Hall, and the offense to freedom presented by prohibition, anti-industry sentiment, and anti-gambling laws. Against them, clean-shaven Democrat elites could claim they were only trying to take care of a weak-willed population that needed their help. The Communists would gain power in Russia, China, and Vietnam fighting against elites too, not only in their own countries but also against the American and British elites who (they felt) were keeping them down by a sort of mommy imperialism.

In the US, moderate Republicans (with mustaches) would try to show a gentler side to this imperialism, while fighting against Democrat isolationism. Mustached Communists would also present a gentler imperialism by helping communist candidates in Europe, Cuba, and the Far East. But each was heading toward a synthesis of ideas. The Republicans embraced (eventually) the minimum wage and social security. Communists embraced (eventually) some limited amount of capitalism as a way to fight starvation. In my life-time, the Republicans could win elections by claiming to fight communism, and communists could brand Republicans as “crazy war-mongers,” but the bureaucrats running things were more alike than different. When the bureaucrats sat down together, it was as in Animal Farm: you could look from one to the other and hardly see any difference.


The history of Communism seen as a decline in face hair. The long march from the beard to the bare. From rugged individualism to mommy state socialism. Where do we go from here?

Today both movements provide just the barest opposition to the Democratic Party in the US, and to bureaucratic socialism in China and the former Soviet Union. All politicians oppose alcohol, drugs, and gambling, at least officially; all oppose laissez-faire, monopoly business, and the gold standard in favor of government-created competition and (semi-controlled) inflation. All oppose wide-open immigration, and interventionism (the Republicans and Communists a little less). Whoever is in power, it seems the beardless, mommy conservatism of William Jennings Bryan has won. Most people are happy with the state providing our needs and protecting our morals. Is this to be the permanent state of the world? There is no obvious opposition to the mommy state. But without opposition, won’t these socialist elites become more and more oppressive? I propose a bold answer, not one cut from the old cloth; the old paradigms are dead. The new opposition must sprout from the bare chin that is the new normal. Behold the new breed of beard.


The future opposition must grow from the barren ground of the new normal. Another random thought on the political implications of no-shave November.

by Robert E. Buxbaum, No Shave, November 15, 2013. Keep watch for part 2 in this horrible (tongue in) cheek series: World War 2: Big mustache vs little mustache. See also: Roosevelt: a man, a moose, a mustache, and The surrealism of Salvador: man on a mustache.

 

Ab Normal Statistics and a joke


The normal distribution of observation data looks sort of like a ghost. A distribution that really looks like a ghost is scary.

It’s funny because the normal distribution curve looks sort-of like a ghost. It’s also funny because it would be possible to imagine data being distributed like the ghost, and most people would be totally clueless as to how to deal with data like that — abnormal statistics. They’d find it scary and would likely try to ignore the problem. When faced with a statistics problem, most people just hope that the data is normal; they then use standard mathematical methods with a calculator or simulation package and hope for the best.

Take the following example: you’re interested in buying a house near a river. You’d like to analyze river flood data to know your risks. How high will the river rise in 100 years, or 1,000? Or perhaps you would like to analyze wind data to know how strong to make a sculpture so it does not blow down. Your first thought is to use the normal distribution math in your college statistics book. This looks awfully daunting (it doesn’t have to be), and may be wrong, but it’s all you’ve got.

The normal distribution graph is considered normal, in part, because it’s fairly common to find that measured data deviates from the average in this way. Also, this distribution can be derived from the mathematics of an idealized view of the world, where any variation derives from multiple small errors around a common norm, and not from some single, giant issue. It’s not clear this is a realistic assumption in most cases, but it is comforting. I’ll show you how to do the common math as it’s normally done, and then how to do it better and quicker with no math at all, and without those assumptions.

Let’s say you want to know the hundred-year maximum flood-height of a river near your house. You don’t want to wait 100 years, so you measure the maximum flood height every year over five years, say, and use statistics. Let’s say you measure 8 feet, 6 feet, 3 feet (a drought year), 5 feet, and 7 feet.

The “normal” approach (pardon the pun) is to take a quick look at the data, and see that it is sort-of normal (many people don’t bother). One now takes the average, calculated here as (8+6+3+5+7)/5 = 5.8 feet. About half the time the flood waters should be higher than this (a good researcher would check this; many do not). You now calculate the standard deviation for your data, a measure of the width of the ghost, generally using a spreadsheet. The formula for the standard deviation of a sample is s = √{[(8−5.8)² + (6−5.8)² + (3−5.8)² + (5−5.8)² + (7−5.8)²]/4} = 1.92. The use of 4 here in the denominator instead of 5 is called Bessel’s correction; it reflects the fact that a standard deviation is meaningless if there is only one data point.

For normal data, the one hundred year maximum height of the river (the 1% maximum) is the average height plus about 2.33 times the standard deviation; in this case, 5.8 + 2.33 x 1.92 = 10.3 feet. If your house is any higher than this you should expect few troubles in a century. But is this confidence warranted? You could build on stilts or further from the river, but you don’t want to go too far. How far is too far?
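Here’s the same “normal” calculation done in a few lines, using Python’s statistics module; the 2.33 multiplier is the one-sided z-value for the 1% tail of a normal distribution:

```python
import statistics

heights = [8, 6, 3, 5, 7]                  # yearly maximum flood heights, feet

mean = statistics.mean(heights)            # 5.8 ft
s = statistics.stdev(heights)              # sample std dev with Bessel's correction: ~1.92 ft

z_1pct = 2.33                              # one-sided z-value for the 1% (100-year) tail
print(f"mean {mean:.1f} ft, std dev {s:.2f} ft, 100-year estimate {mean + z_1pct*s:.1f} ft")
# ~10.3 ft -- assuming the yearly maxima really are normally distributed.
```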

So let’s do this better. We can, with less math, through the use of probability paper. As with any good science we begin with data, not with assumptions, like that the data is normal. Arrange the river height data in a list from highest to lowest (or lowest to highest), and plot the values in this order on your probability paper as shown below. That is, on paper where likelihoods from 0.01% to 99.99% are arranged along the bottom (the x axis), and your other numbers, in this case the river heights, are the y values listed at the left. Graph paper of this sort is sold in university bookstores; you can also get jpeg versions online, but they don’t look as nice.


Probability plot of the maximum river height over 5 years. If the data suggests a straight line, like here the data is reasonably normal. Extrapolating to 99% suggests the 100 year flood height would be 9.5 to 10.2 feet, and that it is 99.99% unlikely to reach 11 feet. That’s once in 10,000 years, other things being equal.

For the x axis values of the 5 data points above, I’ve taken the likelihood to be the middle of its percentile band. Since there are 5 data points, each point is taken to represent its own 20 percentile band; the middles appear at 10%, 30%, 50%, etc. I’ve plotted the highest value (8 feet) at the 10% point on the x axis, that being the middle of the upper 20%. I then plotted the second highest (7 feet) at 30%, the middle of the second 20%; the third, 6 ft, at 50%; the fourth at 70%; and the drought year maximum (3 feet) at 90%. When done, I judge whether a reasonably straight line would describe the data. In this case, a line through the data looks reasonably straight, suggesting a fairly normal distribution of river heights. I notice that, if anything, the heights drop off at the left, suggesting that really high river levels are less likely than normal. The points will also have to drop off at the right since a negative river height is impossible. Thus my river heights describe a version of the ghost distribution in the cartoon above. This is a welcome finding since it suggests that really high flood levels are unlikely. If the data were non-normal, curving the other way, we’d want to build our house higher than a normal distribution would suggest.
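If you’d rather let the computer draw the probability paper, here is a sketch that uses the same mid-percentile plotting positions and fits the straight line by least squares (numpy and scipy are my choice here, not anything required by the method):

```python
import numpy as np
from scipy.stats import norm

heights = sorted([8, 6, 3, 5, 7])                 # yearly maxima, feet, lowest to highest
n = len(heights)
prob = (np.arange(n) + 0.5) / n                   # mid-percentile positions: 10%, 30%, 50%, 70%, 90%
z = norm.ppf(prob)                                # the x-axis of the probability paper

slope, intercept = np.polyfit(z, heights, 1)      # a straight line here = a normal fit
flood_100yr = intercept + slope * norm.ppf(0.99)  # extrapolate the line to the 1% (100-year) level
print(f"least-squares 100-year estimate: {flood_100yr:.1f} ft")
# ~10.3 ft; eyeballing the line instead gives the 9.5-10.2 ft range quoted in the caption.
# Plot heights against z (and label the z-axis in percent) to see the 'paper' itself.
```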

You can now find the 100 year flood height from the graph above without going through any of the math. Just draw your best line through the data, and look where it crosses the 1% value on your graph (that’s two major lines from the left in the graph above — you may have to expand your view to see the little 1% at top). My extrapolation suggests the hundred-year flood maximum will be somewhere between about 9.5 feet and 10.2 feet, depending on how I choose my line. This prediction is a little lower than we calculated above, and was done graphically, without the need for a spreadsheet or math. What’s more, our prediction is more accurate, since we were in a position to evaluate the normality of the data and thus able to fit the extrapolation line accordingly. There are several ways to handle extreme curvature in the line, but all involve fitting the curve some way. Most weather data is curved, e.g. normal crossed with a fractal, I think, and this affects your predictions. You might expect to have an ice age in 10,000 years.

The standard deviation we calculated above is related to a quality standard called six sigma — something you may have heard of. If we had a lot of parts we were making, for example, we might expect to find that the size deviation varies from a target according to a normal distribution. We call this variation σ, the Greek version of s. If your production is such that the upper spec is 2.33 standard deviations from the norm, 99% of your product will be within spec; good, but not great. If you’ve got six sigmas there is one-in-a-billion confidence of meeting the spec, other things being equal. Some companies (like Starbucks) aim for this low variation, a six sigma confidence of being within spec. That is, they aim for total product uniformity in the belief that uniformity is the same as quality. There are several problems with this thinking, in my opinion. The average is rarely an optimum, and you want to have a rational theory for acceptable variation boundaries. Still, uniformity is a popular metric in quality management, and companies that use it are better off than those that do nothing. At REB Research, we like to employ the quality methods of W. Edwards Deming; we assume non-normality and aim for an optimum (that’s subject matter for a further essay). If you want help with statistics, or a quality engineering project, contact us.

I’ve also been meaning to write about the phrase “other things being equal,” ceteris paribus in Latin. All this math only makes sense so long as the general parameters don’t change much. Your home won’t flood so long as they don’t build a new mall up river from you that adds runoff to the river, and so long as the dam doesn’t break. If these are concerns (and they should be) you still need to use statistics and probability paper, but you will now have to use other data, like on the likelihood of malls going up, or of dams breaking. When you input this other data, you will find the probability curve is not normal, but typically has a long tail (when the dam breaks, the water goes up by a lot). That’s outside of standard statistical analysis, but it’s why those hundred year floods come a lot more often than once in 100 years. I’ve noticed that, even at Starbucks, more than 1 in 1,000,000,000 cups of coffee come out wrong. Even in analyzing a common snafu like this, you still use probability paper, though. It may be “situation normal,” but the distribution curve it describes has an abnormal tail.

by Dr. Robert E. Buxbaum, November 6, 2013. This is my second statistics post/ joke, by the way. The first one dealt with bombs on airplanes — well, take a look.

An Aesthetic of Mechanical Strength

Back when I taught materials science to chemical engineers, I used the following poem to teach my aesthetic for the strength target for product design:

The secret to design, as the parson explained, is that the weakest part must withstand the strain. And if that part is to withstand the test, then it must be made as strong as all the rest. (by R.E. Buxbaum, based on “The Wonderful One-hoss Shay,” by Oliver Wendell Holmes, 1858).

My thought was, if my students had no idea what good mechanical design looked like, they’d never be able to do it well. I wanted them to realize that there is always a weakest part of any device or process for every type of failure. Good design accepts this and designs everything else around it. You make sure that the device will fail at a part of your choosing, preferably one that you can repair easily and cheaply (a fuse, or a door hinge), and one that doesn’t cause too much mayhem when it fails. Once this failure part is chosen and in place, I taught that the rest should be stronger, but there is no point in making any other part of that failure chain significantly stronger than the weakest link. Thus, for example, once you’ve decided to use a fuse of a certain amperage, there is no point in making the rest of the wiring take more than 2-3 times the amperage of the fuse.

This is an aesthetic argument, of course, but it’s important for a person to know what good work looks like (to me, and perhaps to the student) — beyond just compliments from the boss or grades from me. Some day, I’ll be gone, and the boss won’t be looking. There are other design issues too: if you don’t know what the failure point is, make a prototype and test it to failure, and if you don’t like what you see, remodel accordingly. If you like the point of failure but decide you really want to make the device stronger or more robust, be aware that this may involve strengthening that part only, or strengthening the entire chain of parts so they are as failure resistant as this part (the former is cheaper).

I also wanted to teach that there are many failure chains to look out for: many ways that things can go wrong beyond breaking. Check for failure by fire, melting, explosion, smell, shock, rust, and even color change. Color change should not be ignored, BTW; there are many products that people won’t use as soon as they look bad (cars, for example). Make sure that each failure chain has its own known, chosen weak link. In a car, the paint should fade, chip, or peel some (small) time before the metal underneath starts rusting or sagging (at least that’s my aesthetic). And in the DuPont gun-powder mill below, one wall was made weaker so that, in an explosion, the walls would blow outward the right way (away from traffic). Be aware that human error is the most common failure mode: design to make things acceptably idiot-proof.


Dupont powder mills had a thinner wall and a stronger wall so that, if there were an explosion, it would blow out ‘safely.’ This mill has a second wall to protect workers. The thinner wall must be strong enough to stand up to wind and rain; the stronger walls should stand up to all likely explosions.

Related to my aesthetic of mechanical strength, I tried to teach an aesthetic of cost, weight, appearance, and green: choose materials that are cheaper, rather than more expensive; use less weight rather than more if both ways work equally well. Use materials that look better if you’ve got the choice, and use recyclable materials. These all derive from the well-known axiom, omit needless stuff. Or, as William of Occam put it, “Entia non sunt multiplicanda sine necessitate.” As an aside, I’ve found that, when engineers use Latin, we look smart: “lingua bona lingua mortua est” (a good language is a dead language) — it’s the same with quoting 19th century poets, BTW: dead 19th century poets are far better than undead ones, but I digress.

Use of recyclable materials gets you out of lots of problems relative to materials that must be disposed of. E.g. if you use aluminum insulation (recyclable) instead of ceramic fiber, you will have an easier time getting rid of the scrap. As a result, you are not as likely to expose your workers (or yourself) to mesothelioma, or a similar disease. You should not have to pay someone to haul away excess or damaged product; a scrapper will oblige, and he may even pay you for it if you have enough. Recycling helps cash flow with decommissioning too, when money is tight. It’s better to find your $1 worth of scrap is now worth $2 instead of discovering that your $1 worth of garbage now costs $2 to haul away. By the way, most heat loss is from black body radiation, so aluminum foil may actually work better than ceramics of the same thermal conductivity.

Buildings can be recycled too. Buy them and sell them as needed. Shipping containers make for great lab buildings because they are cheap, strong, and movable. You can sell them off-site when you’re done. We have a shipping container lab building, and a shipping container storage building — both worth more now than when I bought them. They are also rather attractive with our advertising on them — attractive according to my design aesthetic. Here’s an insight into why chemical engineers earn more than chemists, and an insight into the difference between mechanical engineering and civil engineering. Here’s an architecture aesthetic. Here’s one about the scientific method.

Robert E. Buxbaum, October 31, 2013

Why random experimental design is better

In a previous post I claimed that, to do good research, you want to arrange experiments so there is no pre-hypothesis of how the results will turn out. As the post was long, I said nothing direct on how such experiments should be organized, but only alluded to my preference: experiments should be organized at randomly chosen conditions within the area of interest. The alternative, shown below, is that experiments should be done at the cardinal points in the space, or at corner extremes: the Wilson Box and Taguchi designs of experiments (DoE), respectively. Doing experiments at these points implies a sort of expectation of the outcome, generally that results will be linearly and orthogonally related to causes; in such cases, the extreme values are the most telling. Sorry to say, this usually isn’t how experimental data will fall out.

First experimental test points according to a Wilson Box, a Taguchi, and a random experimental design. The Wilson box and Taguchi are OK choices if you know or suspect that there are no significant non-linear interactions, and where experiments can be done at these extreme points. Random is the way nature works; and I suspect that’s best — it’s certainly easiest.

The first test-points for experiments according to the Wilson Box method and Taguchi method of experimental designs are shown on the left and center of the figure above, along with a randomly chosen set of experimental conditions on the right. Taguchi experiments are the most popular choice nowadays, especially in Japan, but as Taguchi himself points out, this approach works best if there are “few interactions between variables, and if only a few variables contribute significantly.” Wilson Box experimental choices help if there is a parabolic effect from at least one parameter, but are fairly unsuited to cases with strong cross-interactions.

Perhaps the main problem with doing experiments at extreme or cardinal points is that these experiments are usually harder than at random points, and that the results from these difficult tests generally tell you nothing you didn’t know or suspect from the start. The minimum concentration is usually zero, and the minimum temperature is usually one where reactions are too slow to matter. When you test at the minimum-minimum point, you expect to find nothing, and generally that’s what you find. In the data sets shown above, it will not be uncommon that the two minimum W-B data points, and the 3 minimum Taguchi data points, will show no measurable result at all.

Randomly selected experimental conditions are the experimental equivalent of Monte Carlo simulation, and are the method evolution uses. Set out the space of possible compositions, morphologies, and test conditions as with the other methods, and perhaps plot them on graph paper. Now, toss darts at the paper to pick a few compositions and sets of conditions to test; and do a few experiments. Because nature is rarely linear, you are likely to find better results and more interesting phenomena than at any of those at the extremes. After the first few experiments, when you think you understand how things work, you can pick experimental points that target an optimum extreme point, or that visit a more-interesting or representative survey of the possibilities. In any case, you’ll quickly get a sense of how things work, and how successful the experimental program will be. If nothing works at all, you may want to cancel the program early; if things work really well you’ll want to expand it. With random experimental points you do fewer worthless experiments, and you can easily increase or decrease the number of experiments in the program as funding and time allow.
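Throwing the darts by computer is easy; here’s a minimal sketch. The variables and ranges are placeholders for whatever space you’re actually exploring:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical experimental space: these ranges are placeholders, not a real recipe.
space = {
    "saltpeter_frac": (0.2, 0.8),       # composition fractions
    "sulfur_frac":    (0.05, 0.3),
    "grind_microns":  (5, 500),         # fineness of grinding
    "temperature_C":  (10, 40),
}

def random_conditions(n):
    """Throw n 'darts' at the space: one random value per variable, per experiment."""
    return [{k: float(rng.uniform(lo, hi)) for k, (lo, hi) in space.items()}
            for _ in range(n)]

for run in random_conditions(5):
    print({k: round(v, 2) for k, v in run.items()})
```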

Consider the simple case of choosing a composition for gunpowder. The composition itself involves only 3 or 4 components, but there is also morphology to consider, including the gross structure and fine structure (degree of grinding). Instead of picking experiments at the extreme compositions: 100% saltpeter, 0% saltpeter, grinding to sub-micron size, etc., as with Taguchi, a random methodology is to pick random, easily do-able conditions: 20% S and 40% saltpeter, say. These compositions will be easier to ignite, and the results are likely to be more relevant to the project goals.

The advantages of random testing get bigger the more variables and levels you need to test. Testing 9 variables at 3 levels each takes 27 Taguchi points, but only 16 or so if the experimental points are randomly chosen. To test if the behavior is linear, you can use the results from your first 7 or 8 randomly chosen experiments, derive the vector that gives the steepest improvement in n-dimensional space (a weighted sum of all the improvement vectors), and then do another experimental point that’s as far along in the direction of that vector as you think reasonable. If your result at this point is better than at any point you’ve visited, you’re well on your way to determining the conditions of optimal operation. That’s a lot faster than by starting with 27 hard-to-do experiments. What’s more, if you don’t find an optimum, congratulate yourself: you’ve just discovered a non-linear behavior, something that would be easy to overlook with Taguchi or Wilson Box methodologies.
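Here is a sketch of that steepest-improvement step: fit a linear model to the first few random points and move along the fitted coefficient vector. The “measure” function below is a made-up stand-in for a real experiment, just so the sketch runs:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Stand-in for a real experiment (hypothetical response surface with a little noise).
def measure(x):
    return -np.sum((x - 0.6)**2) + 0.02 * rng.standard_normal()

n_vars = 4
X = rng.uniform(0, 1, size=(8, n_vars))          # 8 random experimental points
y = np.array([measure(x) for x in X])

# Linear fit y ~ b0 + b.x ; the coefficient vector b points uphill.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
direction = coef[1:] / np.linalg.norm(coef[1:])

best = X[np.argmax(y)]
next_point = np.clip(best + 0.3 * direction, 0, 1)   # step as far as seems reasonable
print("steepest-improvement direction:", np.round(direction, 2))
print("next experimental point:       ", np.round(next_point, 2))
```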

The basic idea is one Sherlock Holmes pointed out: “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts” (A Scandal in Bohemia; a similar line appears in A Study in Scarlet). “Life is infinitely stranger than anything which the mind of man could invent” (A Case of Identity).

Robert E. Buxbaum, September 11, 2013. A nice description of the Wilson Box method is presented in Perry’s Handbook (6th ed). Since I had trouble finding a free, online description, I linked to a paper by someone using it to test ingredient choices in baked bread. Here’s a link for more info about random experimental choice, from the University of Michigan, Chemical Engineering dept. Here’s a joke on the misuse of statistics, and a link regarding the Taguchi Methodology. Finally, here’s a pointless joke on irrational numbers, that I posted for pi-day.

Detroit Teachers are not paid too much

Detroit is bankrupt financially, but not because the public education teachers have negotiated rich contracts. If anything, Detroit teachers are paid too little given the hardship of their work. The education problem in Detroit, I think, is with the quality of education, and of life. Parents leave Detroit if they can afford it; students who can’t leave the city avoid the Detroit system by transferring to private schools, by commuting to schools in the suburbs, or by staying home. Fewer than half of Detroit students are in the Detroit public schools.

The average salary for a public school teacher in Detroit is (2013) $51,000 per year. That’s 3% less than the national average and $3,020/year less than the Michigan average. While some Detroit teachers are paid over $100,000 per year, a factoid that angers some on the right, that’s a minority of teachers, only those with advanced degrees and many years of seniority. For every one of these, the Detroit system has several assistant teachers, substitute teachers, and early childhood teachers earning $20,000 to $25,000/year. That’s an awfully low salary given their education and the danger and difficulty of their work. It’s less than janitors are paid on an annual basis (janitors generally work more hours). This is a city with 25 times the murder rate of the rest of the state. If anything, good teachers deserve a higher salary.

Detroit public schools provide among the worst math education in the US, showing in 2009 the lowest math proficiency scores ever recorded in the 21-year history of the national math proficiency test. Attendance and graduation are low too: Friday attendance averages 71.2%, and attendance is never as high as 80% on any day. The high-school graduation rate in Detroit is only 29.4%. Interested parents have responded by shifting their children out of the Detroit system at the rate of 8000/year. Currently, less than half of school-age children go to Detroit public schools (51,070 last year); 50,076 go to charter schools, some 9,500 go to schools in the suburbs, and 8,783, those in the 5% worst-performing schools, are now educated by the state reform district.


The state of Michigan has taken over the 5% worst performing schools in Detroit through their “Reform District” system. They provide supplies and emphasize job-skills.

Poor attendance and the departure of interested students make it hard for any teacher to handle a class. Teachers must try to teach responsibility to kids who don’t show up, in a high crime setting, with only a crooked city council to look up to. This is a city council that oversaw decades of “pay for play,” where you had to bribe the elected officials to bid on projects. Even among officials who don’t directly steal, there is a pattern of giving themselves and their families fancy cars or gambling trips to Canada using taxpayer dollars. The mayor awarded Cadillac Escalades to his family and friends, and had a 22-man team of police to protect him. In this environment, a teacher has to be a real hero to achieve even modest results.

Student departure means there is a surfeit of teachers and schools, but it is hard to see what to do. You’d like to reassign teachers who are on the payroll but doing little, and fire the worst teachers. Sorry to say, it’s hard to fire anyone, and it’s hard to figure out which are the bad teachers; just because your class can’t read doesn’t mean you are a bad teacher. Recently a teacher of the year was fired because the evaluation formula gave her a low rating.

Making changes involves upending union seniority rules. Further, there is an Americans with Disabilities Act that protects older teachers, along with the lazy, the thief, and the drug addict — assuming they claim disability by frailty, poor upbringing, or mental disease. To speed change along, I would like to see the elected education board replaced by an appointed board with the power to act quickly and the responsibility to deliver quality education within the current budget. Unlike the present system, there must be oversight to keep them from using the money on themselves.

The state could take over more schools into the reform school district, or it could remove entire school districts from Detroit incorporation and make them Michigan townships. A Michigan township has more flexibility in how it runs schools, police, and other services. It can run as many schools as it wants, and can contract with neighbors or independent suppliers for the rest. A city has to provide schools for everyone who’s not opted out. Detroit’s population density already matches that of rural areas; rural management might benefit some communities.

I would like to see the curriculum modified to be more financially relevant. Detroit schools could reinstate classes in shop and trade-skills. In effect that’s what’s done at Detroit’s magnet schools, e.g. the Cass Academy and the Edison Academy. It’s also the heart of several charter schools in the state-run reform district. Shop class teaches math, an important basis of science, and responsibility. If your project looks worse than your neighbor’s, you can only blame yourself, not the system. And if you take home your work, there is that reward for doing a good job. As a very last thought, I’d like to see teachers paid more than janitors; this means that the current wage structure has to change. If nothing else, a change would show that there is a monetary value in education.

Robert Buxbaum, August 16, 2013; I live outside Detroit, in one of the school districts that students go to when they flee the city.