Tag Archives: energy

Automobile power 2021: Batteries vs gasoline and hydrogen

It’s been a while since I did an assessment of hydrogen and batteries for automobile propulsion, and while some basics have not changed, the price and durability of batteries have improved, the price of gasoline has doubled, and the first commercial fuel cell cars have appeared in the USA. The net result (see details below) is that I find the cost of ownership for a gasoline car and a battery car is now about the same, depending on usage and location, and that hydrogen, while still more pricey, is close to being a practical option.

EV Chargers. They look so much cooler than gasoline hoses, and the price per mile is about the same.

Lithium battery costs are now about $150/kWh. That’s about $10,000 for a 70 kWh battery, or roughly 1/5 the price of a Tesla Model 3. Tesla claims a battery life of 200,000 miles or more, but that’s with slow charging. For mostly fast charging, Car and Driver’s expectation is 120,000 miles. That’s just about the average life-span of a car these days.

The cost of the battery and possible replacement adds to the cost of the vehicle, but electricity is far cheaper than gasoline, per mile. The price of gasoline has doubled and currently stands at about $3.50 per gallon. A typical car gets about 24 mpg, and that means a current operating cost of 14.6¢/mile, or about $1,460/year for someone who drives 10,000 miles per year. I’ll add about $150 for oil and filter changes, and figure that operating a gas-powered car costs about $1,610 per year.

If you charge at home, your electricity costs, on average, 14¢/kWh. This is a bargain compared to gasoline since electricity is made from coal and nuclear, mostly, and is subsidized while gasoline is taxed. At level 2 charging stations, where most people charge, electricity costs about 50¢/kWh. This is three times the cost of home electricity, but it still translates to only about $32 for a fill-up that takes about 3 hours. According to “Inside EVs”, in moderate temperatures, a Tesla Model 3 uses 14.59 kWh/100 km with range-efficient driving. This translates to 11.7¢ per mile, or $1,170/year, assuming 10,000 miles of moderate-temperature driving. If you live in a moderate climate — California, Texas, or Florida — an electric car is cheaper to operate than a gasoline car. In cold weather, gasoline power still makes sense, since a battery-electric car uses battery power for heat, while a gasoline-powered car uses waste heat from the engine.

Battery cars are still somewhat more expensive to buy than the equivalent gasoline car, but not by much. You can add roughly $400/year for the extra cost of the Tesla above, but that only raises the effective operating cost to about $1,570/year, about the same as for the gasoline car. On the other hand, many folks drive less than 50 miles per day and can charge at home each night. This saves most of the electric cost. In sum, I find that EVs have hit a tipping point, and Tesla leads the way.
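
For those who like to check arithmetic, here is the cost comparison above as a short Python sketch. The inputs are the rounded figures quoted in this post (10,000 miles/year, $3.50/gal gasoline at 24 mpg, 14.59 kWh/100 km for the Model 3), so treat the outputs as estimates, not measurements.

```python
# Rough annual operating-cost comparison, using this post's rounded figures.
miles_per_year = 10_000

# Gasoline car
gas_price = 3.50          # $/gallon
mpg = 24                  # miles per gallon
oil_changes = 150         # $/year, oil and filter
gas_cost = miles_per_year / mpg * gas_price + oil_changes

# Battery-electric car (Tesla Model 3 figures cited above)
kwh_per_mile = 14.59 / 62.1          # 14.59 kWh per 100 km, converted to kWh/mile
level2_price = 0.50                  # $/kWh at a level-2 charging station
home_price = 0.14                    # $/kWh charging at home
ev_cost_level2 = miles_per_year * kwh_per_mile * level2_price
ev_cost_home = miles_per_year * kwh_per_mile * home_price

print(f"Gasoline car:         ${gas_cost:,.0f}/year")        # about $1,610
print(f"EV, level-2 charging: ${ev_cost_level2:,.0f}/year")  # about $1,170
print(f"EV, home charging:    ${ev_cost_home:,.0f}/year")    # about $330
```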

Now to consider hydrogen. When most people think hydrogen, they think H2 fuel and a PEM fuel cell car. The problem here is that hydrogen is expensive and PEM FCs aren’t particularly efficient. Hydrogen costs about $10/kg at a typical fueling station and, with PEM, that 1 kg of hydrogen takes you only about 25 miles. The net result is that the combination of hydrogen + PEM gives a driving cost of about 40¢/mile, or about three times the price of gasoline. But Toyota has proposed two better options. The first is a PEM hybrid, the hydrogen Prius. It’s for the commuter who drives less than about 40 miles per day. It has a 10 kWh battery, far cheaper than the Tesla’s above, but enough for the daily commute. He or she would charge at home at night, and use hydrogen fuel only when going on longer trips. If there are few long trips, you come out way ahead.

Toyota 2021 Mirai, hydrogen powered vehicle

Toyota also claims a hydrogen-powered Corolla to debut in 2023. This car will have a standard engine, and I would expect (hope) that it will also run, preferably, on hythane, a mix of hydrogen and methane. Hythane is much cheaper per volume, and more energy dense, see my analysis. While Toyota has not said that their Corolla would run on hythane, it is supposed to have an internal combustion engine, and that suggests that hythane will work in it.

A more advanced option for Toyota, or any other car/truck manufacturer, would be to design around solid oxide fuel cells, SOFCs, running either on hydrogen or hythane. SOFCs are significantly more efficient than PEM, and they are capable of burning hythane, and to some extent natural gas too. Hythane is not particularly available, but it could be. Any station that currently sells natural gas could sell hythane. As for delivery to the station, natural gas lines already exist underground, and the station would just blend in hydrogen, produced at the station by electrolysis, or delivered. Hythane can also be made locally from sewer-gas methane and wind-power hydrogen. Yet another SOFC option is to start with natural gas and convert some of the natural gas to hydrogen on-board using left-over heat from the SOFC. I have a patent on this process.

Speaking of the supply network, I should mention the brownouts we’ve been having in Detroit. Electric cars add stress to the electric grid, but I believe that, with intelligent charging (and discharging), the concern is more than manageable. The driver who goes 10,000 miles per year only adds about 2,350 kWh/year of extra electric demand. This is a small fraction of the demand of a typical home, 12,154 kWh/year. It’s manageable. Then again, hythane adds no demand to the electric grid, and the fill-up time is quicker — virtually instantaneous.

Robert Buxbaum, September 3, 2021

A useful chart, added September 20, 2021. Battery prices are likely to keep falling.

Upgrading landfill and digester gas for sale, methanol

We live in a throw-away society, and the majority of what we throw away eventually makes its way to a landfill. Books, food, grass clippings, tree products, consumer electronics; unless it gets burnt or buried at sea, it goes to a landfill and is left to rot underground. The product of this rot is a gas, landfill gas, and it has a fairly high energy content that could be tapped. The composition of landfill gas changes, but after the first year or so, it settles down to a nearly 50-50 mix of CO2 and methane. There is a fair amount of water vapor too, plus some nitrogen and hydrogen, but the basic process is shown below for wood decomposition, and the products are CO2 and methane.

System for sewage gas upgrading, uses REB membranes.

C6H12O6 –> 3 CO2 + 3 CH4

This mix cannot be put into a normal pipeline: there is too much CO2, and there are too many other smelly or condensable compounds (water, methanol, H2S…). This gas is sometimes used for heat on site, but there is a limited need for heat near a landfill. For the most part it is just vented or flared off. The waste of a potential energy source is an embarrassment. Besides, we are beginning to notice that methane causes global warming with about 50 times the effect of CO2, so there is a strong incentive to capture and burn this gas, even if you have no use for the heat. I’d like to suggest a way to use the gas.

We sell small membrane modules too.

The landfill gas can be upgraded by removing the CO2. This can be done with a membrane, and REB Research sells membranes that can do this. Other companies have membranes that can do this too, but ours are smaller, and more suitable for small operations, in my opinion. Our membranes are silicone-based. They retain CH4, CO, and hydrogen, while extracting water, CO2, and H2S, see schematic. The remainder is suited for local use in power generation, or in methanol production. It can also be used to run trucks. Alternatively, the gas can be upgraded further and added to a pipeline for shipping elsewhere. The useless parts can be separated for burial. Find these membranes on the REB web-site under silicone membranes.

Garbage trucks in New York powered by natural gas. They could use landfill gas.

There is another gas source whose composition is nearly identical to that of landfill gas; it’s digester gas, the output of sewage digesters. I’ve written about sewage treatment mostly in terms of aerobic bio-treatment, for example here, but sewage can be treated anaerobically too, and the product is virtually identical to landfill gas. I think it would be great to power garbage trucks and buses with this gas. In New York, currently, some garbage trucks are powered by natural gas.

As a bonus, here’s how to make methanol from partially upgraded landfill or digester gas. As a first step, 2/3 of the CO2 is removed. The remainder will convert to methanol by the following overall chemistry:

3 CH4 + CO2 + 2 H2O –> 4 CH3OH. 

When you remove the CO2, most of the water will likely leave with it. You add back the water as steam and heat over a Ni catalyst, at about 800°C and 200 psi, to make CO and H2. Next, at lower temperature, with an appropriate catalyst, you recombine the CO and H2 into methanol; with other catalysts you can make gasoline. These are not trivial processes, but they are doable on a smallish scale, and make economic sense where the methane is essentially free and there is no CNG customer. Methanol sells for $1.65/gal when sold by the tanker-full, but $5 to $10/gal at the hardware store. That’s far higher than the price of methane, and methanol is far easier to ship and sell in truckload quantities.
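
For the chemically inclined, one way to see where the overall stoichiometry comes from is to add up three standard reactions: steam reforming, dry (CO2) reforming, and methanol synthesis. This breakdown is illustrative; the real process steps and conditions will vary.

2 CH4 + 2 H2O –> 2 CO + 6 H2 (steam reforming, over Ni, about 800°C)

CH4 + CO2 –> 2 CO + 2 H2 (dry reforming)

4 CO + 8 H2 –> 4 CH3OH (methanol synthesis, at lower temperature)

Summing these gives 3 CH4 + CO2 + 2 H2O –> 4 CH3OH, the overall reaction above.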

Robert Buxbaum, June 8, 2021

Isotopic effects in hydrogen diffusion in metals

For most people, there is a fundamental difference between solids and fluids. Solids have long-term permanence with no apparent diffusion; liquids diffuse and lack permanence. Put a penny on top of a dime, and 20 years later the two coins are as distinct as ever. Put a layer of colored water on top of plain water, and within a few minutes you’ll see that the coloring diffuses into the plain water, or (if you think of it the other way) you’ll see the plain water diffuse into the colored.

Now consider the transport of hydrogen in metals, the technology behind REB Research’s metallic membranes and getters. The metals are clearly solid, keeping their shapes and properties for centuries. Still, hydrogen flows into and through these metals at the rate of a light breeze, about 40 cm/minute. Another way of saying this is that we transfer 30 to 50 cc/min of hydrogen through each cm² of membrane at 200 psi and 400°C; divide the volume by the area, and you’ll see that the hydrogen really moves through the metal at a nice clip. It’s like a normal filter, but it’s 100% selective to hydrogen. No other gas goes through.

To explain why hydrogen passes through the solid metal membrane this way, we have to start talking about quantum behavior. It was the quantum behavior of hydrogen that first interested me in hydrogen, some 42 years ago. I used it to explain why water was wet. Below, you will find something a bit more mathematical, a quantum explanation of hydrogen motion in metals. At REB we recently put these ideas towards building a membrane system for concentration of heavy hydrogen isotopes. If you like what follows, you might want to look up my thesis. This is from my 3rd appendix.

Although no-one quite understands why nature should work this way, it seems that nature works by quantum mechanics (and entropy). The basic idea of quantum mechanics, you will know, is that confined atoms can only occupy specific, quantized energy levels, as shown below. The energy difference between the lowest energy state and the next level is typically high. Thus, most of the hydrogen atoms in a metal will occupy only the lower state, the so-called zero-point-energy state.

A hydrogen atom, shown occupying an interstitial position between metal atoms (above), is also occupying quantum states (below). The lowest state, the ZPE, is above the bottom of the well. Higher energy states are degenerate: they appear in pairs. The rate of diffusive motion is related to ∆E* and this degeneracy.


The fraction occupying a higher energy state is calculated as c*/c = exp (-∆E*/RT), where ∆E* is the molar energy difference between the higher energy state and the ground state, R is the gas constant, and T is temperature. When thinking about diffusion it is worthwhile to note that this energy is likely temperature dependent. Thus ∆E* = ∆G* = ∆H* – T∆S*, where the asterisk indicates the key energy level where diffusion takes place — the activated state. If ∆E* is mostly elastic strain energy, we can assume that ∆S* is related to the temperature dependence of the elastic strain.

Thus,

∆S* = -∆E*/Y dY/dT

where Y is the Young’s modulus of elasticity of the metal. For hydrogen diffusion in metals, I find that ∆S* is typically small, while it is often significant for the diffusion of other atoms: carbon, nitrogen, oxygen, sulfur…

The rate of diffusion is now calculated assuming a three-dimensional drunkard’s walk where the step lengths are constant = a. Rayleigh showed that, for a simple cubic lattice, this becomes:

D = a²/6τ

where a is the distance between interstitial sites and τ is the average time between site-to-site jumps. For hydrogen in a BCC metal like niobium or iron, D = a²/9τ; for an FCC metal, like palladium or copper, it’s a²/3τ. A nice way to think about τ is to note that it is only at high energy that a hydrogen atom can cross from one interstitial site to another, and, as we noted, most hydrogen atoms will be at lower energies. Thus, the jump rate is

1/τ = ω c*/c = ω exp (-∆E*/RT)

where ω is the approach frequency, the inverse of the time it takes to go from the left interstitial position to the right one. When I was doing my PhD (and still likely today) the standard approach of physics writers was to use a classical formulation for this frequency, based on the average speed of the interstitial. Thus, ω = √(kT/m)/2a, and

1/τ = (√(kT/m)/2a) exp (-∆E*/RT).

In the above, m is the mass of the hydrogen atom, 1.66 x 10⁻²⁴ g for protium, and twice that for deuterium, etc., a is the distance between interstitial sites, measured in cm, T is the temperature, in Kelvin, and k is the Boltzmann constant, 1.38 x 10⁻¹⁶ erg/K. This formulation correctly predicts that heavier isotopes will diffuse more slowly than light isotopes, but it predicts, incorrectly, that at all temperatures the diffusivity of deuterium is 1/√2 that of protium, and that the diffusivity of tritium is 1/√3 that of protium. It also suggests that the activation energy of diffusion will not depend on isotope mass. I noticed that neither of these predictions is borne out by experiment, and came to wonder if it would not be more correct to assume that ω represents the motion of the lattice, breathing, and not the motion of a highly activated hydrogen atom breaking through an immobile lattice. This thought is borne out by experimental diffusion data, where you describe hydrogen diffusion as D = D° exp (-∆E*/RT).

[Table: measured values of D° and ∆E* for hydrogen isotope diffusion in metals.]

You’ll notice from the above table that D° hardly changes with isotope mass, in complete contradiction to the classical model above. Also note that ∆E* is very isotope dependent. This too is in contradiction to the classical formulation above. Further, to the extent that D° does change with isotope mass, D° gets larger for heavier hydrogen isotopes. I assume that small difference is the entropy effect of ∆E* mentioned above. There is no simple square-root-of-mass behavior, in contrast to what most of the books we had in grad school claimed.

As for why ∆E* varies with isotope mass, I found that I could get a decent explanation of my observations if I assumed that the isotope dependence arose from the zero point energy. Heavier isotopes of hydrogen will have lower zero-point energies, and thus ∆E* will be higher for heavier isotopes of hydrogen. This seems like a far better approach than the semi-classical one, where ∆E* is isotope independent.

I will now go a bit further than I did in my PhD thesis. I’ll make the general assumption that the energy well is sinusoidal, or rather that it consists of two parabolas, one opposite the other. The ZPE is easily calculated for parabolic energy surfaces (harmonic oscillators). I find that ZPE = (h/aπ) √(∆E/m), where m is the mass of the particular hydrogen atom, h is Planck’s constant, 6.63 x 10⁻²⁷ erg-sec, and ∆E is ∆E* + ZPE, the full depth of the well (ZPE being the zero point energy). For my PhD thesis, I didn’t think to calculate ZPE, and thus the isotope effect on the activation energy. I now see how I could have done it relatively easily, e.g. by trial and error, and a quick estimate shows it would have worked nicely. Instead, for my PhD, Appendix 3, I only looked at D°, and found that the values of D° were consistent with the idea that ω is about 0.55 times the Debye frequency, ω ≈ 0.55 ωD. The slight tendency for D° to be larger for heavier isotopes was explained by the temperature dependence of the metal’s elasticity.
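
To show the size of this effect, here is a small Python sketch of the ZPE formula above. The well depth and site spacing below are made-up round numbers (about 0.1 eV and 1.2 Å), chosen only to illustrate the direction and rough magnitude of the isotope effect; they are not fitted to any particular metal.

```python
import math

# Rough numerical sketch of the zero-point-energy argument above.
# The well depth dE and site spacing a are assumed, round values,
# not data for any particular metal.

h = 6.63e-27            # Planck's constant, erg·s
N_A = 6.022e23          # Avogadro's number
a = 1.2e-8              # cm, assumed distance between interstitial sites
dE = 1.6e-13            # erg/atom, assumed well depth (about 0.1 eV)

masses = {"protium": 1.66e-24, "deuterium": 3.32e-24, "tritium": 4.98e-24}  # g

zpe = {iso: h / (a * math.pi) * math.sqrt(dE / m) for iso, m in masses.items()}

for iso in masses:
    print(f"{iso:10s} ZPE = {zpe[iso] * N_A / 1e10:.2f} kJ/mol")

# A heavier isotope sits deeper in the well (smaller ZPE), so its dE* is larger:
dd = (zpe["protium"] - zpe["deuterium"]) * N_A / 1e10
print(f"Predicted increase in activation energy for deuterium vs protium: {dd:.2f} kJ/mol")
```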

Two more comments based on the diagram I presented above. First, notice that there is a middle, split level of energies. This was an explanation I’d put forward for the quantum-tunneling atomic migration that some people had seen at energies below the activation energy. I don’t know if this observation was a reality or an optical illusion, but I present the energy picture so that you’ll have the beginnings of a description. The other thing I’d like to address is a question you may have had: why is there no zero-point energy effect at the activated state? Such a zero-point energy difference would cancel the one at the ground state and leave you with no isotope effect on activation energy. The simple answer is that all the data showing the isotope effect on activation energy, table A3-2, was for BCC metals. BCC metals have an activation energy barrier, but it is not caused by physical squeezing between atoms, as it is for an FCC metal, but by a lack of electrons. In a BCC metal there is no physical squeezing at the activated state, so you’d expect no ZPE there. This is not the case for FCC metals, like palladium, copper, or most stainless steels. For these metals there is a much smaller, or non-existent, isotope effect on ∆E*.

Robert Buxbaum, June 21, 2018. I should probably try to answer the original question about solids and fluids, too: why solids appear solid, and fluids not. My answer has to do with quantum mechanics: energies are quantized, and there is always a ∆E* for motion. Solid materials are those where the characteristic time for atomic motion, τ = exp(∆E*/RT)/ω, is measured in centuries. Thus, our ability to understand the world is based on the least understandable bit of physics.

Alkaline batteries have second lives

Most people assume that alkaline batteries are one-time-only, throwaway items. Some have used rechargeable cells, but these are Ni-metal hydride or Ni-Cads, expensive variants that have lower power densities than normal alkaline batteries and are almost impossible to find in stores. It would be nice to be able to recharge ordinary alkaline batteries, e.g. when a smoke alarm goes off in the middle of the night and you find you’re out, but people assume this is impossible. People assume incorrectly.

Modern alkaline batteries are highly efficient: more efficient than even a few years ago, and that always suggests reversibility. Unlike the acid batteries you learned about in high school chemistry class (basic chemistry due to Volta), the chemistry of modern alkaline batteries is based on Edison’s alkaline car batteries. They have been tweaked to the extent that even the non-rechargeable versions can be recharged. I’ve found I can reliably recharge an ordinary 9 V alkaline cell at least once using the crude means of a standard 12 V car battery charger, watching the amperage closely. It only took 10 minutes. I suspect I can get nine lives out of these batteries, but have not tried.

To do this experiment, I took a 9 V alkaline that had recently died and, finding I had no replacement, I attached it to a 6 Amp, 12 V car battery charger that I had on hand. I would have preferred to use a 2 A charger, and ideally a charger designed to output 9-10 V, but a 12 V charger is what I had available, and it worked. I only let it charge for 10 minutes because, at that amperage, I calculated that I’d have recharged to the full 1 Amp-hr capacity. Since the new alkaline batteries only claimed 1 Amp-hr, I figured that more charge would likely do bad things, even perhaps cause the thing to blow up. After 5 minutes, I found that the voltage had returned to normal and the battery worked fine with no bad effects, but I went for the full 10 minutes. Perhaps stopping at 5 would have been safer.

I charged for 10 minutes (1/6 hour) because the battery claimed a capacity of 1 Amp-hour when new. My thought was 1 Amp-hour = 1 Amp for 1 hour = 6 Amps for 1/6 hour = ten minutes. That’s engineering math for you, the reason engineers earn so much. I figured that watching the recharge for ten minutes was less work and quicker than running to the store (20 minutes). I used this battery in my fire alarm, and have tested it twice since then to see that it works. After a few days in the fire alarm, I took it out and checked that the voltage was still 9 V, just like when the battery was new. Confirming experiments like this are a good idea. Another confirmation occurred when I overcooked some eggs and the alarm went off from the smoke.

If you want to experiment, you can try a 9 V as I did, or try putting a 1.5 volt AA or AAA battery in a charger designed for rechargeables. Another thought is to see what happens when you overcharge. Keep safe: do this in a wood box outside, at a distance; I’d like to know how close I got to having an exploding Energizer. Also, it would be worthwhile to try several charge/discharge cycles to see how the energy content degrades. I expect you can get ~9 recharges with a “non-rechargeable” alkaline battery because the label says “9 lives,” but even getting a second life from each battery is a significant savings. Try using a charger that’s made for rechargeables. One last experiment: if you’ve got a cell phone charger that works on a car battery, and you get the polarity right, you’ll find you can use a 9 V alkaline to recharge your iPhone or Android. How do I know? I judged a science fair not long ago, and a 4th grader did this for her science fair project.

Robert Buxbaum, April 19, 2018. For more, semi-dangerous electrochemistry and biology experiments.

Keeping your car batteries alive.

Lithium-battery cost and performance have improved so much that no one uses Ni-Cad or metal hydride batteries any more. Lithium is now the choice for tools, phones, and computers, while lead-acid batteries are used for car starting and emergency lights. I thought I’d write about the care and trade-offs of these two remaining options.

As things currently stand, you can buy a 12 V, lead-acid car battery with 40 Amp-h capacity for about $95. This suggests a cost of about $200/kWh, rising to $400/kWh of usable storage if you only discharge half way (good practice). This is cheaper than the per-kWh cost of lithium batteries, about $500/kWh, or $1000/kWh if you only discharge half-way (good practice), but people pick lithium because (1) it’s lighter, and (2) it’s generally longer lasting. Lithium generally lasts about 2000 half-discharge cycles vs 500 for lead-acid.
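
Worked out per cycle of usable storage, the comparison looks like this. It’s a rough sketch using the round numbers above; your prices and cycle lives will vary.

```python
# Rough cost-per-use comparison using the round numbers above.
lead_acid_price = 95.0                   # $ for a 12 V, 40 A·h battery
lead_acid_kwh = 12 * 40 / 1000           # 0.48 kWh nameplate
lead_acid_usable = lead_acid_kwh / 2     # half discharge, good practice
lead_acid_cycles = 500

lithium_per_kwh = 500.0                  # $/kWh nameplate
lithium_usable_frac = 0.5                # half discharge, good practice
lithium_cycles = 2000

lead_acid_cost = lead_acid_price / (lead_acid_usable * lead_acid_cycles)
lithium_cost = lithium_per_kwh / (lithium_usable_frac * lithium_cycles)

print(f"Lead-acid: about ${lead_acid_cost:.2f} per usable kWh-cycle")  # ~$0.79
print(f"Lithium:   about ${lithium_cost:.2f} per usable kWh-cycle")    # ~$0.50
```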

On the basis of cost per cycle, lead-acid batteries would have been replaced completely, except that they are more tolerant of cold and heat, and they easily output the 400-800 Amps needed to start a car. Lithium batteries have problems at these currents, especially when it’s hot or cold. Lithium batteries also deteriorate fast in the heat (over 40°C, 105°F), and you cannot charge a lithium car battery at more than 3-4 Amps at temperatures below about 0°C, 32°F. At higher currents, a coat of lithium metal forms on the anode. This lithium can react with water: 2Li + H2O –> Li2O + H2, or it can form dendrites that puncture the cell separators, leading to fire and explosion. If you charge a lead-acid battery too fast, some hydrogen can form, but that’s much less of a problem. If you are worried about hydrogen, we sell hydrogen getters and catalysts that remove it. Here’s a description of the mechanisms.

The best thing you can do to keep a lead-acid battery alive is to keep it near-fully charged. This can be done by taking long drives, by idling the car (warming it up), or by use of an external trickle charger. I recommend a trickle charger in the winter because it’s non-polluting. A lead-acid battery that’s kept at near full charge will give you enough charge for 3,000 to 5,000 starts. If you let the battery completely discharge, you get only 50 or so deep cycles, or 1,000 starts. But beware: full discharge can creep up on you. A new car battery will hold 40 Ampere-hours of charge, or 72,000 Ampere-seconds if you use only half of that (good practice). Starting the car will take 5 seconds of 600 Amps, using 3,000 Amp-s, or about 4% of the battery’s usable charge. The battery will recharge as you drive, but not that fast. You’ll have to drive for at least 500 seconds (8 minutes) to recharge from the energy used in starting. But in the winter it is common that your drive will be shorter, and that a lot of your alternator power will be sent to the defrosters, lights, and seat heaters. As a result, your lead-acid battery will not totally recharge, even on a 10-minute drive. With every week of short trips, the battery will drain a little, and sooner or later, you’ll find your battery is dead. Beware, and recharge, ideally before 50% discharge.
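
Here is the start-and-recharge budget as a quick Python sketch. The 6 Amp net charging current is simply the figure implied by the 500-second recharge estimate above; your alternator output and accessory loads will differ.

```python
# Sketch of the starting / recharging budget described above, using the post's numbers.
capacity_As = 40 * 3600          # 40 A·h battery = 144,000 A·s
usable_As = capacity_As / 2      # stay above 50% charge: 72,000 A·s

start_As = 600 * 5               # 600 A for 5 s = 3,000 A·s per start
print(f"One start uses {start_As / usable_As:.0%} of the usable charge")   # about 4%

net_charging_A = 6               # A, net current into the battery while driving (assumed)
recharge_s = start_As / net_charging_A
print(f"Driving time to replace one start: about {recharge_s / 60:.0f} minutes")
```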

A little chemistry will help explain why full discharging is bad for battery life (for a different version, see Wikipedia). For the first half discharge of a lead-acid battery, the reaction is:

Pb + 2PbO2 + 2H2SO4  –> PbSO4 + Pb2O2SO4 + 2H2O.

This reaction involves 2 electrons and has a -∆G° of >394 kJ, suggesting a reversible voltage of more than 2.04 V per cell, with voltage decreasing as H2SO4 is used up. Any discharge forms PbSO4 on the negative plate (the lead anode) and converts lead oxide on the positive plate (the cathode) to Pb2O2SO4. Discharging to more than 50% involves the following reaction, converting the Pb2O2SO4 on the cathode to PbSO4:

Pb + Pb2O2SO4 + 2H2SO4 –> 3PbSO4 + 2H2O.

This also involves two electrons, but -∆G° < 394 kJ, and the voltage is less than 2.04 V. Not only is the voltage less, the maximum current is less too. As it happens, Pb2O2SO4 is amorphous, adherent, and conductive, while PbSO4 is crystalline, not that adherent, and not-so conductive. Discharging to more than 50% thus results in less voltage, increased internal resistance, decreased H2SO4 concentration, and lead sulfate flaking off the electrode. Even letting a battery sit at low voltage contributes to PbSO4 flaking off. If the weather is cold enough, the low-concentration H2SO4 freezes and the battery case cracks. My advice: get out your battery charger and top up your battery. Don’t worry about overcharging; your battery charger will sense when the charge is complete. A lead-acid battery operated at near full charge, between 67 and 100%, will provide 1,500 cycles, about as many as lithium.


Trickle charging my wife’s car: good for battery life. At 6 Amps, expect a full charge to take 6 hours or more. You might want to recharge the battery in your emergency lights too. 

Lithium batteries are the choice for tools and electric vehicles, but the chemistry is different. For the longest life with lithium batteries, they should not be charged fully. If you charge fully, they deteriorate and self-discharge, especially when warm (100°F, 40°C). If you operate at 20°C between 75% and 25% charge, a lithium-ion battery will last 2000 cycles; at 100% to 0%, expect only 200 cycles or so.

Tesla cars use lithium batteries of a special type, lithium cobalt. Such batteries have been known to explode, but Tesla adds sophisticated electronics and cooling systems to prevent this. The Chevy Volt and Bolt use lithium batteries too, but they are less energy-dense. In either case, assuming $1000/kWh and a 2000-cycle life, the battery cost of an EV is about 50¢/kWh-cycle. Add to this the cost of electricity, 15¢/kWh including the over-potential needed to charge, and I find a total cost of operation of 65¢/kWh. EVs get about 3 miles per kWh, suggesting an energy cost of about 22¢/mile. By comparison, for a 23 mpg car that uses gasoline at $2.80/gal, the energy cost is 12¢/mile, about half that of the EV. For now, I stick to gasoline for normal driving, and for long trips, suggest buses, trains, and flying.
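
The per-mile arithmetic of that paragraph, as a short sketch (2018 prices; the inputs are the rounded figures above):

```python
# Per-mile cost sketch, EV vs gasoline, using this paragraph's rounded inputs.
battery_cost_per_kwh = 1000.0    # $/kWh installed
cycle_life = 2000                # cycles
electricity = 0.15               # $/kWh, including charging losses

battery_per_kwh_cycle = battery_cost_per_kwh / cycle_life          # $0.50
ev_per_kwh = battery_per_kwh_cycle + electricity                   # $0.65
ev_per_mile = ev_per_kwh / 3.0                                     # ~3 miles per kWh

gas_per_mile = 2.80 / 23.0                                         # $2.80/gal, 23 mpg

print(f"EV:       about {ev_per_mile * 100:.0f} cents per mile")   # ~22¢
print(f"Gasoline: about {gas_per_mile * 100:.0f} cents per mile")  # ~12¢
```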

Robert Buxbaum, January 4, 2018.

Change home air filters 3 times per year

Energy-efficient furnaces use a surprisingly large amount of electricity to blow the air around your house. Part of the problem is the pressure drop of the ducts, but quite a lot of energy is lost blowing air through the dust filter. An energy-saving idea: replace the filter on your furnace two or three times a year. Another idea: you don’t have to use the fanciest of filters. High-MERV filters provide a lot of back-pressure, especially when they are dirty.

I built a water manometer, see diagram below, to measure the pressure drop through my furnace filters. The pressure drop is measured from the difference in the height of the water column shown. Each inch of water is about 0.04 psi, or 275 Pa. Using this pressure difference and the flow rating of the furnace, I calculated the amount of power lost by the following formula:

W = Q ∆P/ µ.

Here W is the amount of power used, in Watts, Q is the flow rate in m³/s, ∆P is the pressure drop in Pa, and µ is the efficiency of the motor and blower, typically about 50%.

With clean filters (two different brands), I measured 1/8″ and 1/4″ of water column, that is, a pressure drop of 0.005 or 0.01 psi, depending on the filter. The “better” the filter, that is, the higher the MERV rating, the higher the pressure drop. I also measured the pressure drop through a 6-month-old filter and found it to be 1/2″ of water, or 0.02 psi or 140 Pa. Multiplying this by the amount of air moved, 1000 cfm = 25 m³ per minute or 0.42 m³/s, and dividing by the efficiency, I calculate a power use of 118 W. That is 0.118 kWh/hr, or 2.8 kWh/day.
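
That power estimate, as a short Python sketch using the measured pressure drop and the furnace’s nominal flow:

```python
# Blower power lost to a dirty filter, from the measurements above.
dP = 140.0        # Pa, the ~1/2 inch of water measured across the 6-month-old filter
Q = 0.42          # m³/s, about 1000 cfm
efficiency = 0.5  # motor + blower efficiency

W = Q * dP / efficiency
print(f"Extra blower power: about {W:.0f} W")                # ~118 W
print(f"Per day of run time: about {W * 24 / 1000:.1f} kWh")  # ~2.8 kWh
```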


The water manometer I used to measure the pressure drop through the filter of my furnace. I stuck two copper tubes into the furnace, and attached a plastic tube half filled with water between the copper tubes. Pressure was measured from the difference in the water level in the plastic tube. Each 1″ of water is 280 Pa or 0.04psi.

At the above rate of power use and a cost of electricity of 11¢/kWh, I find it would cost me an extra 2.8 kWh, or about 31¢ per day, to pump air through my dirty-ish filter; that’s $113/year. The cost through a clean filter would be about half this, suggesting that for every year of filter use I spend an average of $57t, where t is the use-life of the filter in years.

To calculate the ideal time to change filters I set up the following formula for the total cost per year $, including cost per year spent on filters (at $5/ filter), and the pressure-induced electric cost:

$ = 5/t + 57 t.

The shorter the life of the filter, t, the more I spend on filters, but the less on electricity. I now use calculus to find the filter life that produces the minimum $, and determine that $ is a minimum at a filter life t = √(5/57) = 0.30 years. The upshot, then: if your filters are like mine, you should change them three times a year or so; every 3.6 months to be super-exact. For what it’s worth, I buy MERV 5 filters at Ace or Home Depot. If I bought more expensive filters, the optimal change time would likely be once or twice per year. I figure that, unless you are very allergic or make electronics in your basement, you don’t need a filter with a MERV rating higher than 8 or so.
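
The optimization itself is one line of algebra: setting d$/dt = 0 gives t = √(5/57). Here it is as a short sketch, using my $5 filter price and $57/year² cost slope; plug in your own numbers.

```python
import math

# Minimize total cost per year: $ = filter_price/t + k*t
filter_price = 5.0     # $ per filter
k = 57.0               # $/year², slope of the pressure-induced electric cost with filter age

t_opt = math.sqrt(filter_price / k)        # from d$/dt = -filter_price/t**2 + k = 0
print(f"Optimal filter life: {t_opt:.2f} years, about {t_opt * 12:.1f} months")
# about 0.30 years, i.e. change the filter every 3 to 4 months
```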

I’ve mentioned in a previous essay/post that dust starts out mostly as dead skin cells. Over time, dust mites eat the skin; pretty nasty stuff. Most folks are allergic to the mites, but I’m not convinced that the filter on your furnace does much to isolate you from them, since the mites, etc., tend to hang out in your bed and clothes (a charming thought, I know).

Old fashioned, octopus furnace. Free convection.


The previous house I had had no filter on the furnace (and no blower). I noticed no difference in my tendency to cough or itch. That furnace relied for circulation on the tendency of hot air to rise. That is, “free convection” circulated air through the home and furnace by way of “octopus” ducts. If you wonder what a furnace like that looks like, here’s a picture.

I calculate that a 10-foot column of air that is 30°C warmer than the air in the house will have a buoyancy of about 0.00055 psi (roughly 1/70″ of water). That’s enough pressure to drive circulation through my home, and might even have driven air through a clean, low-MERV dust filter. The furnace didn’t use any more gas than a modern furnace would, as best I could tell, since I was able to adjust the damper easily (I could see the flame). It used no electricity except for the thermostat control, and the overall cost was lower than for my current, high-efficiency furnace with its electric blower and forced convection.
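
That buoyancy estimate, sketched with round numbers (10 feet of column, a 30°C temperature rise, room-temperature air density):

```python
# Free-convection ("octopus" furnace) driving pressure, rough estimate.
rho = 1.2         # kg/m³, room-temperature air density
dT = 30.0         # °C, temperature rise of the furnace air
T = 300.0         # K, approximate absolute room temperature
height = 3.05     # m, about 10 feet of warm column
g = 9.8           # m/s²

dP = rho * (dT / T) * g * height      # buoyancy pressure, Pa
print(f"Driving pressure: about {dP:.1f} Pa, or {dP / 6895:.5f} psi")  # ~3.6 Pa, ~0.0005 psi
```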

Robert E. Buxbaum, December 7, 2017. I ran for water commissioner, and post occasional energy-saving or water saving ideas. Another good energy saver is curtains. And here are some ideas on water-saving, and on toilet paper.

The energy cost of airplanes, trains, and buses

I’ve come to conclude that airplane travel makes a lot more sense than high-speed trains. Consider the marginal energy cost of a 90 kg (200 lb) person getting on a 737-800, the most commonly flown commercial jet in US service. For this plane, the ratio of lift to drag at cruise speed is 19, suggesting an average value of 15 or so for a 1-hour trip when you include take-off and landing. The energy cost of his trip is related to the cost of jet fuel, about $3.20/gallon, or about $1/kg. The heat energy content of jet fuel is 44 MJ/kg. Assuming an average engine efficiency of 21%, we calculate a motive-energy cost of 1.1 x 10⁻⁷ $/J. The amount of energy per mile is just force times distance. The force is the person’s weight (in Newtons) divided by 15, the lift/drag ratio. The energy use per mile (1609 m) is 90 x 9.8 x 1609/15 = 94,600 J. Multiplying by the $-per-Joule, we find the marginal cost is 1¢ per mile: virtually nothing compared to driving.
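
Here is that marginal-cost estimate, step by step, as a short Python sketch; the inputs are the rounded figures above.

```python
# Marginal energy cost of one extra passenger on a 737-800, per the estimate above.
mass = 90.0                  # kg, passenger + luggage
g = 9.8                      # m/s²
lift_to_drag = 15            # average over a 1-hour trip, including take-off and landing
mile = 1609.0                # m

fuel_price = 1.0             # $/kg of jet fuel (about $3.20/gallon)
heat_value = 44e6            # J/kg
efficiency = 0.21            # average engine efficiency

dollars_per_joule = fuel_price / (heat_value * efficiency)    # ~1.1e-7 $/J
drag_force = mass * g / lift_to_drag                          # ~59 N
energy_per_mile = drag_force * mile                           # ~94,600 J

print(f"Marginal cost: about {dollars_per_joule * energy_per_mile * 100:.1f} cents per mile")
```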


The Wright brothers testing their gliders in 1901 (left) and 1902 (right). The angle of the tether reflects a dramatic improvement in lift-to-drag ratio; the marginal cost per mile is inversely proportional to the lift-to-drag ratio.

The marginal cost of 1¢/passenger-mile explains why airlines offer crazy-low fares to fill seats. But this is just the marginal cost. The average energy cost is higher since it includes the weight of the plane. On a reasonably full 737 flight, the passengers and luggage weigh about 1/4 as much as the plane and its fuel. Effectively, each passenger weighs 800 lbs, suggesting a 4¢/mile energy cost, or $20 of energy per passenger for the 500-mile flight from Detroit to NY. Though the rate of fuel burn is high, about 5,000 lbs/hr, the mpg is high because of the high speed and the high number of passengers. The 737 gets somewhat more than 80 passenger-miles per gallon, far more than the typical person driving — and the 747 does better yet.

The average passenger must pay more than $20 for a flight, to cover wages, capital, interest, profit, taxes, and landing fees. Still, one can see how discount airlines could make money if they have a good deal with a hub airport, one that allows them low landing fees and lets them buy fuel at near cost.

Compare this to any proposed super-fast or mag-lev train. Over any significant distance, the plane will be cheaper, faster, and as energy-efficient. Current US passenger trains, when fairly full, boast a fuel economy of 200 passenger-miles per gallon, but they are rarely full. Currently, they take some 15 hours to go from Detroit to NY, in part because they go slow, and in part because they go via longer routes, visiting Toronto and Montreal in this case, with many stops along the way. With this long route, even if the train got 150 passenger-mpg, the 750-mile trip would use 5 gallons per passenger, compared to 6.25 for the flight above. This is a savings of $5, at a cost of 20 hours of a passenger’s life. Even if train speeds were doubled, the trip would still take 10 hours including stops, and the energy cost would be higher. As for price, beyond the costs of wages, capital, interest, profit, taxes, and depot fees, trains have to add the cost of new track and track upkeep. Wages too will be higher because the trip takes longer. While I’d be happy to see better train signaling to allow passenger trains to go 100 mph on current, freight-compatible lines, I can’t see the benefit of government-funded super-track for 150+ mph trains that will still take 10 hours and will still be half-full.

Something else removing my enthusiasm for super-trains is the appearance of new short-take-off-and-landing jets. Some years ago, I noted that Detroit’s Coleman Young airport no longer has commercial traffic because its runway was too short, 1,550 m. I’m happy to report that Bombardier’s new CS100s should make small airports like this usable. A CS100 will hold 120 passengers, requires only 1,509 m of runway, and is quiet enough for city use. Similarly, the venerable Q-400 carries 72 passengers and requires 1,425 m. The economics of these planes is such that it’s hard to imagine mag-lev beating them for the proposed US high-speed train routes: Dallas to Houston; LA to San José to San Francisco; or Chicago-Detroit-Toledo-Cleveland-Pittsburgh. So far the US has kept out these planes because Boeing claims unfair competition, but I trust that this is just a delay. For shorter trips, I note that modern buses are as fast and energy-efficient as trains, and far cheaper because they share the road costs with cars and trucks.

If the US does want to spend money, I’d suggest improving inner-city airports and improving roads for higher-speed car and bus traffic. If you want low-pollution transport at high efficiency, how about hydrogen hybrid buses? The range is high, and the cost per passenger-mile remains low because buses use very little energy per passenger-mile.

Robert Buxbaum, October 30, 2017. I taught engineering for 10 years at Michigan State, and my company, REB Research, makes hydrogen generators and hydrogen purifiers.

Advanced windmills + 20 years = field of junk

Everything wears out. This can be a comforting or a depressing thought, but it’s a truth. No old mistake, however egregious, lasts forever, and no bold advance avoids decay. At best, last year’s advance will pay for itself with interest, will wear out gracefully, and will be recalled fondly by aficionados after it’s replaced by something better. Water wheels and early steamships are examples of this type of bold advance. Unfortunately, it is often the case that last year’s innovation turns out to be no advance at all: a technological dead end that never pays for itself, and becomes a dangerous, rotting eyesore, or worse, a laughing-stock and a blot on the ecology. Our first two generations of advanced windmill farms seem to match this description; perhaps the next generation will be better, but here are some thoughts on lessons learned from the existing fields of rotting windmills.

The ancient-design windmills of Don Quixote’s Spain (1300?) were boons. Farmers used them to grind grain, cut wood, and pump drinking water. Holland used similar early windmills to drain their land. So several American presidents came to believe advanced-design windmills would be similar boons if used for continuous electric power generation. It didn’t work, and many of the problems could have been seen at the start. While the farmer didn’t care when his water was pumped or when his wood was cut, when you’re generating electricity there is a need to match the power demand exactly. Whenever the customer turns on the switch, electricity is expected to flow at the appropriate wattage; at other times, any power generated is a waste or a nuisance. But electric generator-windmills do not produce power on demand, they produce power when the wind blows. The mismatch of wind and electric demand has bedeviled windmill reliability and economic return. It will likely continue to do so until we find a good way to store electric power cheaply. Until then, windmills will not be able to produce electricity at prices that compete with cheap coal and nuclear power.

There is also the problem of repair. The old windmills of Holland still turn a century later because they were relatively robust, and relatively easy to maintain. The modern windmills of the US stand much taller and move much faster. They are often hit and damaged by lightning strikes, and their fast-turning gears tend to wear out fast. Once damaged, modern windmills are not readily fixed. They are made of advanced fiberglass materials spun on special molds. Worse yet, they are constructed in mountainous, remote locations. Such blades cannot be replaced by amateurs, and even the gears are not readily accessible for repair. More than half of the great power-windmills built in the last 35 years have worn out and are unlikely to ever get repaired. Driving past, you see fields of them sitting idle; the ones still turning look like they will wear out soon. The companies that made and installed these behemoths are mostly out of the business, so there is no one there to take them down, even if there were an economic incentive to do so. Even where a company could be found to fix the old windmills, no one would pay for it, as there is not sufficient economic return — the electricity is worth less than the repair.


Komoa Wind Farm in Kona, Hawaii, June 2010; A field of modern design wind-turbines already ruined by wear, wind, and lightning. — Friends of Grand Ronde Valley.

A single rusting windmill would be bad enough, but modern wind turbines were put up as wind farms, with nominal power production targeted to match the output of small coal-fired generators. These wind farms require a lot of area, covering many square miles along some of the most beautiful mountain ranges and ridges — places chosen because the wind was strong.

Putting up these massive farms of windmills led to a situation where the government had to pay for construction of the project, and often where the government provided the land. This generous spending gives the taxpayer the risk, and often gives a political gain — generally to a contributor. But there is very little political gain in paying for the repair or removal of the windmills. And since the electricity value is less than the repair cost, the owners (friends of the politician) generally leave the broken hulks to sit and rot. Politicians don’t like to pay to fix their past mistakes, as it undermines their next boondoggle, suggesting it too will someday rust apart without ever paying for itself.

So what can be done? I wish I could suggest less arrogance and political corruption, but I see no way to achieve that. As the poet wrote about Ozymandias (Ramses II) and his disastrous building projects, the leader inevitably believes: “I am Ozymandias, king of kings; look on my works, ye mighty, and despair.” So I’ll propose some other, less ambitious ideas. For one, smaller demonstration projects closer to the customer: first see if a single windmill pays for itself, and only then build a second. Also, electricity storage is absolutely key. I think it is worthwhile to store excess wind power as hydrogen (hydrogen storage is far cheaper than batteries), and the thermodynamics are not bad.

Robert E. Buxbaum, January 3, 2016. These comments are not entirely altruistic. I own a company that makes hydrogen generators and hydrogen purifiers. If the government were to take my suggestions I would benefit.

My electric cart of the future

Buxbaum and Sperka cart of future

Buxbaum and Sperka show off the (shopping) cart of future, Oak Park parade July 4, 2015.

A Roman chariot did quite well with only 1 horse-power, while the average US car requires 100 horses. Part of the problem is that our cars weigh more than a chariot and go faster, 80 mph vs 25 mph. But most city applications don’t need all that weight or all of that speed. 20-25 mph is fine for around-town errands, and should be particularly suited to use by young drivers and seniors.

To show what can be done with a light vehicle that only has to go 20 mph, I made this modified shopping cart and fitted it with a small, 1 hp motor. I call it the cart-of-the-future, and paraded around with it at our last 4th of July parade. It’s high off the ground for safety, reasonably wide for stability, and has the shopping-cart cage and seat belts for safety. There is also speed control. We went pretty slow in the parade, but here’s a link to a video of the cart zipping down the street at 17.5 mph.

In the 2 months since this picture was taken, I’ve modified the cart to have a chain drive and a rear-wheel differential — helpful for turning. My next modification, if I get to it, will be to switch to hydrogen power via a fuel cell. One of the main products we make is hydrogen generators, and I’m hoping to use the cart to advertise the advantages of hydrogen power.

Robert E. Buxbaum, August 28, 2015. I’m the one in the beige suit.

The mass of a car and its mpg.

Back when I was an assistant professor at Michigan State University, MSU, they had a mileage olympics between the various engineering schools. Michigan State’s car got over 800 mpg, and lost soundly. By contrast, my current car, a Saab 9-2, gets about 30 miles per gallon on the highway, about average for US cars, and 22 to 23 mpg in the city in the summer. That’s about 1/40th the gas mileage of the Michigan State car, about 2/3 the mileage of the 1978 VW Rabbit I drove as a young professor, and the same as a Model A Ford. Why so low? My basic answer: the current car weighs a lot more.

As a first step to analyzing the energy drain of my car, or MSU’s, note that the energy content of gasoline is about 123 MJ/gallon. Thus, if my engine was 27% efficient (reasonably likely) and I got 22.5 mpg (36 km/gallon) driving around town, that would mean I was using about 0.922 MJ/km of motive energy (gasoline energy times engine efficiency). Now all I need to know is where this energy is going (the MSU car got double this efficiency, but went 40 times further).

The first energy sink I considered was rolling drag. To measure this without the fancy equipment we had at MSU, I put my car in neutral on a flat surface at 22 mph and measured how long it took for the speed to drop to 19.5 mph. From this time, 14.5 sec, and the speed drop, I calculated that the car had a rolling drag of 1.4% of its weight (if you had college physics you should be able to repeat this calculation). Since I and the car together weigh about 1700 kg, or 3790 lb, the drag is 53 lb, or 233 Nt (the MSU car had far less, perhaps 8 lb). For any friction, the loss per km is F•x, or 233 kJ/km for my vehicle in the summer, independent of speed. This is significant, but clearly there are other energy sinks involved. In winter, the rolling drag is about 50% higher: the effect of gooey grease, I guess.

The next energy sink is air resistance. The drag force is calculated by multiplying the frontal area of the car by the density of air, times 1/2 the speed squared (the kinetic energy imparted to the air). There is also a form factor, measured in a wind tunnel. For my car this factor was 0.28, similar to the MSU car’s. That is, for both cars, the equivalent of only 28% of the air in front of the car is accelerated to the car’s speed. Based on this and the density of air in the summer, I calculate that, at 20 mph, air drag was about 5.3 lbs for my car. At 40 mph it’s 21 lbs (95 Nt), and it’s 65 lbs (295 Nt) at 70 mph. Given that my city driving is mostly at <40 mph, I expect that only 95 kJ/km or less is used to fight air friction in the city. That is, less than 10% of my gas energy in the city, or about 30% on the highway. (The MSU car had less because of a smaller front area, and because it drove at about 25 mph.)

The next energy sink was the energy used to speed up from a stop — or, if you like, the energy lost to the brakes when I slow down. This energy is proportional to the mass of the car and to the velocity squared (the kinetic energy). It’s also inversely proportional to the distance between stops. For a 1700 kg car + driver who travels at 38 mph on city streets (17 m/s) and stops, or slows, every 500 m, I calculate that the start-stop energy per km is 2 (1/2 m v²) = 1700 x (17)² J/km ≈ 491 kJ/km. This is more than the other two losses combined, and would seem to be the major cause of my low gas mileage in the city.

The sum of the above losses is 0.819 MJ/km, and I’m willing to accept that the rest of the energy loss (100 kJ/km or so) is due to engine idling (the efficiency is zero then); to air conditioning and headlights; and to times when I have a passenger or lots of stuff in the car. It all adds up. When I go for long drives on the highway, this start-stop loss is no longer relevant. Though the air drag is greater, the net result is a mileage improvement. Brief rides on the highway, by contrast, hardly help my mileage. Though I slow down less often, maybe every 2 km, I go faster, so the energy loss per km is the same.
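
Adding up the three sinks estimated above, and comparing them to the motive energy available, looks like this. It’s a rough sketch using the post’s rounded numbers, not a measurement.

```python
# City-driving energy budget per km, at the wheels, from this post's estimates.
fuel_energy = 123e6 * 0.27 / 36          # J/km delivered: 123 MJ/gal, 27% efficient, 36 km/gal

rolling = 233 * 1000                     # 233 N of rolling drag over 1000 m
air = 95 * 1000                          # ~95 N of air drag at city speeds over 1000 m
m, v, stops_per_km = 1700, 17.0, 2       # kg, m/s (38 mph), stops per km
start_stop = stops_per_km * 0.5 * m * v**2   # kinetic energy thrown away at each stop

total = rolling + air + start_stop
print(f"Motive energy available:    {fuel_energy / 1000:.0f} kJ/km")   # ~922 kJ/km
print(f"Rolling + air + start-stop: {total / 1000:.0f} kJ/km")         # ~819 kJ/km
print(f"Unaccounted (idling, A/C, etc.): {(fuel_energy - total) / 1000:.0f} kJ/km")
```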

I find that the two major drags on my gas mileage are proportional to the weight of the car, and that weight is currently nearly double that of my VW Rabbit (only 1900 lbs, 900 kg). The MSU car was far lighter still, about 200 lbs with the driver, and it never stopped till the gas ran out. My suggestion, if you want the best gas mileage: buy one of the lighter cars on the road. The Mitsubishi Mirage, for example, weighs 1000 kg and gets 35 mpg in the city.


A very aerodynamic, very big car. It’s beautiful art, but likely gets lousy mileage — especially in the city.

Short of buying a lighter car, you have few good options to improve gas mileage. One thought is to use better grease or oil; synthetic oil, like Mobil 1, helps, I’m told (I’ve not checked it). Alternately, some months ago, I tried adding hydrogen and water to the engine. This helps too (5%-10%), likely by improving ignition and reducing idling vacuum loss. Another option is fancy valving, as on the Fiat 500. If you’re willing to buy a new car, and not just a new engine, a good option is a hybrid or battery car with regenerative braking to recover the energy normally lost to the brakes. Alternately, consider a car powered with hydrogen fuel cells, an option with advantages over batteries, or one with a gasoline-powered fuel cell.

Robert E. Buxbaum, July 29, 2015. I make hydrogen generators and purifiers. Here’s a link to my company site. Here’s something I wrote about Peter Cooper, an industrialist who made the first practical steam locomotive, the Tom Thumb; the key innovation there was making it lighter by using a forced-air, fire-tube boiler.