Posted by: bmeverett | January 23, 2014

“Sustainability” at US Universities

Most American universities now have an Office of Sustainability or something similar. If you Google “university, office of sustainability”, you will get roughly 217,000 hits. Tufts University, where I teach, certainly has one, with a Program Director, an Education and Outreach Coordinator and a Communications and Outreach Coordinator, plus 5 interns and numerous “partners” from other university departments. So what exactly does an Office of Sustainability do?

The Merriam-Webster dictionary definitions of “sustainable” are (1) able to be used without being completely used up or destroyed, (2) involving methods that do not completely use up or destroy natural resources, (3) able to last or continue for a long time. So how are any of these definitions to be applied on college campuses? The mission of the Tufts Office of Sustainability (which you can find at http://sustainability.tufts.edu/about-the-office-of-sustainability/mission-statement/) isn’t much help.
Tufts Office of Sustainability serves as a resource, a catalyst, and an advocate for environmental sustainability at Tufts. Often we serve as a bridge between ideas and their practical implementation. The OOS is supported by the university and works to:
Enhance Tufts’ reputation as a leader.
Ensure that Tufts’ efforts are comprehensive and focused on meaningful projects
Identify, evaluate, and implement opportunities for leadership
Promote the strength of Tufts sustainability efforts, including those of faculty and students.
Measure Tufts’ progress toward commitments and regional goals
Identify sustainability opportunities that may provide Tufts with significant benefit such as reduced risk, financial savings, and avoided problems/fines.
Coordinate among other existing programs and sustainability efforts (e.g., food, transportation, landscaping, planning and policy, personal action, and related citizenship activities)
Integrate sustainability issues into research, scholarship and student life.
Work with interested faculty to develop meaningful class projects and support students in those efforts.
Work with interested students on academic and extra-curricular projects
Provide student internships when possible.
Identify appropriate ways for faculty and student research to be pilot tested on campus

This is all fluff. Tufts bases its general environmental policy on the 1990 Talloires Declaration, signed by 440 institutions in 50 countries. The Talloires Declaration includes 10 points:
1. Increase Awareness of Environmentally Sustainable Development
2. Create an Institutional Culture of Sustainability
3. Educate for Environmentally Responsible Citizenship
4. Foster Environmental Literacy For All
5. Practice Institutional Ecology
6. Involve All Stakeholders
7. Collaborate for Interdisciplinary Approaches
8. Enhance Capacity of Primary and Secondary Schools
9. Broaden Service and Outreach Nationally and Internationally
10. Maintain the Movement

All this verbiage suggests that universities should enhance, foster, educate, involve, enhance, collaborate, etc. But what is the real objective?

Although the word “sustainability” has come into widespread use only recently, the idea has a long and checkered past. When I was a student at Tufts in the early 1970s, a fashionable view on campus was that industrial society itself was “unsustainable” because growing populations and increasing living standards inevitably created air and water pollution of such magnitude that it would kill us off. In 1970, Life magazine claimed, “In a decade, urban dwellers will have to wear gas masks to survive air pollution … by 1985 air pollution will have reduced the amount of sunlight reaching earth by one half.” Serious allegations.

The solution proposed by the environmental community was a radical and coercive restructuring of society in which free markets would be replaced by central planning and regulation. The wishes of the population would have to be subordinated to the decisions of smart people who really understood what was going on. True, this approach had failed in the Soviet Union and China, but smart, properly motivated people could make it work.

The problem for the environmentalists was that their predictions turned out to be untrue. The industrialized countries, including the US, have made enormous strides in reducing air and water pollution through reasonable, cost-effective steps. Current environmentalists often take credit for this progress, claiming that we would never have taken these actions without activists scaring the daylights out of us. This is essentially the model of parents telling us to go to sleep or the bogeyman will get us. Most of the air and water quality problems today occur in countries with moderate, not high living standards. As incomes rise, people in these countries will follow our path and take the steps needed to improve their environment. In any case, industrial society turns out to be sustainable after all.

The environmental community, however, came to believe that its solution – the substitution of central planning for free-market economies – was the right one, even if the problem had to be redefined. After the oil crisis of 1973-74, air pollution was replaced by natural resource depletion as the imminent crisis of western civilization. Industrial society was based on fossil fuels which were in limited supply. Therefore, we couldn’t possibly continue with society as then structured, because we would run out of everything and start killing each other. This was not a fringe view in the 1970s. Reputable groups like the Club of Rome warned in their famous treatise “The Limits to Growth” that rapid resource depletion required urgent action. The term “renewable energy” was coined to contrast solar and wind energy, which were inexhaustible, with “depletable energy” like oil and natural gas. This view reached its zenith in the “peak oil” argument of the 1990s.

A funny thing happened, however. In 1970, global proven reserves of oil were about 600 billion barrels. With production running at about 12 billion barrels per year, the world would run out of oil in about 50 years. In the last 43 years, we have consumed about 850 billion barrels, yet current global oil reserves are about 1,600 billion barrels or about 52 years’ supply. Instead of running out of oil, we have found more than we have used decade after decade. The same is true for natural gas. In both cases, the technology available for oil and gas exploration has outpaced the depletion of the resource. There are still a few holdouts, but “peak oil” alarmists have pretty much been pushed to the side. Facts, as they say, are inconvenient things.
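To make the arithmetic explicit, here is a minimal sketch in Python of the reserves-to-production calculation, using the round figures quoted above (the 31 billion barrel per year production figure is inferred from the 52-year supply number; published statistics vary by source and year):

    # Reserves-to-production (R/P) ratio: years of supply at current output.
    # Round figures from the text; actual estimates vary by source and year.
    def years_of_supply(reserves_billion_bbl, production_billion_bbl_per_year):
        return reserves_billion_bbl / production_billion_bbl_per_year

    print(years_of_supply(600, 12))    # circa 1970: about 50 years
    print(years_of_supply(1600, 31))   # circa 2013: about 52 years

The ratio has held steady for four decades because additions to the numerator have outpaced cumulative consumption.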

Having failed to convince the public that either air pollution or resource depletion was an existential problem, where were environmentalists to go next? The answer of course is climate change. As discussed in many previous posts, there are real scientific issues around climate change. The Environmental Community (now renamed the Climate Community), however, has worked hard to establish the alarmist view of climate science as orthodoxy or, as many environmentalists like to say, as a “fact”. They have failed in society as a whole, but they have largely succeeded on college campuses. Herein lies the real meaning of “sustainability.”

Since you can’t find any meaningful definition of the term just by looking at the public statements of Offices of Sustainability, let me offer my own.

Roughly 1% of their mission is to evaluate issues affecting the actual quality of life on campus, such as toxic materials and air quality. The other 99% is pure political advocacy in support of drastic reductions in greenhouse gas emissions. The Tufts Office of Sustainability has only one set of meaningful commitments (http://sustainability.tufts.edu/greenhouse-gas-emissions/) – the reduction of greenhouse gas emissions. In order to achieve these goals, the University must establish in the Tufts Community, particularly among incoming students, that there is no debate over catastrophic climate change, that we are headed for disaster and that any reductions in greenhouse gas emissions are valuable, regardless of cost.

Most of the actions that the Tufts Office of Sustainability takes are relatively innocuous: campus-wide programs to reduce energy use, a $20-per-year increase in student activity fees to fund the purchase of green energy, changing out lightbulbs, participation in meaningless carbon trading systems. These programs are largely symbolic “feel good” efforts with little cost and even less impact on atmospheric carbon levels. There are, however, hidden non-financial costs which are much greater. Sustainability programs teach students to accept without question the anti-market views of the political left. They support a distorted view of science in which authority and “consensus” trump empirical research. They reinforce an environmental view, in which many of these students have been steeped since kindergarten, that passion and commitment are more important than analysis and understanding and that life is about making moral statements rather than having positive impacts. They try to convince students that anyone who disagrees with the established University view is to be treated with some mixture of amusement and suspicion. They consistently argue that classical economics is not only wrong, but immoral. In this context, “environmental literacy” does not mean knowledge of environmental issues and how to analyze them, but rather the ability to recite the talking points of the Climate Community and to refute alternative views. In many ways, the Tufts Office of Sustainability runs against the grain of what the University is supposed to be teaching.

There has been a great deal of debate recently about the skyrocketing costs of a college education and its declining value in society. At many universities, non-teaching staff now outnumber faculty. “Sustainability” programs are one symptom of this bloating, and they should be examined critically (along with “diversity” programs). I would encourage my readers to have a look at the programs in place at their own alma maters and decide for themselves whether these efforts are truly worthwhile.

Posted by: bmeverett | January 15, 2014

Fusion confusion

George F. Will is one of the very best political commentators on the scene today. His writings have been a treasure trove of thoughtful and articulate conservative thought for decades. Unfortunately, his December 21, 2013 column entitled “The Fusion in our Future” is way off the mark. The column is a call for government spending on fusion technology. OK, but I was saddened by his decision to ignore economics – a rookie mistake for a veteran thinker.

Nuclear fusion is a well-understood process in physics. When certain light atoms are fused into a heavier element, the resulting nucleus is more tightly bound than the original nuclei, and the difference in binding energy is released. The ideal fusion reaction involves deuterium, an isotope of hydrogen with a nucleus of one proton and one neutron, and tritium, another hydrogen isotope with one proton and two neutrons. If these two atoms can be brought close enough together, they will fuse to form helium with two protons and two neutrons, releasing one neutron plus 17.6 million electron volts of energy. This huge energy release from fusion is what powers the Sun and provides the basis for thermonuclear bombs.
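In standard notation, the deuterium-tritium reaction just described is:

    D + T → He-4 + n + 17.6 MeV

that is, one deuterium nucleus plus one tritium nucleus yields a helium-4 nucleus, a free neutron and 17.6 million electron volts of energy.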

The fission reaction, in which the nuclei of heavier atoms are split into lighter elements, is the basis for the much smaller atomic bombs, such as those dropped on Hiroshima and Nagasaki. The fission process turned out to be relatively easy to control for civilian applications, like nuclear power plants. A fission reactor contains rods of fissionable material, like uranium and plutonium, interspersed with control rods of material that absorbs neutrons, allowing the rate of the chain reaction, and thus the energy release, to be controlled.

Scientists have been working on controlled fusion energy since the early 1950s, but the process has been frustrating. Forcing the nuclei of light elements together essentially means heating them to very high temperatures – on the order of 100 million degrees Celsius. Thermonuclear bombs generate this heat by exploding a fission bomb. Generating 100 million degrees C in a contained environment without melting the entire installation turns out to be a challenge. Furthermore, although hydrogen is the most common element in nature, only about 1 out of every 6,500 hydrogen atoms is the deuterium isotope and only one out of 1,000,000,000,000,000,000 hydrogen atoms is tritium. Obtaining these materials is difficult, expensive and energy-intensive.

During the late 1970s, there was a school of thought in Washington that the problems of fusion energy were about to be solved and that we should anticipate having commercially viable fusion power plants around the year 2000. As a result, we faced only a couple of decades of transition before we could eliminate fossil fuels and enter an energy nirvana. Needless to say, that view proved overly optimistic.

The US and several other countries continue to fund fusion research, although at lower levels than in the past. The US program is about $250 million per year. Mr. Will sees this funding level as inadequate, since he has apparently determined that more money will produce the success that has eluded us for all these years. His view is that research outputs are proportional to financial inputs. Wrong.

Technologies undergo three basic stages of development. The first stage is conceptual – a good idea. We know that fusion works, since we can make fusion bombs. The second stage is technical – making machines that actually work. Solar and wind energy, for example, reached this stage many years ago. We can produce solar panels and wind turbines that generate electricity. Fusion has not reached this stage, since we have never been able to produce a machine, even at the research level, that can generate more energy than is required to run the machine. The final stage is commercial – the ability of the technology to replace existing technologies with better and/or cheaper results. Wind and solar still have not reached this stage, since both technologies underperform their competition and cost much more.

Mr. Will’s mistake is his focus on the technical stage as the indicator of success. It is possible (although by no means assured) that a substantial increase in funding could allow researchers to build a fusion reactor that actually works. It’s quite another issue to design a fusion reactor that can produce electricity at a competitive cost. Just for calibration, modern combined cycle power plants burning natural gas can generate electricity in the US at about 6¢ per kilowatt-hour (kWh). By contrast, onshore wind and nuclear power cost about 10¢. Offshore wind costs much more – about 20¢ – and solar is in the range of 30¢.

Even if we can solve the technical problems around fusion, how much will it cost? Well, as Mr. Will points out, tritium can be produced in the reactor itself, while the oceans contain 10 million million tons of deuterium. Sure, but how much would it cost to extract the deuterium from the ocean? Every cubic kilometer of seawater contains about 13,000 grams of gold worth about $500,000. Why aren’t people extracting gold from seawater? Because it costs more to get the gold out than it’s worth.

Even if the fuel turns out to be free, how much would the reactor cost? Suppose we could build a functioning fusion reactor for $20,000 per kilowatt of capacity, and it could operate 95% of the time. Each kilowatt could then generate 8,322 kWh per year (24 hours X 365 days X 95%). Even if the reactor were financed with 30-year Treasury bonds at 5%, the capital cost alone would require revenue of about 12¢ per kWh. At commercial return rates of 12-15%, the capital cost of such a facility would be in the range of 30-35¢ per kWh. That doesn’t include the cost of operating and maintaining the power plant.
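For readers who want to check this arithmetic, here is a minimal sketch. Note that the 12¢ figure works out if the 5% rate is treated as a simple annual carrying charge on the capital, with no principal amortization; the commercial rates bracket the 30-35¢ range the same way:

    # Capital carrying cost per kWh for the hypothetical $20,000/kW fusion
    # plant discussed above. Treats each rate as a simple annual charge on
    # capital (no amortization) and excludes fuel, operating and maintenance.
    capital_per_kw = 20_000                    # dollars per kW of capacity
    kwh_per_kw_year = 24 * 365 * 0.95          # ~8,322 kWh per year at 95% uptime

    for rate in (0.05, 0.12, 0.15):            # Treasury vs. commercial returns
        cents = 100 * capital_per_kw * rate / kwh_per_kw_year
        print(f"{rate:.0%}: {cents:.0f} cents/kWh")   # 12, 29 and 36 cents

A full levelized-cost calculation with principal repayment would push these numbers somewhat higher, which only strengthens the point.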

Surprisingly, Mr. Will has made the mistake for which he usually criticizes the political left. He believes the government can make better commercial decisions than the market. I’ll grant Mr. Will at least this much. There is an argument for the government to do basic science research that the private sector is unwilling to do. When it comes to the government’s ability to pick commercial technologies, however, just remember Solyndra and back away slowly.

Posted by: bmeverett | January 8, 2014

How to talk about climate change

[I apologize to my faithful blog readers for the lack of postings over the last few months. The semester at Fletcher was extremely busy. I am not teaching this semester, so I promise to post more frequently.]

At the end of every semester, the Fletcher School collects feedback from students through anonymous on-line course evaluations. I find this feedback useful and use it extensively to improve my class. The one area where I really struggle every year to find the right content and balance is climate change. Many of my students react strongly to my lectures (which take about 1 hour out of the 26 hours of total class time). This is one lecture I have changed virtually every year to try to get it right, but clearly I’m not there yet. The student comments cast an interesting light on the climate change debate in general.

Take these comments for example:
The lecture on externalities was painful. It seemed tangential to the course objectives…
[The climate change lecture] ultimately did not even seem that relevant to the content of the course.

These comments surprised me. On today’s college campuses (and in many parts of the federal government), the urgent need for an accelerated replacement of fossil fuels, even at great cost, is an article of faith. This is not a new idea, but the rationale has changed over time. When I was a student at Fletcher in the early 1970s, a common view on campus and elsewhere was that modern industrial society was inherently unsustainable because it inevitably made the air unbreathable and the water undrinkable. The increasing use of fossil fuels was considered a major component of this problem. In fact, cleaning up the air and water turned out to be a quite manageable problem. Although there are still air and water quality issues, the situation in the US has improved dramatically over the past forty years or so. The real pollution problems today are in poor, not rich countries.

Later in the 1970s, the complaint against fossil fuels was their rapid depletion, which would cause economic growth to hit a brick wall with disastrous social consequences when the tap ran dry. This prediction also proved untrue as the advance in oil and gas exploration and extraction technology has outpaced resource depletion and is likely to continue to do so for many decades (if not centuries) to come.

Here’s where climate change comes in. According to today’s prevailing view on college campuses, we have to get rid of fossil fuels as quickly as possible because of the climate consequences of carbon dioxide emissions from burning fossil fuels. Note that carbon dioxide is not a pollutant in the traditional sense. It’s a natural component of the atmosphere and has no known adverse health effects at the concentration levels we are talking about.

Unfortunately, climate change is a given on college campuses, and the discussion of the role of fossil fuels in society is supposed to start with this “fixed point”. That, dear students, is why climate change is not only relevant but essential to any discussion of the oil market.

Here’s another comment:
I really disliked the climate change class. Not only did I find the content weak, I thought it was a waste of time to spend the entire class arguing about whether or not climate change was real.

This student didn’t listen to the lecture. I started out the discussion by stating that the reality of climate change is not the issue. Most people agree that the climate has warmed. The question is whether we understand the climate well enough to make meaningful predictions and take serious (and extremely expensive) policy steps in response.

Here’s another:
The climate change discussion was not constructive, as the lecture was built on a strategy of undermining a few select studies in the hopes that it would undercut the plausibility of climate change occurring…

Students have been taught over their entire lives that all intelligent and caring people believe that increasing fossil fuel use will destroy the planet and that those who disagree are either anti-science Luddites or evil corporations who would happily kill millions to make a buck. The student who made this comment clearly sees me as having a “strategy” of undermining good and promoting evil. After all, he or she has been warned about this danger since first grade. In fact, the only point of my lecture, which I repeated many times over the course of the hour, was that climate change is complicated and uncertain and that we should not shut down the debate.

Here’s another comment:
Skip the lecture on climate change. Climate science is not your area of expertise…

Again, students have been told that only climate scientists have the standing to weigh in on this issue. Since scientists all supposedly agree, the issue is solved. The Fletcher School conducts a constant stream of conferences, lectures and panel discussions on every major foreign policy issue. When students attend a discussion of the situation in Syria, for example, they expect to hear a variety of views argued by students and faculty members with a wide range of backgrounds, including economic, legal, military, political, historical and others. Participants assume that everyone in the Fletcher community is entitled to hold and express a view, not just Middle East experts. Not so with climate change. The Climate Community claims that only scientists have standing to discuss this issue. Why should this be true? We don’t leave military policy solely to the military or economic policy solely to economists. On balance, I find it remarkable that a university as strong as Tufts would tell its students that climate change is not an issue that should be debated and that views outside the prevailing orthodoxy should be treated with suspicion.

Just for the record, here’s what I’m trying to say:
First, climate science is complicated, and many factors, not just carbon emissions, influence its course. Second, science is not a process of consensus building, but of empirical testing. So far, the climate models on which the Climate Community depends cannot make any meaningful predictions. None. As a result, the science is inconclusive by definition, not by my opinion. Third, the costs of reducing carbon emissions are extremely high, and the steps often discussed at the international, federal and state level would be ineffective, even if the Climate Community has the science right.

I should also note a glimmer of hope here. One student made the following (clearly facetious) comment:
Europeans and tree-huggers shouldn’t be allowed to join the course.

Many students comment to me privately that they appreciate hearing a different viewpoint on climate change. It’s interesting that very few of these students would dare say so publicly.

For next fall, I have to bear in mind that many of my students hear what they expect to hear from a “climate skeptic” and don’t necessarily listen to what I am actually trying to say. I take their comments seriously, and hope to do better.

Posted by: bmeverett | October 14, 2013

Asthma and solar energy

No, I am not going to argue that installing solar panels on your roof will give you asthma, but there is an interesting philosophical connection. The October 12 edition of the New York Times included an article entitled “The Soaring Cost of a Simple Breath” by Elisabeth Rosenthal, which you can find at http://www.nytimes.com/2013/10/13/us/the-soaring-cost-of-a-simple-breath.html?_r=0. Ms. Rosenthal’s article addresses the high cost of life-saving asthma medication in the US, a very serious issue, but the standard NYT ideological blinders prevent her from understanding the cause of the problem.

Ms. Rosenthal starts her article with a clear statement of a problem: “Pulmicort, a steroid inhaler, generally retails for over $175 in the United States, while pharmacists in Britain buy the identical product for about $20…” Her explanation for this disparity? “Unlike other countries, where the government directly or indirectly sets an allowed national wholesale price for each drug, the United States leaves prices to market competition among pharmaceutical companies.” Furthermore, she notes that “businesses often successfully blunt market forces.” In other words, the problem is the drug companies. That’s the natural reaction of the political left, but Ms. Rosenthal’s own article tells a rather different story.

According to the article, there are three political issues influencing the price of drugs: patents, anti-trust laws and regulatory control by the Food and Drug Administration. Patents are a simple and powerful idea. Inventors are granted an exclusive right to sell their invention but only for a limited period of time, after which the idea enters the public domain for anyone to use. In the case of drugs, once the patent expires, other companies can produce generic versions of the drug at prices that reflect only the cost of production and not the R&D costs and risk borne by the patent holder. This principle is enshrined in Article I, Section 8 of the US Constitution.

There are in fact some special issues related to drug patents. A company’s drug patent is valid for 20 years from the date at which the patent application is filed. The company’s inability to market the drug during years of clinical trials may effectively deprive the company of the profits from its invention. This special situation encourages drug companies to circumvent the patent law by seeking new patents on slightly altered versions of the drug or on new uses of the drug. The Patent Office may also grant patents for drug delivery devices, such as inhaler pumps, which limit the availability of generic medications.

Ms. Rosenthal tells the particularly interesting story of the impact of the federal government’s decision to ban chlorofluorocarbons, which were thought to be damaging the Earth’s ozone layer. This decision effectively banned the generic inhaler pumps whose patents had expired and required new, patented designs, which could be sold once again at monopoly prices.

In telling this story, Ms. Rosenthal offers a strange, but rather telling quote: “Dr. Robert Lionberger, the [FDA’s] acting deputy director in the office of generic drugs, said that research into the development of generic inhaled medicines was the agency’s highest priority but that the effort had been stalled because of budget cuts imposed by Congress.” Logically, one would assume that when an agency’s budget is cut, the lowest priority projects are cut first, allowing the agency to concentrate on the highest priority needs. Why is the FDA cutting its highest priority research? Did Congress actually order them to cut out research into generic inhaled medicines? Did they cut all activities equally regardless of their priority? I can’t speak to Dr. Lionberger’s motivations, but “reverse prioritization” is the standard tactic for elected officials and bureaucrats to punish the public for refusing to grant them unlimited access to the Treasury.

It should come as no surprise that the drug companies will hire teams of the best patent lawyers to try to extend their patents, “re-patent” their drugs for new uses or otherwise delay the release of their patents into the public domain. The real question is why the Patent Office complies so easily if these practices are, as Ms. Rosenthal argues, so detrimental to the public welfare. After all, the government, not the drug companies, is the decision-maker here.

Anti-trust law is another bedrock of the American legal system. Companies must compete and not collude in setting monopolistic prices. Anti-trust violations are criminal, not civil, and managers can go to jail for anti-competitive practices. Ms. Rosenthal is outraged that Congress passed a special exemption to the antitrust laws, called “pay for delay”, that permits drug companies to pay potential generic manufacturers not to enter the market. This law is clearly anti-competitive, and I share Ms. Rosenthal’s outrage, but who is to blame? In her view, the drug companies’ $250 million in lobbying efforts are the culprit. But what about the elected officials who granted the drug companies this special favor?

Ms. Rosenthal’s final argument is that the drug companies often choose to maintain their drugs as prescription only rather than over-the-counter (OTC) medications. This decision not only raises the price of the medicine, but requires the patient to pay for a doctor’s visit to get the prescription. Again, whose fault is this? The switch from prescription to OTC is a regulatory step made by the FDA, not the drug company. Why is the FDA blocking these switches if they are so clearly in the public interest?

Ms. Rosenthal’s article attempts to blame corporations and their lobbying efforts for the high cost of asthma medication. Her narrative indicates instead, however, that the fault lies with the willingness of Congress and federal regulatory agencies to grant special favors which are clearly detrimental to the interest of consumers. Corporations can’t obtain these favors without the active connivance of elected and unelected officials. If we discovered that we could gain legal immunity from parking and traffic offenses by donating money to the campaigns of local officials, most of us would do it in a heartbeat. We rely on both laws and ethical standards to keep municipal officials from selling such favors.

The Founding Fathers recognized this problem and attempted to address it, at least at the federal level, by limiting the powers of the federal government. Madison and company understood that politicians will always try to expand their authority and that private individuals will always seek favors from their government. In some cases, such as the First Amendment limitations on government control over speech, peaceable assembly, petitioning for redress, religion and the press, the Constitution has held up reasonably well, despite constant assaults from various interest groups. In the commercial area, however, all constitutional restraint is gone. The Founding Fathers believed that they were limiting the federal government’s power to the narrow realm of interstate commerce (Article I, Section 8, Clause 3). Over the years, however, the term “interstate commerce” has lost all its meaning, and the federal government can now do essentially whatever it pleases in the commercial sphere. When these restraints are gone, politicians will use their power to grant favors to preferred constituents, like the drug companies. Here is where the real blame lies.

Returning now to solar energy, most people on the political left see the federal government as the proper locus of decisions on how much energy we should use and how that energy should be produced and distributed. In their view, Congress and the federal energy bureaucracy consist of experts who spend their days planning the economy to ensure growth and fairness. There are, of course, many people in the Department of Energy and other parts of the government who try very hard to do just that. Their efforts, however, are overwhelmed by politicians, who have no interest other than using their power and money to build and reinforce electoral coalitions. Solyndra is the poster boy for this problem. A powerful constituent with ties to the White House received $500 million in public funds for a bad idea that quickly went bankrupt.

Just as the federal government has contributed to the increased cost of asthma medication, it is contributing to raising the price of energy. The stated rationale is to address environmental and national security problems, but the real driver is private interest.

There would be no ethanol or wind power in the US were it not for the granting of special favors to companies that make money at the public expense. The market, left to its own devices, would never produce this result. Consumer interest enters the political equation only when the public becomes upset enough to throw people out of office.

NYT articles on public policy issues seem to follow a standard template. First, they identify a problem. Second, they note the pervasive failure of the free market to solve the problem and the need for strong corrective action by the government. Third, they express disappointment or outrage that private interests subvert the government’s efforts through lobbying and campaign contributions and thus make the problem worse. Fourth, they support stronger government authority to counteract the insidious influence of the lobbyists. One would think that the smart reporters on the NYT staff would figure this out. Don’t hold your breath.

Posted by: bmeverett | August 22, 2013

Here comes the IPCC (and the New York Times) again

The United Nations Intergovernmental Panel on Climate Change (IPCC) is about to release its Fifth Assessment Report on the state of climate science. Previous reports were released in 1990, 1995, 2001 and 2007. The New York Times trumpets the release in an article entitled “Climate Panel Cites Near Certainty on Warming”. The Times is always eager to opine rather than report, and the title of this article strays beyond misleading into the realm of the just plain wrong.

The actual statement in the report is “It is extremely likely that human influence on climate caused more than half of the observed increase in global average surface temperature from 1951 to 2010…” The Fourth Assessment Report from 2007 assigned a 90% confidence level to this statement, while the newest report assigns a 95% confidence level. Bear in mind that these numbers represent the Committee’s opinion regarding the likelihood that the Committee’s opinion is correct. In essence, the new report says, “Six years ago I was 90% sure I was right, but now I’m 95% sure I’m right.” This statement is blather, not science.

The IPCC is a panel of experts appointed by governments to assess the current state of climate science by counting up the number of peer-reviewed journal articles supporting various viewpoints on climate. It is not a scientific body, and it does no research on climate. Although a large number of scientists participate in the IPCC’s activities, its management is firmly in the hands of people advocating for strong action to reduce carbon emissions.

As I have stated many times in this blog, science is not a process by which we survey the opinions of scientists. For example, we cannot find out whether there is life on Mars by asking planetary scientists their opinions. The reality is, we just don’t know whether there is life on Mars and will not know until we have actual evidence.

During the Middle Ages, the Catholic Church addressed questions about the natural world by convening conferences of scholars, usually clerics, to review scripture and reach conclusions. People who challenged those conclusions were branded heretics and subject to oppression, torture and even death for their impertinence. Copernicus, Galileo and a few other brave souls offered an alternative approach: Establish a hypothesis (e.g., the Earth revolves around the Sun, not vice-versa) and then test the hypothesis against empirical evidence. The IPCC is behaving like the Catholic Church, not like Galileo.

To apply science to the climate change issue, we need to be precise about the hypotheses being debated. There are in fact not one but four. The first hypothesis is that the Earth has warmed at a rate of about 1º C over the last 100 years. There are some methodological issues around temperature measurements, but the empirical support for this hypothesis is pretty good.

The second hypothesis is that this temperature rise is unusual in the geologic record. The problem here is that we have excellent temperature records since about 1970, good records from 1900 to 1970, scattered records from about 1800 to 1900 and not much before 1800. Two hundred years is nothing in geological time, so some climate change advocates have attempted to create indirect data series by examining tree rings, ice cores and other indicators. There is absolutely no consensus on whether these data sets are meaningful, and the “climategate” email scandal strongly suggests a great deal of mischief and manipulation in their construction.

The third hypothesis is that human emissions of carbon dioxide and other greenhouse gases (ghg) are the main cause of the most recent hundred-year warming trend. It’s this point that the Times is addressing when it calls warming “near certain”. This may be correct, but it’s not the central issue. The heart of the debate is the fourth hypothesis which is that continued growth in atmospheric ghg concentrations will cause warming to accelerate. There is no empirical support for this view and some strong logical reasons it may not be true.

The Times, as always, focuses on the very high end of the IPCC projections, which shows a temperature increase of 5º C in response to a doubling of atmospheric carbon dioxide. The Times is happy to point out that the consequences of such warming would be dire indeed. The new IPCC report, however, shows a new low estimate of 1.5º C, rather than the 2º C number from the 2007 report. In other words, the IPCC is now saying that the problem could be just as bad as they have said before, but could be significantly less severe. The Times downplays this powerful statement as meaningless, claiming that the new low estimate is only “possible”, not “likely”. To further undermine the 1.5º C number, the Times article informs readers that “Michael E. Mann, a climate scientist at Pennsylvania State University, said he feared the intergovernmental panel, in writing its draft, had been influenced by criticism from climate doubters, who advocate even lower numbers.” In other words, the IPCC is authoritative when it presents an apocalyptic vision, but not when it offers any cautionary notes.

How likely is the high end estimate? We know from basic physics the direct warming impacts of carbon dioxide concentrations in the atmosphere. Expected growth in ghg emissions, however, would suggest a temperature increase of only about 1º C by 2100, below even the IPCC’s low end. The higher numbers come from a series of highly questionable assumptions regarding the impact of feedback loops in the climate system. For example, as temperature rises, cloud formation increases. The bottoms of clouds can trap additional heat, amplifying the warming. The tops of clouds, however, can reflect sunlight back into space, reducing warming. Which effect will predominate? How much carbon can the oceans absorb, and what impact would that have on climate? What about surface vegetation? The IPCC numbers assume that all the feedback effects are positive. In other words, direct warming is always aggravated by indirect effects, and never mitigated. Is that a good assumption?
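To make the “basic physics” step concrete, the commonly used textbook approximation for the direct radiative forcing from carbon dioxide (a sketch, not anything specific to the IPCC) is

    ΔF = 5.35 × ln(C/C0) W/m²

so a doubling of atmospheric CO2 gives a forcing of about 3.7 W/m². Applying the standard no-feedback sensitivity of roughly 0.3º C per W/m² yields about 1.1º C of direct warming, consistent with the 1º C figure above. Everything beyond that comes from the assumed feedbacks.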

There’s no theoretical or empirical evidence to suggest that feedback loops are positive rather than negative. In fact, a basic principle of physical science, known as “Le Chatelier’s principle”, states that “Any change in status quo prompts an opposing reaction in the responding system.” The key tenet of the environmental movement is that natural systems are fragile and prone to collapse if disturbed. In contrast, Le Chatelier’s principle says that natural systems are robust and can compensate for disturbances. Note that Le Chatelier did not claim that disturbances don’t change or disrupt natural systems or that there is no cost to these changes, only that natural systems tend to recover to a new equilibrium. Anthropogenic ghg emissions may indeed lead to a warmer world, but there is no basis at present for believing the “tipping point” argument which predicts climate catastrophe. One reason for the high warming numbers generated by the IPCC is that all the climate models considered by the IPCC use similar sets of assumptions. The IPCC does not use any models which take a fundamentally different view of the issue.

If the climate science is so weak, how can the IPCC reach such powerful and definitive conclusions? The main reason is that the IPCC was established precisely for the purpose of advocating for carbon reductions, and its management is more than happy to feed the public a diet of scary scenarios. There are more subtle reasons as well.

Under more normal, less politicized circumstances, a literature review might give you a good sense of the range of scientific views on a particular subject. If you want to know, for example, how old the universe is, a literature review would probably tell you something about the state of the science. Very few people, however, have a vested interest in the outcome of that debate. Climate change is different. Tens of billions of dollars flow into climate research, but with a major bias. Check, for example, the climate research funding by the US National Science Foundation, which you can find at http://www.nsf.gov/news/news_summ.jsp?cntn_id=128415. The funding description includes the following telling statement, “The consequences of climate variability and change are becoming more immediate and profound than anticipated, research has found.” This is a clear signal to applicants on what the NSF wants. Examples of the topics for NSF research grants are “Quantifying and conveying the risk of prolonged drought in coming decades” and “Exploring the connection between wildfires and regional climate variability”. Little if any of this money is going to scientists who question the catastrophic climate scenarios. To get this money, you need to be a team player.

Moreover, many of the scientific and climate journals are edited by people who are strong political advocates for carbon reduction policies. As a result, there is a biased selection of articles accepted for publication. By analogy, the American electorate is divided roughly 50/50 between the political left and the political right. You could never find this out by counting the articles printed by the New York Times, the Washington Post, the Los Angeles Times and the Boston Globe. Such a review would suggest that 95% of the political opinion in the country is on the political left. These publications are not dishonest, just biased.

I also wonder how many scientists, most of whom live in academic communities, are dissuaded from either researching climate or expressing their real views because they are frightened of the reputational, social and career consequences of expressing politically incorrect opinions.

The bottom line here is that the empirical support for the catastrophic climate hypothesis has been deteriorating, not growing, over the years, primarily because climate models remain unable to make any correct predictions about the response of climate to increased atmospheric carbon concentrations. Regarding the biggest problem of climate models – the 15-year plateau of global temperatures – the Times is contemptuous, claiming that “The [IPCC] scientists, whose findings are reported in a draft summary of the next big United Nations climate report, largely dismiss a recent slowdown in the pace of warming, which is often cited by climate change doubters, attributing it most likely to short-term factors.” If “short-term factors” overwhelm your calculations, then you clearly do not understand the phenomenon you are studying. Perhaps the Sixth Assessment Report, due around 2020, will begin to get this right, but I wouldn’t hold my breath.

Posted by: bmeverett | July 23, 2013

Havin’ a Heat Wave

It’s been really hot in Boston this summer. It’s been hot before, of course, but this time I hear friends, neighbors and people on the street blaming global warming. I fully expect that when I get back to class at Fletcher this fall, I’ll hear again how only the stupidest people could question climate change after the horrible summer we just had. This is the same argument we heard after Hurricane Sandy.

As I discussed in previous posts, this argument is just plain false. Hurricane Sandy was a terrible storm, bringing a lot of damage and suffering. It was caused by a relatively small late-season hurricane getting wrapped up with a Nor’easter. This happens several times each century. The infamous “Perfect Storm” of October, 1991 was a similar event. Sandy differed from The Perfect Storm only because it veered to the west and hit the New York/New Jersey area rather than staying out to sea.

So what conclusions can we draw about our hot summer? If we define a Boston heat wave as 6 or more consecutive days above 90º F, we could say that heat waves have become more common in Boston over the years. We’ve had 7 such heat waves in the last 50 years while we had only two in the preceding 50 years. Sounds like it’s getting hotter. On the other hand, let’s look at the problem a different way. Consider July temperatures for Boston since 1920. The mean July temperature over this period was 73.3º. So far the 2010s have been pretty darn hot, with July temperatures averaging 77º. If the mean July temperature in Boston correlated with atmospheric carbon concentrations, however, we would expect to see a trend over time. In fact, there is no trend. Before the 2010s, the warmest decadal average for July was the 1980s, followed by the 1950s, the 1970s, the 1990s, the 1940s, the 2000s, the 1960s, the 1930s and the 1920s.

How about July in New York? Would we get the same result? Hardly. The 2010s have been hot in New York as well, but the history is very different. The New York decades, in order of decreasing average July temperature, were the 2010s, the 1950s, the 1970s, the 2000s, the 1990s, the 1960s, the 1930s, the 1940s, the 1980s and the 1920s. The temperature data for individual cities don’t disprove the climate change hypothesis, but they don’t support it either. Picking a hot year or a hot decade or a hot location means absolutely nothing.
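The decade rankings above are simple averages of mean July temperatures grouped by decade. Here is a minimal sketch of the calculation; the values below are hypothetical placeholders, with the real inputs being the NOAA station records for each city:

    from collections import defaultdict

    # year -> mean July temperature (deg F); placeholder values for illustration
    july_means = {1925: 72.1, 1958: 74.0, 1983: 74.9, 2012: 77.0}

    by_decade = defaultdict(list)
    for year, temp in july_means.items():
        by_decade[(year // 10) * 10].append(temp)

    ranked = sorted(by_decade, reverse=True,
                    key=lambda d: sum(by_decade[d]) / len(by_decade[d]))
    print(ranked)   # decades in order of decreasing average July temperature

Run against the actual Boston and New York records, this same averaging produces the two very different orderings described above.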

Furthermore, the term “global warming” has two components. It’s not only “warming” but “global”. In fact, we have a global temperature data series, and the data were developed by climate change advocates at NASA. This data series, which you can find at http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A2.txt, shows that global temperatures increased until 1998 and have leveled off since. What does this summer’s Boston heat wave mean in that context?

To maintain its credibility, the Climate Community needs to stop pointing to random anecdotes and start offering some real science. As discussed in previous posts, science involves stating a hypothesis (e.g., increasing human greenhouse gas emissions will accelerate the rise in global temperatures, bringing catastrophic results) and then making predictions that can be checked against observations to see whether they support the hypothesis or not. The key issue here is that the hypothesis must be falsifiable. In other words, the prediction must provide information as to whether the hypothesis is correct or not.

It seems rather obvious that the use of individual data points or anecdotes tells us nothing about broader trends, but that’s exactly what the Climate Community continues to do. Extreme weather events, such as hurricanes or heat waves, may seem compelling, but people do not carry around in their heads a sufficiently long or complete data series to calculate meaningful trends. For example, my Fletcher students average about 28 years of age. That means they were adults when Hurricane Sandy hit, but only 6 years old when The Perfect Storm struck in 1991. Sandy was thus their only experience with storms of this type. Those students who grew up in Boston have experienced only two heat waves as adults and perhaps one as a teenager. That may suggest to them that heat waves are a rare and recent phenomenon. People in their 80s, on the other hand, may have experienced 8 or 9 heat waves and may thus conclude that they occur routinely. Intuitive judgments are of limited value. Only a proper review of all the available data offers real insight.

Let’s take an example and hypothesize that human beings are getting taller over time. How would we test that hypothesis to see if it’s true? You might structure a survey of people carefully selected by age, geography, racial/ethnic background, nutrition, etc., calculate their average height and then calculate the average height of a similar group a generation later. There would be lots of methodological arguments about how to conduct such a survey, and the results might be controversial, but the approach is theoretically correct. Nobody would accept the statement “Bill is seven feet tall” as support for the hypothesis.

The Climate Community shies away from making predictions, perhaps out of fear that the predictions may not come true, thereby undermining their political position. Their preference seems to be to wait to see the millions of weather data points generated annually and then point out the ones that are anomalous as proof of the “weird weather” that the Climate Community sees all around us. The problem with this approach is that weather has huge short-term variability. There have been 203 days so far in 2013. Each day has a recorded high and low temperature, giving us 406 temperature data points for the year to date. Meteorologists can also tell us the “normal” or historical average temperature for each day. Only 20 of the actual temperature readings so far in 2013 matched their historical “normal” values. The other 386 high and low temperatures were “abnormal”. This is always true, and a long string of “abnormal” weather says nothing at all about Climate Change.

Rather than just complain about the Climate Community, let me offer an example of real science: Einstein’s general theory of relativity. Einstein’s hypothesis was that gravity is not a force, but rather a curvature of space-time caused by the presence of matter. Einstein, as a true scientist, did not argue that his hypothesis was correct because he was smarter than everyone else or because it was supported by a committee of other scientists. He didn’t try to demonize people who disagreed with him or to convince politicians to declare his hypothesis legally correct. In fact, Einstein proposed three tests that would either prove or disprove his hypothesis, and he was quite honest about the process, stating, “If a single one of the conclusions drawn from it [the general theory of relativity] proves wrong, it must be given up”.

The first test was the change in the orbit of the planet Mercury. All other things being equal, Mercury’s orbit around the Sun should be an ellipse with the Sun at one focus, and that orbit should not change over time. Scientists had recognized since the 1850s that Mercury’s orbit in fact changes slightly over time and were able to explain 92% of the observed changes in terms of the gravitational pull of the other planets and the fact that the Sun is not a perfect sphere. The remaining 8% was unexplained. Einstein argued that the general theory of relativity should produce a more precise estimate of Mercury’s orbital change because it added a third factor: the curvature of space-time due to gravity. Einstein’s calculations predict almost 100% of the observed orbital change, and the more precise the observations have become over time, the better the fit. Unlike the Climate Community, Einstein put his hypothesis on the line. This is true science.
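For the record, the relativistic correction Einstein computed has a standard textbook form: an extra perihelion advance per orbit of

    Δφ = 6πGM / (a(1 − e²)c²)

where M is the Sun’s mass, a is the orbit’s semi-major axis and e its eccentricity. For Mercury this works out to about 43 arcseconds per century, almost exactly the residual that Newtonian perturbation theory had been unable to explain.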

The other two tests of general relativity produced comparable results. Newtonian physics predicted that the Sun’s gravity would bend starlight, but could account for only half the observed result. General relativity predicted close to 100% of the observed bending. The third test was the gravitational red shift of light, a test which could not be made until the late 1950s when sufficiently sensitive instruments had been developed. Again, Einstein’s predictions matched the observations very well.

It’s worth noting that general relativity cannot be tested against an individual person’s life experiences and observations. People can see Mercury, and have been able to make broadly correct predictions of its movement for at least a few thousand years. More precise predictions, however, require sophisticated instruments and careful observations of Mercury’s movements over centuries – well beyond the perceptual or chronological capabilities of an individual. No scientist worth his salt would have proposed testing the general theory of relativity by asking people to have a look at Mercury over the next few days and see what happens. Furthermore, we could not have tested the validity of general relativity unless we had a proper theoretical understanding of the other factors (gravitational pull from the other planets and the shape of the Sun) so we could isolate the impact of space-time curvature. That is how you do science.

Heat waves come and go. We need science, not ideology, to explain how and why.

Posted by: bmeverett | July 8, 2013

Obama’s Climate Change Speech

On Tuesday, June 25, President Obama gave what was billed as a signature speech on climate change at Georgetown. Alas, the President offered the same old tired arguments we’ve been hearing for years. I’m tempted to go through the speech and comment line by line, but let’s cut to the chase and look instead at the two critical parts of the speech.

First, the President asserted that “So the question is not whether we need to act. The overwhelming judgment of science – of chemistry and physics and millions of measurements – has put all that to rest. Ninety-seven percent of scientists, including, by the way, some who originally disputed the data, have now put that to rest.” I’m assuming that the President is referring to a May, 2013 article in Environmental Research Letters by John Cook of the University of Queensland and several colleagues entitled “Quantifying the consensus on anthropogenic global warming in the scientific literature”. Prof. Cook surveyed the abstracts of about 12,000 peer-reviewed articles on the subjects of “global climate change” or “global warming” and found that 66.4% of the articles expressed no position on anthropogenic global warming (AGW), while 32.6% endorsed AGW, 0.7% rejected AGW and the remaining 0.3% were uncertain. He then asked these authors directly about their positions on AGW. Based on these results, Prof. Cook concludes that 97.2% of the authors who expressed a position on AGW supported “the consensus”.
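It’s worth making the arithmetic behind the headline number explicit. A quick check using the percentages quoted above:

    # Reproduce the "97%" figure from the abstract-survey percentages above.
    endorse, reject, uncertain, no_position = 32.6, 0.7, 0.3, 66.4

    took_position = endorse + reject + uncertain      # 33.6% of all abstracts
    print(f"{endorse / took_position:.1%} of position-takers endorse AGW")
    # ~97.0% -- close to Cook's 97.2%, which came from author self-ratings.
    # Note that two-thirds of the abstracts expressed no position at all.

The 97% is thus a share of the one-third of abstracts that took a position, not of all 12,000 articles surveyed.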

There are three logical problems with this analysis. The first issue is what Cook means by “consensus.” His paper defines consensus as the view that “human activity is very likely causing most of the current GW [global warming]”. This definition is inadequate. The argument for decisive (and very expensive) action on climate change is not that anthropogenic atmospheric warming has occurred but that it will accelerate based not on increased carbon emissions alone, but on the amplifying effects of the feedback loops created by cloud formation, ocean acidification and other factors. If a scientist agrees that humans have caused most of the warming to date, but doubts whether warming will accelerate, is he part of the consensus?

Prof. Cook’s work is part of a troubling trend of surveys designed to prove a point rather than gain information. (See my May 3 post on “Sexual Assault Awareness”.) If Prof. Cook wanted to know what scientists truly believe about climate change, he should have used a more rigorous methodology. First, he should have asked a much more detailed set of questions about the various components of the climate change issue. Has the atmosphere warmed over the past 150 years? Is this warming beyond the range of the natural variability of the climate? Does atmospheric carbon enhance the greenhouse effect leading to warming? Are the feedback loops created by clouds, etc. positive or negative? Given the uncertainty, how much economic sacrifice should we endure to reduce our carbon emissions?

Furthermore, Cook should have been careful to differentiate between the opinion of each scientist on these issues and the scientists’ conclusions on what the evidence actually shows. As an analogy, let’s take the question of whether there is life on Mars. If you ask all the scientists studying this issue whether there is life on Mars, you might get different answers than if you asked them whether there is clear evidence of life on Mars. The first question asks an opinion and allows subjective judgment and speculation. The second does not.

The second problem with Cook’s argument is that science does not operate by consensus, but by evidence. During the Middle Ages, the Catholic Church addressed the pressing scientific issues of its day (such as whether the Sun revolves around the Earth or vice-versa) by convening councils of experts, usually clerics, who would review scripture and the writings of authoritative philosophers to determine the truth by consensus. The scientific method, as presented by Galileo and others, is quite different: Truth is established by stating a hypothesis and testing it against evidence. In reality, the evidence shows that the Earth revolves around the Sun even if 100% of the designated experts believe otherwise. The consensus offered by the Church as a basis for persecuting heliocentrists was meaningless. The hypothesis that human-generated carbon emissions will cause catastrophic climate change can be validated when and only when climate models can make reasonable predictions about how climate will respond to additional carbon concentrations. So far, they can’t. For example, none of the models predicted or can even explain the 15-year plateau in global temperatures experienced since the late 1990s, even as carbon emissions have increased.

At any given time, the scientific community is wrong on many issues (geocentrism, Newtonian physics, the age of the universe, continental drift, acid rain and others). The only way to find out which “consensus” views are right and which are wrong is, well, science.

The third problem is a potentially severe survey bias. Published articles do not span the entire range of scientific opinion, only the views of scientists both willing and able to publish. Many scientists who have relevant knowledge don’t publish articles. Some climate scientists might like to publish articles expressing skepticism but may be deterred from doing so by the very real fear of social rejection by their colleagues or at worst the kind of vicious reputational assault the Climate Community launches on people who disagree with them. The number of peer-reviewed articles can also reflect the distribution of funding rather than the range of scientific opinion. Finally, as was so clearly demonstrated by the “Climategate” scandal, the editors of scientific journals can easily exclude views they don’t like and can even collude in determining which articles are published and which are not. Cook’s survey should have been both blind as to who was conducting the research and anonymous, allowing the respondents to state what they really believe without risk of censure by their colleagues.

Overall, President Obama’s “97% consensus” is meaningless. As one of my former Fletcher students recently said, “Pretty soon climate science will have to be replaced by actual science.”

The second critical part of the President’s speech was his “plan”. Having declared with certainty the dire consequences of climate change and stating “I refuse to condemn your generation and future generations to a planet that’s beyond fixing”, what exactly is President Obama proposing to do? Actually, all he did was reiterate his 2009 promise to reduce US greenhouse gas emissions by 17% by the year 2020 compared to the 2005 level.

Always be suspicious when someone chooses a historical year as their comparison basis. According to the Energy Information Administration (EIA), estimated US carbon emissions from energy use rose from 5.0 billion metric tonnes per year in 1990 to a 2004-2006 high of about 6 billion tonnes, and then began to decline, dropping below 5.4 billion tonnes in 2012. In other words, President Obama chose the peak of US carbon emissions as his base year. President Obama would like to claim credit for this decline, but he really can’t. Government energy policy may have made a slight contribution to the 2005-2012 decline, but high energy prices and a prolonged recession were the major drivers.

In any case, the President has committed to a 2020 emissions level of about 5 billion tonnes, 17% below the 2005 level of 6 billion tonnes. Fortunately for him, 60% of this reduction has already occurred. He is in fact promising a reduction of only about 0.4 billion tonnes or 7½% from the 2012 level – not quite so dramatic a promise. This commitment is rather like the Miami Heat promising their fans that they will win three NBA championships between 2011 and 2020.

As the President noted, carbon emissions are a global problem. Unlike smog, whose effects are local and can be mitigated by spreading pollution out over a wider area, carbon has the same effect no matter where on Earth it is emitted. The EIA estimates that global carbon emissions (the only metric that really matters) will increase from about 32 billion tonnes in 2012 to about 35 billion tonnes in 2020. The president’s promise of a 0.4 billion tonne reduction is therefore just over 1% of the global total – essentially a rounding error.
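
For readers who want to check the arithmetic, here it is in a few lines of Python, using the round EIA figures above:

```python
# US energy-related CO2 emissions, billion metric tonnes (round numbers).
base_2005, actual_2012 = 6.0, 5.4
target_2020 = 5.0      # roughly 17% below the 2005 peak
global_2020 = 35.0     # EIA projection for world emissions in 2020

pledged_cut  = base_2005 - target_2020    # 1.0 -- the headline promise
already_done = base_2005 - actual_2012    # 0.6 -- happened before the speech
still_to_go  = actual_2012 - target_2020  # 0.4 -- all that is actually promised

print(f"Cut already banked by 2012:  {100 * already_done / pledged_cut:.0f}%")  # ~60%
print(f"Remaining cut vs 2012 level: {100 * still_to_go / actual_2012:.1f}%")   # ~7.5%
print(f"Share of 2020 world total:   {100 * still_to_go / global_2020:.1f}%")   # ~1%
```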

What we see here is an astonishing gap between the desired end and the proposed means, an increasingly common problem in Washington. On June 19, President Obama spoke to a modest crowd in Berlin about the need to reduce the threat of nuclear weapons. There are an estimated 4,000 nuclear weapons in the world today. Suppose President Obama had promised to address this dire threat by eliminating 40-50 nuclear weapons by 2020? There are an estimated 11 million illegal immigrants in the US today. Suppose President Obama had promised to solve this problem by giving 100,000 illegals a pathway to citizenship? President Obama ought to be embarrassed by his climate change speech, but he is counting on the public (or at least his left-wing base) not bothering to check the numbers.

I will agree with the President on one thing. He said, “We will continue to lead by the power of our example, because that’s what the United States of America has always done.” In fact, the United States has been leading the world on climate change for the last 15 years. Here’s the example we continue to set: (1) agree that anthropogenic climate change is real and a dire threat to humanity, (2) set phony targets that would have no impact on the problem even if met, (3) include lots of loopholes and accounting tricks (think emissions trading and land use) to make sure that nobody has to bear any real cost to meet the phony targets, (4) promise to do much, much more in the future and (5) pat yourself on the back as a visionary. The world is indeed following our lead.

Posted by: bmeverett | June 25, 2013

Tesla’s profits are an illusion

Tesla Motors, maker of luxury electric vehicles, has earned a flood of positive press coverage lately. In May, the company announced that it had paid off its entire government loan of over $450 million nine years early. The stock jumped from its 52-week low of $26 per share to $114 on May 28. Motor Trend gave Tesla’s Model S its prestigious Car of the Year Award, and Consumer Reports graded the Model S 99 out of 100, the highest score ever given to a new automobile. Surely, here is the success the federal government has been waiting for to offset its embarrassing failures, like Solyndra. Not so fast.

Let’s look closely at Tesla’s finances. In the first quarter of 2013, the Company sold 4,900 Model S sedans, generating total revenue of $555 million or about $113,000 per vehicle. It’s interesting to note here that the Model S, the only car Tesla currently sells, lists at $69,900 with a 60 kWh battery pack and $79,900 with an 85 kWh battery pack. Even the limited Signature Edition Model S, which Tesla sold to hype the car, went for $95,400 and $109,400 for the two battery packs respectively. Why are revenues so high?

Part of the answer lies in sales of powertrain equipment to other vehicle manufacturers, such as the Toyota RAV4 electric vehicle. Beyond these legitimate sales, however, are some interesting issues. First of all, Tesla gets a federal rebate of $7,500 per car plus state rebates such as $2,500 per vehicle in California and $4,000 per vehicle in Illinois. Although the rebate technically goes to the buyer, the $69,900 Tesla gets for a 60 kWh Model S in California, for example, includes $59,900 from the buyer, $7,500 from the feds and $2,500 from the totally bankrupt state of California.

Second, California requires most car companies marketing in the state to meet fleet emissions targets for Zero Emission Vehicles (ZEVs) and allows companies exceeding their targets to sell credits to companies unable to meet the requirements. Since Tesla makes 100% electric vehicles, they have credits available for sale. California ZEV credits totaled $68 million in 1Q 2013, which accounts for 12% of total revenues but a full 70% of gross profit. Tesla is cagey about releasing state-specific information on its sales, but the industry buzz is that about one-third of its cars are sold in California. If so, Tesla is earning about $41,000 in ZEV credits for every car it sells in California. Consumers, of course, pay for these credits through higher prices for conventional cars.
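
You can reconstruct these per-car figures directly. A quick sketch in Python, with the caveat that the one-third California share is industry buzz rather than anything Tesla has disclosed:

```python
# Tesla 1Q 2013 figures cited above.
revenue_total  = 555_000_000   # $ total revenue
cars_sold      = 4_900         # Model S sedans
zev_credit_rev = 68_000_000    # $ from California ZEV credit sales

ca_cars = cars_sold / 3        # assumed one-third sold in California

print(f"Revenue per car:             ${revenue_total / cars_sold:,.0f}")           # ~$113,000
print(f"ZEV credits as % of revenue: {100 * zev_credit_rev / revenue_total:.0f}%")
print(f"ZEV credit per CA car:       ${zev_credit_rev / ca_cars:,.0f}")            # ~$41,000
```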

The theory behind the California ZEV credit program is that Tesla is providing a real service to the public by producing low-emission vehicles, but is that really true? To say that California’s Zero Emission Vehicle (ZEV) targets are complex is a huge understatement. The rules governing this program are so convoluted that hundreds of lawyers are likely to find career employment in their interpretation. If you’d like to really dive into the California rules, be my guest. You can find a tutorial at http://www.arb.ca.gov/msprog/zevprog/factsheets/zev_tutorial.pdf. Be warned, however, that the tutorial contains 131 slides summarizing the rules.

The basic principle, however, is that vehicle emissions are measured at the tailpipe. In other words, the energy used and emissions generated in producing and transporting the electricity are ignored. If we are trying to reduce real-world carbon emissions, this approach is just plain wrong. I discussed this problem at length in my February 8 post “The State of Play on Electric Cars”. As noted in that post, electric cars emit less carbon than gasoline-powered vehicles, but mainly because electricity is generated on the margin by natural gas, a low-carbon fuel. A better comparison would be a Tesla versus a Honda GX natural gas vehicle. The California rules define a Tesla Model S as a Zero Emission Vehicle (ZEV), but categorize the Honda GX as an Advanced Technology Partial Zero Emission Vehicle (AT PZEV). (Only California could use a term such as “partial zero emission” with a straight face. What part of zero are we talking about?) As mentioned above, each car company has a ZEV requirement, a specified part of which must be met by pure ZEVs with the rest by various types of PZEVs. Electric cars are the only ZEVs currently in production, so they have considerable value in meeting the emission requirements.

If you go to the dealerships and look at the EPA labels, you will see that the Honda GX has a fuel efficiency of 31 miles per gallon while the Tesla gets 94 miles per gallon. According to EPA definitions, the Tesla Model S emits no greenhouse gases, while the Honda GX emits 227 grams of carbon dioxide per mile. Since a conventional gasoline-powered Honda Civic gets 32 miles per gallon while emitting 279 grams of carbon dioxide per mile, California would seem justified in characterizing the Tesla as far superior to the Honda GX, which in turn is slightly better than a conventional Honda. But is this taxonomy valid?

According to Tesla, the basic Model S with a 60 kWh battery has a “real world” range of 232 miles or about 260 Watt-hours (Wh) per mile traveled. The larger, 85 kWh version is heavier and has a slightly lower efficiency, so let’s go with the 60 kWh model. An electric car battery loses about 10% of its input electricity in the charging process and another 10% in transmission and distribution from the power plant. A mile of travel in a Tesla Model S therefore requires the generation of about 320 Wh of electricity. Modern, efficient combined cycle natural gas power plants require about 6,800 British Thermal Units (Btus) of natural gas per kWh of electricity, so a mile in our Tesla requires about 2,175 Btus of natural gas. About 15% of the natural gas is lost in production and transportation to the power plant, so our Tesla mile requires about 2,560 Btus of natural gas. If we convert this energy to gallons of gasoline equivalent at 125,000 Btus per gallon, a Tesla gets about 49 miles per gallon, half the EPA label value. Total carbon dioxide emissions are really about 136 grams per mile, not zero.

How about the Honda GX? The car itself uses about 4,030 Btus per mile of natural gas plus a 15% gross-up for production and transportation or 4,640 Btus per mile. That’s equivalent to 27 mpg or 246 grams of carbon dioxide per mile. On balance, the Honda GX uses about 80% more energy than the Tesla (not 3 times as much) and emits about 80% more carbon dioxide (not infinitely more).

This difference is still significant, but what if Honda produced a hybrid version of the GX? The hybrid gasoline Civic uses about 30% less gasoline than its conventional cousin, so let’s assume that a hybrid Honda Civic GX could do the same. Such a vehicle would use 3,250 Btus of natural gas per mile (27% more than the Tesla Model S) and emit 172 grams of carbon dioxide per mile (27% more than the Tesla). A hybrid Honda Civic costs about $8,000 more than a conventional model ($26,305 MSRP versus $18,165). A Civic GX costs about $26,300, so we could guess that a GX hybrid would cost about $34,000, since the hybrid system would be essentially the same. Bottom line: if measured properly, a Honda natural gas hybrid vehicle would be 25-30% less efficient than a Tesla Model S but would cost only half as much.
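
If you’d like to check or vary these numbers yourself, here is the whole calculation in a few lines of Python. The loss factors and heat rates are the assumptions stated above, not official EPA figures; the hybrid GX is a hypothetical vehicle; and small differences from the rounded numbers in the text are just rounding:

```python
BTU_PER_GALLON = 125_000   # energy content of a gallon of gasoline, Btu
CO2_G_PER_BTU  = 0.0531    # ~53 kg CO2 per million Btu of natural gas

def tesla_gas_btu_per_mile(wh_per_mile=260):
    """Natural gas burned per Tesla mile, counting all upstream losses."""
    wh_generated = wh_per_mile / (0.9 * 0.9)    # 10% charging loss + 10% T&D loss
    btu_at_plant = wh_generated / 1000 * 6800   # combined-cycle heat rate, Btu/kWh
    return btu_at_plant / 0.85                  # 15% of gas lost upstream of the plant

def gx_gas_btu_per_mile(mpg_label=31, hybrid_saving=0.0):
    """Natural gas per mile for a Honda GX, optionally hybridized."""
    btu_in_car = BTU_PER_GALLON / mpg_label * (1 - hybrid_saving)
    return btu_in_car * 1.15                    # 15% upstream gross-up, as in the text

for name, btu in [("Tesla Model S", tesla_gas_btu_per_mile()),
                  ("Honda GX", gx_gas_btu_per_mile()),
                  ("Hybrid GX (hypothetical)", gx_gas_btu_per_mile(hybrid_saving=0.30))]:
    print(f"{name:25s} {btu:5.0f} Btu/mi  "
          f"{BTU_PER_GALLON / btu:3.0f} mpg-equivalent  "
          f"{btu * CO2_G_PER_BTU:4.0f} g CO2/mi")
```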

We can draw several conclusions from this quick analysis. First, simple regulatory structures, such as a carbon tax, would allow consumers and auto companies to reduce carbon emissions in the most efficient way possible. (Please note that I am still opposed to a carbon tax on the grounds that it is economically damaging and likely to make a negligible contribution to reducing atmospheric carbon concentrations.) Governments, however, seem to prefer horribly complex regulatory structures that take much of the choice away from the market. Legislators simply cannot resist the temptation to specify in law how goals are to be met and which technologies are to be employed. It’s not coincidental that their preferred technologies are often those manufactured by powerful constituents in their districts.

Second, governments tend to make these decisions in ways that severely distort the outcome. All these systems create large bureaucracies overseeing programs that set ambitious goals but permit numerous complex accounting tricks, so the legislative targets can often be met without solving the problem supposedly being addressed. The Kyoto Protocol, for example, allowed the EU to meet its carbon reduction targets without actually reducing carbon emissions, through complex and largely fictitious trading systems and meaningless land use definitions. The California ZEV system and the upcoming federal EPA vehicle efficiency systems will do exactly the same thing. (Those of you following the current debate on immigration reform should watch out for this same tendency.)

Third and finally, the California ZEV system, like the Kyoto Protocol, creates wonderful opportunities for rent-seeking, as company lawyers find ways of gaming the system to their advantage, while the consumer foots the bill. Tesla Motors is not a car company, it’s a rent-seeking machine. Although Tesla management predicts that revenue from credit sales will decline over time, the opposite is likely to be true. Tesla cannot now, and probably cannot in the foreseeable future, sell its electric cars for more than their production cost. Their ability to make a profit and provide a return to their shareholders depends primarily on subsidies and on their ability to sell credits, which cost consumers real money and carry little if any benefit.

Yet another reason to love big government.

Posted by: bmeverett | June 6, 2013

The Solyndra and IRS Scandals are the Same

Recent revelations that the IRS delayed applications from conservative organizations for tax-exempt status seem to baffle the media. Could it be true, as the White House claims, that a few low-level IRS staffers concocted this scheme on their own? Did the White House issue instructions to harass these groups? These very questions show a fundamental misunderstanding of how Washington operates.

There are basically two types of people working in the government. The first and larger group is the professional employees attracted to government work by its good pay and job security and by a genuine desire to provide essential services to the public. These policy professionals develop expertise on substantive and administrative issues and generally know what they are doing. The second and smaller group is the political operatives – the campaign workers who attach themselves to a particular candidate or party and who devote themselves totally to the political advancement of their patrons. Successful political operatives are smart and highly skilled in their profession. They also tend to become ruthless and utterly focused on bare-knuckle politics. These people generally care little about economics, national defense, health care, the environment or other policy questions, but they understand how to use these issues for their patrons’ political benefit.

When a President takes office, one of his major tasks is to get control of the huge federal bureaucracy. This job is almost always given to the President’s closest friends and advisors – all of whom are political operatives, not policy professionals. These are the people who helped the President get elected, shielded him from the consequences of his weaknesses, attacked his enemies and undermined his opponents. Some will work in the White House, while others will be appointed to cabinet-level or sub-cabinet-level jobs. Their message to the professional civil service is always the same. They do not say, “The president wants to run the government capably, and he is relying on all of you to do your best professional job.” Their message instead is, “We’re in charge now. We reward our friends and punish our enemies, just as we did on the campaign trail. We do not respect the ‘neutrality’ of civil servants. You’re either for us or against us, and we know where you live.”

Professional civil servants can ride out this periodic political storm if they choose because they know that government procedures make it almost impossible to fire anybody. They keep their heads down and do their jobs. The more ambitious government employees, however, will look for opportunities to advance their careers by catering to the wishes of White House staff who (temporarily at least) have a great deal of power to dispense favors, which can include plum assignments and promotions. The President and his staff do not need to issue explicit orders for this process to work. They only need to give subtle signals, which smart people can easily interpret.

We can get some real insight into the current IRS scandal by looking at the mess over Solyndra, the solar panel manufacturer that went bankrupt, defaulting on a half-billion-dollar loan from the American taxpayer. The problem started when Congress appropriated tens of billions of dollars for loan guarantees to advance renewable energy technology. Those who do not understand how Washington works believe that experts at the Department of Energy examine applications for these loans and decide which projects look most promising from a technical and economic standpoint. In other words, they assume that these decisions will be in the hands of professional civil servants.

In reality, political operatives have a great deal of influence over how this money is disbursed and their criteria are purely political. Entrepreneurs are very practical people and see government funds as a low-cost form of financing. In emails released to the House Energy and Commerce Committee last year, for example, the CEO of Solyndra referred to the federal government as “The Bank of Washington.” The quickest way to get access to this financing is not by sending a well-argued technical proposal to the DOE staff for evaluation, but to be a friend of the President.

The largest shareholder in Solyndra was George Kaiser, a billionaire Oklahoma oilman who was a “bundler” for the Obama campaign in 2008. In other words, since individual campaign contributions are limited, he organized a large number of individual contributions into a bundle estimated at over $100,000. Such a contribution makes you by definition a friend of the President. Records show that Mr. Kaiser personally visited with White House staff on several occasions to discuss Solyndra. The White House staff subsequently set a date for a high-visibility visit by President Obama to Solyndra’s California manufacturing facility. This situation was a win-win for the White House political machine. The President could show gratitude for Mr. Kaiser’s campaign contributions while publicly demonstrating his support for clean energy and green job creation. Unfortunately for the taxpayer, all these political steps were put in place before the DOE staff had completed its review of Solyndra’s loan application – a process that was of no interest to the White House. Other released emails show that analysts at both DOE and the Office of Management and Budget recognized Solyndra’s poor prospects and warned of the likely failure of the company.

Under these circumstances, how did Solyndra’s loan get approved and subsequently restructured at taxpayer expense? The answer is simple. The White House did not need to issue an order to DOE to approve the loan. The very fact that the White House publicly supported the project was sufficient to convince ambitious people at DOE, including both the political appointees and senior civil servants, to approve the loan. Why not? Their careers might benefit, and the loans were made with Other People’s Money, for which DOE is never held accountable.

Fast forward to the IRS scandal. When President Obama took office, IRS employees were undoubtedly given the “We reward our friends and punish our enemies” message. If you are ambitious, you want to be on the lookout for opportunities to make the boss happy. In April, 2012, for example, an Obama campaign website attacked Mr. Frank VanderSloot, an Idaho businessman and donor to Mitt Romney’s campaign. The Obama campaign named Mr. VanderSloot and several others as “wealthy individuals with less-than-reputable records” and called him “litigious, combative and a bitter foe of the gay rights movement”. Shortly afterwards, Mr. VanderSloot and his wife were subjected to IRS audits of their tax records and Mr. VanderSloot was subjected to a Labor Department audit of his business.

The White House claims no connection to these actions and has essentially challenged the press to find a “smoking gun” memo from the White House directing these agencies to harass Mr. VanderSloot. No such memo was needed. All it takes is for a few senior people in the IRS and the Labor Department to decide to curry favor with the White House by taking actions they know will be viewed favorably. The same is true with the targeting of conservative political organizations for delay and harassment when they filed 501(c)(4) tax exemption applications. Smart IRS officials don’t have to be told that such behavior will earn them the notice and approval of the White House.

The White House and its media allies continue to claim that the lack of a “smoking gun” exonerates the President. The politicization of the federal bureaucracy, however, does not require direct Presidential orders. Nor does this problem exist only for Democrats. Republicans do pretty much the same thing. They just have different friends and constituents. When a Republican inhabits the White House, ambitious bureaucrats will look for a way of pleasing those political operatives. It’s interesting to look at the case of Victoria Nuland, a career foreign service officer recently implicated in the distortion of the Benghazi talking points. The press is puzzled by the fact that Ms. Nuland could have served in a senior position in the Clinton Administration, then as a foreign policy advisor to Vice President Cheney and now as a loyal member of the Obama foreign policy team. Such behavior is in fact common in Washington. Loyalty to those currently in power is a good strategy for career advancement.

The root cause of this problem is not the behavior of political operatives or professional civil servants, behavior well understood by our Founding Fathers. The problem is rather the enormous breadth of federal authority and the sheer vagueness of the laws passed by Congress. Congress appropriates huge sums of money, like the alternative energy loan guarantee program, with minimal guidance as to how the money is to be spent. Can anyone be surprised that businesses will try to access these funds or that the political operatives who run the White House will try to direct these funds to improve their political position? Congress also frequently passes laws with wonderful names like The Affordable Care Act and then delegates to the bureaucracy the job of actually defining and implementing the law. Can anyone be surprised that the political operatives who run the White House will try to use these complex and ambiguous statutes for political purposes? Those who are surprised are demonstrating one of the fundamental flaws of central planning: the belief that federal policies are implemented by experts interested primarily in helping out the citizenry.

The solution is not to throw out the Democrats and elect Republicans. That might help on the margin, but don’t expect too much. Politicians are politicians. The current White House “scandals” will serve only to reconfirm that the federal bureaucracy can be used for blatantly political purposes with complete impunity. The only solution is to limit the size and scope of the federal government.

Posted by: bmeverett | May 22, 2013

Same Old, Same Old from the New York Times

For the last 20 years or so, the Climate Community has predicted that scientific evidence would provide ever-increasing support for the catastrophic climate change hypothesis. After gleefully trumpeting the rising atmospheric temperatures of the 1990s, they have been baffled by the last 15 years, during which global temperatures have not risen at all. On May 10, the Times published a front-page article rehashing the same old climate change arguments. The article by Justin Gillis was entitled “Heat-Trapping Gas Passes Milestone, Raising Fears”. I agree with the Times on one point: the Climate Community’s arguments offer fear, rather than science, as their primary motivation for taking action.

Although I have addressed these arguments in many previous posts, I feel obligated to repeat the arguments whenever the Times drags out these leftovers. Here goes one more time.

NYT: “The level of the most important heat-trapping gas in the atmosphere, carbon dioxide, has passed a long-feared milestone, scientists reported Friday, reaching a concentration not seen on the earth for millions of years.”
Reply: The “long-feared” milestone of 400 parts per million of carbon dioxide is entirely arbitrary. Millions of years is a long time in human history, but a very short time on a geological scale. Measurements show atmospheric carbon concentrations of 4,000 parts per million at various points in the Earth’s history. An interesting set of questions: (1) how did carbon concentrations reach such levels when there were no humans to burn fossil fuel, (2) how did carbon concentrations fall back below 200 parts per million before there were any human beings to institute carbon mitigation policies and (3) do high carbon concentrations cause temperature increases, or do temperature increases cause high carbon concentrations? The troubling fact is not so much that the Climate Community has no answers to these questions, but that they have no interest in them.

NYT: “Indirect measurements suggest that the last time the carbon dioxide level was this high was at least three million years ago, during an epoch called the Pliocene. Geological research shows that the climate then was far warmer than today, the world’s ice caps were smaller, and the sea level might have been as much as 60 or 80 feet higher. Experts fear that humanity may be precipitating a return to such conditions — except this time, billions of people are in harm’s way.”
Reply: Some but by no means all experts believe this to be true. The Times defines an expert as someone who supports the catastrophic climate change hypothesis and then cites these experts as the definitive authority. This fallacy is known in logic as the “No True Scotsman” fallacy, which goes as follows. A Scottish politician claims that no Scotsman opposes independence from Great Britain. When someone points out that opinion polls show 59% of Scotsmen oppose independence, he restates his proposition as “No true Scotsman opposes independence”, thus reducing his statement to a tautology. If “experts” are defined as scientists supporting the catastrophic climate change hypothesis, then it is indeed true that all experts support the hypothesis. True, but meaningless.

NYT: “Countries have adopted an official target to limit the damage from global warming, with 450 parts per million seen as the maximum level compatible with that goal. “Unless things slow down, we’ll probably get there in well under 25 years,” Ralph Keeling said. Yet many countries, including China and the United States, have refused to adopt binding national targets.”
Reply: Lots of countries, particularly in Europe, talk a lot about reducing carbon emissions, but none has taken any serious steps in that direction. If you ask Europeans, they will point proudly to their successful implementation of the Kyoto Protocol, the European Carbon Trading System, their extensive subsidies for renewable energy and their even more ambitious targets for the future. Sounds great, but European carbon emissions have declined by 7% since 2000. The US, which the Times casts as the laggard, has reduced its carbon emissions by exactly the same amount.

NYT: “Climate-change contrarians, who have little scientific credibility but are politically influential in Washington, point out that carbon dioxide represents only a tiny fraction of the air — as of Thursday’s reading, exactly 0.04 percent. “The CO2 levels in the atmosphere are rather undramatic,” a Republican congressman from California, Dana Rohrabacher, said in a Congressional hearing several years ago. But climate scientists reject that argument…”
Reply: Seriously? How can a journalist write a sentence like that? Most climate change contrarians are in fact distinguished scientists themselves, not Congressmen. Check, for example, the writings of Professor William Happer of Princeton. Oh, I forgot. They are not “true” climate scientists.

NYT: “Scientists say that unless far greater efforts are made soon, the goal of limiting the warming will become impossible without severe economic disruption.”
Reply: I guess once again the Times means “true” scientists. In fact, the cost of even a small reduction in US carbon emissions today would be very high and, given China’s huge emissions growth, would contribute nothing of any significance to the concentration of carbon in the atmosphere.

NYT: “Research shows that even at such low levels [those prevailing today], carbon dioxide is potent at trapping heat near the surface of the earth.”
Reply: This statement is untrue. Projected levels of carbon dioxide emissions would by themselves cause only modest warming of around 1° C over the next hundred years – similar to the warming we have seen over the last hundred years. The catastrophic climate change predicted by the Climate Community is based entirely on assumptions regarding the indirect effects of warming on cloud cover, ocean absorption and other components of climate. Catastrophic scenarios assume that all feedback effects amplify rather than reduce the warming effect of carbon. The same assumption underlies many ecological arguments: environmentalists often assume that ecosystems are extremely fragile and will collapse when disturbed. In reality, ecosystems – and the climate as well – are quite robust and can reestablish a new equilibrium. This tendency is known in science as Le Chatelier’s principle. The catastrophic climate hypothesis may be true, but it is not supported by any research, only by the opinions of some climate advocates.
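
To see why the feedback assumptions do all the work, consider the standard linear-feedback arithmetic. This is an illustration of the logic, not a result from any particular climate model: a direct warming dT0 becomes dT0 / (1 - f) when the net feedbacks sum to a factor f.

```python
# Illustrative linear-feedback arithmetic. A direct, no-feedback warming dT0
# becomes dT0 / (1 - f) when the net feedbacks sum to f. Values are illustrative.
def amplified_warming(dT0, f):
    return dT0 / (1 - f)

dT0 = 1.0   # direct, CO2-only warming in deg C (the round number above)
for f in (-0.5, 0.0, 0.5, 0.65):
    print(f"net feedback f = {f:+.2f} -> total warming {amplified_warming(dT0, f):.1f} C")
```

Only a strongly positive f produces catastrophic numbers; a negative, damping feedback of the Le Chatelier type yields less warming than the direct effect alone.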

One of my very best Fletcher students recently commented that climate science will sooner or later have to be replaced by actual science. Now would be a good time.
