The November 2009 issue of Scientific American has a cover story by Mark Z. Jacobson (Professor, Stanford) and Mark A. Delucchi (researcher, UC Davis). It’s entitled “A path to sustainable energy by 2030” (p 58 – 65; they call it WWS: wind, water or sunlight). This popular article is supported by a technical analysis, which the authors will apparently submit to the peer-reviewed journal Energy Policy at some point (or may have already done so). Anyway, they have made both papers available for free public download here.
So what do they say? In a nutshell, their argument is that, by the year 2030:
Wind, water and solar technologies can provide 100 percent of the world’s energy, eliminating all fossil fuels.
Big claim. Does it stack up? Short answer, no. Here I critique the 100% WWS plan (both articles).
The articles are structured around 7 parts: (1) A discussion of ‘clean energy’ technologies and some description of different plans for large-scale carbon mitigation. (2) The amount and geographic distribution of available resources [wind, solar, wave, geothermal, hydro etc.] are evaluated, globally. (3) The number of power plants or capture devices required to harness this energy is calculated. (4) A limit analysis is undertaken, to determine whether any technologies are likely to face material resource bottlenecks that risk stymieing their large-scale deployment. (5) The question of ‘reliability’ of energy generation is discussed. (6) The projected economics of this vision are forecast. (7) The policy approaches required to turn vision into reality are reviewed.
In this post I want to concentrate on (5) and (6) — what I consider to be “The Bad”. But first, let’s look quickly at “The Good” (actually, more like the “Okay”) and then the really “Ugly” parts.
Most of the content of the twin papers is devoted to making the banal point that there is a huge amount of energy embodied in ‘wind, water and sunlight’ (“Plenty of Supply”), and that a wide diversity of technologies has been developed to try to harness this into usable electrical power. No critic of large-scale renewable energy would argue any differently, and the size of these resources has been covered in detail by David MacKay. In that context, I wonder what they hope to add to the literature? There’s nothing wrong with this section, and it’s well explained, but it’s just standard, rehashed fare.
Next comes a simple extrapolation of the total number of wind turbines, solar thermal facilities, etc. required to deliver 11.5 TWe of average power (close to my figure of 10 TWe in TCASE 3). This part is similar to that which I provided in TCASE 4 except they use a mix of contributing technologies rather than considering a hypothetical limit analysis for each technology individually. Curiously though, they never really explain (in either paper) how they came up with their scenario’s relative mix of hydro capacity, millions of wind turbines, billions of solar PV units, and thousands of large CSP plants, wave converters, and so on — except in pointing out that some resources are more abundant in deployable locations than others (see Table 2 of the tech paper). They do provide a useful discussion of possible material component bottlenecks for different techs (e.g. Nd for permanent magnets in wind turbines, Pt for hydrogen fuel cells, In/Ga etc. for solar PV), and argue how they can be plausibly overcome via recycling and substitution with cheaper/more abundant alternatives. This bit is quite good.
So what’s “The Ugly”? Well, it’s something utterly egregious and deceptive. In the Sci Amer article, the following objection is raised in order to dismiss the fission of uranium or thorium as clean energy:
Nuclear power results in up to 25 times more carbon emissions than wind energy, when reactor construction and uranium refining and transport are considered.
Hold on. How could this be? I’ve shown here that the “reactor construction” argument is utterly fallacious – wind has a building material footprint over 10 times larger than that of nuclear, on an energy parity basis. Further, Peter Lang has shown that wind, once operating, offsets 20 times LESS carbon per unit energy than nuclear power, when a standard natural gas backup for wind is properly considered. I’ve also explained in this post that the emissions stemming from mining, milling, transport and refining of nuclear fuel are vastly overblown, and are of course irrelevant for fast spectrum and molten salt thorium reactors. So…?
Well, you have to look to the technical version of the paper to trace the source of the claim. It comes from Jacobson 2009, where he posited that nuclear power means nuclear proliferation, nuclear proliferation leads to nuclear weapons, and this chain of events leads to nuclear war, so they calculate (?!) the carbon footprint of a nuclear war (integrating a probability of 0–1 over a 30-year period). I quote:
4d. Effects of nuclear energy on nuclear war and terrorism damage
Because the production of nuclear weapons material is occurring only in countries that have developed civilian nuclear energy programs, the risk of a limited nuclear exchange between countries or the detonation of a nuclear device by terrorists has increased due to the dissemination of nuclear energy facilities worldwide. As such, it is a valid exercise to estimate the potential number of immediate deaths and carbon emissions due to the burning of buildings and infrastructure associated with the proliferation of nuclear energy facilities and the resulting proliferation of nuclear weapons. The number of deaths and carbon emissions, though, must be multiplied by a probability range of an exchange or explosion occurring to estimate the overall risk of nuclear energy proliferation. Although concern at the time of an explosion will be the deaths and not carbon emissions, policy makers today must weigh all the potential future risks of mortality and carbon emissions when comparing energy sources.
Really, need I say more? Can it really be that such wildly conjectural nonsense is acceptable as a valid scientific argument in the sustainable energy peer-reviewed literature? It seems so, which suggests to me that this academic discipline needs a swift logical kick up its intellectual rear end.
So, on to the grand renewables plan. The fulcrum upon which the whole WWS analysis pivots is the section entitled “Reliability”. Here’s where the steam and mirrors of their WWS dream (sorry, solar thermal pun) really starts to blow off into the atmosphere and shatter on the ground.
First, the authors cite ‘downtime’ figures for each technology (i.e., the period of unscheduled maintenance, as opposed to scheduled outages). From this, they leave the uninitiated reader with the distinct impression (especially in the Sci Amer piece) that wind and solar PV are actually more ‘reliable’ than coal! (Who knew? We’d better tell the utilities). They also say that unscheduled downtimes for distributed WWS technologies will have less impact on grid stability than when a large centralised power plant suddenly drops out. Sorry, but I just don’t get this. If the downtime of solar PV is 2%, for instance, and you have 1.7 billion 3 kW units installed worldwide (their calculated figure), then 34 million of them are out at any one time. That seems rather significant to me…
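To make that last bit of arithmetic explicit, here’s a quick back-of-envelope sketch in Python. The inputs are the figures quoted above (1.7 billion units of 3 kW, 2% downtime); the variable names are mine:

```python
# Back-of-envelope check of the PV downtime point above.
# Inputs are J&D's figures: ~1.7 billion rooftop units of 3 kW each,
# with a 2% unscheduled downtime rate.
units = 1.7e9        # number of 3 kW rooftop PV systems worldwide
downtime = 0.02      # fraction of units out of service at any moment

offline = units * downtime           # units offline at any one time
offline_mw = offline * 3 / 1_000     # 3 kW each; divide by 1000 for MW

print(f"{offline:,.0f} units offline at any one time")  # 34,000,000
print(f"~{offline_mw:,.0f} MW of capacity offline")     # ~102,000 MW
```

That is, 2% downtime on a fleet that size means about 34 million units, or roughly 100 GW of capacity, out at any instant.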
Next, to overcome intermittency, they claim that for an array of 13-19 wind farms, spread out over an 850 x 850 km region and hypothetically interconnected:
… about 33% of yearly-averaged wind power was calculated to be useable at the same reliability as a coal-fired power plant.
Let’s parse this. By reliability of the coal plant, I assume in this context that they mean its capacity factor (rather than unscheduled outages), which would be around 85% of peak output. Now, wind in excellent sites has a capacity factor of ~35%, so the yearly-averaged power of a hypothetical 10 GW peak wind array of 13-19 farms would be 3.5 GW. Following their statement, 33% of 3.5 GW — that is, 1.15 GW or ~12% of peak capacity — would be available 85% of the time. Or, to put it another way, we’d need to install 10 GW of peak wind to replace the output of 1.4 GW of coal? Is that what they are saying? Did they cost this? (hint: no, see below). Perhaps someone else can confirm or reject my interpretation of the statements on p19 of the tech paper.
Also, consider this. Say we instead installed 20 GW peak over this 850 x 850 km area. We’d still only be able to deliver 20 x 0.35 x 0.33 = 2.3 GW of baseload-equivalent power. That is, adding more and more wind doesn’t help with system reliability, as it would for coal. I suppose the overall system reliability might get a little better as you spread your wind farm array over increasingly large geographical areas, but I suspect that this would be a case of rapidly diminishing returns. How can such a scheme be considered economic?
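My reading of their claim, in the two paragraphs above, can be boiled down to a few lines of Python. The assumed inputs (35% capacity factor, 33% usable fraction, 85% coal capacity factor) are the ones stated above; the function name is mine:

```python
# Sketch of the wind "reliability" arithmetic above, under my reading
# of the J&D claim that 33% of yearly-averaged wind output is usable
# at the same reliability as a coal plant.
def baseload_equivalent(peak_gw, capacity_factor=0.35, usable_fraction=0.33):
    """GW deliverable at ~coal-plant reliability from an interconnected array."""
    return peak_gw * capacity_factor * usable_fraction

COAL_CF = 0.85  # coal plant capacity factor assumed above

for peak in (10, 20):
    firm = baseload_equivalent(peak)
    coal_replaced = firm / COAL_CF
    print(f"{peak} GW peak wind -> {firm:.2f} GW firm, "
          f"replacing ~{coal_replaced:.1f} GW of coal capacity")
```

Note that the firm output scales linearly with installed peak capacity: doubling the wind build from 10 to 20 GW doubles the baseload-equivalent output but does nothing to improve the fraction of nameplate capacity you can count on.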
(Note: I’m not arguing for coal here, just using the power technologies given in their example. For me, insert nuclear instead).
Then they introduce ‘load-matching’ renewables. For instance, they present a “Clean Electricity 24/7” figure for California (see above), in which geothermal, wind, solar and hydro together provide a perfect match to an average power demand curve for CA for a given month (July in this figure). Strangely though, they neglect to mention what happens during the many imperfect, less-than-average days, when it’s cloudy and/or calm for some or most of the day and night (or strings of days/nights), or how much extra capacity is needed in winter months. How is the gap filled if either or both of wind/solar is mostly unavailable? Do the residents of CA go without electricity on those days? Err, no. Apparently, in these instances, grid operators must ‘plan ahead for a backup energy supply’. Riiiight. Where does this come from again, and how will this be costed into the WWS economic equation?
I could go on here, but won’t. This post is already getting way too long, and besides, many of these points will be topics, in and of themselves, in future TCASE posts.
As you’d have already gathered from the above, the economics of WWS is pretty strange. Here’s another example:
Power from wind turbines, for example, already costs about the same or less than it does from a new coal or natural gas plant, and in the future is expected to be the least costly of all options.
How can they justifiably say this, and yet neglect to mention that the power these technologies produce is variable in quantity, low quality (in terms of frequency control), not dispatchable, diffuse (thereby requiring substantial interconnection), and that their projected energy prices don’t include costs of backup? In other words, in the real world, what exactly does the above quoted statement mean? Nothing meaningful that I can see.
They make a token attempt to price in storage (e.g., compressed air for solar PV, hot salts for CSP). But tellingly, they never say HOW MUCH storage they are costing in this analysis (see table 6 of tech paper), nor how much extra peak generating capacity these energy stores will require in order to be recharged, especially on low yield days (cloudy, calm, etc). Yet, this is an absolutely critical consideration for large-scale intermittent technologies, as Peter Lang has clearly demonstrated here. Without factoring in these sorts of fundamental ‘details’ — and in the absence of crunching any actual numbers in regards to the total amount of storage/backup/overbuild required to make WWS 24/365 — the whole economic and logistical foundation of the grand WWS scheme crumbles to dust. In sum, the WWS 100% renewables by 2030 vision is nothing more than an illusory fantasy. It is not a feasible, real-world energy plan.
I also see that they are happy to speculate about dramatic future price drops for solar PV and concentrating solar thermal with up to 24 hours of future storage (although even they admit it would not provide sufficient power in winter – what do we do then, I wonder? Have huge capacities of coal and gas on idle and as spinning reserve?). Well, I guess that if analysts like Jacobson and Delucchi are willing to forecast such optimistically low costs for future solar, then we can be quite comfortable doing the same for IFR and LFTR, the Gen IV nuclear technologies. What’s good for the goose…
Finally, a quick note on the section “Policy Approaches”. I found one thing particularly amusing. They start by emphasising the critical need for feed-in tariffs (FITs), to subsidise the initial deployment of WWS technologies, because these deliver a necessary kick start towards lower future costs. It’s ironic then, that they end with a quote from Benjamin Sovacool (2009) which says:
Consumers practically ignore renewable power systems because they are not given accurate price signals about electricity consumption. Intentional market distortions (such as subsidies), and unintentional market distortions (such as split incentives) prevent consumers from becoming fully invested in their electricity choices.
Well, excuse me, but if FITs, and WWS technologies that are priced without adequate storage/backup, are not market distortions and subsidies, then what the hell is?
By profession I do transmission studies for wind and solar clients. My company is TAC, meaning Transmission Adequacy Consulting, and I am currently doing studies all across the US. “A path to sustainable energy by 2030″ omits the transmission system needed by 2030. Because the wind, solar, water and geothermal projects are not in the locations of the existing power plants, new lines will be needed.
Looking at the graph on page 63, and carefully measuring scales on the graph, I estimate that there is 40,000 MW of wind and 40,000 MW of centralized solar on that graph. I omitted rooftop solar because Jacobson estimates its contribution to be rather small. For example, multiplying out the numbers on page 61 you will get 5.1 TW of rooftop solar and 26.7 TW of large-scale solar in farms of 300 MW size, much like wind farms. This seems reasonable since centralized solar is twice as cost-effective as rooftop solar. Since the rooftop solar contribution is small I will omit it from these comments.
That leaves us needing 80,000 MW of new wind, solar and geothermal generation just to serve California. I think an estimate of 500 miles from wind and solar resources to major load centers is reasonable. A 500 kV transmission line is rated at about 2000 MW maximum power, but you don’t want to operate it at that power level because the losses are too high and there is no reserve capacity in the line to handle the first contingency. Therefore I will estimate that we will load the new 500 kV lines to about 1500 MW on average.
So we have 80,000 MW of renewable sources widely scattered around the Western System (WECC), with each line carrying 1500 MW, so we need roughly 50 new 500 kV lines of 500 miles each, for a total length of 25,000 miles.
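The commenter’s line-count estimate can be checked in a few lines. All inputs are his own rough figures from the paragraphs above (80,000 MW of new capacity, 1500 MW practical loading per 500 kV line, 500 miles per line), not measured values:

```python
# Checking the 500 kV line-count estimate above (commenter's figures).
new_capacity_mw = 80_000   # new wind + solar + geothermal for California
mw_per_line = 1_500        # practical average loading of a 500 kV line
miles_per_line = 500       # assumed average distance to load centers

lines_exact = new_capacity_mw / mw_per_line   # ~53.3, rounded to "roughly 50"
rounded_lines = 50
total_miles = rounded_lines * miles_per_line

print(f"~{lines_exact:.0f} lines by straight division; "
      f"at {rounded_lines} lines x {miles_per_line} mi = {total_miles:,} miles")
```

Straight division actually gives ~53 lines; the commenter rounds down to 50, which is where the 25,000-mile figure comes from.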
The article assumes there is little solar power energy storage, and it also assumes the wind will be blowing at night. We know for sure that solar power is not available at night, so we are nearly totally dependent on wind for night-time energy. You are going to ask about geothermal energy. One geothermal project I recently did a transmission access study for looked like a good project until the geothermal energy extraction failed to work. Recently, other geothermal projects have created human-induced earthquakes. Geothermal energy seems less likely today than just a few years ago.
So we are nearly totally dependent on wind energy for the night-time CA energy as envisioned in the 100% renewables by 2030. If we plan for those few occurrences when there is no wind in the WECC system, we must interconnect WECC with the rest of the US so CA can draw power from other wind generators that do have wind (hopefully) outside the WECC area, such as the Texas coast and east of the Rocky Mountains, where massive wind farms can be constructed. However, we will need at least 40,000 MW of lines that I estimate will average 2000 miles in length. If we used 500 kV lines, we would need about 25 of these lines bridging from WECC to the US eastern grid and ERCOT, and the total length would be about 50,000 miles. By 2030 we would need 75,000 miles of new 500 kV lines just to serve California with 100% renewables. Considering that we have the period from 2010 to 2030, that means we would have to construct about 4000 miles of new 500 kV lines every year from now until 2030 for the renewables plan as outlined in this article to work.
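Tallying the mileage and build-rate claims above, again using only the commenter’s own figures (50 lines of 500 miles within WECC, 25 inter-tie lines of 2000 miles, a 2010–2030 build window, and the ~$2 million per mile quoted in the next paragraph):

```python
# Totalling the transmission mileage estimate above (commenter's figures).
wecc_lines, wecc_miles_each = 50, 500     # new lines within WECC
tie_lines, tie_miles_each = 25, 2_000     # WECC to Eastern grid / ERCOT ties
years = 2030 - 2010                       # available construction window

total_miles = wecc_lines * wecc_miles_each + tie_lines * tie_miles_each
miles_per_year = total_miles / years
cost_dollars = total_miles * 2_000_000    # ~$2M per mile of 500 kV line

print(f"{total_miles:,} miles of new 500 kV line")
print(f"~{miles_per_year:,.0f} miles per year (rounded up to ~4000 above)")
print(f"~${cost_dollars / 1e9:.0f} billion at $2M/mile")
```

The exact rate works out to 3,750 miles per year, which the commenter rounds to about 4,000; the implied capital cost, at his per-mile figure, is on the order of $150 billion for the lines alone.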
How much do these lines cost? Probably about 2 million dollars per mile. Also, the 500 miles is just an estimate. If you have specific projects in mind, that eliminates some of the uncertainty in estimating costs. For example, the distances might be less to wind generators. However, I suspect that opposition to the wind generators’ unsightliness and opposition to power lines will result in longer paths, with lines zig-zagging around the countryside and the wind generators not allowed anywhere on the coast; I understand that Mexico is the desirable place for wind. But if you were to string out 40,000 MW of wind, I bet you would find that 500 miles was not that bad a guesstimate after all. The first few sites might be closer to load centers, but opposition is likely to drive them farther away. The construction time for lines is mostly how long it takes to acquire all the ROW and get approval to build the lines. How many years will a line be held up in hearings? Add one year to that number of years, and you have roughly the time it takes to build a new line. Now try to build new lines across the Rockies and see how long that will take – decades I predict, if ever.
In sum, I do not believe this is achievable at all. The concept envisioned in the SA article is not a workable plan, because the transmission problems have not been addressed. The lines aren’t going to get built. The wind is not going to interconnect. The SA article plan is not even a desirable plan; the environmental impact and cost would be horrendous. Let’s get realistic.
Filed under: Renewables