Environmentalism in the mud: responding to Jim Green’s attack on Barry Brook

Guest Post by Ben Heard. Ben is Director of Adelaide-based advisory firm ThinkClimate Consulting, a Masters graduate of Monash University in Corporate Environmental Sustainability, and a member of the TIA Environmental and Sustainability Action Committee. After several years with major consulting firms, Ben founded ThinkClimate and has since assisted a range of government, private and not-for-profit organisations to measure, manage and reduce their greenhouse gas emissions and move towards more sustainable operations. Ben publishes regular articles aimed at challenging thinking and perceptions related to climate change and sustainable energy at decarbonisesa.com.

Ed: This is a cross-post from Decarbonise SA.

————

This has got to stop, and it stops when people start taking a stand… The schism in environmentalism over nuclear power is now well underway. It is sad that the other side seem to have decided, in their righteousness, that they are allowed to play dirty and go after individuals, using the same cherry-picking abuse of science that is all too familiar in climate change denial.

I was saddened this week to be forwarded a hatchet job on my friend and collaborator, Professor Barry Brook, authored by Jim Green of Friends of the Earth (FoE). Saddened, but not surprised. FoE has form in this department, having deployed these guerrilla tactics before against James Lovelock when he became inconveniently persuasive on the subject of nuclear power. It would seem that it is now Barry’s turn.

Jim Green, Australia’s anti-nuclear campaigner for Friends of the Earth

I have come to know Barry very well over the last 12 months. I know him well enough to know that he is both the last person who would ask to be defended, and the most deserving of defence. So I offer this response to Green’s work. I really, dearly hope it will be read outside my circle of existing readers and supporters. I have some important things to say.

Green begins by getting some things really, really right. Namely, that Brook is highly qualified, highly regarded, extensively published, completely independent of the nuclear industry, and operating from a genuine concern about climate change. When you add to that the fact that he is highly influential, it becomes easy to understand why FoE have resorted to getting the hatchet out.

We are told Barry glibly believes “it’s nuclear power or it’s climate change”. This is an inaccurate and out-of-context portrayal of his position. It is a deeply considered and thoroughly researched position from a highly qualified scientist, the head of Climate Science at Adelaide University no less. It also happens to be a position that is largely shared by a long and growing list of prominent environmentalists (including the aforementioned Lovelock, James Hansen, George Monbiot and Mark Lynas) who have taken themselves through a similar process of critical examination of this problem as has Barry.

Barry Brook, Sir Hubert Wilkins Chair of Climate Change, Adelaide University. Prominent Australian nuclear advocate and founder of Brave New Climate

More times than I can recall, Barry has made the point that he does not care which technology does the job of rapid decarbonisation to avoid the worst effects of climate change. It is simply his well researched opinion that the central technology will need to be nuclear power or we will not succeed. Others are free to agree or disagree with him. But he states his case so cogently and robustly that every day more and more people are compelled to agree.

To suggest he is in error, Green refers to other, non-nuclear plans that supposedly demonstrate the redundancy of nuclear, including a 2011 piece by Dr Mark Diesendorf of the University of NSW. I’m familiar with the Diesendorf study. I read both a critique of it and then a rebuttal from Diesendorf himself at this great site called Brave New Climate, run by a guy called Barry Brook. You see, Barry (and therefore BNC) is not remotely concerned by robust debate on energy solutions. He positively encourages it, including running a very interesting and useful piece from none other than Jim Green! BNC is probably the best moderated, and therefore most reliable, place on the Australian web for robust, genuine debate.

Read more »

Dietary Guidelines Committee ignores climate change

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. His previous article on BNC was: Feeding the billions on a hotter planet (Part III).

He also wrote a brilliant recent piece for The Punch: Fukushima was no disaster, no matter how you spin it

——————

IPCC calls to reduce meat consumption

Back in 2008, head of the IPCC Rajendra Pachauri told the world to eat less meat because of its large greenhouse footprint.

At about the same time the National Health and Medical Research Council appointed a committee to update Australia’s Dietary Guidelines … last issued in 2003. The preface from the 2003 document is clear:

“The Australian Food and Nutrition Policy is based on the principles of good nutrition, ecological sustainability and equity. This third edition of the Dietary Guidelines for Australian Adults is consistent with these principles. The food system must be economically viable and the quality and integrity of the environment must be maintained. In this context, among the important considerations are conservation of scarce resources such as topsoil, water and fossil fuel energy and problems such as salinity.”

The Terms of Reference give no instructions about what the committee should do other than to update the documents with the best available science. Environmental issues were clearly worthy of lip-service in 2003, if nothing else. Any reasonable update to the 2003 document should see those issues front and centre.

Our impacts on the climate will flow on into most other environmental issues, whether we are concerned with other species, or more narrowly focused on the habitability of the planet for our own. If food choices have a significant impact on climate forcings, then documenting and explaining the extent of those impacts to the public should have been front and centre in the workings of this committee. In addition to the head of the IPCC, no lesser scientific authority than NASA climate scientist James Hansen said in 2009:

If you eat further down on the food chain rather than animals, which have produced many greenhouse gases, and used much energy in the process of growing that meat, you can actually make a bigger contribution in that way than just about anything. So that, in terms of individual action, is perhaps the best thing you can do.

He made an equivalent statement to me in 2008 and advised that he was changing his own diet and was “80-90% vegetarian”.

We shall see later that Hansen’s claim is easily supported.
Read more »

Further critique of ‘100% renewable electricity in Australia’ – winter demand and other problems

Recently on BNC, I ran two guest posts on the economic and technical challenges of supplying an energy-intensive, developed-world market using 100% renewable sources (under a situation where large hydro and/or conventional geothermal can provide little or no contribution). The case study was the national electricity market of Australia, with an average demand of 25-30 GWe.

100% renewable electricity for Australia – the cost

and the response, from one of the authors of the original simulation study:

100% Renewable Electricity for Australia: Response to Lang

Below is a further commentary, by Ted Trainer of UNSW, which focuses particularly on the issues of supplying winter demand, the feasibility of the biomass option for the gas backup, and the “big gaps” problem (i.e., long-run gambler’s ruin). Ted asked me to post it here on BNC to solicit constructive feedback (and has promised me he will be responding to comments!).

————-

Comments on

“Simulations of scenarios with 100% renewable electricity in the Australian National Electricity Market”, Solar 2011, 49th AuSES Annual Conference, 30 Nov – 2 Dec, by Ben Elliston, Mark Diesendorf and Iain MacGill, UNSW.

Ted Trainer; 21.3.2012

The paper outlines a supply pattern whereby it is claimed that 100% of present Australian electricity demand could be provided by renewable energy.

The following notes indicate why I think that although technically this could be done, we could not afford the capital cost. This is mainly because the analysis seems to significantly underestimate the amount of plant that would be required.

I think this is a valuable contribution to the discussion of the potential and limits of renewable energy. It takes the kind of approach needed, focusing on the combination of renewable sources that might meet daily demand. However, it is not difficult to set out a scenario whereby this might be done technically; the problems are how much redundant plant would be needed to deal with fluctuations in renewable energy sources, and what the capital cost of that plant would amount to.

Two of the plots given set out the contributions that might be combined to meet daily demand over about 8 days in 2010, in summer and winter. It seems to me that when these contributions are added the total capacity needed is much more than the paper states.

Australia’s recent history of energy use by source

The task is to supply 31 GW. The plots given show that at one point in time wind is contributing a maximum of 13.5 GW, but at other times its contribution is close to zero, meaning that other sources are backing up for it. The corresponding peak inputs from the other sources are: PV 9 GW, solar thermal 27 GW, hydro 5 GW and gas from biomass 24 GW. Thus the total amount of plant required would be 75.5 GW of peak capacity… to supply an average 31 GW. (In his response to Peter Lang, Mark Diesendorf says their total requirement is 84.9 GW.) That’s the magnitude of the redundancy problem, and this is the major limiting factor for renewables: the need for a lot of back-up plant, which will sit idle much of the time.
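To put a rough number on that redundancy, here is a minimal arithmetic sketch (an editorial illustration using only the totals quoted above, not a calculation from the paper) of how much plant sits behind each gigawatt of average demand:

    # Redundancy ratio: peak plant capacity per GW of average demand,
    # using only the totals quoted in the paragraph above (illustrative).
    average_demand_gw = 31.0
    totals = {
        "Trainer's reading of the plots": 75.5,   # GW
        "Diesendorf's response to Lang": 84.9,    # GW
    }
    for label, capacity_gw in totals.items():
        ratio = capacity_gw / average_demand_gw
        print(f"{label}: {capacity_gw:.1f} GW installed, about {ratio:.1f} GW per GW of average demand")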

Read more »

How realistic is The Economist’s cool view of nuclear power?

Last week, the influential weekly news and international affairs publication, The Economist, ran an essay on the future of nuclear energy – The dream that failed: Nuclear power will not go away, but its role may never be more than marginal.

As you might have guessed from the title, it was decidedly cool towards nuclear’s future prospects. Below I sketch some thoughts on what was wrong (and right) about the article. Interestingly, I understand that the author of this piece (Oliver Morton) will be joining us at the Breakthrough Dialogue in San Francisco in June 2012 — so I’m sure we’ll have some robust dinner conversations!

In its assessment of the current situation in Japan — 52 of its 54 reactors shuttered (at least 6 permanently), 100,000 people displaced by the evacuation resulting from the 20 km exclusion zone, and the speculation that Japan’s share of nuclear in the country’s electricity mix over the next few decades could decline rapidly or evaporate completely — the article is accurate and suitably sanguine.

The energy supply problems Japan now faces, due to the lack of baseload electricity for heavy industry and domestic consumption, are putting real pressure on the economy, and of course on the social fabric of the nation and the people’s respect for government.

As reported by The Breakthrough Institute blog, costly imports of fossil fuels to partially cover the shuttered reactors have led to a chronically increasing fuel bill and the country’s first trade deficit in 30 years (to the tune of $32 billion).

From a climate change perspective, it also looks bad — emissions are rising steeply as the Japanese electricity sector once again ‘goes fossil’, as illustrated in the carbon-intensity-from-energy chart below:

An obvious question to ask is, would Japan have faced this situation today if it had never pursued nuclear energy? I think the answer is two-fold:

Cosmo refinery fire – who knew, who cares?

Read more »

IFR FaD 11 – sodium coolant and pool design

This is the second of a four-part series of extracts from the book Plentiful Energy — The story of the Integral Fast Reactor by Chuck Till and Yoon Chang.

Reproduced with permission of the authors, these sections describe and justify some of the key design choices that went into making the IFR a different — and highly successful — approach to fast neutron reactor technology and its associated fuel recycling.

These excerpts not only provide a fascinating insight into a truly sustainable form of nuclear power; they also provide excellent reference material for refuting many of the spurious claims on the internet about the IFR by people who don’t understand (or choose to wilfully misrepresent) this critically important technology. Click here for part 1 (metal fuels and plutonium).

The second extract, on coolant choice and reactor configuration, comes from pages 108-111 of Plentiful Energy. To buy the book ($18 US) and get the full story, go to Amazon or CreateSpace. (Note that the images below do not come from the book).

—————-

The Coolant Choice

Liquid sodium was the choice of coolant from the beginnings of fast reactor development, because the neutron energies must remain high for good breeding and sodium doesn’t slow the neutrons significantly. (Water does, and so nullifies breeding.) But sodium has other highly desirable properties too—it transfers heat easily and removes heat from the fuel quickly; it has a high heat capacity which allows it to absorb significant heat without excessive temperature rise; its boiling point is far too high for it to boil at operating temperatures, and importantly, even to boil at temperatures well above operating; and finally, although a solid at room temperature, it has a low enough melting point to stay liquid at temperatures not too far above that. In addition, there is no chemical reaction at all between the sodium and the structural materials making up the core (such as steel and zirconium). It is chemically stable, stable at high temperatures, stable under irradiation, cheap, and commonly available.

Further, as a metal, sodium does not react at all with metal fuel either, so there is no fuel/coolant interaction as there is for oxide fuel exposed to sodium. In oxide fuel, if the cladding develops a breach such reactions can form reaction products which are larger in volume than the original oxide. They can continue to open the breach, expel reacted product, and could possibly block the coolant channel and lead to further problems. Metal fuel eliminates this concern.

For ease of reactor operation, sodium coolant has one supreme advantage. Liquid at room pressures, it allows the reactor to operate at atmospheric pressure. This has many advantages. Water as a coolant needs very high pressures to keep it liquid at operating temperatures. A thousand- to two-thousand-psi pressure must be maintained, depending on the reactor design. Thick-walled reactor vessels are needed to contain the reactor core with coolant at these pressures.

The diameter of the vessel must be kept as small as possible, as the wall thickness necessary increases directly with diameter. With the room-pressure operation of sodium coolant, the reactor vessel, or reactor tank as it is called, can be any diameter at all; there is no pressure to contain. And leaks of sodium, if they happen, have no pressure behind them, they drip out into the atmosphere, where generally they are noticed as a wisp of smoke. The important thing is that there is no explosive flashing to steam as there is when water at high pressure and temperature finds a leakage path.
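To put rough numbers on that point about wall thickness (an editorial illustration, not taken from the book), the thin-wall hoop-stress relation t ≈ P·r/σ shows why a vessel holding water at around 2,000 psi must be thick-walled, while pressure containment is a non-issue for a near-atmospheric sodium tank:

    # Thin-wall pressure vessel estimate: required thickness t = P * r / sigma_allow.
    # Illustrative numbers only; the allowable stress and radii are assumptions,
    # not design values from Plentiful Energy.
    PSI_TO_PA = 6895.0

    def wall_thickness_m(pressure_psi, radius_m, allowable_stress_pa=140e6):
        """Hoop-stress approximation for the wall thickness needed to contain pressure."""
        return (pressure_psi * PSI_TO_PA) * radius_m / allowable_stress_pa

    pwr_wall = wall_thickness_m(2000, radius_m=2.0)   # high-pressure water vessel
    sfr_wall = wall_thickness_m(1, radius_m=5.0)      # near-atmospheric sodium tank

    print(f"~2,000 psi water vessel, 2 m radius: ~{pwr_wall * 100:.0f} cm of steel")
    print(f"Near-atmospheric sodium tank, 5 m radius: ~{sfr_wall * 1000:.2f} mm of steel")
    # Required thickness grows in direct proportion to both pressure and radius,
    # which is why the sodium pool can be made as large as convenient.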

Read more »

Purpose and target audience of BraveNewClimate.com

Before I write a scientific paper, I always try to identify: (1) my main message [MM], in 25 words or less, and (2) my target audience [TA]. Doing this helps focus the ‘story’ of the manuscript on a key point. Papers that try to present multiple messages are typically confusing and/or too long for busy researchers to read. It also dictates the background and specialist terminology that the reader might be safely assumed to understand, as well as guiding the choice of journal that I will submit to. For instance, a paper written for Nature requires more general context setting than one sent to Wildlife Research.

However, it occurred to me that I’ve never tried to define the main message of the BraveNewClimate.com blog, nor really reflected on who the chief audience is. So let’s try.

In reality, both have evolved over time. Back in late 2008 – early 2009, when the blog (and my thinking on climate change policy) was in its infancy, it would have read something like this:

2009 MM: Communicate the scientific evidence for anthropogenic global warming to the general public and policy makers, and advocate the need for, and urgency of, effective mitigation.

2009 TA: People seeking understanding of past climate change, current/future impacts, and the basis of modelled forecasts – all explained in relatively straightforward terms. A secondary target audience was those who were confused by, or enamored of, the repeated assertions of ‘the sceptics’.

Although I was proud to have developed the website on this scientific and philosophical foundation, neither the above MM nor the TA is appropriate to BNC’s central purpose in 2012. So let’s try again.

2012 MM: To advocate an evidence-based approach to eliminating global fossil fuel emissions, based on a pragmatic and rational mix of nuclear and other low-carbon energy sources.

2012 TA: Environmentalists who disregard or oppose nuclear energy, and instead believe that renewables are sufficient (or that continuing to rely on fossil fuels is a rational energy policy).

The main message changed because I became progressively more interested in educating people on practical solutions to the problems of global change, rather than preaching doom-and-gloom. This shift in purpose was not because I don’t still consider the impacts of climate change to be incredibly serious and the evidence (ever increasingly) compelling — I do! It’s rather that I found the generic message of: “This is really bad, we must do something!” to be ineffectual, unappealing, and frankly, depressing. Besides, there are other sites that do this very well, so I now tend to leave it in their capable hands.

Instead, I became interested (okay, obsessed is a better word) with grasping and communicating the high-level issues associated with which low-carbon energy solutions will work most effectively at displacing fossil fuels and thus ‘solving’ climate change, at scale, in time, and within reasonable costs.

Read more »

The Fukushima Question: How close did Japan really get to a widespread nuclear disaster?

I think The Breakthrough Institute guys, led by Michael Shellenberger and Ted Nordhaus, are doing great work in environmental policy and thought leadership, which is why I was delighted to become a 2012 Senior Fellow. Below I reproduce an important article published today on Slate.com, on Fukushima and the ensuing hyperventilation. Much of the post-accident speculation was constrained only by people’s imagination (which can be pretty wide ranging), and utterly failed to grapple with the fact that RISK is probability × impact. Instead, anti-nuclear types typically choose a huge, speculative impact, and then try to attach a large probability (often near certainty) to it. For truly catastrophic outcomes, the product of the many low-probability events required for initiation makes the mathematical risk a vanishingly small one.
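As a minimal numerical sketch of that point (my illustration, with invented probabilities), even an enormous impact yields a tiny expected risk once it depends on a chain of independent low-probability precursor events:

    # Risk as probability x impact, where the catastrophic outcome requires a chain
    # of independent low-probability precursor events. All numbers are invented.
    from math import prod

    precursor_probabilities = [1e-3, 1e-2, 5e-2]   # hypothetical per-year chances
    impact = 1e6                                   # arbitrary impact units

    p_catastrophe = prod(precursor_probabilities)  # 5e-7 per year
    expected_risk = p_catastrophe * impact

    print(f"P(catastrophe) = {p_catastrophe:.1e} per year")
    print(f"Expected risk  = {expected_risk:.2f} impact-units per year")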

How close did Japan really get to a widespread nuclear disaster?

By Ted Nordhaus and Michael Shellenberger

Posted on Slate Thursday, March 1, 2012, at 4:55 PM ET

With an eye to the first anniversary of the tsunami that killed 20,000 people and caused a partial meltdown at the Fukushima power plant in Japan, a recently formed nongovernmental organization called Rebuild Japan released a report earlier this week on the nuclear incident to alarming media coverage.

The crippled Fukushima Daiichi nuclear power plant in Okuma, Fukushima prefecture as of February 2012. Issei Kato/Getty Images

“Japan Weighed Evacuating Tokyo in Nuclear Crisis,” screamed the New York Times headline, above an article by Martin Fackler that claimed, “Japan teetered on the edge of an even larger nuclear crisis than the one that engulfed the Fukushima Daiichi Nuclear Power Plant.”

The larger crisis was a worst-case scenario imagined by Japanese government officials dealing with the situation. If workers at the Fukushima Daiichi plant were evacuated, Fackler writes, some worried “[t]his would have allowed the plant to spiral out of control, releasing even larger amounts of radioactive material into the atmosphere that would in turn force the evacuation of other nearby nuclear plants, causing further meltdowns.”

Fackler quotes former newspaper editor and founder of Rebuild Japan Yoichi Funabashi as saying, “We barely avoided the worst-case scenario, though the public didn’t know it at the time.”

To say that Japan “barely avoided” what another top official called a “demonic chain reaction” of plant meltdowns and the evacuation of Tokyo is to make an extraordinary claim. One shudders at the thought of the hardship, suffering, and accidents that would almost certainly have resulted from any attempt to evacuate a metropolitan area of 30 million people. The Rebuild Japan report has not yet been released to the public, but there is reason to doubt that Japan was anywhere close to executing this nightmare contingency plan.

The same day the New York Times published its story, PBS broadcast a Frontline documentary about the Fukushima meltdown that invites a somewhat different interpretation. In an interview conducted for that program, then-Prime Minister Naoto Kan suggests that the fear of cascading plant failures was nothing more than panicked speculation among some of his advisers. “I asked many associates to make forecasts,” Kan explained to PBS, “and one such forecast was a worst-case scenario. But that scenario was just something that was possible, it didn’t mean that it seemed likely to happen.”

Read more »

100% Renewable Electricity for Australia: Response to Lang

Guest post by Dr Mark Diesendorf, Institute of Environmental Studies, UNSW.

Click here for a printable 6-page PDF version of this response.

——————-

This is a personal response to Lang’s (2012) article critiquing the peer-reviewed paper Elliston, Diesendorf and MacGill (2011) ‘Simulations of scenarios with 100% renewable electricity in the Australian National Electricity Market’, referred to hereinafter as EDM (2011).

I appreciate the large amount of work that Lang has done in attempting to assess our work. However, I think his critique is premature, because he has misunderstood the intent of our work, which was clearly identified as exploratory. It is the first of a series of planned papers that will pick up on some of the issues that he has raised (and others) and step by step prepare the ground for an economic analysis. Several other questions that he raises are simply repetitions of questions that we have already raised and in some cases answered in EDM (2011).

Lang appears to be confused and mistaken in some key issues, such as the reliability of generation, where his conclusions are incorrect and potentially misleading.

Reliability of generation

Lang misunderstands and hence misrepresents our result that, in its baseline scenario, supply does not meet demand on six hours per year. He draws an incorrect conclusion from this result to claim that ‘renewable energy cannot realistically provide 100% of Australia’s electricity generation’. However, he overlooks the fact, clearly stated in the abstract, the main body and the conclusion of EDM (2011), that all our scenarios meet the same reliability criterion as the existing polluting energy system supplying the National Electricity Market (NEM), namely a maximum energy generation shortfall of 0.002%. This criterion inevitably means that any energy supply system, including the existing fossil-based system, is likely to fail to meet demand on at least several hours per year.

This is simply realistic, because no electricity supply system has 100% reliability. To achieve this ideal would require an infinite amount of back-up and hence an infinite cost. For this reason, electricity supply systems have reliability criteria such as Loss-of-Load-Probability (LOLP, the average number of hours per year that supply fails to meet demand) or energy shortfall. The NEM uses the latter. Since Lang refers to LOLP later in his article, he presumably partly understands this fundamental principle of electricity supply, yet somehow forgets this when critiquing the principal conclusion of our paper.
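For readers unfamiliar with these reliability metrics, here is a minimal sketch (my own toy example with invented hourly data, not the EDM-2011 model) of how an annual unserved-energy fraction and an LOLP-style count of unmet hours are computed from a supply and demand trace:

    # Toy reliability check on an hourly supply/demand trace (illustrative only).
    # NEM-style criterion: unserved energy must not exceed 0.002% of annual demand.
    demand = [25.0] * 8760                  # GW, flat demand for simplicity
    supply = demand[:]                      # start with supply meeting demand...
    for h in (4000, 4001, 4002, 6000, 6001, 6002):
        supply[h] = demand[h] - 0.5         # ...then force six hours of 0.5 GW shortfall

    shortfalls = [max(d - s, 0.0) for d, s in zip(demand, supply)]
    hours_unmet = sum(1 for x in shortfalls if x > 0)      # LOLP-style hours count
    unserved_fraction = sum(shortfalls) / sum(demand)      # energy shortfall fraction

    print(f"Hours with unmet demand: {hours_unmet}")
    print(f"Unserved energy fraction: {unserved_fraction:.5%}")
    print("Meets the 0.002% criterion:", unserved_fraction <= 0.00002)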

His oversight invalidates his conclusion. Hence our conclusion stands: namely that, subject to the conditions of the model, a 100% renewable electricity system is technically feasible for the NEM based on commercially available technologies.

Read more »

IFR FaD 10 – metal fuel and plutonium

Over the next month or two, I will publish four extracts from the book Plentiful Energy — The story of the Integral Fast Reactor by Chuck Till and Yoon Chang.

Reproduced with permission of the authors, these sections describe and justify some of the key design choices that went into making the IFR a different — and highly successful — approach to fast neutron reactor technology and its associated fuel recycling.

These excerpts not only provide a fascinating insight into a truly sustainable form of nuclear power; they also provide excellent reference material for refuting many of the spurious claims on the internet about the IFR by people who don’t understand (or choose to wilfully misrepresent) this critically important technology.

The first extract, on Fuel Choice, comes from pages 104-108 of Plentiful Energy. To buy the book ($18 US) and get the full story, go to Amazon or CreateSpace.

—————-

Metal Fuel

The IFR metal alloy fuel was the single most important development decision. More flows from this than from any other of the choices. It was a controversial choice, as metal fuel had been discarded worldwide in the early sixties and forgotten. Long irradiation times in the reactor are essential, particularly if reprocessing of the fuel is expensive, yet the metal fuel of the 1960s would not withstand any more than moderate irradiation. Ceramic fuel, on the other hand, would. Oxide, a ceramic fuel developed for commercial water-cooled reactors, had been adopted for breeder reactors in every breeder program in the world. It is fully developed and it remains today the de facto reference fuel type for fast reactors elsewhere in the world. It is known. Its advantages and disadvantages in a sodium-cooled fast reactor are well established. Why then was metallic fuel the choice for the IFR?

The Integral Fast Reactor (IFR) system

In reactor operation, reactor safety, fuel recycling, and waste product—indeed, in every important element of a complete fast reactor system—it seemed to us that metallic fuel allowed tangible improvement. Such improvements would lead to cost reduction and to improved economics. Apprehension that the fast reactor and its associated fuel cycle would not be economic had always clouded fast reactor development. Sharp improvements in the economics might be possible if a metal fuel could be made to behave under the temperature and radiation conditions in a fast reactor. Not just any metal fuel, but one that contained the amounts of plutonium needed for reactor operation on recycled fuel. Discoveries at Argonne suggested it might be possible.

Metal fuel allows the highest breeding of any possible fuel. High breeding means fuel supplies can be expanded easily, maintained at a constant level, or decreased at will. Metal fuel and liquid sodium, the coolant, also a metal, do not react at all. Breaches or holes in the fuel cladding, important in oxide, don’t matter greatly with metal fuel; operation can in fact continue with impunity. The mechanisms for fuel cladding failure were now understood too, and very long irradiations had become possible. Heat transfers easily too. Very little heat is stored in the fuel. (Stored heat exacerbates accidents.) Metal couldn’t be easier to fabricate: it’s simple to cast and it’s cheap. The care that must be taken and the many steps needed in oxide fuel fabrication are replaced by a very few simple steps, all amenable to robotic equipment. And spent metal fuel can be processed with much cheaper techniques. Finally, the product fuel remains highly radioactive, a poor choice for weapons in any case, and dangerous to handle except remotely.

Read more »

The Grattan Report on low-emissions energy technology – some critical comments

Guest post by Dr Ted Trainer, University of NSW (http://ssis.arts.unsw.edu.au/tsw/).

Wood, A, T. Ellis, D. Mulloworth, and H. Morrow (2012) No Easy Choices: Which Way to Australia’s Energy Future. Technology Analysis. Grattan Institute, Melbourne.

This report is a valuable addition to the literature on the prospects for renewable energy in Australia, providing some recent data on key output and cost factors. It is especially to be commended for expressing a considerable degree of caution about this possibility, and for pointing to the difficulties and problems that would have to be overcome. Almost all literature on renewable energy reinforces the faith that it can fuel energy-intensive societies, and enable a smooth transition to a carbon-free economy. Over some years I have groped towards a more confident statement of a case contradicting this position. (Trainer, 2012.)

The following brief comments indicate the strength of this case, and argue that the Grattan Report fails to recognise the reasons why it is very unlikely that the world can run on renewable energy.

The Report’s cost and output assumptions for the various renewable energy technologies seem to be in line with those in other recent documents. The explanations of the limits and difficulties associated with geothermal, carbon capture and sequestration, nuclear and biomass are especially valuable. Their estimate of biomass potential is a remarkably low c. 500 PJ of primary energy, about 8% of the present Australian total, and their discussion of the logistical problems in getting large quantities of this low-density material to generators is sobering.

I think that the major problem in the Report is that there is no analysis of the quantity of plant and the resulting capital cost of a total renewable energy supply system. Two years ago I published an attempt to do this, (Trainer, 2010a), and have now considerably improved the application of the approach based on more recent and more confident data. Trainer 2012 explores the amount and cost of plant needed to meet a 2050 world renewable energy demand assumed to be 1000 EJ of primary energy, about twice the present amount, in winter and net of long distance transmission energy losses and the embodied energy cost of the plant.

The conclusion arrived at is that the ratio of energy investment needed to GDP would be much less than derived in Trainer 2010a, but still unaffordable. It would be around 15 times as great as it is now – even though a number of significant factors difficult to quantify were not included in the analysis. These would multiply the ratio several times. (The output and capital cost assumptions used were more or less in line with those in the Grattan Report.) Combining more optimistic assumptions (including solar thermal plant costing one-quarter of today’s cost) would only reduce the total capital cost by 40%.

Read more »

100% renewable electricity for Australia – the cost

Download the printable 33-page PDF (includes two appendices, on scenario assumptions and transmission cost estimates) HERE.

For an Excel workbook that includes all calculations (and can be used for sensitivity analysis), click HERE.

By Peter Lang. Peter is a retired geologist and engineer with 40 years experience on a wide range of energy projects throughout the world, including managing energy R&D and providing policy advice for government and opposition. His experience includes: hydro, geothermal, nuclear, coal, oil, and gas plants and a wide range of energy end use management projects.

Summary

Here I review the paper “Simulations of Scenarios with 100% Renewable Electricity in the Australian National Electricity Market” by Elliston et al. (2011a) (henceforth EDM-2011). That paper does not analyse costs, so I have also made a crude estimate of the cost of the scenario simulated and three variants of it.

For the EDM-2011 baseline simulation, and using costs derived for the Federal Department of Resources, Energy and Tourism (DRET, 2011b), the costs are estimated to be: $568 billion capital cost, $336/MWh cost of electricity and $290/tonne CO2 abatement cost.

That is, the wholesale cost of electricity for the simulated system would be seven times more than now, with an abatement cost that is 13 times the starting price of the Australian carbon tax and 30 times the European carbon price. (This cost of electricity does not include costs for the existing electricity network).
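As a rough consistency check on those multiples (my own back-of-the-envelope arithmetic with assumed inputs, not Lang's actual method), an abatement cost can be approximated as the extra cost of each megawatt-hour divided by the emissions avoided per megawatt-hour:

    # Illustrative abatement-cost arithmetic. Only the $336/MWh figure comes from
    # the review above; the other inputs are assumptions for illustration.
    renewable_cost = 336.0   # $/MWh, estimated cost of the simulated system
    current_cost = 48.0      # $/MWh, assumed current wholesale price (~1/7 of 336)
    grid_intensity = 0.99    # t CO2/MWh, assumed emissions intensity being displaced

    abatement_cost = (renewable_cost - current_cost) / grid_intensity
    print(f"Approximate abatement cost: ${abatement_cost:.0f}/t CO2")   # ~$290/t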

Although it ignores costings, the EDM-2011 study is a useful contribution. It demonstrates that, even with highly optimistic assumptions, renewable energy cannot realistically provide 100% of Australia’s electricity generation. Their scenario does not have sufficient capacity to meet peak winter demand, has no capacity reserve and is dependent on a technology – ‘gas turbines running on biofuels’ – that exists only at small scale and at high cost.

Map of Australia’s transmission lines. There are no transmission lines to any of the proposed CSP sites, and the best solar areas are far removed from the existing transmission infrastructure. Source: Grattan Institute, Figure 10.1 (attributed to DRET (2010), Grattan Institute)

Introduction

I have reviewed and critiqued the paper “Simulations of Scenarios with 100% Renewable Electricity in the Australian National Electricity Market” by Elliston et al. (2011a) (henceforth EDM-2011).

This paper comments on the key assumptions in the EDM-2011 study. It then goes beyond that work to estimate the cost for the baseline scenario and three variants of it and compares these four scenarios on the basis of CO2 emissions intensity, capital cost, cost of electricity and CO2 abatement cost.

Comments on the EDM-2011 study

The objective of the desktop study by EDM-2011 was to investigate whether renewable energy generation alone could meet the year 2010 electricity demand of the National Electricity Market (NEM). Costs were not considered. The study used computer simulation to match estimated energy generation by various renewable sources to the known hourly average demand in 2010. This simulation, referred to here as the “baseline simulation”, proposed a system comprising:

  • 15.6 GW (nameplate generation capacity) of parabolic trough concentrating solar thermal (CST) plants with 15 hours thermal storage, located at six remote sites far from the major demand centres;
  • 23.2 GW of wind farms at the existing NEM wind farm locations – scaled up in capacity from 1.5 GW existing in 2010;
  • 14.6 GW of roof-top solar photovoltaic (PV) in Brisbane, Sydney, Canberra, Melbourne and Adelaide;
  • 7.1 GW of existing hydro and pumped hydro;
  • 24 GW of gas turbines running on biofuels;
  • A transmission system where “power can flow unconstrained from any generation site to any demand site” – this theoretical construct is termed a “copperplate” transmission system.

The accompanying slide presentation by Elliston et al. (2011b), particularly slides 5 to 12, provides a succinct summary of the objective and scope of their simulation study, the exclusions from the scope, the assumptions, and the results.

The results of the baseline simulation show that there are six hours during the year 2010 when demand is not met, with a maximum power supply shortfall of 1.33 GW. It should be noted that the supply shortfall would be significantly greater at higher time resolution, e.g. 5-minute data rather than the 1-hour increments used, but this limitation is not addressed by EDM-2011.
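The point about time resolution can be illustrated with a toy example (mine, with invented numbers, not EDM-2011 data): averaging a spiky 5-minute trace into hourly blocks smooths away short shortfalls entirely.

    # Toy illustration of how hourly averaging hides short supply shortfalls.
    # One hour of 5-minute data (12 intervals); all numbers are invented.
    demand_5min = [30.0] * 12                    # GW
    supply_5min = [31.0] * 12
    supply_5min[6] = 24.0                        # a single 5-minute dip of 6 GW

    hourly_supply = sum(supply_5min) / 12        # 30.42 GW
    hourly_demand = sum(demand_5min) / 12        # 30.00 GW
    worst_gap = max(d - s for d, s in zip(demand_5min, supply_5min))

    print(f"Hourly view: supply {hourly_supply:.2f} GW vs demand {hourly_demand:.2f} GW, so no shortfall is recorded")
    print(f"5-minute view: worst shortfall is {worst_gap:.1f} GW")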

The EDM-2011 approach is more realistic than Beyond Zero Emissions (2010) “Zero Carbon Australia – Stationary Energy Plan” (critiqued by Nicholson and Lang (2010), Diesendorf (2010), Trainer (2010) and others), especially because EDM-2011’s approach, as they say, “is limited to the electricity sector in a recent year, providing a more straight forward basis for exploring this question of matching variable renewable energy sources to demand.” As the authors say, “this approach minimises the number of working assumptions”.

Read more »

Open Thread 21

The previous Open Thread has dropped off the BNC front page, so it’s time for a fresh palette.

The Open Thread is a general discussion forum, where you can talk about whatever you like — there is nothing ‘off topic’ here — within reason. So get up on your soap box! The standard commenting rules of courtesy apply, and at the very least your chat should relate to the general content of this blog.

The sort of things that belong on this thread include general enquiries, soapbox philosophy, meandering trains of argument that move dynamically from one point of contention to another, and so on — as long as the comments adhere to the broad BNC themes of sustainable energy, climate change mitigation and policy, energy security, climate impacts, etc.

You can also find this thread by clicking on the Open Thread category on the cascading menu under the “Home” tab.

———————

There are two very important articles now posted on The Guardian website. The first, by Duncan Clark, is titled New generation of nuclear reactors could consume radioactive waste as fuel: The new ‘fast’ plants could provide enough low-carbon electricity to power the UK for more than 500 years.

It talks about Britain’s options for plutonium (Pu) disposal, and the GEH proposal to build a pair of S-PRISM reactors (311 MWe each) to rapidly ‘spike’ the weapons-grade Pu inventory, and thereafter consume it and spent fuel for energy. The alternative option, a new MOX plant, is far less desirable.

Tom Blees wrote a detailed explanation of this plan on BNC here: Disposal of UK plutonium stocks with a climate change focus

To accompany this piece there is an excellent new essay by George Monbiot: We cannot wish Britain’s nuclear waste away: Opponents of nuclear power who shout down suggestions of how to use spent waste as fuel will not make the problem disappear.

Read more »

Black Swan theory and the anti-nuclear sentiment

Guest Post by Elaine Hirsch. Elaine is kind of a jack-of-all-interests, from education to technology to public policy, and she currently works as a writer for various education-related sites, including an online school resource.

Black Swan Theory, as explained by Nassim Nicholas Taleb in his 2007/2010 book, The Black Swan, describes an event which is disproportionately rare and unpredictable, but has a high impact when it does occur. According to Taleb, Black Swan events include the September 11 attacks, the rise of the Internet, World War I and the development of the personal computer. Because such events cannot be predicted, they cause behavioural and psychological changes in people, especially those who adhere to the scientific method for identifying events. Statistically speaking, these outliers play a disproportionate role in public opinion and public policy.

Critics of nuclear energy point to the destructive capabilities of failed reactors and the long-lasting effects of radioactive contamination as reasons for pessimism. According to USA Today, the Union of Concerned Scientists cited “serious safety problems” that plague U.S. nuclear plants as a main reason for halting nuclear energy programs.

Of course, nuclear breakdowns certainly are possible. The most recent example is the Fukushima Daiichi nuclear crisis in Japan on March 11, 2011, in which three workers died*. A steam explosion at the Mihama Nuclear Power Plant in Fukui Prefecture, Japan, in 2004 killed four workers and injured seven more. Severe corrosion of the reactor vessel head (around a control rod drive nozzle) forced a 24-month closure of the Davis-Besse reactor in Oak Harbor, Ohio, beginning in February 2002. The radioactive aftermath of the Chernobyl disaster of 1986 continues to plague the area, with a large exclusion zone remaining in force.

Proponents of nuclear power plants point to the safety measures already in place and to ongoing attempts to increase the safety of nuclear energy. Safety systems at nuclear power plants include the reactor protection system (RPS), essential service water system (ESWS), emergency core cooling system (ECCS), emergency electrical systems, containment systems, standby gas treatment and ventilation, and radiation protection. Together, these systems work to shut down the nuclear reaction and remove heat in an emergency. The RPS terminates the nuclear reaction, stopping the production of heat, so that other systems can remove decay heat from the reactor’s core. Not all heat removal systems exist in all nuclear reactors, but every nuclear reactor has some combination of systems to remove decay heat from the core.
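To see why decay-heat removal matters even after the RPS has shut the fission reaction down, here is a small sketch using the Way–Wigner approximation for decay heat (a standard textbook estimate; the operating history assumed below is for illustration only):

    # Way-Wigner approximation for decay heat after shutdown:
    #   P(t)/P0 ~ 0.0622 * (t**-0.2 - (t + T)**-0.2)
    # where t = seconds since shutdown and T = seconds of prior full-power operation.
    # A rough estimate only; real decay-heat curves come from detailed isotopic codes.
    def decay_heat_fraction(t_s, operating_time_s=3.15e7):   # ~1 year at power (assumed)
        return 0.0622 * (t_s ** -0.2 - (t_s + operating_time_s) ** -0.2)

    for label, t in [("10 seconds", 10), ("1 hour", 3600), ("1 day", 86400), ("1 week", 604800)]:
        print(f"{label:>10} after shutdown: {decay_heat_fraction(t) * 100:.2f}% of full power")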

In evaluating whether Black Swan Theory contributes to the anti-nuclear sentiment, one must consider whether such events were isolated outliers or part of a larger trend in nuclear energy.

Read more »

The folly of making perfection the enemy of excellence

Ben Heard of DecarboniseSA asked if I’d like to reproduce his recent post, to give it exposure to the BNC audience. Given that I’m still in Spain and will be for a while, I’m happy to oblige. I think it’s an excellent piece — as I’ve come to expect from Ben — and I hope you find it interesting and useful.

——————-

The folly of making perfection the enemy of excellence: a visit to Beverley Uranium Mine

by Ben Heard

Today I visited the Beverley uranium mine in northern South Australia, operated by Heathgate Resources. Heathgate have been a client of mine through ThinkClimate Consulting for the last two years for the delivery of mandatory greenhouse gas reporting under NGER.

View to the foot of the Gammon Ranges, on approach to Beverley

It was clear skies on the flight in, showing an amazing landscape at the foot of the Gammon Ranges on the border of the Arkaroola pastoral lease. From the air the low vegetation takes on a wonderful patterned effect. It is a stunning view, with visible water courses snaking across the land. It is easy from that height to envisage that it was once covered in ocean. In both the landscape of eroded mountains and the creatures that inhabit it, tell-tale signs of truly ancient history abound.

As you approach the site from the air, the various locations that make up the Beverley operation begin to appear. Each is truly unremarkable in size, no bigger than a block you might find in an industrial suburb of Adelaide. Even taken together it is a small imprint on the land.

The main facility of Beverley (foreground) and accommodation (background). Image is from Australian Geographic and provided by Heathgate

From ground level you could be forgiven for thinking the landscape of the plains is a “barren desert”. Nothing could be further from the truth. On a simple site visit I saw wedge-tailed eagles, nesting and flying, a beautiful small lizard whose name escapes me, and a truly wonderful example of a bearded dragon basking on the road. This critter was too bold for his own good and was impervious to our best efforts to shoo him away. He simply was not afraid. A true highlight of the day was the head of Health, Safety and Environment picking this feisty fella up on a shovel and carrying him into the scrub, hopefully to safety.

The regular wildlife surveys reveal a multitude of birds, insects and reptiles, from tiny banded snakes to big lace monitors and woma pythons. After enough rain, the local water course, previously dry as a bone, abounds in a fish called the Spangled Grunter. As a word-lover, I am so, so glad to know of the existence of something called a Spangled Grunter.

Read more »

Saludos desde Mataelpino

I haven’t published an energy or climate-related article on BNC for almost a week, for a good reason:

Damien Fordham, Barry Brook and Miguel Araújo enjoy the cool Spanish mountain air

Yes, I am enjoying myself (but working too!). We (myself and some colleagues from the University of Adelaide: Corey Bradshaw, Damien Fordham and Salvador Herrando-Perez) are visiting a research collaborator in Spain (Miguel Araújo). Our workshop is being held at the El Bosque Hotel in Mataelpino, a village located 1,000 m up in the Madrid Sierra.

We’re investigating the shifts in the geographic ranges of over 200 bird species in the U.K. in relation to climate and land-use change, as well as developing a multi-species population viability analysis metapopulation model on the predator-prey-habitat interactions of the critically endangered Iberian lynx, rabbits, disease and climate change.

Although it’s the height of winter here, the region is currently experiencing a drought, and so conditions are very mild for this time of year. As such, the weather is incredibly beautiful, with bright blue skies and crisp dry air. Yesterday we went for a hike (at about 2,100 m elevation) in the Parque Natural de Peñalara. There was some snow about, but not a lot. This is the area where some of the scenes of one of my favourite movies were filmed. It’s just like being in Cimmeria…

Barry Brook at Peñalara Natural Park, Spain

I’ll be back in Adelaide in the middle of next week, with some new BNC posts on sustainable energy and climate. Meanwhile, feel free to use the comments list of this post as an especially open “Open Thread” — one not necessarily limited to climate or energy topics! As for me, I’ll sign off with some more photos (taken by Corey): Read more »

Burning energy questions – ERoEI, desert solar, oil replacements, realistic renewables and tropical islands

Late last year, Tom Blees, I and a few other people from the International Award Committee of the Global Energy Prize answered readers’ energy questions on The Guardian’s Facebook page. The questions and answers were reproduced on BNC here. Now we’re at it again, this time for the website Eco-Business.com (tagline: Asia Pacific’s sustainable business community). My section is hosted here (Part I), and Tom’s here (Part III).

Part II, which I don’t reprint, answered by Iceland’s Thorsteinn Sigfusson, covered the relationship between large-hydro and climate change, and why solar conversion isn’t used more extensively.

I’ve reproduced my and Tom’s answers below.

—————————————

Barry Brook’s Q&A

Sunil Sood: What are the “Real Energy Payback Periods” for Solar PV and Wind Energy Systems? Taking in to account the energy consumed during manufacture of components, balance of systems, transportation, installation, servicing and variations in availability of energy and usage patterns, actual life expectancy (not theoretical). Are we consuming more of ‘Dirty Coal’ to produce these so-called ‘Clean’ energies?

Calculating true energy paybacks is tough. Every energy system has initial investments of energy in the construction of the plant. It then must produce energy for a number of years until it reaches the end of its effective lifetime. Along the way, additional energy costs are incurred in the operation and maintenance of the facility, including any self-use of energy. The energy payback period is the time it takes a facility to “pay back” or produce an amount of energy equivalent to that invested in its start-up. A full accounting of energy payback includes not only the materials and energy that are input into the extraction (mining) and manufacturing processes, but also some pro-rata calculation for inputs into the factory that constructed the power generation facility, some estimate for human (worker) inputs, etc. As you can imagine, it can be difficult to fully integrate all possible inputs.

However, there are reasonable ballpark estimates for a range of technologies, including wind, solar PV, solar thermal and nuclear. Material inputs tell one part of the story, and some attempts at a standardized comparison are given here and here for a few technologies (wind, solar thermal, Gen III nuclear). As a short-cut for estimating total energy-returned-on-energy-invested (ERoEI), we can use studies that have looked at the life-cycle emissions of alternative technologies, and then calibrate these against the emissions intensity of the background economy used to produce the technology. This gives us an approximate ERoEI. Based on a range of studies, the estimates range from 180 to 11 for Gen III nuclear, 30 for wind, 11 for solar thermal and 6 for solar PV. That is, your PV panels would repay their inputs 6 times over during their lifespan, and if they lasted on your roof for 25 years then the payback time is about 4 years. If a nuclear plant had an ERoEI of 50 and operated for 40 years, its energy payback time would be 10 months.
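The payback arithmetic in that answer can be written down explicitly (a sketch of the short-cut just described, using the ERoEI figures quoted above; the wind plant lifetime is an assumption):

    # Energy payback time from ERoEI: if a plant returns its embodied energy
    # ERoEI times over its life, the payback time is roughly lifetime / ERoEI.
    def payback_years(eroei, lifetime_years):
        return lifetime_years / eroei

    examples = [
        ("Solar PV (ERoEI ~6, 25-year life)", 6, 25),
        ("Wind (ERoEI ~30, 25-year life)", 30, 25),           # lifetime assumed
        ("Gen III nuclear (ERoEI ~50, 40-year life)", 50, 40),
    ]
    for label, eroei, life in examples:
        years = payback_years(eroei, life)
        print(f"{label}: ~{years:.1f} years (~{years * 12:.0f} months)")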

Read more »

Could nuclear fission energy, etc., solve the greenhouse problem? The affirmative case

I have published a new paper in the peer-reviewed journal Energy Policy with the title “Could nuclear fission energy, etc., solve the greenhouse problem? The affirmative case” (currently online first, DOI: 10.1016/j.enpol.2011.11.041 — it will appear in the print version, with volume/page details, later this year). If you would like a PDF copy of the article, email me and I’ll be happy to send it to you.

My paper was written as a response to Ted Trainer’s (mostly) excellent 2010 article “Can renewables etc. solve the greenhouse problem? The negative case” — hence my particular choice of title. I explain the purpose of my piece in the introduction:

…In this context of needing to replace fossil fuels with some alternative(s), Trainer (2010) examined critically the adequacy of renewable sources in achieving this energy transition. He concluded that general climate change and energy problems cannot be solved without large-scale reductions in rates of economic production and consumption.

However, Trainer’s (2010) sub-analysis of nuclear energy’s technical potential involved only a cursory dismissal on the grounds of uranium supply and life-cycle emissions… In this paper… I argue that on technical and economic grounds, nuclear fission could play a major role (in combination with likely significant expansion in renewables) in future stationary and transportation energy supply, thereby solving the greenhouse gas mitigation problem.

Thus my aim was to critique the only substantive weakness I could identify in Trainer’s analysis — the short sub-section on nuclear energy.

The abstract provides the core thrust of my argument:

For effective climate change mitigation, the global use of fossil fuels for electricity generation, transportation and other industrial uses, will need to be substantially curtailed this century. In a recent Viewpoint in Energy Policy, Trainer (2010) argued that non-carbon energy sources will be insufficient to meet this goal, due to cost, variability, energy storage requirements and other technical limitations. However, his dismissal of nuclear fission energy was cursory and inadequate. Here I argue that fossil fuel replacement this century could, on technical grounds, be achieved via a mix of fission, renewables and fossil fuels with carbon sequestration, with a high degree of electrification, and nuclear supplying over half of final energy. I show that the principal limitations on nuclear fission are not technical, economic or fuel-related, but are instead linked to complex issues of societal acceptance, fiscal and political inertia, and inadequate critical evaluation of the real-world constraints facing low-carbon alternatives.

Below I’ll fill in a few details, but I’d of course encourage you to read the actual paper (contact details above for the PDF).

Read more »

The nuclear fission ‘Flyer’

Below is the foreword I wrote, at the invitation of Chuck Till and Yoon Chang, for the book “Plentiful Energy” (I included a shorter version in my review of the book on Amazon).

In this short essay, I draw an analogy between the IFR and the Wright brothers’ 1903 ‘Flyer’. The idea is that successful technology — especially a revolutionary design — is built on the back of many learning-by-doing failures. Yet, once the initial problems have been solved, the remaining pathway for the technology’s development is one of incremental (but often rapid) evolutionary improvements.

I suspect that with just a few more years of serious investment in RD&D, the LFTR ‘Flyer’ could also launch. The molten-salt thorium reactor concept is extremely appealing, and the ORNL prototype, which ran in the mid- to late-1960s, showed real promise. In my view the Th232-U233 fuel cycle would make an excellent complement to the U238-Pu239 fuel cycle offered by the IFR, and both reactor types hold the promise of safe and inexhaustible energy.

————————

Foreword to: Plentiful Energy – The book that tells the story of the Integral Fast Reactor

On a breezy December day in 1903 at Kitty Hawk, N.C., a great leap forward in the history of technology was achieved. The Wright brothers had at last overcome the troubling problems of ‘inherent instability’ and ‘wing warping’ to achieve the first powered and controlled heavier-than-air flight in human history. The Flyer was not complicated by today’s standards – little more than a flimsy glider – yet its success proved to be a landmark achievement that led to the exponential surge of innovation, development and deployment in military and commercial aviation over the 20th century and beyond.

Nonetheless, the Flyer did not suddenly and miraculously assemble from the theoretical or speculative genius of Orville and Wilbur Wright. Quite the contrary – it was built on the back of many decades of physical, engineering and even biological science, hard-won experience with balloons, gliders and models, plenty of real-world trial-and-error, and a lot of blind alleys. Bear in mind that every single serious attempt at powered flight prior to 1903 had failed. Getting it right was tough!

Yet just over a decade after the triumphant 1903 demonstration, fighter aces were circling high above the battlefields of Europe in superbly maneuverable aerial machines, and in another decade, passengers from many nations were making long-haul international journeys in days, rather than months.

What has this got to do with the topic of advanced nuclear power systems, I hear you say? Plenty. The subtitle of Till and Chang’s book “Plentiful Energy” is “The complex history of a simple reactor technology, with emphasis on its scientific bases for non-specialists”. The key here is that, akin to powered flight, the technology for fully and safely recycling nuclear fuel turns out to be rather simple and elegant, in hindsight, but it was hard to establish this fact – hence the complex history. Like with aviation, there have been many prototype ‘fast reactors’ of various flavors, and all have had problems.

Read more »

Plentiful Energy – The book that tells the story of the Integral Fast Reactor

Yesterday the hard copy of the book “Plentiful Energy — The story of the Integral Fast Reactor” (CreateSpace, Dec 2011, 404 pages) arrived in the post. It is wonderful to see it in print, and now available for all to enjoy and absorb. I was honoured to play a small part in its realisation.

The subtitle of the book is “The complex history of a simple reactor technology, with emphasis on its scientific basis for non-specialists”. Written by the two leading engineers and Argonne National Laboratory Associate Directors behind the integral fast reactor, Dr. Charles E. Till and Dr. Yoon Il Chang, it is a landmark in the sustainable energy literature.

The first paragraph of the Acknowledgements explains the authors’ motivation for writing the book:

In beginning this book we were thinking of a volume on fast reactor technology in general to be done in a manner suited to the more technically inclined of the general public. There had been advances in this technology that had not been adequately covered in the literature of the time, we didn’t think, and we felt that a book on this area of nuclear technology could play a useful role. However, at about this time the enthusiastic advocacy of the IFR in the writings of Tom Blees, Steve Kirsch, Terry Robinson, Joe Shuster, Barry Brook and Jim Hansen began to appear.

In books and articles they outlined the merits of the Integral Fast Reactor and advocated its urgent deployment. Written by these highly technically literate non-specialists in the technology, they provided a general understanding of the IFR and what its implications for energy supplies would be for the future. And they did this admirably, describing accurately and vividly the capabilities of the IFR and the reasons for urgency in its deployment. They could only touch on the technology underlying it, however, and the why and how of the technology that caused it to work as it did, and the influence of the history of its development on the development itself, were obvious to us as being very important too. These things then became the focus of our efforts in this book…

After visiting Chicago and Idaho Falls in 2009/2010, talking to Yoon and Chuck, visiting the EBR-II site, and really getting immersed in the background to the technology, I was delighted to assist in the production of this book by reading and doing a technical edit on the entire draft manuscript — and so I think I can claim to be the first person to have read it all, other than the authors!

More about the book is given at its CreateSpace publishing page, and you can purchase it at Amazon.com (currently for $US 18). Obviously, I thoroughly recommend that all BNC readers get a copy.

Read more »

2011 on Brave New Climate

So the year 2011 draws to a close. What a tumultuous year it was, particularly for nuclear energy! For climate change, alas, the freight train just keeps gathering steam.

For 2012, I will expect the unexpected, but also hope to see some better signs of progress towards the downfall of fossil fuels. But really, let’s be honest, that is a decadal rather than a yearly prospect.

Anyway, to the BNC year in review. Below I list some of the most read, most commented and most stimulating or controversial subjects of the past BNC year.

1. Fukushima nuclear crisis: This was the biggest story of the year for the blog. Read about the early diagnosis and explanation, ongoing reports, some technical speculation, an essay on what we can and can’t design for, preliminary and considered lessons learned, what the INES 7 rating means, and the need to avoid radiophobia with some common sense (and data). Another highlight is Ben Heard in his pre-decarbonisesa.com days.

2. Renewables in the context of effective CO2 abatement. Some useful analyses on CO2 avoidance cost with wind, climatologist James Hansen admonishes us to get real about how effective (or ineffective) green energy has been to date at displacing fossil fuels, an adventure to energy debates in wonderland, a look at geographical smoothing, an argument that an energy strategy without nuclear does not have history on its side, Geoff Russell deconstructs the situation for India and Switzerland, and I do so for Germany.

3. More depressing climate trends. Sea ice declines and emissions rise, the cost of climate extremes, complications and realities, a plea to clean up the climate ‘debate’, why the argument of ‘no recent warming’ is statistically invalid, and a graphical review of the grim numbers. Read more »
