
Open Thread 20

The previous Open Thread has dropped off the recent posts list and is getting tough to find, so it’s time for a fresh palette.

The Open Thread is a general discussion forum, where you can talk about whatever you like — there is nothing ‘off topic’ here — within reason. So get up on your soap box! The standard commenting rules of courtesy apply, and at the very least your chat should relate to the general content of this blog.

The sort of things that belong on this thread include general enquiries, soapbox philosophy, meandering trains of argument that move dynamically from one point of contention to another, and so on — as long as the comments adhere to the broad BNC themes of sustainable energy, climate change mitigation and policy, energy security, climate impacts, etc.

You can also find this thread by clicking on the Open Thread category on the cascading menu under the “Home” tab.

———————

A new temperature reconstruction by Foster & Rahmstorf (Env. Res. Lett.), which removes ENSO signals, volcanic eruptions and solar cycles, and standardises the baseline.

I’m currently in Auckland, New Zealand, attending the 25th annual International Congress on Conservation Biology. A 4-day event, it’s a great chance to network and catch up with my colleagues, hear the latest goings on in the field of conservation research, and also give a few presentations (me and my students). I’m talking tomorrow on the impacts of climate change in Oceania — this covers a co-authored paper I have coming out in an upcoming special issue of Pacific Conservation Biology (which was actually the first journal I ever published in, back in 1997), entitled: “Climate change, variability and adaptation options for Australia”.

A conversation starter: George Monbiot has written a superb piece on nuclear power and the integral fast reactor over at The Guardian. It is titled “We need to talk about Sellafield, and a nuclear solution that ticks all our boxes” (subtitle: There are reactors which can convert radioactive waste to energy. Greens should look to science, rather than superstition). My favourite quote:

Anti-nuclear campaigners have generated as much mumbo jumbo as creationists, anti-vaccine scaremongers, homeopaths and climate change deniers. In all cases, the scientific process has been thrown into reverse: people have begun with their conclusions, then frantically sought evidence to support them.

The temptation, when a great mistake has been made, is to seek ever more desperate excuses to sustain the mistake, rather than admit the terrible consequences of what you have done. But now, in the UK at least, we have an opportunity to make amends. Our movement can abandon this drivel with a clear conscience, for the technology I am about to describe ticks all the green boxes: reduce, reuse, recycle.

George’s essay includes details on the integral fast reactor and the S-PRISM modules that GEH hope to build in the UK (to, as a first priority, denature the separated plutonium stocks, and thereafter generate lots of carbon-free electricity). The fully referenced version is here.

Although the comments thread contains the typical lashing of misinformation and vitriol one would expect from such topics in a relatively unmoderated stream, it’s also clear George has created some converts — or at least people who are willing to reassess their preconceptions. Great stuff. Feel free to leave a few comments yourself on that post — Ben Heard has certainly weighed in a few times! This is becoming an inescapable reality for rational Greens now. I really feel some momentum, at last.

By Barry Brook

Barry Brook is an ARC Laureate Fellow and Chair of Environmental Sustainability at the University of Tasmania. He researches global change, ecology and energy.

436 replies on “Open Thread 20”

I’ll post the info that concerns me most about climate change. It’s a report appearing in National Geographic magazine, October 2011. Here is the article http://ngm.nationalgeographic.com/2011/10/hothouse-earth/kunzig-text and there were a couple of important graphics in the article. Here is the cover page artist’s view showing what the US would likely look like with a 220 ft ocean rise http://egpreston.com/NatGeoOct2011a.jpg and there was also this important graph http://egpreston.com/NatGeoOct2011b.jpg . The other photos are online. It’s suggested that this ocean rise could happen within roughly a 200-year period. The last time Nat Geo correctly predicted an event like this was a year before Hurricane Katrina hit New Orleans. The article printed in Nat Geo magazine a year before that disaster was eerily accurate. I think they have done us a favor by revealing that this ocean rise happened in the past and is likely to do so again, sooner rather than later.


I have recently returned from NZ and was surprised to see so few rooftop solar PV panels. I would be interested in whether you noticed this too, and in any local responses.


I’m so happy George Monbiot finally discussed the IFR. A Green putting the argument right in front of the Greens. I have a feeling that anti-nuclear environmentalists are not quite as immune to rationality as AGW denialists. They’ll come around, at least enough of them to make a difference.


A few quick questions, prompted by the need to give a little bit of context to this YouTube video featuring Masao Tanaka of Nagoya University, which has gained some prominence in Japan:

In summary, he calculates how TEPCO, the utility supplying Tokyo with power, can do without nuclear power. Pumped hydro storage is part of the mix, but fossil fuel plants provide most of it.

So the questions: what is the typical energy storage capacity of pumped storage dams – i.e. how long can they run at rated max power? How long to replenish the dam? Not looking for exact figures here – just whether they’re on the order of hours, days, or weeks.

What is a typical capacity factor for the various fossil fuel plants? I’ve searched for this and only gotten figures on nuclear, wind and solar.

Cheers
MODERATOR
Eamon – do you have a link to this video with an English translation or a transcript? Most folk on this blog would not be fluent in Japanese which makes it hard to comment or answer your questions.


Pablo, the reason for so few rooftop solar panels is that they produce too little energy for the money invested.

Eamon, you asked about capacity factors. I don’t see the posting above. The answer is that the lowest-cost fuels get loaded to the hilt, so nuclear would probably average 90% if base-loaded all the time. Wind and solar will be as high as their resources allow, which varies from site to site. Coal is usually cheaper than gas, so coal can be either mid-range or base load; its CF can vary from 90% down to perhaps 40%, because the capital cost of a coal plant has to be paid for, and if coal output drops too low it’s cheaper to go with gas, even if it’s imported. Gas CF can be all over the place, from nearly 0% to a base-loaded 90%, depending on how it’s used by the specific utility. Any utility that is burning oil now is crazy and shouldn’t be doing so. I did generation planning for many years at Austin Energy.
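To make the capacity factor arithmetic concrete, here is a minimal sketch in Python; the plant figures are hypothetical illustrations, not numbers from the comment above.

```python
# Capacity factor = energy actually generated / energy at continuous full output.
# Illustrative sketch only; the plant numbers below are hypothetical.

HOURS_PER_YEAR = 8760

def capacity_factor(energy_mwh_per_year, nameplate_mw):
    """Fraction of the year's maximum possible output actually delivered."""
    return energy_mwh_per_year / (nameplate_mw * HOURS_PER_YEAR)

# A base-loaded 1000 MW nuclear unit generating 7,900,000 MWh in a year:
print(capacity_factor(7_900_000, 1000))   # ~0.90
# A 1000 MW gas peaker generating 900,000 MWh in a year:
print(capacity_factor(900_000, 1000))     # ~0.10
```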


Eamon — Pumped hydro capacity depends upon the local topography to be sacrificed to form the upper and lower reservoirs. Ordinarily such facilities are designed to pump overnight, when electricity is inexpensive, and generate during the day. There are a few such facilities which happen to have a very much larger capacity; a fluke of geography.
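For a rough feel for the “how many hours” question, here is a back-of-envelope sketch using the potential energy of the stored water; the reservoir volume, head and station size are hypothetical, not any particular plant.

```python
# Energy stored in a pumped hydro upper reservoir: E = rho * g * V * h * efficiency.
# Numbers below are hypothetical, for illustration only.

RHO = 1000      # kg/m^3, density of water
G = 9.81        # m/s^2

def storage_hours(volume_m3, head_m, power_mw, turbine_eff=0.9):
    """Hours the plant can run at rated power from a full upper reservoir."""
    energy_j = RHO * G * volume_m3 * head_m * turbine_eff
    energy_mwh = energy_j / 3.6e9          # J -> MWh
    return energy_mwh / power_mw

# e.g. a 20 million m^3 reservoir with 300 m head feeding a 1000 MW station:
print(storage_hours(20e6, 300, 1000))      # roughly 15 hours
```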

Fossil fuel burners have an availability about the same as NPPs, slightly upwards of 90%. That is the maximum capacity factor (CF), but the obtained CF depends upon how the facility is operated. CCGTs often used to be operated only during the day, lowering the obtained CF; older (and so less efficient) coal burners are used only as necessary, with quite a low resulting CF.

Some such plan is certainly possible for Japan, just considerably more expensive than turning back on the (increasingly idled) NPPs.


David, thanks.

Moderator, the best I can come up with is this:

[TEPCO area]

水力 (hydro): 2,180,000 kW
揚水 (pumped hydro): 6,810,000 kW
卸電力事業者揚水 (wholesale electricity utility pumped hydro): 2,530,000 kW
稼働原子力 (operating nuclear): 0 kW
火力 (thermal): 38,190,000 kW
卸電気事業者火力 (wholesale electricity utility thermal): 5,450,000 kW
緊急設備電源 (emergency power supply, thermal): 2,000,000 kW
合計 (total): 62,070,000 kW

Peak electricity demand (2010): 60,000,000 kW

If you see boxes to the left of the English – they’re Japanese Kanji which your browser doesn’t support.


I wanted to point out an important development at the Olympic Dam mine. They’re getting a licence to increase the uranium production four-fold, from 3800 tU to 16100 tU:

http://energyfromthorium.com/forum/viewtopic.php?f=6&t=3432

This would be about 10 billion/year worth of metal. Though the uranium is only a byproduct, it provides enough fuel to run about 100 GWe of light water reactors. If used in a denatured molten salt reactor (once-through, running on low enriched uranium and thorium) it could provide all of today’s US electricity supply, or about 900 Hoover Dams worth of electricity.

I just think that this is amazing. Powering entire countries from the byproduct of a copper mine. This is testament to the great energy density and potential of nuclear fission.


Alternatively, the 16,100 tonnes of uranium contain 114 tonnes (0.71%) of U-235, which is enough to start up about 20 GWe of IFRs per year. Over 40 years of mine operation, that means 800 GWe of IFRs started up from the output of this single mine – a copper mine, mind you!
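A quick check of the arithmetic in the two comments above; the per-GWe fuel requirements are rough assumptions of mine (not figures given in the comments), chosen to be typical of published values.

```python
# Back-of-envelope check of the Olympic Dam uranium figures quoted above.
# Assumed per-GWe requirements (my rough assumptions, not from the comments):
NAT_U_PER_GWE_YR = 160.0    # tonnes natural uranium per GWe-year in an LWR
FISSILE_PER_GWE_IFR = 5.7   # tonnes of fissile to start up 1 GWe of IFR

u_output_t_per_yr = 16_100          # proposed annual uranium output, tonnes
u235_fraction = 0.0071              # natural U-235 content

lwr_gwe = u_output_t_per_yr / NAT_U_PER_GWE_YR
u235_t = u_output_t_per_yr * u235_fraction
ifr_gwe_per_yr = u235_t / FISSILE_PER_GWE_IFR

print(f"LWR fleet supported: ~{lwr_gwe:.0f} GWe")           # ~100 GWe
print(f"U-235 per year: ~{u235_t:.0f} t")                   # ~114 t
print(f"IFR start-ups per year: ~{ifr_gwe_per_yr:.0f} GWe") # ~20 GWe
print(f"Over 40 years: ~{40 * ifr_gwe_per_yr:.0f} GWe")     # ~800 GWe
```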


During last weekend’s ALP conference, Senator Conroy related a story told to him by his uncle, who worked at Sellafield. The uncle was supposed to have stated, “If there is a choice, don’t pick nuclear.”
The inconvenient truth we are facing, though, is that we do not have a choice, as has been explained on this blog many times over.


Gene Preston, you say you think that National Geographic article says

” a 220 ft ocean rise … Its suggested that this ocean rise could happen within roughly a 200 year period…. and is likely to do so again sooner rather than later.”

It doesn’t say that and I don’t see anything that “suggested” it. What are you reading from to get that idea?


“Earth was hot and ice free at the end of the Paleocene epoch. With sea level 220 feet higher than now, the Americas-not yet joined by continental drift-were smaller. Look in vain for Florida.”

This was the front page caption which seems to not have been posted on line. Other passages are:

“as the Paleocene epoch gave way to the Eocene, it was about to get much warmer still—rapidly, radically warmer”

“The cause was a massive and geologically sudden release of carbon.”

“Today the evolutionary consequences of that distant carbon spike are all around us; in fact they include us. Now we ourselves are repeating the experiment.”

“the most popular hypothesis is that much of the carbon came from large deposits of methane hydrate”

“large deposits of methane hydrate found today”

“warming by burning fossil fuels could trigger a runaway release of methane”

“You can’t wait 100 to 200 years to see what happened”

Putting these ideas together, you arrive at the conclusion that they are warning us a runaway condition could happen, creating a spike in CO2 and melting ice at an accelerated rate, eventually resulting in a 220 ft ocean rise. When you combine this with the acceleration of ice melting we are seeing now, and with the trajectories Jim Hansen projects for the end of this century, we could easily have a 220 ft ocean rise within two centuries because of that acceleration effect.


I don’t see how the Olympic Dam expansion can proceed without a new power source. Deposits of gas and already mined coal within hundreds of kilometres of OD are nearing depletion. There is also considerable opposition to the preferred site for a desalination plant in a narrow gulf 300 km from the mine. If the company proceeds with either a fossil fuel power source or the nominated desal site it will face headwinds.

The original publicity said U3O8 output would expand to 19,000 tonnes a year from OD. Interestingly Australia could produce more ThO2 than that without major effort as a byproduct of hard rock mining of rare earths and from monazite sand. I don’t see any progress on OD since governments can’t bring themselves to promote any proven power source that isn’t wind or solar. I suspect the OD expansion will spend another decade in limbo, a victim of paralysis-by-analysis.


John Newlands, I don’t think so. All the main analysis has been done (ref. the World Nuclear News article). The Australian minister for the environment approved the Olympic Dam extension. The regulator approved it. There are many conditions that must be met along the line. It is, and continues to be, a highly bureaucratic process. That’s what you get with projects this size. It is a political hot potato.

Yet the stakes are also high. With 10 billion a year in revenue, the commercial mining industry’s stakes are high. By extension the government’s stakes are high, since they get a lot of tax money on this revenue. A lot of new light water reactors are being built around the world and secondary supplies from recycled bombs are getting low, so the nuclear industry’s stakes are high as well. If the price of uranium triples, the nuclear power plants won’t shut down. Uranium is too cheap for that. They will all continue running and just pay half a cent per kWh more, but that means even more pressure to expand uranium mining (Olympic Dam could at that point make just as much money on the uranium sales as on the copper, making it a main product rather than a byproduct).
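On the “half a cent per kWh” point, here is a hedged sketch of how sensitive LWR generation cost is to the uranium price; the price and per-GWe consumption inputs are my own rough assumptions for the period, not figures from the comment.

```python
# How much a tripling of the uranium price adds to LWR generation cost.
# Assumed inputs (roughly 2011-era values, my assumptions):
nat_u_per_gwe_yr_kg = 180_000      # kg natural uranium per GWe-year
u_price_per_kg = 130.0             # USD/kg natural uranium
capacity_factor = 0.90

kwh_per_gwe_yr = 1e6 * 8760 * capacity_factor      # kWh generated per GWe-year

def u_cost_cents_per_kwh(price_per_kg):
    return 100 * price_per_kg * nat_u_per_gwe_yr_kg / kwh_per_gwe_yr

base = u_cost_cents_per_kwh(u_price_per_kg)
tripled = u_cost_cents_per_kwh(3 * u_price_per_kg)
print(f"uranium cost now: ~{base:.2f} c/kWh")                  # ~0.30 c/kWh
print(f"extra if price triples: ~{tripled - base:.2f} c/kWh")  # ~0.6 c/kWh
```

With these assumptions the result lands in the same ballpark as the comment’s half-cent figure.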

The problem with ThO2 is that it contains no fissile material. Nature’s curse, which will continue to make this material unattractive in the near term (until we get Gen IV IFRs and LFTR/DMSR). Fertile is worth less than lead, fissile is worth more than gold.


CR I’ve discussed the OD expansion with some mining industry insiders. They say it has to happen because of the money at stake. One suggested a new coal mine and coal fired power station could be opened to provide the 700 MW. That flies completely in the face of the whole carbon tax thing. Another option is to force gas rich parts of Australia to share enough gas for a new power station, either at the mine or via improved transmission. Another possible power source would be small modular reactors.

After all the antinuclear histrionics at the recent Labor Party conference I don’t like the chances of SMRs. Some even want to ban uranium mining altogether so that copper and gold would be removed from OD but uranium re-buried. That’s what we’re up against. My point about thorium is we have plenty without even trying. If there was a way to use it somehow.


Cyril R said: “Powering entire countries from the byproduct of a copper mine. This is testament to the great energy density and potential of nuclear fission.”

It is also testament to the cheapness of the fuel. Yet the established designs emphasise fuel efficiency, prepared for a 1960s world where uranium threatened to become increasingly expensive. This is despite the fact that the more we look for uranium, the more we find, and the cheaper it gets, until it is nothing more than a byproduct of something more lucrative to the miner.

Maximum fuel efficiency is achieved with maximum size reactors, with all the attendant problems from being gigantic in the public eye, the public purse, the fears of the ignorant, etc. The high capital cost makes planning excessively conservative, so that the planners must assume unchanging climate, industry and public values.

There are many other criteria which we could use to select from the zoo of reactor designs. Water efficiency, deployability, transportability, ease of construction, mass production, low maintenance, autonomous running and so on. In particular, small is flexible, and we need to be flexible to respond to the unfolding future in a changing climate.


Will (@LancedDendrite) — The second link (a pdf file) offers little regarding balancing agents (backup) for when the wind fails. One has to go to the references to see what happens on various grids; here in the Pacific Northwest the usual response is to generate more from hydro, but sometimes natgas generators are dispatched for a short time instead.

Best illustrated by the Iberian experience is the desirability, approaching necessity, of controlling wind generation so as not to have to accept all that is offered. With that, some modest level of penetration by wind generation can actually be beneficial to grid stability (and the wind farm operators should be compensated accordingly, not just through an energy-only market).

I found of interest the Irish studies mentioned, which indicated a definite upper limit to wind penetration. For BPA here in the Pacific Northwest, a variety of factors conspire to keep the wind generation for which BPA is the balancing agent down to (just) under 20% of total nameplate capacity.


Two interesting tid-bits I came across this week. First is a report on energy security for Australia, done by the Australian Strategic Policy Institute, that outlines some analysis of renewables in the energy security context, particularly how they (the 80-100% plan) won’t be able to provide secure generation capacity, and the amount of area they require.

http://www.aspi.org.au/publications/publication_details.aspx?ContentID=321&pubtype=-1

The second is this quote from the South Australian Premier Jay Weatherill on nuclear power in South Australia. It is a no, but it’s only on economic grounds (SA is in debt, and FOAK is costly in that context). When he said this on 891 ABC he did not even acknowledge any other reason, not even stating “and other issues”. Just economics… interesting.

http://www.adelaidenow.com.au/news/south-australia/nuclear-power-not-on-sa-agenda-weatherill/story-e6frea83-1226213922876

The Olympic Dam project is projected to produce 19,000 tpa of uranium from the mine, which will be 95% of the projected production of upcoming uranium mine projects (not including current mines). Also, the Jacinth-Ambrosia heavy mineral sands mine on the west coast of SA does separate out the thorium, but it goes into waste as there is no market for it. When the mine is rehabilitated this material will go back into the hole it came from; I am pretty sure that is the rehab arrangement, but I will have to check the MARP (rehab program) for it.


I agree on the lack of info on covering backup, but I must note that the report is about integrating wind into the larger grid, not using it to replace the grid. “Backup” is virtual in most grids because generators can just be told to wind down their output to the pool – mind you this is assuming that wind is not a majority or near majority (>40%) generation source on the grid.

The info on the ability to provide ancillary services such as reactive power to the grid, as an alternative to adding output to the electricity pool, and on using turbines for fault ride-through in local distribution systems, is interesting. Another thought is perhaps using wind farms with short-term storage such as vanadium redox flow batteries or flywheels as potential reserves for black starts; mind you, wind + storage is expensive compared to diesel gensets or hydro for this role, but it is another specialised role that wind could additionally fill.


Will (@LancedDendrite) — Unfortunately, it is nowhere near as simple as you seem to think to ‘wind down’. Even worse is ‘winding up’ when the wind fails.

The short-term backup solutions you mention are very expensive. Some wind farms incorporate a small amount of battery storage so as to even out the power provided; however, I am under the impression this is rare.


My understanding is that the Ambrosia heavy sand concentrate will be trucked to Ceduna (they call the wharf part Thevenard) and shipped to Iluka’s main depot in Geraldton WA. There the components will be separated into rutile, ilmenite, zircon, monazite etc. At Ceduna the concentrate is mildly radioactive, I’d guess a few thousand Bq per kg. Link. When I last looked, Iluka were seeking buyers for the monazite separated at Geraldton. Maybe it would suit either the just-built Malaysian or the proposed Whyalla rare earths plant.

I think SA is between a radioactive rock and a hard place. They have the Torrens Island baseload station, which is Australia’s biggest gas user, yet the Cooper Basin has only a good decade left unless fracking delivers. To cap it off they must have desalination when El Nino next dries up the main river. My suggestion is to cut the federal NBN budget in half and soft-lend twenty billion to SA to construct an NPP. In my opinion around half the SA population would approve, but they aren’t the ones making all the noise.


I wouldn’t call a few thousand Bq per kg “mildly radioactive”. This is only a few times more than the naturally occurring radionuclides in soil (typically 500 Bq/kg of natural radioactivity). I would qualify a few thousand Bq per kg as “normal soil”. I would not mind using this soil in my garden.

http://www.physics.isu.edu/radinf/natural.htm

Regarding powering the OD mine expansion, let’s get some perspective here. This expansion can fuel 100,000 MWe of light water reactors. So you need a 700 MWe coal station to get it. Even if you use 100% diesel, it is a small energy investment for such a huge amount of uranium.

That said, there do appear to be good prospects for geothermal power near the Olympic Dam mine (northeast of it is a big geothermal area). There are also not-bad solar resources using CSP with molten salt storage (much more expensive than coal, but with 10 billion a year worth of metals maybe you don’t care so much about power costs; slightly cheaper than hauling in expensive diesel, too).

http://jcwinnie.biz/wordpress/?p=2651


Hmm, I looked at the Compact Linear Fresnel Reflector but that looks too expensive and too poorly performing to power a mine.

http://www.areva.com/EN/solar-209/areva-solar-projects.html#tab=tab3

The Kogan Creek solar booster project is a concentrating solar plant added to a coal plant. You’d think this is cheap, since neither energy storage nor steam turbines have to be installed.

The project is a 44 MW peak electricity solar plant. Unfortunately it only delivers 44,000 MWh/year worth of electricity. That’s only 5 MWe average, a capacity factor of only 0.11. Project cost is $AUD 104.7 million, that’s $AUD 21/watt of average electricity flow. Clearly this is pure greenwashing of coal plants, since the Kogan Creek plant, at 750 MWe peak capacity, delivers over 100x as much electricity from coal. That is, the solar boost adds less than 1% to the energy flow of the coal power station. It is still over 99% coal. This money would be much better spent on improving the efficiency of the coal plant by retrofitting the turbine, generator, steam boiler, etc. and adding better, modern pollution controls (baghouse filter/ESP, DeNOx and DeSOx).
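The capacity factor and cost-per-watt figures quoted above follow directly from the project numbers; a quick check using only the values given in the comment:

```python
# Checking the Kogan Creek solar boost figures quoted above.
peak_mw = 44.0
energy_mwh_yr = 44_000.0
cost_aud = 104.7e6

avg_mw = energy_mwh_yr / 8760                 # average electrical output
cf = avg_mw / peak_mw                         # capacity factor
aud_per_avg_watt = cost_aud / (avg_mw * 1e6)  # cost per watt of average output

print(f"average output: {avg_mw:.1f} MW")        # ~5.0 MW
print(f"capacity factor: {cf:.2f}")              # ~0.11
print(f"cost per average watt: ~{aud_per_avg_watt:.0f} AUD/W")  # ~21 AUD/W
```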


Cyril R you seem to be well informed on Australia. My understanding is that Olympic Dam started out with diesel generators but then transmission was built to Pt Augusta which has two adjacent coal fired stations. Coincidentally the current plan is to demolish one plant (Playford) and make it a solar steam boost for the other. If more coal or gas fired capacity is built to enable the OD expansion it plays right into the hands of nuclear critics who point to indirect CO2.

Despite generous government funding granite geothermal has yet to contribute a single watt to the grid. Therefore 700 MW output seems a bit of a stretch. Even though hundreds of kilometres apart both the uranium mines and the geothermal wells are on essentially the same slab of granite. Politicians who once said that geothermal would supply power for mining now look foolish.

However diesel could still be the Achilles Heel of the project. It is said the open cut excavation will use 19 bn litres of diesel. I’d guess a large amount of diesel will also be used in ANFO explosive. It would be good if a lot of that haulage could be done with electric trucks powered by a low carbon source. Since they mine 24/7 I doubt solar fits the bill.


Electrification of the mining operations as far as possible is definitely the way to go. But most of the machinery they need is really heavy, so using batteries would be tough. The machines can’t spend half their time recharging; they have to work all the time. Hydrogen might work; there are already hydrogen fork lifts available commercially. A serial electric drive with a diesel generator and electric traction motors (like a diesel-electric locomotive) is probably easier, and saves a lot of diesel.

Biofuel could be a good application for mining. There’s not enough biofuel to run all cars and airplanes, by orders of magnitude, but there should be enough to power remote mines. Biodiesel is the most suitable fuel. Australia may have enough waste vegetable oil and the like to make biofuel for its mines.

Most likely the OD mine will continue to use lots of fossil fuels for at least the next 10 years. BHP Billiton will certainly not risk delaying the 10 billion a year mine revenue for a high profile first build nuclear plant. If the greens object, tell them that using a little fossil fuel gets a hundred times more nuclear energy generated elsewhere. So your lifecycle emissions are 1% that of coal. Also this is a copper mine – they built it for the copper – and the uranium is just a byproduct, albeit an increasingly important one. It wouldn’t be fair or accurate to put the fossil energy consumption of the mine on the balance of the nuclear power plants that use its byproduct. Mining and refining copper is energy intensive business.


I doubt biodiesel could really help the huge mining industry as it is a niche product. I use it for about 80% of my car fuel. One form made locally (Tasmania) from opium poppy seed oil is said to cost $9 per litre. However I’m sure CNG could power mine trucks provided they didn’t drop rocks on the 220 bar tanks. Maybe LNG is the way to go
http://www.ngvglobal.com/dual-fuel-mining-truck-demonstrated-in-u-s-1022
The gas for trucks is freed up by not being used in a power station.

In my opinion the Olympic Dam expansion is the ideal justification for the first commercial nuclear generation in Australia. Others have agreed with this idea but not my specific suggestion that it be co-located with desalination on the coast 300km from the mine. As somebody put it to me recently ‘who gives a s.. if they have a Fukushima type accident out there’. Unlikely since the last big rumble out that way was British A-bomb testing after WW2.

I regard this line of thinking as fanciful
http://peakenergy.blogspot.com/2011/12/clean-energy-for-bhps-olympic-dam.html
Mining needs massive grunt, not fickle wind and solar or nonexistent geothermal.


The conversation appears to be focusing on the question of, “how to power the mine haulage trucks?” However, may I suggest that the useful question is, “how to get the material up?”

After all, if there is copious electricity, then we need to look for electric solutions. One that comes to my mind immediately is conveyor belts. Such a mine may not need heavy haulage trucks at all. Where heavy trucks and other heavy equipment such as excavators are needed, overhead power lines might provide power.


David B. Benson wrote:

The second link (a pdf file) offers little regarding balancing agents (backup) for when the wind fails. One has to go to the references to see what happens on various grids

Regarding this AEMO study, there’s an interesting finding in the section titled “Experience of wind during faults” (p. 28). This document appears to be a survey of available research, and is quite extensive so far as I can tell. For the individual cases they have reviewed (with wind penetrations from 5 to 30% of total output), they state:

“The vast majority of the time electricity grids are operating in a normal manner” (p. 28). They equate positive results with a “significant effort both at a planning and operation stage.” There are exceptions, however, and the authors describe several instances where a failure did take place, and the reasons for this failure (poor demand forecast, unexpected loss of conventional generators, poor wind forecasts, inadequate fault ride through capability, and more … almost always a combination of factors). From the available research and case studies, they conclude:

“The growth of wind energy with its variable and predictability characteristics coupled with its technical characteristics (e.g. lack of inertial response in the more modern devices and lack of reactive power control in the older designs) have led to concerns and claims that that it is adding too much uncertainty to the system and will result in blackouts. There is so far no experience to support this claim, however there are a number of instances where wind has contributed negatively. These events are important as lessons can be learnt so as operating and planning practices can be further improved to avoid these in the future” (p. 5 and p. 28)


PRISM is a sodium-cooled, fast-spectrum system. With the UK’s large recovered Pu stocks, it can burn a lot of recovered or depleted uranium to produce power. It is not clear if pyroprocessing for further recycling of the used fuel is also planned; it should be included.
Sodium, the highly problematic coolant, is the weak point. Too many fingers have been burnt in sodium fires. There is a need to replace sodium with a more stable molten salt coolant.


@Jagdish,

According to George Monbiot’s piece, pyroprocessing is not part of the current proposal.

Here is a presentation from Argonne on sodium as a coolant for fast reactors http://www.ne.doe.gov/pdfFiles/SodiumCoolant_NRCpresentation.pdf

Sodium has a lot of advantages (including safety advantages) and there may well be good reasons for it being the coolant of choice in most fast reactors that have been built so far.

Incidentally in PRISM, the reactor vessel is surrounded by inert Argon.


As I understand it, the high thermal conductivity of sodium is essential for the large safety margins and passive safety of the IFR. The Argonne presentation above lists thermal conductivity of Na as 76 W/m-K. According to this document (p11) the salts investigated at Oak Ridge have thermal conductivity in the range 0.5 – 1.0 W/m-K.

Which strongly suggests that while a molten salt cooled solid fuel fast reactor might be possible, it would be something quite different from the IFR and would certainly require years of development. One would also have to ask the question of whether the degree of passive safety achieved with the IFR would be possible with a molten salt coolant.


Incidentally in PRISM, the reactor vessel is surrounded by inert Argon.

Loss of inerting cover gas is considered a design basis accident. You’ll have to deal with it.

As I understand it, the high thermal conductivity of sodium is essential for the large safety margins and passive safety of the IFR. The Argonne presentation above lists thermal conductivity of Na as 76 W/m-K. According to this document (p11) the salts investigated at Oak Ridge have thermal conductivity in the range 0.5 – 1.0 W/m-K.

It’s a nice advantage, but not as great as you’d think. With liquid coolants, convective heat transfer dominates over the thermal conductivity of the fluid itself. Convective heat transfer coefficients for rapidly pumped liquids easily exceed 1000 W/m²-K.

Also keep in mind that large differences in thermal conductivity between one side of the heat exchanger (sodium) and the other (steam/CO2/helium power cycle) result in a high thermal shock potential.

Fluoride salts have higher boiling points (1400-1600 degrees Celsius) than sodium and 4-5x the volumetric heat capacity. Those are far more important safety (and economic) advantages. No strong exothermic reactions with air and water is also a great advantage that helps a lot in a loss-of-cover-gas scenario.

Lead is also a great coolant for a fast reactor. Even higher boiling, higher volumetric heat capacity and less extreme thermal conductivity than sodium. It also allows bigger coolant channels since it moderates less (void coefficients issue). I’d like to see an IFR with lead coolant.


Maybe LNG is the way to go
http://www.ngvglobal.com/dual-fuel-mining-truck-demonstrated-in-u-s-1022

I like it. LNG can be delivered by truck or train and is much safer and more energy dense than CNG. This is a good application of natural gas, far more realistic and cheaper than hydrogen. Too bad we’re wasting so much natural gas in electricity generation (and it’s growing rapidly) when this stuff is far more useful for transport and for remotely powering large mobile machinery.


Good news from the old continent:

http://www.sueddeutsche.de/wirtschaft/eu-setzt-weiter-auf-atomkraft-bruessel-ignoriert-deutsche-energiewende-1.1230255?commentCount=13&commentspage=2#kommentare

According to an article in the German newspaper Sueddeutsche Zeitung, the EU Commission fully endorses nuclear power as a tool to fight climate change and calls for more support for Gen-4 research and commercialization, using subsidies if necessary, similar to those used for unreliables. They also say that 40 new nuclear power stations should be constructed within 20 years.

This puts the EU Commission, with its German energy commissioner (who apparently got it), on a direct collision course with the anti-nuclear German government.


Cyril R, on 9 December 2011 at 8:20 PM said:

I like it, LNG can be delivered by truck or train and is much safer and more energy dense than CNG. This is a good application of using natural gas. Far more realistic and cheaper than hydrogen

LNG is a good stepping stone to hydrogen.


Cyril: probably an elementary question from a non-scientist, but how do the lead and fluoride coolants avoid the thermal shock problem?

Do the convective properties of lead and fluoride have anything to do with this?

It seems like you’d have a large temperature difference in either case.


David Walters, are you saying the COL of the Vogtle units is finally issued now?

Gregory Meyerson, lead has less thermal shock potential because of its lower thermal conductivity compared to sodium; still quite high, though. Thermal shock is caused by sudden temperature changes across components. It’s okay if there is a big temperature drop across the heat exchanger, because the heat exchanger develops an equilibrium where each area stays more or less at the same temperature. But when the heat exchanger changes flow (e.g. suddenly shuts down), the temperature profile will change. The rate of temperature change in one area of the heat exchanger determines the potential for thermal shock. The rate of temperature change is determined by the efficiency of the heat transfer. Part of that, in turn, is thermal conductivity. So if you have a heat exchanger with highly conductive sodium on one side and non-conductive steam or CO2 on the other side (the power cycle side), then that can give a lot of thermal shock. Basically the sodium responds quickly to the flow stoppage, but the power cycle working fluid, CO2 or H2O, responds only slowly and will stay at the initial temperature profile.

Thermal shock can be mitigated by selecting thermal-shock-resistant materials and by making sure that the design does not allow a sudden flow stoppage. Lead is quite heavy, so it gives you a lot of flow inertia; that’s good. But lead, sodium, and fluoride salts all have a high heat capacity compared to the gas or steam on the power cycle side of the heat exchanger/steam generator. Liquid coolants are just much better at heat transfer, no matter how high or low their thermal conductivity, which means there will always be a thermal shock issue that needs to be designed for. With PWRs, both sides of the heat exchanger (steam generator) contain the same fluid, water. That makes it fairly easy to design out the thermal shock issue.
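As a rough illustration of that argument, here is a lumped-parameter sketch of how quickly a heat exchanger wall follows its adjacent fluid; all the material and heat transfer numbers are hypothetical, chosen only to show the mismatch, not taken from any design.

```python
# Lumped-parameter illustration of the thermal shock argument above:
# metal adjacent to a well-coupled liquid (e.g. a liquid metal) follows a flow
# transient much faster than metal adjacent to a weakly-coupled gas/steam side,
# so a sudden flow change puts a large, fast temperature swing across the wall.
# All numbers are hypothetical, for illustration only.

def time_constant(h_w_per_m2k, wall_thickness_m=0.003, rho=8000.0, cp=500.0):
    """Thermal time constant (s) of a metal wall coupled to a fluid:
    tau = (rho * cp * thickness) / h."""
    return rho * cp * wall_thickness_m / h_w_per_m2k

tau_liquid = time_constant(h_w_per_m2k=10_000)  # liquid-metal side (well coupled)
tau_gas = time_constant(h_w_per_m2k=300)        # gas/steam side (weakly coupled)

print(f"liquid-side response time: ~{tau_liquid:.0f} s")  # ~1 s
print(f"gas-side response time: ~{tau_gas:.0f} s")        # ~40 s
# The liquid-coupled face re-equilibrates tens of times faster, so during a
# sudden flow stoppage one face chases the new temperature while the other lags.
```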


A tonne of LNG can cost $8 and has about 55 GJ heating value, hence costs .015c per MJ thermal. Unsubsidised diesel costing $1.40 a litre, at 35 MJ per litre, works out at 4c per MJ.
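The comparison is just price divided by heating value; a minimal sketch using the figures exactly as quoted in the comment above:

```python
# Cost per MJ of delivered heat = fuel price / heating value.
# Figures as quoted in the comment above.

def cents_per_mj(price_dollars, energy_mj):
    return 100.0 * price_dollars / energy_mj

lng = cents_per_mj(8.0, 55_000)       # $8/tonne, 55 GJ/tonne -> ~0.015 c/MJ
diesel = cents_per_mj(1.40, 35)       # $1.40/litre, 35 MJ/litre -> ~4 c/MJ

print(f"LNG: ~{lng:.3f} c/MJ")
print(f"diesel: ~{diesel:.1f} c/MJ")
```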

While liquefaction seems inefficient compared to piped gas it has the advantage of being pre-packaged for heavy vehicle applications. A giant outback mine could get LNG trucked or railed in from multiple sources. The delivery truck or locomotive gets fuelled by the vapour that boils off. That may be ultimately more secure than building a 400km pipeline from an ageing gas basin hoping the supply outlook improves.

It seems crazy to burn so much gas in baseload power stations when we will need it for so many other things, perhaps indefinitely.


I agree with you John Newlands.

Liquefaction is actually quite efficient. You invest a one-time energy cost and then you can transport it efficiently by ship or rail. Whereas with pipelines, the losses are zero at first but increase greatly with distance. Running the compressors along the pipeline is surprisingly inefficient: the pressure keeps falling along the pipeline and has to be restored with compressor stations along the line. The result is that pipelines are really only more energy efficient over short distances.


I read somewhere that in piping gas from Siberia to the UK, about half the gas would be used to drive compressors. That’s what happens when you squander a resource. Another development is that some offshore gas may not be piped to shore but processed on large ships
http://www.abc.net.au/news/2011-05-20/worlds-first-floating-gas-platform-set-for-wa/2722352
Tanker ships pull up alongside and sail off with the oil and LNG.

Back in the 1970s the politician RFX (Rex) Connor predicted that southern Australia would run out of gas before the northwest. He couldn’t have known about coal seam gas, though he was a nuclear advocate. Anyway, he tried to raise the cash for a transcontinental gas pipeline without going through the proper channels and it backfired, bringing down the government. I wonder what Rex would make of today’s situation, including the fact that we still don’t have nuclear nearly 40 years later.


One question (or drawback) of the IFR that keeps popping up:

“If it’s so good, why did the US Government shut it down?”

Now I’m not too interested in the reasons if they were political rather than technological; I understand even an opponent of the project admitted it had ticked all the boxes. I am, however, interested to know what the problem is with starting the project back up. Or with another country starting it back up?

I imagine there are some ownership/rights issues, are they mainly IP related?

What will it take to unlock this technology?

I really think IFRs will be a good accompaniment to renewable technologies in the future, whenever that may be.


“If it’s so good, why did the US Government shut it down?”

There are 2 answers to this.

1.) The government funded the R&D of the IFR. It was finished with the R&D, so the government decided that it no longer needed funding.

2.) It is unnecessary for the U.S. to pursue commercial operations because there is little apparent benefit. The 3 major issues the IFR solves are fuel utilization, passive safety, and waste generation.

All three are not significant issues in the U.S. Uranium reserves are in plentiful supply. The AP1000 addresses passive safety and the NRC is familiar with it. Dry cask storage has been determined to be good enough.

For this tech to get progress in the U.S. it will take a change in the view of the need for it.


@ Jason Kobos “For [IFR] to get progress in the U.S. it will take a change in the view of the need for it.”

One such event might be a high level decision for an emergency rollout of nuclear electricity. A sudden expansion of slow-neutron start-ups would require a similar sudden expansion in enrichment activity, which may be unpalatable to the non-proliferation people. On the other hand, a sudden expansion of fast-neutron start-ups such as the IFR could be seeded using reprocessed and ex-military plutonium.


The NRC Chairman voted in favor of licensing the AP1000; the other commissioners have to vote as well for it to actually be licensed. The licences for the four new AP1000 builds will be granted separately at a later date.


With liquid coolants, convective heat transfer dominates over the thermal conductivity of the fluid itself. Convective heat transfer coefficients for rapidly pumped liquids easily exceed 1000 W/m²-K.

But what happens in a station blackout scenario, or worse, failure to scram and coolant pump failure? Could molten salt cooling offer the same passive safety as the sodium cooled IFR? Is it even possible?

I’m no nuclear engineer but I would be the first to acknowledge that very careful detailed safety analysis is needed so that compromises, advantages/disadvantages and tradeoffs may be properly and objectively assessed.

What I am seeing is that the sodium issue is being used as a stick to beat PRISM with – something the antis will jump on with glee. But this “debate” ends up at a level of little more than high school chemistry and can become just another reason to be scared of nuclear power. This is territory to be approached with caution.

Of the two initiatives you mentioned for LFRs, SSTAR apparently is dead in the water with no new work being done and is now just limited to cooperation with the European counterparts. The European initiative is unlikely to be in a position to build a full size demo until sometime in the next decade.

It seems to me that the outlook for getting a genuine Gen IV full size plant operational this decade is limited to PRISM and it deserves support.


But what happens in a station blackout scenario, or worse, failure to scram and coolant pump failure? Could molten salt cooling offer the same passive safety as the sodium cooled IFR? Is it even possible?

Yes. The fuel dilatation and other reactivity coefficients that shut down the reactor (or, if you don’t have negative coefficients, blow it up, as at Chernobyl) depend on the fuel type, fuel geometry, and coolant void worth. With a volatile coolant such as water, any overheating or primary pressure boundary breach causes voiding of the coolant. So you want a negative void coefficient that shuts the reactor down. This is possible with fluorides, lead, and sodium, but it does require care in the design. If the scram fails, you will get a larger temperature rise, since that is what the coefficients respond to. But after that the chain reaction is shut down. One has to design for this; in particular the fuel and primary loop components must be able to cope with a number of such temperature rises. Note that this is exceptionally rare; Fukushima’s control rods worked just fine.

Now the reactor is intrinsically shut down. The question then is how do you make sure you can cool the decay heat away. It can’t be turned off.

With a nonvolatile coolant and a passive decay heat cooling system such as the IFR has, you essentially have no possibility of voiding. (Still, you can make sure that the overall coefficient is negative, so that if there is some local voiding it doesn’t restart the fission.) As long as the fuel is covered in liquid coolant it can’t melt or suffer much damage at all. It is like having a pot of water on your home kitchen stove: as long as there is water in the pot, it won’t be damaged. So it makes sense to have a coolant with an extremely high boiling point, which means it is always liquid. No boiling = no gross fuel damage. No boiling = no pressure rise that can damage the containment. So no containment venting is required, which means no fission products released. No evacuation required.
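For a sense of scale of the decay heat that “can’t be turned off”, here is a sketch using the Way-Wigner rule-of-thumb correlation; the reactor power and operating period are hypothetical, and the correlation itself is only a rough approximation.

```python
# Decay heat after shutdown, Way-Wigner rule-of-thumb approximation:
#   P(t)/P0 ~= 0.066 * (t^-0.2 - (t + T)^-0.2)
# with t = seconds since shutdown and T = seconds of prior operation.
# The 1000 MWt reactor and one-year run below are hypothetical examples.

def decay_heat_fraction(t_s, t_operating_s=3.15e7):   # ~1 year of prior operation
    return 0.066 * (t_s ** -0.2 - (t_s + t_operating_s) ** -0.2)

P0 = 1000.0  # MW thermal, hypothetical
for label, t in [("1 second", 1), ("1 hour", 3600), ("1 day", 86_400), ("1 week", 604_800)]:
    frac = decay_heat_fraction(t)
    print(f"{label:>9} after shutdown: {100*frac:.2f}% of full power (~{P0*frac:.0f} MWt)")
# Even a day after shutdown this is still several megawatts of heat that the
# coolant and the passive heat removal path must carry away.
```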

No water = no hydrogen to form. With sodium there is a lot of chemical stored energy – sodium is a fuel. Sodium burns in air and concrete, and explodes in water. I’ve been looking at different reactor incidents and accidents around the world. There was always a common denominator: a big energetic chemical reaction made it worse.

At Simi Valley it was chemical reaction of pump coolant with sodium that blew up the reactor.

At Chernobyl it was the combination of positive coefficients (stupid design, easy to inherently prevent) with a stupid reactor shutdown system (graphite tips on the control rods; this is like an accelerator pedal attached to your braking system), a reactor control system that should have automatically shut down the reactor when it became unstable, and an inadequate (only partial) containment.

At Fukushima it was lack of passive cooling (or waterproof active cooling), and volatile coolant combined with hydrogen (explosion disperses fission products).

So what we need to do is go with the reactor design that has negative coefficients to shut down the reactor even with failure of scram, no means of big chemical reactions and is passively cooled. The IFR meets 2 out of 3. Lead cooled fast reactor and fluoride cooled and fluoride fuelled reactors meet 3 out of 3.

I’d support a PRISM build because it beats fossil by orders of magnitude. You have to convince a lot of people about sodium’s safety, though. We had a recent sodium fire in a chlorine production plant (electrolysis of NaCl). It had inerting cover gas and everything, but still had a big fire. There was also a fire in the sodium boiler building at the INL. This doesn’t instill confidence. Look at what minor incidents do to the industry; it really hurts our case. Fukushima has been a massive blow – unjustified, I agree – but it will slow nuclear buildout. It’s so much easier to convince people of the safety of lead and fluorides than of sodium. Most people know sodium from high school or college experiments – drop it in water and that’s fireworks. I think it’s easier to go with an AP1000 buildout.

Or you can try to build the PRISM reactor in the middle of nowhere.


Cyril R., Simi Valley was built starting in 1954 – before Shippingport even opened (!) – and the incident itself occurred in 1959. Referencing that somewhat obliquely as a concern for S-PRISM is a very shaky chain of logic indeed, and hints at being a scare-mongering tactic – something I’ve commonly observed among those who favour the MSR-type design, and which I think is a very unwise move. The IFR design includes double-walled pipes, and the sodium-water heat exchanger is in an entirely different building from the primary vessel. The net feedbacks – what is important – are very clearly negative. Lead is a difficult coolant, and I doubt molten salt is viable either for a commercial fast reactor – certainly not one that would be commercial in the medium term. I will have more to say on this (with a very specific reference) in an upcoming post, and I’d like to request that you hold further technical speculation on what is best for fast reactors until then.


Of the two initiatives you mentioned for LFRs, SSTAR apparently is dead in the water with no new work being done and is now just limited to cooperation with the European counterparts. The European initiative is unlikely to be in a position to build a full size demo until sometime in the next decade.

The ELSY design is more pragmatic, I think. It focuses more on near-term deployment. SSTAR uses a power cycle that isn’t commercially available; that’s never a good thing for near-term deployment.

Personally I’d say the next 10 years are lost no matter what. We are still in a state of complete energy innumeracy and it will take decades for people to get perspective. We could build quite a few light water reactors in the next ten years, mostly in Asia, and that’s about it. A gigawatt of Gen IV more or less isn’t going to make the difference.

Why would a lead coolant in a PRISM be such a delay? Lead is compatible with zirconium, so you can use PWR fuel and cladding. You can use the exact fuel material and setup/handling system of a modern PWR. That by itself speeds up deployment, compared to barely-used metallic fuel and stainless steel cladding – though metallic fuel has long-term advantages in pyroprocessing. Lead and sodium are both liquid metal coolants. Lead avoids the stringent cover gas requirement (you in fact need some oxygen in the primary coolant to protect the components by passivation). Sure, lead is heavier, but that’s a simple vessel-and-pool mechanical exercise.


The Simi Valley accident was caused by a pump coolant (oil) reaction with sodium. It’s an example of a seemingly insignificant issue causing a nuclear accident. It is not scaremongering; it is pointing to some devil in the details of a chemically reactive and opaque coolant.

Lead isn’t a difficult coolant. The Russians had operating nuclear submarines using lead-bismuth coolant, far more corrosive than lead and making thousands of times more polonium. Yet they worked fine and were the most powerful submarines of their time.

Fluoride coolants should also be attractive for fast reactors, fluorine being a poor moderator, and fluorides having 4-5x the volumetric heat capacity of sodium, and being transparent over an even longer wavelength area than water, helps with in-service-inspection using conventional technologies such as videocameras on glass fiber optics. ORNL’s MSRE had great success with fluoride coolant (secondary loop) and fluoride fuel (primary loop) before fast reactors were even developed, using 1960s technology.

I agree that a coolant switch for a PRISM would entail 10+ years of delay, and that is one of the biggest downsides. It’s a tough call, going the quick way with a sodium PRISM but risking public non-acceptance delaying the project, or switching coolant risking technical delay but with an easier public relations case.

I’m looking forward to your post on this issue.


In Germany, the Klimaschutz-Index 2012 from Germanwatch, which ranks countries on how well they protect the climate, got a lot of media attention, e.g. http://www.spiegel.de/wissenschaft/mensch/0,1518,801937,00.html

Unfortunately their manipulative calculation method got no attention:
They deliberately counted nuclear plants as having the emissions of a modern coal plant, to discourage people from building them!

The original quote is: “Als besonders risikoreicher Energieträger wird Atomkraft mittels sogenannter Risikoäquivalenzen pro Energieeinheit in die Betrachtung mit einbezogen (sie entsprechen etwa den Emissionen eines modernen Kohlekraftwerks). Dadurch wird verhindert, dass der Neubau von Atomkraftwerken belohnt wird.” (source: http://www.germanwatch.org/klima/ksi-meth.pdf) [Translation: “As a particularly high-risk energy source, nuclear power is included in the assessment by means of so-called risk equivalents per unit of energy (these correspond roughly to the emissions of a modern coal-fired power plant). This prevents new construction of nuclear power plants from being rewarded.”]

@moderation: This is my second try to post this, am I doing something wrong? My previous post should have been comment-145285 in this thread, but I can’t see it.
MODERATOR
Your comments are being caught in the Spam. The last one may have slipped through our check of a large number of Spam comments. Apologies.


As no doubt the most tech-challenged guy in this discussion, I can at least add 2+2. What that adds up to, as far as I can see, is that the nuke power solution appears to be a no-go. It’s pretty simple if you take your lead from the implications of this voice-over slide show from respected climate scientist Kevin Anderson.

What he is saying to me is that, to avoid hitting the 2 deg C rise that will then unleash catastrophic feedbacks (essentially meaning game over as far as avoiding a 6th extinction event), we, the world, will have to begin dropping our carbon output 10% next year and continue a rapid yearly rate of decline, to a level more than 90% below our high by 2030.

Nothing in Barry’s nuclear substitution scenario that I have read contemplates such a time line. KA recommends drastic cutbacks on carbon energy use, particularly among the rich, like almost immediately. It may not seem practical but he seems to see no other choice.

Any thoughts?


David M, your point seems to be more about a crash programme of voluntary energy deprivation vs continued growth in energy consumption. It has little to do with whether nuclear or renewables are the alternative energy source, I’m sure you’ll agree. If a crash programme is required, then I think failure is already guaranteed (how do you propose this would be implemented??). If a more gradual reduction in carbon emissions is sufficient (e.g. cut by 50% on 2000 levels by 2050, 80% by 2080 and 100+% by 2100), then some mix of nuclear, renewables, efficiency and other techno-fixes can get us there.


Although I agree with fast spectrum reactors as a major source of electricity/energy, there is another view. Just consider the greenhousing of the world a positive development and use it to grow more biomass for food and fuel!


An interview with me, from Singapore, on the Durban talks. Full article here, along with the podcast: http://entertainment.xin.msn.com/en/radio/938live/caforeignnews.aspx?cp-documentid=5647241

Keval Singh speaks to Barry Brook, Professor of Climate Science at the University of Adelaide.

He starts by asking about the importance of a new pact after the Kyoto pact.

“It’s clear that globally we have to reduce greenhouse gas emissions significantly. If we look at today’s emissions, they’ve risen over the last few years compared to our hopes and expectations. And so the Kyoto Protocol which was designed to reduce greenhouse gas emissions from developed countries through the years to 2012 has not been successful. So we need something new going forward, and the last few years of climate talks have been trying to come up with some new type of protocol or agreement between governments.”

Has the Kyoto Protocol been successful in achieving its goals to begin with? Is there really a point in coming up with something new if you have nations which won’t ratify the plan?

“A big issue with the original protocol was that some countries were not included. Places like China and India which have historically not contributed to global gas emissions, but today and in the future will certainly be very large contributors, they weren’t included within it. And that caused a lot of disagreements and originally stopped Australia from ratifying the treaty, and stopped the US from ever ratifying it. Australia eventually did when they had a change of government, and Russia did under some duress.”

Can we afford to continue the blame game?

“From the perspective of climate change, we can’t because its continuing greenhouse gas emissions are rising, the earth is getting warmer, the impact are getting worse and it’ll get a lot worse if we don’t do something about this problem. So just blaming each other is not really very helpful. What we need to do is work out what nations can do to help each other to reduce their emissions while also getting the benefits from doing that. Just having an agreement that different countries will do different tasks and some will do nothing, I think it’s not going to work.”

It’s often said that China and the US are the biggest polluters. But what about other countries? What can they do?

“Together China and the US account for almost half of global CO2 emissions. And if you add the countries of the EU, then you’re over 70 percent of emissions. But the other 30 percent is spread over a lot of countries. Places like Australia and Singapore and those with smaller populations have still got a responsibility because per person they’re still producing a lot of emissions. So other countries have to be incorporated within this. But if a solution and an agreement doesn’t occur between China and the US, the EU and other countries like Russia and India, then it’s sort of pointless what other countries are going to do.”

Is there an alternative?

“I think there is and it has to focus on technology. The solution to the problem is so urgent that we can’t wait for things like the human population to reduce. And we’re not going to overall reduce the energy demand because so many countries currently suffer from energy poverty. So what we need to do is focus on technologies that can deliver that energy demand with essentially no carbon emissions. The two obvious technologies to do that are nuclear power and renewable energy. The problem is there are political, technological and economic and social issues around those technologies that make it difficult to displace fossil fuels.”

It has been said that if the planet were a bank, world leaders would have scrambled to save it. What are your thoughts? Do you think we’re being myopic?

“I think we’re being very short-termist. The banking crisis was obviously going to result in economic upheaval within weeks or months. It risked tipping into an economic depression; governments could see that, and they felt there was also a possibility that they could intervene. They could see an immediate problem, with an immediate impact and an immediate solution. The problem with climate change is that it is slow to build up. Most of the impact hasn’t been observed yet. And any change you implement today won’t have strong consequences for many years. So if the climate system were a bank, in that it started to change so rapidly that within an election cycle it would make a big difference, then I think governments would have intervened much more quickly.”


Barry, your argument is not with me, it’s with Kevin Anderson. Did you look at his slide show? He’s saying a 90+% drop by 2030 or we shoot past the 2 deg. C limit and feedbacks take over. He seems to be drawing on mainstream climate science projections.

If you don’t mind I’ll also use this post to see if I can manage a hyperlink. This is for a more limited written version of the slide show.


David M, mainstream climate forecasts do not say this. You can prove this to yourself by running some scenarios in MAGICC/SCENGEN. This tool, developed by my colleague Tom Wigley, is used by the IPCC in their assessment report modelling. http://www.cgd.ucar.edu/cas/wigley/magicc/ Nor does the most recent work suggest a 90% drop by 2030 is required. I suggest you consult Rogelj et al. 2011, Emission pathways consistent with a 2 °C global temperature limit, published last month. Yes we need to peak our emissions before 2020 to have a reasonable chance of staying below 2C, but this does NOT imply 90% by 2030 (more like 50-80% by 2050 — depending on what went on in the previous decades).


Sorry Barry, I didn’t get any guidance from either of your links. Perhaps you can come up with something simpler and more specific. Here’s an excerpt of KA, from my second link, that might move the discussion along.

Anderson is adamant that the familiar targets almost all politicians and many scientists use in public — e.g., “80 percent reduction in the rate of emissions by 2050” — are deeply misleading. As far as the climate is concerned, the rate of emissions in 2050 relative to the rate of emissions today is meaningless. CO2 stays in the atmosphere for over a century; the atmosphere doesn’t care what year it arrives. (Though targets in the distant future are comforting to politicians, for obvious reasons.)

The only thing that matters in limiting temperature rise is cumulative emissions, the total amount we dump into the atmosphere this century. When the total concentration of GHGs in the atmosphere rises, temperature rises. That is the correlation that matters.

If we want to limit temperature rise to 2 degrees C or less, then there’s only so much carbon we can dump in the atmosphere. That is our “carbon budget” for the century, the amount we have to “spend” before we’re in the danger zone. As best we know, the global carbon budget for this century is between 1,320 and 2,200 gigatons. (There are too many uncertainties in the science to be more precise than that.)
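A rough illustration of how fast such a budget gets used up; the ~35 Gt CO2/yr figure is my own assumption (roughly current fossil-fuel emissions), not a number from the article:

```python
# How long the quoted century budget lasts if annual emissions simply stay flat.
# The 35 Gt CO2/yr figure is an assumption, roughly current fossil-fuel emissions.
budget_low, budget_high = 1320.0, 2200.0  # Gt CO2, the budget range quoted above
annual_emissions = 35.0                   # Gt CO2 per year (assumed, held constant)

for budget in (budget_low, budget_high):
    years = budget / annual_emissions
    print(f"A {budget:.0f} Gt budget lasts about {years:.0f} years at constant emissions")
```

At flat emissions that is roughly 40 to 60 years; the point of Anderson’s argument is that emissions are still growing, which eats the budget much faster.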


I need to modify what I said earlier. The 90+% drop by 2030 was for the wealthier nations, the main polluters. The poorer countries would turn things around more slowly, as they are starting from a lower footprint base.

http://www.grist.org/climate-policy/2011-12-08-the-brutal-logic-of-climate-change-mitigation

What would it mean for the U.S. and other developed countries to peak their emissions in 2015 and decline them by something on the order of 10 percent a year thereafter?

It’s safe to say that no carbon tax is going to do that. It’s tough to imagine any “market mechanism” that could ratchet things so quickly, at least on its own. We won’t get there through innovation or new technology, even if we spend a trillion a year for the next few years. We won’t get there by tweaking our current system. The only conceivable way to produce that level of reductions is a full-scale, all-hands-on-deck mobilization, what William James called “the moral equivalent of war.”

The vast bulk of the reductions available in the near-term are on the demand side. Of course this means driving efficiency as fast as possible while taking measures (like raising prices and setting standards) to avoid the rebound effect. But it also means (gasp!) conservation. Actually, “conservation” is too polite a word for it. It means shared sacrifice. Climate campaigners have sworn until they’re blue in the face that reducing emissions is compatible with robust economic growth. And it’s true! But reducing emissions enough? Maybe not, at least not for the next little while.

This is the stark conclusion drawn by Anderson and Bows: “The logic of such studies suggests (extremely) dangerous climate change can only be avoided if economic growth is exchanged, at least temporarily, for a period of planned austerity within Annex 1 nations and a rapid transition away from fossil-fuelled development within non-Annex 1 nations.”
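A sketch of what the peak-in-2015, roughly 10-percent-a-year decline described above looks like in numbers; the starting level and exact decline rate are assumptions chosen only to illustrate the shape of the curve:

```python
# Illustrative pathway only: emissions peak in 2015, then fall ~10% per year.
# The 35 Gt/yr starting level is an assumption (roughly global fossil CO2);
# for an individual country you would scale it accordingly.
peak_emissions = 35.0   # Gt CO2/yr at the 2015 peak (assumed)
decline = 0.10          # fractional decline per year after the peak

cumulative = 0.0
emissions = peak_emissions
for year in range(2015, 2051):
    cumulative += emissions
    emissions *= (1.0 - decline)

by_2030 = peak_emissions * (1.0 - decline) ** 15
print(f"2030 emissions: ~{by_2030:.1f} Gt/yr "
      f"({(1.0 - by_2030 / peak_emissions) * 100:.0f}% below the peak)")
print(f"Cumulative 2015-2050 emissions: ~{cumulative:.0f} Gt CO2")
```

A constant 10%/yr decline compounds to roughly an 80% cut by 2030, which is why Anderson’s near-term rates look so much harsher than the familiar “80% by 2050” framing.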


Jason, Roger, thanks for your replies.

Could you (or Tom Blees/Barry Brook if they’re around) comment further on the continued commercial development of IFRs in other countries?

Presuming the US has no interest in the near future, what about, say, a joint project between Aus/GB/France?


Saw a post on a forum where someone was all but jumping up & down screaming, with a link to a website claiming that Fukushima #4 reactor building was “collapsing” in aftershocks, and the entire spent fuel pool was about to come tumbling down dumping all those fuel rods on the ground.
http://www.infiniteunknown.net/2011/12/12/confirmed-fukushima-reactor-no-4-is-falling-apart-wall-was-lost-on-the-south-side-video-photos/

It was kind of scary reading (because of the tone taken, not the content) – “The wall of the south side is falling apart at reactor 4.
Reactor 4 is in the most serious situation. It is assumed that if another aftershock hits it to drop the spent fuel pool hung in the building, the entire area in eastern Japan would be too contaminated to be inhabitable.”

I came to the conclusion, by looking at the photos, that the change in the building profile is due to removal of wreckage (i.e. part of the cleanup operation).

I’ve lost track; where can I find up-to-date info on how the cleanup is progressing?


The video on the AP1000 nuclear power plant under blackout conditions was very instructive. Thank you Gregory, it answered some of my questions. Still, for all the passive protections, if a cooling pipe or water tank is breached or feedback systems jam, it still seems like you are up a creek.

To change focus: if somebody thinks we are on a moderate timeline as far as hitting the critical tipping points, they might want to check this out.

http://www.independent.co.uk/environment/climate-change/shock-as-retreat-of-arctic-sea-ice-releases-deadly-greenhouse-gas-6276134.html


Mark: if you read the whole article, there’s the “he said/she said” section featuring Arnie Gundersen:

To many, Gale’s figures, however well documented, may not tell the whole story. One month after the meltdown, the Fukushima disaster was upgraded from a 5 to a 7 on the International Nuclear and Radiological Event Scale, the only accident to have been given that highest rating other than Chernobyl (the accident at Three Mile Island, in 1979, by contrast, merited a 5). One day before Gale spoke with the workers, a study of the Japanese disaster contended that, in the first few days of the meltdown, Fukushima released more radioactive noble gas than Chernobyl by a factor of 2.5. Earlier, Arnold Gundersen, a former nuclear-industry executive who served as an expert at Three Mile Island, had asserted that Fukushima has the potential to release 20 times as much radiation as Chernobyl. In the weeks immediately before Gale’s recent trip to Iwaki, one worker checked into a hospital after only 46 days on the job and was dead the following morning. The Japanese government lifted its “evacuation advisory” for those living more than 20 miles from the plant, but at the same time radioactive plutonium 238 was discovered in the soil up to 30 miles from the plant. And The New York Times reported, in mid-October, that citizens’ groups had found more than 20 “hot spots” in and around Tokyo contaminated with potentially harmful levels of cesium.

Me: this is supposed to be an article featuring Gale, whose point is that the real danger here is fear. And then the author inserts this paragraph, which reinforces the fear. The reference to plutonium 238 has no context, which means it will carry the context most people have with plutonium: most dangerous substance ever, etc.

No discussion of the source of the plutonium, the amount, the risk, nothing. Sucks.


David, the key to addressing things like breached cooling pipes and jammed feedback (I’m assuming you were talking about the return gutters) is redundancy. Have two, three, even four lines, any one of which can do the job. That way, if one were to fail, you have additional lines.

Even so, if the AP1000 were still to find itself up a creek, the defense is ancillary equipment. US plants are required to have a B5B pump (B.5.B being the federal requirement mandating it). Those are portable, gas-fired pumps that can be quickly deployed during a station blackout. The water needed to cool an AP1000 amounts to the flow of a few garden hoses. A B5B pump, or even a bucket brigade, could easily produce that flow.

Lastly, the station blackout at Fukushima disabled ALL core cooling (there were no passive systems at Fukushima). This allowed the fuel to heat up to the point where the zirconium-water reaction produced hydrogen, leading to the explosions you saw on TV. Those explosions destroyed the pipes, valves, pumps, etc., such that when they finally got power restored, there was nothing to hook up to. This is what sent them up a creek, as you put it. The single best way to deal with the zirc-water reaction is not to have it in the first place. That’s the main reason for the passive systems of this reactor. An AP1000 wouldn’t have melted down, even after that 45′ tsunami, because the station blackout the tsunami caused would not have disabled the core cooling, and so that hydrogen would not have been produced.
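For reference, the zirconium-water reaction mentioned above is the steam oxidation of the fuel cladding, which liberates hydrogen and becomes rapid once cladding temperatures climb to roughly 1200 °C:

$$\mathrm{Zr} + 2\,\mathrm{H_2O} \rightarrow \mathrm{ZrO_2} + 2\,\mathrm{H_2} \qquad (\text{strongly exothermic})$$

The reaction also releases heat of its own, which is why an uncovered core can run away once it starts.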


Jack, I’m assuming a major earthquake, like Fukushima, enough to crack a water cooling pipe or cooling tank or disable any of the myriad feedback mechanisms (much more than simply the gutters; think of the ocean-floor interface on the burst Gulf oil well pipe, which had backups).

You can have all the outside water you want, but what does it feed into when the cooling pipe is broken? The alternative to prevent a meltdown appears to be flooding it from the outside for years. That doesn’t strike me as being very practical. I hope they would have some protocol for fixing the cooling water pipe etc. under these crisis conditions, but I haven’t seen it discussed.


Let’s recall here that Daiichi unit 6 was okay because of an uprate that safeguarded the diesel generator.

This diesel generator was able to provide power to cool unit 5.

AP1000s would have to be designed specially for earthquake areas, but it could be done. Dave, you make it sound a bit too easy to have a meltdown. [Is it really meltdown vs. flooding from outside?]


Gregory, a term that is often used with modern nuclear power designs is “fail safe”. So given two truisms that I think you will agree with:

1. Human beings are fallible.

2. If things can go wrong they will.

What happens in a severe earthquake if the cooling pipe cracks and starts spilling the coolant water and there is a blackout? Where is the fail-safe? Simply saying the pipe won’t crack because it will be so well designed strikes me as a faith statement, not a fail-safe scenario.


There’s a document that I’ve read online before that I can’t find again and I’m hoping somebody here might have it bookmarked or something.

It was produced by Argonne National Laboratory, I think, and it’s all about liquid sodium as a coolant, the physical and chemical properties of sodium, and how it is a well characterised and useful fast reactor coolant.

Anybody have any ideas?


Luke Weston — Search for M. J. Lineberry and T. R. Allen, Argonne National Laboratory, “The Sodium-Cooled Fast Reactor (SFR)”


Thanks for the follow-up link, David B. I’ve learned more now. Just to dumb it down to my amateur level:

1. If there is a blackout, the system is set up to maintain itself for a week.

2. If there is a breach in a cooling pipe, the system is set up to maintain itself for 3 days.

3. If there is the beginning of a core meltdown, immediate human intervention is required.

In all scenarios, if outside water is not eventually available, a core meltdown is assured. Air cooling is built in, but only as a temporary measure. Ultimately, outside water must be available after the temporary defenses are exhausted.

It may seem obvious but I think that point nevertheless needs to be hammered home. There is a serious vulnerability here. Water supplies are not always assured.


@ David M,

1. If there is a station blackout, the system can maintain itself for 3 days, after which human intervention is required to refill the PCS (a rough order-of-magnitude check of this figure is sketched after point 3 below). However, some earlier AP1000 documents state that containment should stay intact on air-cooling alone [1][2]. I’m not sure what consequences this would have, though – they wouldn’t add a PCS tank for no reason.

[1] http://nuclearinfo.net/twiki/pub/Nuclearpower/WebHomeCostOfNuclearPower/AP1000Reactor.pdf

[2] http://www.westinghousenuclear.com/docs/AP1000_brochure.pdf

2.

3. Flooding the reactor cavity is supposed to stop the core from melting through the pressure vessel; this is done by gravity-draining the refueling water storage tank. I presume a human needs to press a button for this to happen, although maybe computers could do it.
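On point 1, a back-of-envelope check of the three-day figure. Every number below is my own assumption for illustration (core thermal power, decay-heat fraction, tank volume), not an AP1000 design value:

```python
# Rough check: how long a water tank lasts if decay heat is removed purely by
# evaporating the tank's contents. All inputs are assumed, not design values.
thermal_power = 3400e6   # W, assumed core thermal power
decay_fraction = 0.005   # decay heat ~0.5% of full power, roughly a day after shutdown
tank_volume = 3000.0     # m^3 of water assumed in the PCS tank
latent_heat = 2.26e6     # J/kg absorbed per kg of water evaporated

decay_heat = thermal_power * decay_fraction      # W to be removed
evaporation_rate = decay_heat / latent_heat      # kg/s of water boiled off
tank_mass = tank_volume * 1000.0                 # kg, at 1000 kg/m^3
days = tank_mass / evaporation_rate / 86400.0
print(f"Tank lasts roughly {days:.1f} days at that constant decay heat")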


Thanks Scott, but my primitive computer won’t download the PDFs. The problem of a disaster-driven outside water cutoff remains to be addressed.

Ms Perps’ link to Barry’s article and the comments elicited this comment:

In 2008, the Intergovernmental Panel on Climate Change concluded that humanity has eight years left to prevent the worst effects of global warming. There is no possibility of building a significant number of new plants in that time;

That’s along the lines of what I said to Barry. Severe austerity with regard to fossil fuel at this point appears to be the main way we avoid passing the line into uncontrollable feedback from unleashed CO2 and methane release. Even a thousand new nuclear plants, say between now and 2020, would be a drop in the bucket and would detract from resources for efficiency measures and for radically changing supply-chain infrastructure and travel – think virtually no plane travel for starters and, although people don’t like to hear it, mandated lower population targets. Sorry, I don’t make the rules. They are generated by the real-world challenge.


@David M,

A thousand new nuclear power plants by 2020 would expand world nuclear generation capacity to the point of generating ~50% of the world’s electricity and would replace most coal-fired generation. It would have a much more important effect on emissions than banning all air travel, which in any case is not going to happen anytime soon.
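A quick sanity check of that figure; the inputs are assumptions (roughly 1 GWe per plant, ~90% capacity factor, world electricity demand of ~22,000 TWh/yr around 2011, existing nuclear output of ~2,600 TWh/yr):

```python
# Sanity check of the ~50% claim. All inputs below are assumptions, not data.
plants = 1000
capacity_gwe = 1.0          # assumed average plant size, GWe
capacity_factor = 0.90      # assumed capacity factor
world_demand_twh = 22000.0  # ~world electricity generation, circa 2011
existing_nuclear_twh = 2600.0

new_output_twh = plants * capacity_gwe * capacity_factor * 8760.0 / 1000.0
share = (new_output_twh + existing_nuclear_twh) / world_demand_twh
print(f"New output ~{new_output_twh:.0f} TWh/yr; total nuclear share ~{share:.0%}")
```

With those assumptions the new fleet adds roughly 7,900 TWh/yr, which together with existing nuclear lands close to half of world electricity.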


I see Qantas want some carbon tax proceeds to fund a $300m bio jet fuel plant
http://www.theaustralian.com.au/business/aviation/fund-biofuel-from-carbon-tax-says-qantas/story-e6frg95x-1226099278769
Overseas airlines think they can get a whopping 2% of their liquid fuels from biomass. However, if they use the Finnish NExBTL process, it seems to be a mystery where they get the hydrogen booster, which I gather is trucked to the refinery in cylinders.

As passengers fly over vast biofuel crops they might also ponder:
– is it making food more expensive?
– do diesel tractors, irrigation and nitrogen fertilisers help?
– that injecting bio-CO2 at 10 km altitude is not carbon neutral.
I suspect air travel passenger kilometres will be seriously lower in 2020 than now. Of course, if no coal were used in electricity generation, we might be able to use small amounts of it in making jet fuel as well as steel and cement.

When the former middle class cannot fly, eat steak or drive private cars as before, they will take energy issues seriously.


Quokka, factor in all our fossil fuel energy needs, which is the more meaningful goal, and 1000 new nuclear plants would fall short of covering even the majority of the needs of the US, assuming all-electric transportation and all-electric home, business and government energy use. And natural gas is not a bridge. It’s just as bad as oil.

As for eliminating most plane flights, that was just an example of what would be needed. Walking, biking and using limited public transportation would have to be in the mix. High taxes on fossil fuel use would be another. More local sourcing of goods and services, obviously. And remember the world population is increasing at over 200,000 a day. Factor that in.

And you and I know that 1000 new nuclear power plants, even if they were approved today, would in most cases probably not even come online by 2020, and even if they did, almost all would have a negative EROI within that timeline.

That said, I’m not a nuclear hater. As a substitute for fossil fuel over the next few decades it would normally seem to be the principal logical choice. I would like to see both eliminated eventually, but fossil fuel first. I think the Germans are insane to go for a nuclear phase-out presently. That commits them to substantially greater fossil fuel use. Even their own estimates seem to admit it graphically. http://cdn1.spiegel.de/images/image-194910-galleryV9-fvmp.jpg

It’s just that for now, drastic austerity appears to be the priority if we are going to avoid going over a critical tipping point.


Over 300 comments on the Drum article, and I can see plenty from BNCers. All the usual furphies are being wheeled out and demolished, but more comment from pro-nuclear folk is needed.


@ JN derides Qantas for decorating its fuel usage with shallow biomass claims. Then wonders if in future coal will still be needed “in small amounts in making jet fuel as well as steel and cement”.

Jet fuel — The production of transport fuels using nuclear pyrolysis of coal has been discussed on BNC as a process to ease coal-extracting communities into the nuclear age. Eventually the coal has to be replaced with something else. As JN implies, we can’t ask the biosphere to supply the biomass.

Steel — Carbon monoxide is certainly one way to add electrons to Fe3+, but it is a quite inefficient use of coal. However, that could also be done by electrolysis in, say, a chloride melt. Considering that electrolytic aluminium costs about 2 $/kg, that suggests a ballpark for (the 2x heavier) iron of about 1 $/kg (a rough sketch of this scaling is given below).

Cement — Carbon isn’t essential to make cement clinker, although the reduction of some stray ferric iron to the ferrous form creates a flux that reduces the sintering temperature by 50 deg or so. It isn’t an essential requirement, though: white cement is sintered with minimal iron, ~0.3%, at the higher temperature.

The limestone used in cement manufacturing still gives off CO2. Perhaps when redesigning the kiln, we could pipe off the CO2 to make the jet fuel!
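On the steel point above, a rough sketch of the scaling argument, assuming electrolysis cost is roughly proportional to the charge that must be passed (both Al3+ and Fe3+ take three electrons per atom, but iron’s higher molar mass means fewer moles, and hence less charge, per kilogram):

```python
# Charge per kg of metal from M^3+ ions via Faraday's law, used only to show
# why iron should come in at roughly half aluminium's per-kg electrolysis cost.
F = 96485.0        # coulombs per mole of electrons
electrons = 3      # both Al3+ and Fe3+ require three electrons per atom

charge = {}
for metal, molar_mass in (("Al", 26.98), ("Fe", 55.85)):
    charge[metal] = electrons * F * 1000.0 / molar_mass   # C per kg of metal
    print(f"{metal}: {charge[metal]:.2e} C/kg")

ratio = charge["Fe"] / charge["Al"]
print(f"Fe needs {ratio:.2f}x the charge of Al per kg,")
print("so on this crude scaling iron lands near half of aluminium's ~2 $/kg.")
```

This ignores differences in cell voltage, current efficiency and ore preparation, so treat it purely as a ballpark argument.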


Hydrogen is another alternative reductant for steel making, as described in this BNC post. Steel making is responsible for about 7% of global CO2 emissions, about 80% of which is from coke oxidation during smelting. Knocking out that chunk of emissions by using nuclear hydrogen is definitely worth chasing.
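For reference, the overall reactions behind those numbers: the conventional coke (carbon monoxide) route releases CO2, whereas hydrogen reduction yields only water:

$$\mathrm{Fe_2O_3} + 3\,\mathrm{CO} \rightarrow 2\,\mathrm{Fe} + 3\,\mathrm{CO_2} \qquad \text{(coke route)}$$

$$\mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \rightarrow 2\,\mathrm{Fe} + 3\,\mathrm{H_2O} \qquad \text{(hydrogen route)}$$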


David M. wrote:

I think the Germans are insane to go for a nuclear phase-out presently. That commits them to substantially greater fossil fuel use. Even their own estimates seem to admit it graphically.

http://cdn1.spiegel.de/images/image-194910-galleryV9-fvmp.jpg

Along the same lines, have you looked at the GE visualization tool for the German energy mix from 1950 to 2030? Data comes from a variety of sources: AG Energiebilanzen; Kalert; BPB; BGR; Eurostat; Kohlenstatistik; IPPNW; Federal Ministry of Economics and Technology Leitstudie 2010; Prognos AG; and Plan B 2050.

http://visualization.geblogs.com/visualization/germanenergy/

From 1985 to today, it suggests coal use has declined from 42% down to 23% of total energy consumption. To 2030, it projects a further decline to 8–12% (where it falls in that range depending on the contribution from renewables and energy efficiency). Natural gas remains relatively stable at 22–24% to 2030, with a high case of 38% in the low-coal, low-consumption alternative, for a lower carbon footprint. The presentation includes all data on imports and energy dependencies on the rest of Europe, the US, and Africa (with nuclear fully phasing out around 2020–2025).

A paper with greater detail is presented here: Energieszenarien 2011 (from the Federal Ministry for Economics and Technology); click here for a Google translation.


This looks like it might be blowing up into something a little bit ridiculous.

http://www.news.com.au/world/russia-seizes-radioactive-material-bound-for-iran/story-e6frfkyi-1226224476224

It doesn’t really make sense, though.

Sodium-22 is an unstable, artificial radionuclide with a half-life of 2.6 years. It has one neutron less than natural, stable Na-23, and accordingly, it decays by positive beta decay (positron emission).

Like most other proton-rich radionuclides, it is not produced in fission reactors; it is instead produced with a particle accelerator, by irradiating Mg-24 with a beam of deuterons in a cyclotron.
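For reference, the production and decay reactions are:

$$^{24}\mathrm{Mg}(d,\alpha)\,^{22}\mathrm{Na}$$

$$^{22}\mathrm{Na} \rightarrow\ ^{22}\mathrm{Ne}^{*} + e^{+} + \nu_e \qquad (t_{1/2} \approx 2.6\ \mathrm{y})$$

The excited Ne-22 daughter promptly emits a 1.27 MeV gamma; a small fraction of decays proceed by electron capture instead of positron emission.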

It has no relevance or usefulness for nuclear weapons or peaceful nuclear energy generation.

It has an unusually long half-life for a positron emitter, making it very useful as a radioactive source in any kind of research where a source of positrons is required, say for physics experiments. It’s the most common radionuclide used for that purpose, since it is not extremely short-lived like some of the other positron emitters, such as the C-11 or F-18 commonly used for PET. Na-22 sources can also be used in medicine for calibration, setup and testing of PET imagers.

Why would it be exported from Russia without proper customs authorisation? I don’t know.


I haven’t read The Drum since yesterday but a couple of comments linger in memory. One was that nearly a million people have already died as a result of Chernobyl and another half million are doomed. For Japan it’s even worse. Another comment suggested in effect Barry hadn’t done his homework and should put in some effort getting up to speed.

I wonder how widespread such views are. Even if they are in the minority, there is a leverage effect, since political candidates need to talk tough to those holding them, and they may then hold the balance of power in an alliance.


I would like someone from the gubmint to explain why Australians are being discouraged from burning Australian coal and gas while foreigners are being encouraged.
http://www.miningaustralia.com.au/news/china-first-now-a-major-project
The name of the project says it all. They say they will have a CCS-based power station to run the show. Since governments and industry are committed to carbon reduction, we can be sure they wouldn’t omit the CCS part just because it was a hassle.


DBB:
Thanks for a link to an emotional, biased and clearly campaigning site.

Objectively, just what is it that was said there? That a number of fly ash sources in the USA are suspected of seepage of water containing pollutants. There are ways to avoid damaging seepage, whether from rubbish dumps, ash dams, tailings dams, spoil dumps, slag heaps and all of the myriad other scars that mankind is responsible for on this earth. Fly ash is only one of the possible source products, and perhaps one of the easiest to monitor and to manage. If it is arsenic that worries you, then perhaps in-situ leaching and spoil dumps associated with gold mining should rate higher as a concern. Besides which, since this discussion appears to be targeted at arsenic, in what form is it present, arsenite or arsenate, because this greatly affects the toxicology? Is it bio-available? Will the proposed remediation make things worse or better, e.g. could previously chemically immobile arsenic become mobilised due to well-meaning yet ill-advised actions such as exposure to air?

To say something akin to “Look here! Arsenic! Huge problem!” answers none of these questions and thus cannot lead to an answer, especially a cost-effective, environmentally and toxicologically sufficient response to whatever real risks exist.

Essentially, returning seepage water to the dam via seepage interception and small pumps will often do the job at little cost. Disposal of surplus ash in old mine voids results in the materials listed going right back where they came from, provided that the studies have been done and the containment is appropriately designed, e.g. with a clay lining.

As I said on another thread on this site yesterday, it is essential that perspective be maintained. Over-the-top ranting, as at the linked site, does nothing to improve outcomes for mankind or for the planet. It relies on the squeaky wheel effect – make enough noise and even small problems can be made to appear huge.

This squeaky wheel effect is the primary tool of anti-science ravers, especially the antinuclear campaigners. The downside is that real problems are not attended to because effort has been diverted to lower ranked or insignificant issues.

A more balanced report into fly ash storage would avoid use of the term “toxic flyash” unless the flyash involved is actually and correctly classified as a dangerous good, which is simply not the case for most (all?) fly ashes. I know that some will disagree with me on this point, but consider the following.

What is doing greater harm to humans today, fly ash or cane sugar? Which one more deserves the adjective “toxic” by this yardstick?

Manufacture of which product causes most damage to the environment today, fly ash or cane sugar? One, if poorly handled and stored, can be responsible for local environmental damage and perhaps threats to some forms of life. Sugar cane farming, on the other hand, through runoff containing phosphate and many other pollutants, is clearly responsible for significant environmental degradation of the Australian Great Barrier Reef in a World Heritage Area, a supposed wilderness reserve unequalled anywhere else globally.

Where is De Smog Blog’s campaign against sugar cane?

Perspective and balance are absent from De Smog Blog’s site.

One of BNC’s values is its perspective.

For a link to the MSDS for a typical Aussie fly ash, see https://bravenewclimate.com/2011/12/17/fukushima-9-months-o/#comment-146210.


John Bennetts, the problem with fly ash collection is that it just keeps growing and growing, and eventually the original site is not able to contain all the toothpaste-like toxic stuff, which has an infinite hazardous-waste lifetime. Over the long haul, leakage is inevitable. Also, if you try to store the captured CO2 on the surface, you wind up with more material being stored at the plant site than the original trainloads of coal. The most compact way to store carbon is in the form of the coal itself. The moral of the story is that we should just leave the coal in the ground, because that’s the most efficient form of storage.


Arsenic.

To put into perspective the issue of arsenic in fly ash, check out http://www.inchem.org/documents/pims/chemical/pimg042.htm.

This 11,000-word discussion of the properties of arsenic, its toxicological effects, and much more does not even mention fly ash as a source, although it does rank many other potential sources across many industries.

If respected authorities such as INCHEM, the International Programme on Chemical Safety, are so unconcerned, then where is the support for the noise generated by De Smog Blog on this topic?

This is an excellent example of how overblown claims and noisy campaigns can divert attention, and thus action, away from the things that really matter.

It is similar to the result of well-meaning argument that lacks perspective: worrying about statistically insignificant risks from very low levels of ionising radiation whilst the earth is warming.

