Willacy’s Fukushima

Guest Post by Geoff Russell. Geoff is a computer programmer, vegan, environmentalist, and more generally, a ‘by-the-numbers’ polymath. For a list of all of his posts on BNC, click here. He also has collections here and here.

The following article was sent to ABC’s “Drum” website about a month ago. They rejected it. I have asked why, but they refuse to give a reason. I would have thought that when a journalist as publicly associated with the ABC as Mark Willacy makes serious factual errors in a book, the ABC would publish a reasoned critique. Apparently not.

ABC journalist Mark Willacy recently launched a book: “Fukushima: Japan’s tsunami and the inside story of the nuclear meltdowns”. The ABC is giving the book plenty of exposure and gave Willacy time off to write it.

Back in March this year I wrote an article on poll results exposing deep nuclear ignorance in Australia. Only 10 percent of Australians understood clearly that a nuclear explosion was impossible in a nuclear reactor. The other 90 percent occupied various positions along a spectrum between certainty of the facts and being equally certain that the impossible was possible.

Willacy doesn’t just believe that the impossible is possible, but that it’s already happened. His book reveals similar knowledge deficits not only in the Japanese public, but in many of those who oversaw Japan’s Fukushima meltdown response, including nuclear industry workers.

Willacy states clearly (p.128) that the Chernobyl reactor underwent a nuclear explosion in 1986. That is neither true nor possible. Power reactor fuel is simply the wrong stuff: it’s as different from bomb material as potting soil is from gunpowder. And even if you loaded a reactor with bomb-grade enriched uranium, there’s no detonation mechanism; designing and building one is among the hardest jobs in making a nuclear bomb.

Confusion about categories of explosions may seem a small thing, but we’ll see that it extends to a more general ignorance about the nature and scale of radiation risks.

Explosion types, no small matter

Here are a couple of pictures illustrating the difference between what Willacy claimed happened at Chernobyl and what actually happened … a steam explosion. The image on the left is of a nuclear explosion (the World War II Hiroshima bombing), which flattened about 700 hectares of buildings and killed about 60,000 people with its blast, heat and pressure waves. A similar number again died in the following weeks from various injuries, including radiation received from the blast. The image on the right is the aftermath of the Chernobyl explosion … a steam explosion which blew the top off the reactor, killed two workers and mangled a building. But, as you can see, it didn’t even have enough power to knock over a tower a few metres from the blast.

Hiroshima: nuclear explosion

Chernobyl: steam explosion

The above two images are at very different scales, but the difference is clear…

A steam explosion, together with a larger radiation release, could certainly have happened at Fukushima if workers at the plant hadn’t succeeded in venting steam (including radioactive material) from the reactors. That was well worth avoiding, but it could never have been a nuclear explosion. Nor could it possibly have killed any workers in the anti-seismic command bunker hundreds of metres away. The fears of death described by Willacy were clearly very real to the workers, but they were not at risk of dying except when out of the bunker and close to the reactors. As at Chernobyl, and at the recent fertiliser explosion at West near Waco in the US, it’s usually the firefighters taking the big risks. The West explosion left a 93-foot crater and killed 14 people, mostly the firies … a much, much bigger bang than Chernobyl. The 1947 Texas City fertiliser explosion was a vastly bigger bang again: it levelled 1,000 buildings, killed 581 people and even knocked a couple of planes out of the sky.

Continue reading

CO2 is a trace gas, but what does that mean?

Carbon dioxide, methane, nitrous oxide and most other long-lived greenhouse gases (i.e., barring short-lived water vapour) are considered ‘trace gases’ because their concentration in the atmosphere is so low. For instance, at a current level of 389 parts per million, CO2 represents just 0.0389% of the air, by volume. Tiny, isn’t it? How could such a small amount of gas possibly be important?

This issue is often raised by media commentators like Alan Jones, Howard Sattler, Gary Hardgrave and others, when arguing that fossil fuel emissions are irrelevant for climate change. For instance, check out the Media Watch ABC TV story (11 minute video and transcript) called “Balancing a hot debate”.

I’ve seen lots of analogies drawn in an attempt to explain the importance of trace greenhouse gases. A common one is to point out that a tiny amount of cyanide, if ingested, will kill you. Sometimes a little of a substance can have a big impact. But actually, there’s a better way to get people to understand, and that’s to follow one of the guiding principles of this blog: “Show me the numbers!”.

In response to a recent post by John Cook on George Pell, religion and climate change, commenter Glenn Tamblyn pointed out an interesting fact: Every cubic metre of air contains roughly 10,000,000,000,000,000,000,000 molecules of CO2. In scientific notation, this is 10²² — a rather large number.
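If you want to check that figure yourself, here is a minimal back-of-envelope sketch in R using the ideal gas law. The temperature and pressure I’ve plugged in (about 25 °C and one atmosphere) are my own assumptions, not part of Glenn’s comment:

# Rough check of the number of CO2 molecules in a cubic metre of air.
# Temperature and pressure are illustrative assumptions (~25 C, 1 atm).
k.B   <- 1.380649e-23         # Boltzmann constant, J/K
temp  <- 298.15               # temperature, K
pres  <- 101325               # pressure, Pa
n.air <- pres / (k.B * temp)  # air molecules per cubic metre, ~2.5e25
n.co2 <- n.air * 389e-6       # CO2 at 389 parts per million by volume
n.co2                         # about 1e22 molecules per cubic metre

The exact count shifts a little with the temperature and pressure you assume, but 10²² is the right order of magnitude.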

Continue reading

Clearing up the climate debate

The Conversation is a recently established website set up to provide an independent source of information, analysis and commentary from the Australian university and research sector. Over the last few weeks, a group of climate scientists and academics from other relevant disciplines have been running a series at The Conversation on ‘climate change scepticism’. I’ve been involved with a group, led by Steve Lewandowsky from UWA and Megan Clement from The Conversation, that initiated and organised the concept for this series, and the result has been some terrific articles published by folks like Karl Braganza (BoM), James Risbey (CSIRO), Ian Enting (Univ Melb) and many others. You can browse the full listing of 13 articles here.

I was a co-signatory of the lead article, Climate change is real: an open letter from the scientific community, and also the concluding piece. I reproduce the latter, below (for the original posting at The Conversation, click here).

The false, the confused and the mendacious: how the media gets it wrong on climate change

The Conversation wraps up Clearing up the Climate Debate with a statement from our authors: the debate is over. Let’s get on with it.

Over the past two weeks The Conversation has highlighted the consensus of experts that climate change caused by humans is both real and poses a serious risk for the future.

We have also revealed the deep flaws in the conduct of so-called climate “sceptics” who largely operate outside the scientific context.

But to what extent is the “science settled”? Is there any possibility that the experts are wrong and the deniers are right?

Certainty in science

If you ask a scientist whether something is “settled” beyond any doubt, they will almost always reply “no”.

Nothing is 100% certain in science.

So how certain is climate science? Is there a 50% chance that the experts are wrong and that the climate within our lifetimes will be just fine? Or is there a 10% chance that the experts are wrong? Or 1%, or only 0.0001%?

The answer to these questions is vital because if the experts are right, then we must act to avert a major risk.

Dropping your phone

Suppose that you lose your grip on your phone. Experience tells us that the phone will fall to the ground.

You drop a phone, it falls down.

Continue reading

Climate Change – it’s complicated, but it’s real

I was recently invited to provide a response to an opinion article on climate change that was offered to “The Punch” website. The lead article can be read here: It’s just too hard to understand climate change. My response, reproduced below (original here), should be read with this context in mind.

It seems that many of the commenters on The Punch website thought I was being patronising or pontificating. Maybe I was, but how else to answer such an “it’s all too hard” complaint? As one of the other commenters noted: “If wishes were horses, beggars would ride”, i.e. just wishing for simple answers and consistent outcomes won’t make them so. Anyway, see what you think…

————————

Dylan Malloch laments that understanding climate change is difficult: the forecasts sometimes appear contradictory, or to have it both ways, and it all seems rather confusing. It’s easy to sympathise with him. Unfortunately, this is the nature of science.

Let’s consider another example. Newton’s laws of physics work just fine for the everyday world, but if we tried to use them in the timing system of our global positioning satellites, the resulting drift error would be about 10 kilometres every day.

So, the engineers at GPS mission control need to use Einstein’s relativistic theories to make sure your iPhone tells you precisely where you are, whenever you want to know. Similarly, neither Newton’s nor Einstein’s equations allow scientists to properly predict the subatomic interactions within the electronics of satellites or iPhones. For that, you need to turn to the weird world of quantum mechanics.
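As a rough sanity check on that 10-kilometre figure: the commonly quoted net relativistic offset of GPS satellite clocks is about 38 microseconds per day (taken here as an assumption, not a figure from Dylan’s article), and a timing error turns into a ranging error at the speed of light:

# Back-of-envelope check of the ~10 km/day GPS drift claim.
# The 38 microsecond daily clock offset is a commonly quoted figure, assumed here.
clock.offset <- 38e-6           # seconds of timing error accumulated per day
c.light      <- 299792458       # speed of light, m/s
clock.offset * c.light / 1000   # ranging error in km per day: roughly 11 km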

Each of these model systems – Newtonian, Einsteinian and quantum physics – produces some contradictory predictions, and gaps in understanding remain. The theories have not yet been unified, for instance, much to the lament of Einstein and his successors.

Yet the vast majority of us – the average Joe and Josephine Public – are not confused or worried about GPS and iPhones, for the simple reason that we don’t try too hard to understand how they work. After all, it’s plain enough to our eyes, immediately and incontrovertibly, that they do! So we just accept it, as we do for most forms of technology.

Climate science is now treated rather differently, however. This is because although the stochastic and chaotic systems involved are, in their own way, just as complex as relativity and quantum theory, many people just don’t want to take the underpinning science and evidence for granted.

Continue reading

No (statistical) warming since 1995? Wrong

Yes, I’m still on vacation. But I couldn’t resist a quick response to this comment (and the subsequent debate):

BBC: Do you agree that from 1995 to the present there has been no statistically-significant global warming?

Phil Jones: Yes, but only just.

Here is the global temperature data from 1995 to 2010, for NASA GISS and Hadley CRU. The plot comes from the Wood for Trees website. A linear trend is fitted to each series.

Both trends are clearly upwards.

Phil Jones was referring to the CRU data, so let’s start with that. If you fit a linear least-squares regression (or a generalised linear model with a Gaussian distribution and identity link function, using maximum likelihood), you get the following results (from Program R):

glm(formula = as.formula(mod.vec[2]), family =
                       gaussian(link = "identity"),
    data = dat.2009)

Deviance Residuals:
      Min         1Q     Median         3Q        Max
-0.175952  -0.040652   0.001190   0.051519   0.192276  

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -21.412933  11.079377  -1.933   0.0754 .
Year          0.010886   0.005534   1.967   0.0709 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for gaussian family taken to be 0.008575483)

    Null deviance: 0.14466  on 14  degrees of freedom
Residual deviance: 0.11148  on 13  degrees of freedom
AIC: -24.961

Two particularly relevant things to note here. First, the Year estimate is 0.010886. This means that the regression slope is +0.011 degrees C per year (or 0.11 C/decade or 1.1 C/century). The second is that the “Pr” or p-value is 0.0709, which, according to the codes, is “not significant” at Fisher’s alpha = 0.05.

What does this mean? Well, in essence it says that if there was NO trend in the data (and it met the other assumptions of this test), you would expect to observe a slope at least that large in 7.1% of replicated samples. That is, if you could replay the temperature series on Earth (or on replicate Earths), say 1,000 times, you would, by chance, see that trend or larger in 71 of them. According to classical ‘frequentist’ statistical convention (which is rather silly, IMHO), that’s not significant. However, if you had observed this in only 50 of 1,000 replicate Earths, it WOULD be significant.
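To make that ‘replicate Earths’ picture concrete, here is a minimal simulation sketch in R. It takes the noise level and observed t value from the fit above, generates 1,000 trend-free annual series over the same years, and counts how often chance alone produces a slope t statistic at least as extreme. The set-up (variable names, seed, and so on) is mine and purely illustrative, not the original analysis:

# Parametric sketch of the 'replicate Earths' interpretation. The residual
# spread and observed t value come from the glm output above; everything else
# is an illustrative assumption.
set.seed(1)
years    <- 1995:2009              # 15 annual values, as in the CRU fit
resid.sd <- sqrt(0.008575483)      # square root of the dispersion parameter
obs.t    <- 1.967                  # observed t value for the Year slope

t.stats <- replicate(1000, {
  fake <- rnorm(length(years), mean = 0, sd = resid.sd)  # a trend-free 'Earth'
  fit  <- lm(fake ~ years)
  coef(summary(fit))["years", "t value"]                 # fitted slope t statistic
})
mean(abs(t.stats) >= obs.t)        # roughly 0.07, i.e. about 70 'Earths' in 1,000

Comparing t statistics rather than raw slopes keeps the sketch consistent with the two-sided t-test that R actually reports.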

Crazy stuff, eh? Yeah, many people agree.

Continue reading