Climate Change

CCQA1 Presentations Available

The PowerPoint presentations from the first of the Climate Change Q and A seminars are now available as PDF files. The presentations by Barry Brook and Andrew Watson provide the scientific answers to sceptical questions about whether the earth is really warming. See the download links for details.

Efforts have been made to provide the sources for all the information in the presentations. If you believe copyrighted work is available on this site in such a way that constitutes copyright infringement, or a breach of an agreed licence or contract, please let us know.

By Barry Brook

Barry Brook is an ARC Laureate Fellow and Chair of Environmental Sustainability at the University of Tasmania. He researches global change, ecology and energy.

17 replies on “CCQA1 Presentations Available”

Excellent resource! I have yet to come across as comprehensive and easy-to-access a presentation as this one by Prof. Brook. Thanks for posting it!


Thanks Counters. Of course it’s mostly images, so will work best if looked through whilst listening to the audio podcast (which will be up later today).

Note in the above post that Andrew Watson’s (Bureau of Meteorology) presentation is now also available for download.


Dear Professor Brook

You say that my posting has already been refuted by the articles to which you have been linked. I’m slightly bemused by how you think this can be. I have written a piece which is largely to do with dubious procedures in scientific journals, and also includes a section about dubious benchmarking for the RE statistic in the verification period. Yet none of the articles you link to even mentions the RE statistic. They seem to be largely about principal components analysis which is not mentioned in my posting at all.

I was wondering if you would give us your opinion of the benchmarking procedures used by Wahl and Ammann. Do you think they were valid?


You cannot duck these questions, because you exhibited the discredited hockey stick diagram several times in your slides referred to above. The errors have been exposed in papers by McIntyre and McKitrick, von Storch et al., Bürger and Cubasch, and others. A team of expert statisticians led by Edward Wegman found that “In general, we found MBH98 and MBH99 to be somewhat obscure and incomplete and the criticisms of MM03/05a/05b to be valid and compelling”, and confirmed that the ‘hockey stick’ could be reproduced by feeding random noise into the MBH method. The ‘refutations’ at the alarmist web sites you refer to are pathetic in comparison with the analysis of McIntyre, and repeat the blatant falsehood that all the reconstructions show the MWP cooler than today.


Bishop Hill, you are tying yourself in knots trying to understand the peer review process – a process I’ve experienced over 100 times, and seen all the possible convolutions, including multiple revisions of an ms prior to submission, requests and re-requests for changes/clarifications/qualifications, rejections, revisions to meet reviewer requests, resubmissions, debate with referees and editors, more revisions and very long cover letters (rejoinders), and final acceptance that the article is sufficiently robust on most points to warrant exposure to the broader scientific community for further commentary and critique. I … err … I guess you haven’t.

For the guff by PaulM on the Wegman finding, I suggest you read the full history, well summarised by cce here:



I assume from the fact that you are no longer discussing the papers you linked to earlier, that you are tacitly admitting that they don’t in fact refute anything I wrote.

You then take issue with my understanding of the peer review process. Are you really suggesting that it is normal for a scientist to refuse access to key data behind one of their papers? Do you support Wahl and Ammann’s actions in doing this? Do you do it yourself?

And what about the relegation of the key statistical argument to the Supplementary Information – unavailable to peer reviewers. Do you think this was acceptable? Normal? Have you ever done this yourself?

And now that the SI is available, I’ll ask again: Do you think it is acceptable to publish an RE benchmark of zero in the main paper, while actually deriving one of 0.5 in the SI? Should I assume from your silence that you find this indefensible? And while we’re on the subject, do you think that a set of results with an R^2 of zero (or so close to zero as to make no difference) is significant?


Bishop Hill:
1. No – I just dislike repeating myself ad infinitum.

2. a. No. b. It is for the journal to decide on that policy, but it is not my experience. c. No.

3. a. Standard practice – try writing a Nature, Science, PNAS, Geophys Res Lett, Ecol Lett paper some time and see how quickly you hit the main article word limit. b. These are always made available to reviewers. c. Again, this is standard practice in every journal I have ever submitted to.

4. a. I don’t believe you understand the RE. 0.5 is the null expectation. b. I’m puzzled why you would assume this. c. That is a statistical argument – it can be ‘significant’ depending on sample size but the effect size is an indication of structural value of the model or predictive power. The ‘value’ depends on what it is compared to, i.e. relative worth compared to other forms of prediction.


Thank-you for replying with such thoroughness.

1. You pointed to some articles that you said refuted my posting. I said that they were actually on a different subject (PCA) to the one I had written about (validation statistics). You would not have needed to repeat yourself to respond to my point.

2. We agree that withholding key data is not normal and you don’t do it yourself. But what I also asked was your opinion on withholding data. Do you find it reprehensible?

3. I think I’m right in saying that relegating a key argument to the SI is most unusual. It seems most unlikely that the SI was available to the peer reviewers, because Climatic Change was unable to locate it when asked.

4. (a) Let me set out my understanding and you can put me right. M&M agree with Huybers that with a univariate model you get an RE benchmark of zero. But the proxies are not univariate, they are multivariate. With a multivariate model you get an RE benchmark of 0.5. W&A sought to refute this. In WA 2007, they restate Huybers’ already refuted univariate position, claiming that this shows M&M are wrong, while showing in the SI that the multivariate case does indeed require a benchmark of 0.5. Am I right? And if so, doesn’t this make a nonsense of W&A’s purported refutation of M&M?

(c) OK, but if the sample size is not small then R^2~0 means no correlation between temperature and tree rings, which is to say the hockey stick has no meaning, no predictive power. I don’t understand your point about relative value. You can’t get a lower R^2 than zero. Nothing else can be worse, can it?
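[Editorial note: the statistical distinction at issue in this exchange — that a reconstruction can score a positive RE while its R² is near zero — can be sketched numerically. The following Python snippet uses entirely synthetic, invented data and the standard textbook definitions: RE = 1 − SSE/SSE_ref, where the reference prediction is the calibration-period mean, and R² is the squared Pearson correlation over the verification period. It is an illustration of the definitions only, not a reproduction of any of the papers discussed.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "verification period" temperature anomalies: a mean shift
# of +0.5 (relative to a calibration-period mean of 0) plus noise.
n = 50
truth = 0.5 + 0.3 * rng.standard_normal(n)

# A reconstruction that captures the mean shift but none of the
# year-to-year variability (its noise is independent of the truth's).
recon = 0.5 + 0.3 * rng.standard_normal(n)

# RE (reduction of error): skill relative to always predicting the
# calibration-period mean (0 here).
sse = np.sum((truth - recon) ** 2)
sse_ref = np.sum((truth - 0.0) ** 2)
re = 1.0 - sse / sse_ref

# R^2: squared Pearson correlation between truth and reconstruction.
r2 = np.corrcoef(truth, recon)[0, 1] ** 2

print(f"RE  = {re:.2f}")   # positive: beats the climatological mean
print(f"R^2 = {r2:.2f}")   # near zero: no interannual correlation
```

The point of the sketch is that RE rewards getting the low-frequency level right, while R² measures only covariance of the wiggles, so the two statistics can legitimately disagree about the same reconstruction.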


Barry, I am puzzled. I refer to a report by expert statisticians and several scientific papers and all you can do is refer to another alarmist blogger. Which of us is the professor of climate science and who’s the amateur?


PaulM, short answer is I use well prepared graphics from “amateur” blogs when they are based on scientific publications or verified data sources and clearly presented. Graphics produced for scientific papers tend to be more technical and “busy” and are less suitable for a PPT for a general audience without removing clutter (e.g. multiple plots, detailed legends that can be explained verbally by me and are hence redundant in talks).

The presentations in CCQA are a mix – about half are from scientific papers, half from other sites that are based on official data streams.

