On the Russian IEA’s analysis of temperatures

I reported earlier on the Russian claims that the CRU had cherry picked the data from Russian weather stations. Deltoid points out that the analysis concerned actually confirms the recent warming from 1950 onwards:

The red and blue curves agree very well in the period after 1950, thus confirming the CRU temperatures. Well done, IEA!

The red and blue curves do diverge in the 19th century, but the one that provides more support for anthropogenic global warming is the blue hockey stick. The red curve shows warming in the 19th century before there were significant CO2 emissions, so it weakens the case that global warming is man-made. If CRU (not Hadley as claimed in the Russian news story) have “tampered” with the data, it would seem that they must have been trying to make a case against AGW.

The IEA analysis is, in any case, misguided. CRU has not released all the station data they use, so the red curve is not the CRU temperature trend for Russia at all. If you want that, all you have to do is download the gridded data and average all the grid cells in Russia. You have to wonder why the IEA did not do this.
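The averaging step is straightforward, for what it's worth. The sketch below is purely illustrative (the array layout and function name are my assumptions, not CRU's actual data format): given gridded anomalies and a boolean mask for the region, weight each cell by the cosine of its latitude, since grid cells shrink towards the poles.

```python
import numpy as np

# Hypothetical sketch of averaging gridded anomalies over a region. Assume
# `anoms` is a (time, lat, lon) array of gridded anomalies and `in_region`
# is a boolean (lat, lon) mask picking out, say, the Russian grid cells.
def regional_mean(anoms, lats, in_region):
    # Cells cover less area towards the poles, so weight by cos(latitude).
    weights = np.cos(np.radians(lats))[:, np.newaxis] * in_region
    weighted = anoms * weights
    # Weighted spatial average for each time step.
    return weighted.sum(axis=(1, 2)) / weights.sum()

# Toy example: 2 time steps on a 3x4 grid, region covering the top two rows.
lats = np.array([60.0, 55.0, 50.0])
mask = np.zeros((3, 4), dtype=bool)
mask[:2, :] = True
anoms = np.ones((2, 3, 4))
print(regional_mean(anoms, lats, mask))  # uniform field of 1.0 -> [1. 1.]
```

A real version would also have to skip cells with missing data, but the principle is just a weighted mean over the cells in the region.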


Climategate recent stories

I just want to round up a few recent stories regarding “climategate”, as I’ve not had time to do the more in-depth work I’m planning yet.

Firstly, RealClimate look at the integrity of the CRU data set, with an analysis comparing raw data with the adjusted quality controlled data:

The key points: both Set A and Set B indicate warming with trends that are statistically identical between the CRU data and the raw data (>99% confidence); the histograms show that CRU quality control has, as expected, narrowed the variance (both extreme positive and negative values removed).

Thus if their analysis is sound, it would seem that the warming trend was present in the raw data, and the adjustments did not introduce it. Note that in contrast to Willis Eschenbach, Real Climate take a random sample of the whole data set rather than focus on a few stations.
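RealClimate's comparison essentially amounts to fitting a linear trend to each series and checking that the slopes agree. A minimal sketch of that kind of check, run here on synthetic data rather than the actual CRU series:

```python
import numpy as np

# Illustrative sketch (not RealClimate's actual code): fit a least-squares
# linear trend to a raw and an adjusted annual series and compare slopes.
def trend_per_decade(years, temps):
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 10.0  # degrees per decade

rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
signal = 0.015 * (years - 1950)  # an assumed underlying 0.15 C/decade warming
raw = signal + rng.normal(0.0, 0.2, years.size)
# Quality control narrows the variance, as in RealClimate's histograms,
# but should not change the underlying trend.
adjusted = signal + rng.normal(0.0, 0.1, years.size)

print(round(trend_per_decade(years, raw), 2))
print(round(trend_per_decade(years, adjusted), 2))
```

Both fitted slopes come out close to the 0.15 C/decade built into the synthetic signal, which is the shape of the result RealClimate report for the real data.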

Meanwhile, apparently the US Department of Energy has issued a “litigation hold” notice to CRU employees asking them to preserve documents, suggesting that some form of legal action might be being prepared for.

Also, the Russian Institute of Economic Analysis claims that the CRU have cherry-picked the warmest stations in the HadCRUT data set (which is joint work of the Hadley Centre and the CRU). The article is a bit confused, however, in blaming the Hadley Centre rather than the CRU (who provided the land-based data in the HadCRUT data set). I’m not sure whether Real Climate’s analysis of the CRU data linked to above would account for cherry-picking of the weather stations. It depends on whether the data set contained the full raw data, only the adjusted data affected by the cherry-picking, or both.

Finally, Megan McArdle speculates that climate scientists may have been calibrating against each other; this would be more subtle than conspiring to fake the data, and they might not even realise they were doing it. The only way one can address these concerns is to analyse the raw and adjusted data sets, and any code and methods used for adjustment, for signs of bias.

Some “climategate” code: proof of deception?

Samizdata recently featured this quote of the day. They’ve quoted from some computer code (in a language called IDL) which is alleged to prove that the CRU have been cooking their data. The reason people have latched onto this is that the code defines an array of adjustments to apply to a series of temperatures. These adjustments boost recent temperatures by up to 1.95 degrees whilst leaving earlier temperatures untouched or slightly reduced, hence the suspicion emanating from climate change sceptics. (NB: The values in the array range from -0.3 to +2.6 but are then multiplied by 0.75. Multiplying 2.6 by 0.75 gives 1.95.)
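The arithmetic behind the 1.95 figure is easy to check. The array below uses illustrative values sharing only the reported extremes (-0.3 and +2.6); it is not the actual leaked IDL array:

```python
import numpy as np

# Illustrative stand-in for the adjustments array (NOT the actual leaked IDL
# values): an array whose extremes match the reported -0.3 and +2.6, scaled
# by 0.75 as in the code being discussed.
valadj = np.array([0.0, 0.0, -0.1, -0.3, 0.0, 0.3, 0.8, 1.2, 1.7, 2.5, 2.6]) * 0.75

print(valadj.min())  # -0.3 * 0.75 = -0.225 (earlier values slightly reduced)
print(valadj.max())  #  2.6 * 0.75 =  1.95  (recent values boosted)
```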

The story, however, is a bit more complicated than it first seems. For example, Robert Greiner, posting on the Cube Antics blog, notes that more than one copy of the code exists in the archive and only one of the copies has the adjustments commented out. (I should point out Greiner still thinks the code points to possible fraud.) It is also rather odd for someone to deliberately put a comment in the code like “Apply a VERY ARTIFICAL correction for decline!!” if they’re trying to pull the wool over people’s eyes. Furthermore, Tim Lambert, a computer scientist working in Australia, points out that even if you comment the adjustments back in to the version of the code where they’re commented out, it would have plotted two graphs, one with and one without the adjustments, each labelled as such, and thus would have been open about what was going on. He further found a related published paper where the graph was plotted without the adjusted version of the data.

So what is going on?

Read the rest of this entry »


Do the leaked emails show that Phil Jones and others corrupted the peer review process?

One of the allegations made against the CRU’s Phil Jones, and others mentioned in the emails such as Michael Mann, is that they corrupted the peer review process. Below I consider several of the emails cited in this context:

  • Jones said that he and a colleague would keep two papers out of the IPCC report even if it meant redefining the peer review process. The first point to note is that both papers were cited and discussed in Chapter 3 of the IPCC’s 4th Assessment Report, on page 244. Jones and his colleague are the 2 coordinating authors out of a group of 12 for this chapter, yet it seems they didn’t keep the two papers out in the end. Of course, if they attempted to redefine the peer review process on the IPCC report, the fact that they failed does not exonerate them. However, the evidence that they did so is a single, most probably flippant, comment in an email. Even if Jones and his colleague were trying to keep these papers out of the IPCC report, if they were doing so on the grounds that the papers were of such poor quality they should never have been published in the first place, then they were doing what they should be doing, namely exerting quality control over the material that goes into an important scientific report. Without further evidence, I therefore consider this part of the charge unproven and the evidence for it weak.
  • In another case, an email discussion involving Tom Wigley and Michael Mann (cc’d to Jones and others) has Wigley musing about trying to get Jamie Saiers, an editor at Geophysical Research Letters (GRL), ousted. A later email from Michael Mann talks about a leak at GRL being plugged. As Bishop Hill notes, Saiers is no longer an editor at GRL. However, it’s not clear that Saiers left due to the efforts of Wigley or Mann. Saiers stepped down as editor of GRL in 2006 (according to the CV available at his webpage at Yale), over a year after Wigley’s email and some months after Michael Mann’s comment about the GRL leak being plugged. Moreover, Saiers himself says his standing down as editor had nothing to do with attempts by anyone to get him sacked. There is, however, some question as to why the editor-in-chief took over handling the paper concerned from Saiers. At this point, it seems to me that we simply don’t know why he did so. The emails suggest a possible line of enquiry, but that’s all. It’s quite possible it had nothing to do with Wigley, and that Wigley’s suggestion was never acted on by Wigley himself or by those to whom he suggested it. The evidence here is contradictory and circumstantial.
  • In another email exchange, Michael Mann and Phil Jones discuss their concerns about the journal Climate Research and a paper by Soon & Baliunas. They both regard the work accepted by Climate Research as being of low quality that wouldn’t pass muster in other peer-reviewed journals; Mann suggests ignoring Climate Research, whilst Jones indicates he will write to the journal to tell them he’ll have nothing to do with them until they get rid of “this troublesome editor”, namely Chris de Freitas, the editor who handled the Soon & Baliunas paper. Now, it turns out that there was a storm over the Soon & Baliunas paper that led to several of Climate Research’s editors, including editor-in-chief Hans von Storch, resigning. Von Storch’s account of his resignation makes it clear that he thought the publication of Soon & Baliunas was an error, that the paper was severely methodologically flawed, and that the peer-review process had broken down in the case of that paper. Clare Goodess’s account of the resignations is also worth a look. From what I’ve read of this affair, it seems clear to me that von Storch and others resigned because of a dispute amongst the editorial board of Climate Research over a paper that was widely regarded as methodologically flawed (as evidenced by numerous complaints, the fact that some editors came to this view themselves, and a rebuttal of the paper), and a peer review process that seems to have broken down at the journal concerned. Phil Jones et al’s role in this was simply to be part of the cacophony of complaint over the paper and to have contributed to the rebuttal. Criticising and/or boycotting a journal for allowing poor quality work to be published is hardly dishonest or unscientific.

None of the three email exchanges above comes close to proving that the peer-review process was corrupted by Phil Jones, Michael Mann et al, and they seem to me to be the emails most supportive of this allegation that I’ve seen.

However I am aware that many regard the leaked code as exposing fraud and also that there are some issues about “hiding the decline” that my earlier post about “Mike’s Nature trick” did not address. I will turn to these topics next.

Did the CRU destroy data?

It appears earlier reports that the CRU had dumped raw data left out some crucial details. As discussed at Little Green Footballs, the situation looks rather different when those details are included. Greenwire reports:

At issue is raw data from the Climatic Research Unit at the University of East Anglia in Norwich, England, including surface temperature averages from weather stations around the world. The data was used in assessments by the Intergovernmental Panel on Climate Change, reports that EPA has used in turn to formulate its climate policies.

Citing a statement on the research unit’s Web site, CEI blasted the research unit for the “suspicious destruction of its original data.” According to CRU’s Web site, “Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.”

Phil Jones, director of the Climatic Research Unit, said that the vast majority of the station data was not altered at all, and the small amount that was changed was adjusted for consistency.

The research unit has deleted less than 5 percent of its original station data from its database because the stations had several discontinuities or were affected by urbanization trends, Jones said.

“When you’re looking at climate data, you don’t want stations that are showing urban warming trends,” Jones said, “so we’ve taken them out.” Most of the stations for which data was removed are located in areas where there were already dense monitoring networks, he added. “We rarely removed a station in a data-sparse region of the world.”

Refuting CEI’s claims of data-destruction, Jones said, “We haven’t destroyed anything. The data is still there — you can still get these stations from the [NOAA] National Climatic Data Center.”

If this is correct, then it seems some of CRU’s data was deleted, for legitimate reasons, but this data is still available from other sources and therefore nothing has actually been lost.


Why caution is required when interpreting the leaked CRU code.

So far, the “Mike’s Nature trick” email aside, I’ve focused on the alleged resistance to releasing data and the loss of raw data by the CRU, and how both undermine the ability of other scientists to scrutinise and/or reproduce the CRU’s work. However, it is alleged by many that the leaked/stolen code is evidence of a scientific fraud.

Does the code really demonstrate fraud? In considering this question, one must distinguish between legitimate transformations of data, done in order to account for, e.g., the varying degrees of reliability of different sources of data, and manipulations that facilitate a predetermined outcome.

Before I delve into the code and comments (in forthcoming posts), it is worth considering the problem of how one can go about trying to derive a best guess for how the global average temperature has varied over the millennial timescales that climate scientists often discuss in their work. The nature of the task makes it inevitable that complex transformations of the raw data will be required to produce the graph, possibly even including the use of the odd fudge factor. Thus an honest attempt at this task is liable to see a lot of complex processing of the raw data before it’s turned into reliable estimates of the temperatures at different points in the timescale concerned.

To illustrate why, consider the apparently simple question of what the global average temperature was in the year 2008. Where and when exactly were the temperature measurements made? On the ground? 5 ft up? 1,000 km up? Next to a power plant? In the middle of a forest? At midday? In summer? The measurements may be made in all sorts of places in all sorts of ways, and will tend not to be distributed uniformly but to cluster in easily accessible places. Combining the measurements to produce a reliable global average is thus not necessarily straightforward. Furthermore, that’s for a 21st-century date, where we can (and do) make reliable temperature measurements in large numbers of places across the globe. Go back in time, and the available measurements become less reliable and fewer in number until eventually there are no direct measurements left.
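One standard response to this uneven sampling, sketched here purely for illustration (the function and parameters are my own, not any particular group's method), is gridding: average the stations within each latitude/longitude cell first, so that a dense cluster of stations counts as one cell rather than many votes, then combine the cell means with area weights.

```python
import numpy as np

# Sketch of gridding as a fix for uneven station coverage: average stations
# within each grid cell, then combine cells with cos(latitude) area weights.
def gridded_global_mean(lats, lons, temps, cell=5.0):
    cell_sums, cell_counts = {}, {}
    for lat, lon, t in zip(lats, lons, temps):
        key = (int(lat // cell), int(lon // cell))  # which cell is this station in?
        cell_sums[key] = cell_sums.get(key, 0.0) + t
        cell_counts[key] = cell_counts.get(key, 0) + 1
    total, wsum = 0.0, 0.0
    for key, s in cell_sums.items():
        cell_lat = (key[0] + 0.5) * cell      # cell-centre latitude
        w = np.cos(np.radians(cell_lat))      # cells near the poles cover less area
        total += w * (s / cell_counts[key])
        wsum += w
    return total / wsum

# Ten clustered readings of 10 C near one point plus a single reading of 0 C
# elsewhere: the naive mean is ~9.1 C, but gridding gives the cluster one vote.
lats = [51.0] * 10 + [10.0]
lons = [0.5] * 10 + [100.0]
temps = [10.0] * 10 + [0.0]
print(round(gridded_global_mean(lats, lons, temps), 2))
```

The gridded mean lands much closer to an even split between the two locations than the naive average does, which is the point: where the measurements were taken matters as much as what they read.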

This is why climatologists rely on proxy records such as tree-ring data, ice cores, etc. that indirectly indicate what the temperature might have been. These proxies are of course less reliable and very patchy. For example, tree-ring data may correlate with temperature, but it is also affected by pests, the amount of rainfall, how much sunlight the tree gets, soil quality and many other factors that may retard or boost growth independently of temperature. And tree-rings tend only to be found where trees are located, to boot!

However, except for recent centuries, such indirect records are all we have to go on.

Somehow these various, at times contradictory strands of evidence have to be woven together to get a best guess of what the temperatures were. This may mean favouring certain records for certain periods, where we have reason to believe they’re reliable, whilst favouring them less at other times or places, where we have reason to believe they may have been overly influenced by non-temperature variables.

It may also mean that some very complex processing of the data is required that may be difficult to comprehend. It certainly means that one must be careful in how one selects, weights and combines data from the different sources, and it will almost inevitably lead to fudge factors being used.
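A toy example of the kind of time-varying weighting described above (entirely hypothetical; the numbers and the down-weighting choice are invented for illustration, and real reconstructions are far more involved):

```python
import numpy as np

# Purely illustrative: combine two proxy series with weights that vary over
# time, down-weighting a proxy in periods where it is thought less reliable.
def combine(proxies, weights):
    # proxies, weights: (n_proxies, n_times); weights renormalised per time step.
    w = weights / weights.sum(axis=0)
    return (proxies * w).sum(axis=0)

tree_rings = np.array([0.1, 0.2, 0.3, 0.9])  # hypothetical anomaly estimates
ice_cores  = np.array([0.1, 0.2, 0.4, 0.5])
proxies = np.stack([tree_rings, ice_cores])

# Trust tree rings fully early on, but halve their weight in the last period,
# where non-temperature factors are assumed to dominate their growth signal.
weights = np.array([[1.0, 1.0, 1.0, 0.5],
                    [1.0, 1.0, 1.0, 1.0]])
print(combine(proxies, weights))
```

Whether a particular down-weighting like this is honest science or fiddling depends entirely on the justification for it, which is exactly the point made below about needing the missing context.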

Thus when considering the manipulations the programs perform on the data, one must consider why they’ve been put there, as well as their effects. There might be a valid reason for them being there. E.g. if you find the contribution of some tree-ring data being reduced for a particular period in time, it may be that the author knew that, for that period, those particular records were less reliable, due to non-temperature variables playing a stronger role then than at other times. Or it may be that the author was trying to obtain a particular temperature for that period. The latter would be unscientific; the former need not be at all.

The problem is that distinguishing between the two may require access to documentation that’s not present in the leaked code or even to the author him/herself. Because the code and emails have been leaked/stolen, we may be missing important context that would make it clear precisely why a particular manipulation has been used. Thus it seems to me that only the most blatant attempts to achieve a particular result can be taken at face value.


Climategate: Prof Jones temporarily steps down until completion of investigation

The Telegraph reports:

Prof Jones, director of the University of East Anglia’s Climatic Research Unit (CRU), has said he “absolutely” stands by the science produced by the centre – and that suggestions of a conspiracy to alter evidence to support a theory of man-made global warming were “complete rubbish”.

He said he would stand aside as director until the completion of the independent review, which is being conducted in the wake of the allegations by climate “sceptics”.