Samizdata recently featured this quote of the day. They’ve quoted some computer code (in a language called IDL) which is alleged to prove that the CRU have been cooking their data. People have latched onto this code because it defines an array of adjustments to apply to a series of temperatures. These adjustments boost recent temperatures by up to 1.95 degrees whilst leaving earlier temperatures untouched or slightly reduced, hence the suspicion emanating from climate change sceptics. (NB: the values in the array range from -0.3 to +2.6 but are then multiplied by 0.75, and 2.6 multiplied by 0.75 gives 1.95.)
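The arithmetic is easy to check for yourself. Here is a minimal sketch in Python; the array values are the ones widely quoted from the leaked IDL file, reproduced here from press reports, so treat the exact numbers as an assumption rather than gospel:

```python
# Adjustment values as widely quoted from the leaked IDL code
# (reproduced from press reports; the exact values are an assumption).
valadj = [0.0, 0.0, 0.0, 0.0, 0.0, -0.1, -0.25, -0.3, 0.0, -0.1,
          0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]

# In the IDL source the whole array is multiplied by 0.75 before use.
scaled = [v * 0.75 for v in valadj]

# Earlier values are untouched or slightly reduced; later ones are
# boosted, topping out at 2.6 * 0.75 = 1.95 degrees.
print(round(min(scaled), 3), round(max(scaled), 3))  # -0.225 1.95
```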
The story, however, is a bit more complicated than it first seems. For example, Robert Greiner, posting on the Cube Antics blog, notes that more than one copy of the code exists in the archive, and only in one of the copies are the adjustments commented out. (I should point out that Greiner still thinks the code points to possible fraud.) It is also rather odd for someone to deliberately put a comment in the code like “Apply a VERY ARTIFICAL correction for decline!!” if they’re trying to pull the wool over people’s eyes. Furthermore, Tim Lambert, a computer scientist working in Australia, points out that even if you uncomment the adjustments in the version of the code where they are commented out, it would plot two graphs, one with and one without the adjustments, each labelled as such, and would thus be open about what is going on. He also found a related published paper in which the graph was plotted without the adjusted version of the data.
So what is going on?
To answer this question, let’s start with the code itself. The most damning version is arguably the second one Robert Greiner found, in which the correction is not commented out. It is the most damning because, unlike the version Tim Lambert deals with, it plots a graph with the adjustments in place and without showing the unadjusted data.
The problem, for someone who wishes to prove deception, is that we do not know how this code was used or why the programmer put this artificial “correction” of the data in place. Robert Greiner admits that he is not aware of the code being used to generate any graphs in the published literature.
Meanwhile, as we saw above, Lambert pointed out that one candidate paper that might have used this code did not use the adjusted data. Furthermore, a commenter on Lambert’s article speculates that an unpublished manuscript (also available in the leaked data) may be what the code was used for. That manuscript openly describes an ad hoc artificial correction being temporarily applied to some data and then removed, in order to test the sensitivity of a calibration approach. It thus involves no deception.
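To make the idea of such a sensitivity test concrete, here is a hedged sketch of what it might look like in Python. The data, the correction, and the calibration method (a simple least-squares slope) are all invented for illustration; this is not the CRU code or data:

```python
# Illustrative sketch only: invented data, not the CRU code or data.
# Idea: apply an ad hoc correction to one series, redo a simple
# calibration (here: an ordinary least-squares slope), then compare
# the result with and without the correction to gauge sensitivity.

def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

years = list(range(1940, 1995, 5))              # invented years
proxy = [0.1 * (y - 1940) / 5 for y in years]   # invented proxy series
temps = [0.12 * (y - 1940) / 5 for y in years]  # invented temperatures

# An invented ad hoc correction that boosts only the later values.
correction = [0.0] * 6 + [0.3, 0.6, 0.9, 1.2, 1.5]
adjusted = [p + c for p, c in zip(proxy, correction)]

slope_raw = ols_slope(proxy, temps)     # calibration without correction
slope_adj = ols_slope(adjusted, temps)  # calibration with correction
print(slope_raw, slope_adj)  # how much the calibration slope moves
```

The point of such an exercise is the comparison itself: if the two slopes differ substantially, the calibration is sensitive to the adjustment. Done openly and then undone, as the manuscript describes, this is routine data exploration rather than deception.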
In conclusion, the existence of this code does not prove any deception, and we have no evidence it was used for such a purpose. We have some evidence it may have been used in an as-yet-unpublished paper, but if so it was used in an entirely valid and open manner: the artificial correction would just be someone playing with their data to see how sensitive it is to adjustments. This is no smoking gun. At most it is a gun that hasn’t been fired.