FAQ

Why are the temperatures reported by Clisense so low?

In climate science, normally the focus is not on the planet’s absolute temperature, but on the change thereof. There are technical reasons why temperature change is known much more accurately than absolute temperature.

But if you’re going to focus on how much temperature has changed, you need a baseline. In the case of HadCRUT, this baseline is the 1961-1990 average; for NOAA it’s the 1901-2000 average. Every other temperature is expressed as a deviation from those averages. More technically, these deviations are known as anomalies, and that’s what Clisense reports. You can check that Clisense is indeed getting the same anomalies as its sources because, if you select for HadCRUT a period that starts in 1961 and ends in 1990, the anomaly is zero; the same happens with NOAA if the years chosen are 1901 and 2000.
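In case a sketch helps, here is roughly what that check amounts to, written in Python. This is not the app’s actual code: the annual_means input is just a hypothetical table of year-to-temperature values, whereas the real records are built from gridded data.

```python
def anomaly(annual_means, year, baseline=(1961, 1990)):
    """Deviation of `year` from the baseline-period average."""
    start, end = baseline
    baseline_values = [t for y, t in annual_means.items() if start <= y <= end]
    baseline_mean = sum(baseline_values) / len(baseline_values)
    return annual_means[year] - baseline_mean

# By construction, the average of the anomalies over the baseline years
# themselves is zero, which is the sanity check described above.
```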

(The Cowtan & Way record also uses 1961-1990 as a baseline, but for reasons I can’t explain very well its anomaly for that period is 0.0055ºC rather than exactly zero; Clisense rounds it up to 0.01ºC. I recommend checking out the author’s site for further info.)

The app loses some technical accuracy by calling these numbers temperatures rather than what they are – temperature anomalies. But I believe making the app understandable for a broader range of users is more important than using a specific term.

Why does the app ask users to select not two years, but two periods?

Atmospheric temperature changes considerably from one year to the next, for reasons that have nothing to do with man-made climate change. The El Niño phenomenon, for instance, can raise global atmospheric temperatures by up to 0.2ºC, but this effect usually disappears in a year or two. If a La Niña develops, temperatures may even dip below the point they’d reached before El Niño. So there is a potential for distortions if one chooses anomalous years; for instance, if one tries to calculate climate sensitivity starting with an El Niño and finishing with a La Niña, the sensitivity thus calculated will be lower than in reality (because temperatures between these two periods will have increased less than between ‘normal’ years).

0.2ºC may not sound like much, but it’s about 20% of the total atmospheric warming experienced since the late 19th century. It makes no sense to say that global warming has been 0.9ºC one year, 1.1ºC one or two years later, and then 0.9ºC again the following year. A longer period smooths out this natural variability, as it will not be so heavily influenced by a brief event like El Niño. (Of course, one cannot be sure that natural variability has been completely removed from the calculation just because one uses longer periods.)

If the user wants to look at a single year anyway, they can: just select the same year as both the start and the end of a period.
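To make the smoothing idea concrete, here is a minimal sketch in Python. Again, this is not the app’s actual code, and the anomalies input is a hypothetical table of year-to-anomaly values.

```python
def period_mean(anomalies, start, end):
    """Mean anomaly over the inclusive period [start, end].

    Choosing start == end reproduces the single-year case described above.
    """
    values = [a for y, a in anomalies.items() if start <= y <= end]
    return sum(values) / len(values)

# Warming between two periods is then the difference of the two means, e.g.
# period_mean(anomalies, 2000, 2010) - period_mean(anomalies, 1880, 1890)
```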

‘Percentage of global energy imbalance made up by ocean heat uptake’: what is that figure, and why do I have to guess it?

The planet has four ways of ‘storing’ the energy gained due to global warming. One, the atmosphere, is almost insignificant: only about 1% of said additional energy is stored in the form of higher air temperatures.

What about the other 99%? The bulk, around 90%, is in the ocean. And the rest has gone into rising land temperatures and melting ice.

Because the amount of heat taken up by land and ice is rather speculative, especially in the early part of the temperature record, the app allows users to prescribe the share taken up by the ocean; heat taken up by the planet but not by the ocean has mostly gone into land and ice. So if for example you set the ocean’s share at 93%, and ocean heat uptake corresponds to an energy imbalance of 0.5 watts per square meter, the total energy imbalance will be 0.5 / 0.93 = 0.54 watts per square meter.
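The arithmetic is simple enough to write down explicitly. This sketch (Python, not the app’s actual code) just restates the example above:

```python
def total_imbalance(ocean_uptake_wm2, ocean_share):
    """Total planetary energy imbalance (W/m²) implied by the ocean's share."""
    return ocean_uptake_wm2 / ocean_share

print(total_imbalance(0.5, 0.93))  # ~0.54 W/m², as in the example above
```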

Where are the confidence intervals?

I may add them in a future version of the app. It’s not trivial, and when I do add them I want to do it right, so first I have to study how to derive an uncertainty interval from the various inputs (each of which has its own uncertainty).

Ocean heat content, in the scientific literature, is usually reported in multiples of joules. How do you go from that measure to watts per square meter?

The Earth’s surface is 510 million square kilometers, which is to say 510 trillion square meters. One watt is one joule per second, so one watt per square meter means one joule per second for every square meter of surface. Since heat content is reported on a yearly basis, the year-on-year change in that content gives the rate at which the Earth was gaining (or losing) energy over that period.

There are 31,540,000 seconds in a year. Multiply this by 510 trillion and you get 1.61 * 10^22. That is the approximate number of joules delivered by one watt per square meter, over the whole Earth surface, over a year. The typical measure of ocean heat content in the scientific literature is the zettajoule, which is to say 10^21 joules. Thus, one watt per square meter is equivalent to roughly 16.1 zettajoules per year.
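Put as code, the conversion looks like this (Python, using the same rounded constants as above):

```python
SECONDS_PER_YEAR = 31_540_000     # rounded, as above
EARTH_SURFACE_M2 = 510e12         # 510 trillion square meters
ZETTAJOULE = 1e21                 # joules

# Joules delivered in a year by 1 W/m² over the whole Earth: ~1.61e22 J (~16.1 ZJ)
JOULES_PER_WM2_YEAR = SECONDS_PER_YEAR * EARTH_SURFACE_M2

def zj_per_year_to_wm2(delta_zj_per_year):
    """Convert a year-on-year heat content change (ZJ per year) into W/m²."""
    return delta_zj_per_year * ZETTAJOULE / JOULES_PER_WM2_YEAR

print(zj_per_year_to_wm2(16.1))  # ~1.0 W/m²
```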

Sometimes scientific publications speak in terms of terawatts, rather than watts per square meter. Since 1 terawatt = 10^12 watts, and the Earth’s surface is 510 * 10^12 square meters, one W/m2 is equivalent to 510 terawatts.
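The terawatt equivalence follows from the same surface area:

```python
EARTH_SURFACE_M2 = 510e12   # square meters
TERAWATT = 1e12             # watts

def wm2_to_terawatts(flux_wm2):
    """Convert a global-mean flux in W/m² into total power in terawatts."""
    return flux_wm2 * EARTH_SURFACE_M2 / TERAWATT

print(wm2_to_terawatts(1.0))  # 510 TW
```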
