2GW Blog - How We Measure and Report Global Temperatures

June 5, 2021

I thought the temperatures would just be what they are, as the scientists say they are, and we can go from there.

But a few folks don’t want to accept the temperatures as real. I reckon these folks don’t like what the temperature records are indicating, and rather than wrestle with all the considerable available information, they find it easiest to just deny it all as a conspiracy of deliberate wrongdoing.

When you start to look at where the temperature information comes from, in any depth, you soon go down a rabbit hole. For anyone interested in the excruciating details, I am attaching 20-page Appendix 1 – World Temperature Measurement.

What Scientists Do with the Measured Temperatures

We cover this first because it gives a better perspective when we later look at how temperatures are measured (and adjusted, where appropriate).

Look on the internet: Explainer: How Do Scientists Measure Global Temperature?, from CarbonBrief.org.

To get a complete picture of the earth’s temperature, scientists combine measurements from the air above the land with measurements of the ocean’s surface collected by ships, buoys and sometimes satellites. The temperature at each land and ocean station is compared daily to what is ‘normal’ for that location and time of day, normal typically being the long-term average over a 30-year period. The differences are called ‘anomalies’ and they help scientists evaluate how temperature is changing over time. A positive anomaly means the temperature is warmer than the long-term average; a negative anomaly means it’s cooler. Daily anomalies are averaged together over a whole month. These, in turn, are used to work out temperature anomalies from season to season and year to year.
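
To make the bookkeeping concrete, here is a minimal sketch in Python of the anomaly calculation described above. The station readings and the baseline “normal” are invented numbers, not real data.

```python
# Minimal sketch: a monthly temperature anomaly for one station against a
# 30-year "normal". All numbers are invented for illustration.
import statistics

# Hypothetical daily mean temperatures (deg C) for one station in June
june_daily_means = [14.2, 15.1, 13.8, 16.0, 15.5]   # ...and so on for the month
normal_june = 14.0    # assumed long-term (e.g. 30-year) June average here

daily_anomalies = [t - normal_june for t in june_daily_means]
monthly_anomaly = statistics.mean(daily_anomalies)  # positive = warmer than normal

print(f"June anomaly: {monthly_anomaly:+.2f} deg C")
```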

See a Climate & Environment at Imperial post by the Grantham Institute, 2015, Taking the Planet’s Temperature: How Are Global Temperatures Calculated?, at granthaminstitute.com.

The World Meteorological Organization recommends defining the temperature of a location for a 24-hour period as the average of the maximum and minimum temperatures recorded during that period. This is practiced in many countries. Although not the best calculation now available, it is the easiest to apply consistently for the calculation of temperature anomalies. For a better method, see Ma and Guttorp, University of Washington, Seattle, WA, USA and Norwegian Computing Centre, Oslo, Norway. Some countries use a linear combination of measurements taken at different times during the day.
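
As a quick sketch of the two approaches just mentioned, the snippet below computes the WMO-style max/min daily mean alongside a weighted combination of fixed-hour readings. The hours, weights and temperatures are invented; no country’s actual scheme is implied.

```python
# WMO-recommended daily mean versus a linear combination of fixed-hour
# readings. All numbers are invented for illustration.
t_max, t_min = 21.4, 9.6                      # deg C, one 24-hour period
wmo_daily_mean = (t_max + t_min) / 2          # simple max/min average

readings = {0: 10.1, 6: 9.8, 12: 20.5, 18: 16.2}   # hour -> deg C (invented)
weights  = {0: 0.25, 6: 0.25, 12: 0.25, 18: 0.25}  # assumed equal weights
weighted_mean = sum(weights[h] * readings[h] for h in readings)

print(wmo_daily_mean, weighted_mean)
```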

That same Grantham Institute website indicates that land and sea surface temperature data are quality-checked and adjusted to remove biases from each different measurement process. On land, these adjustments include changes in the time of day of observations and moves or changes to measurement locations. Observations from modern, well-sited, automated equipment are treated as accurate and historical data are adjusted to use the baseline set by these modern observations. For sea surface temperatures from ships, one of the checks is that consecutive readings recreate a sensible ship’s course, allowing time or location errors to be spotted. Generally, land surface temperature adjustments increase the global land temperature trend slightly (discussed in detail in my big document). Sea surface temperature adjustments decrease the sea temperature trend considerably (discussed in detail in my big document).

Overall, the surface temperature adjustments cause a significant reduction in trends over a century or more, while making little difference to the conclusion that global warming is real. The surface temperature adjustments make the calculated extent of global warming less, not more.

The Temperature Data Sets

There are four main data sets available for global temperatures, discussed in more detail below. The NASA GISTEMP record is the most detailed of the four data sets, with each grid box two degrees longitude by two degrees latitude. The other three data sets have grid boxes which are each five by five degrees.

The four data sets differ in the number of land stations they have around the world:

·        HadCRUT4 has about 5500 stations

·        GISTEMP has about 6300

·        MLOST has about 7000

·        The number of land stations for JMA was not given.

HadCRUT4 stretches back the furthest in history, to 1850. GISTEMP and MLOST both begin in 1880. JMA starts in 1891.

While there are some minor differences between the four sets of data, they are all quite consistent.

The four main groups listed above all keep track of tropospheric temperature and all four show a warming trend over the last 30 years.

Satellites are used as a quality check. As well as measuring the temperature of the earth’s surface, satellites can collect data from the bottom 10 kilometres of the earth’s atmosphere, the lower troposphere. Unlike the surface temperature record, tropospheric temperatures only extend back to the start of the satellite era in 1979. Lower troposphere temperature differs somewhat from the temperature at the surface of the earth; for example, the influence of the El Niño weather phenomenon is much larger in the troposphere. Scientists can use lower troposphere measurements as further evidence of a changing climate.

Early satellite data were incorrect, because the algorithm used to convert the satellite sensor readings into temperatures had errors (more on this below).

Working Up the Data into Temperature Anomalies

After working out the average annual temperature anomalies for each land and ocean station, the next step is to divide the earth’s surface into grid boxes. Scientists work out the average temperature for each grid box by combining the data from all available stations in that grid box. The smaller the grid box, the better the determined temperature of the box will reflect the actual temperature at any given point, leading to a more accurate estimate of the global temperature when you add them all together. The greater the number of temperature measurements within a grid box, the better the determined temperature of the box will reflect the actual average temperature for that grid box.
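
Here is a toy version of that gridding step, binning invented station anomalies into five-by-five-degree boxes and averaging within each box. It is only the idea, not any agency’s production code.

```python
# Toy gridding: average station anomalies into 5x5-degree grid boxes.
from collections import defaultdict

# (latitude, longitude, annual anomaly in deg C) - invented stations
stations = [(51.5, -0.1, 0.42), (52.2, 0.1, 0.38), (48.9, 2.3, 0.55)]

boxes = defaultdict(list)
for lat, lon, anom in stations:
    key = (int(lat // 5) * 5, int(lon // 5) * 5)   # SW corner of the box
    boxes[key].append(anom)

box_means = {key: sum(v) / len(v) for key, v in boxes.items()}
print(box_means)
```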

By combining the results for all the grid boxes, scientists calculate the average temperatures for the northern and southern hemispheres. The contribution of each grid box to the global average temperature is adjusted to account for the fact that a degree of longitude is bigger at the equator than at the poles. Taken together, the two hemispherical values provide an estimate of the global average temperature. It’s not as simple as adding the two hemispheres together, however. To avoid the better sampled northern hemisphere dominating the temperature record, scientists take the average of the two hemispheric values.
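
The sketch below shows the weighting idea: each grid box is weighted by the cosine of its latitude (boxes near the poles cover less area), each hemisphere is averaged separately, and the two hemispheric means are then averaged. The box values are invented.

```python
# Area-weight grid boxes by cos(latitude), then average the hemispheres.
import math

# (box-centre latitude, box anomaly in deg C) - invented values
north = [(2.5, 0.6), (42.5, 0.9), (82.5, 1.8)]
south = [(-2.5, 0.5), (-42.5, 0.4), (-62.5, 0.3)]

def hemisphere_mean(boxes):
    weights = [math.cos(math.radians(lat)) for lat, _ in boxes]
    return sum(w * a for w, (_, a) in zip(weights, boxes)) / sum(weights)

# Averaging the two hemispheres keeps the better-sampled north from dominating
global_anomaly = (hemisphere_mean(north) + hemisphere_mean(south)) / 2
print(f"{global_anomaly:+.2f} deg C")
```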

The Global Temperature Record

The most detailed temperature information exists since 1850, when methodical (mercury) thermometer-based records began.

The web post Global Temperature Record on en.m.wikipedia.org (last edited Nov 3, 2019) indicates that proxy methods can be used to reconstruct the temperature record for the period before instrumental records. Quantities such as tree ring widths, coral growth, glacial length variations, borehole temperatures, and isotope variations in ice cores, ocean and lake sediments, cave deposits and fossils are correlated with climate fluctuations.

But hey, a proxy is like kissing a picture of your sister.

The website indicates that proxy reconstructions extending back 2000 years have been performed, but reconstructions for the last 1000 years are supported by more and higher quality independent data sets. The reconstructions indicate:

·        Global mean surface temperatures over the last 25 years have been higher than any comparable period since AD 1600, probably since AD 900.

·        There was a Little Ice Age centred in AD 1700.

·        There was a Medieval Warm Period centred on AD 1000; the exact timing and magnitude are uncertain and have regional variation.

Some Hiccups With Satellite-Derived Temperatures

Christy and Spencer from the University of Alabama were pioneers in using satellites to measure temperatures of the surface of the earth and throughout the atmosphere. Satellites do not measure temperature, per se. They measure radiance, and an algorithm is used to convert radiance to temperature. Initially, Christy and Spencer had some mistakes in their algorithm, which led to false temperatures. They used the (unknowingly) false temperatures to draw some wrong conclusions; they claimed that the troposphere was not warming, and might even be cooling slightly. Other scientists investigated and found errors in the methods used by Christy and Spencer to adjust the data.

It took 13 years after the original papers before the adjustments that Christy and Spencer applied were found to be incorrect. See Mears et al. (2003) and Mears et al. (2005). When the correct adjustments to the measurements were applied, the data matched much more closely the trends expected by climate models. The corrected data were also more consistent with the historical record of troposphere measurements obtained from weather balloons. Once corrected, the differences between the tropospheric and surface temperatures diminished – and a warming trend was then clear for the troposphere. The corrected data show a cooling trend in the stratosphere, consistent with the concept of global warming by an enhanced greenhouse effect (it’s not more sun).

Despite all this, Spencer continues to be a big climate change denier. He continues to think he is so right, even though he made some big-time mistakes in his pioneering satellite work, which led to fundamentally wrong conclusions.

My Comments

Deniers often disdain the concept of adjusting the data. But the global warming game is so complex that adjustments are indeed valid and needed from time to time.

Some people want the temperatures taken as they are, without any adjustments. As discussed above, when this was done with the four data sets, the extent of global warming looked greater, not less as they were expecting.

Christy and Spencer made some mistakes, likely unintentionally. Nonetheless, this experience is an example of deniers latching onto something that fits their mindset, despite their disdain for data adjustments. And then we find that the very thing the deniers ranted about, people (Spencer) adjusting data, had inherent mistakes that created a false impression of the truth.

Just in this little story, we have two examples of deniers who should have been careful what they wished for.

 

Appendix 1 – World Temperature Measurement

June 2000

What Scientists Do with the Measured Temperatures

We are going to cover this first because it will give you a better perspective when we look at how temperatures are measured (and adjusted).

Look on the internet: Explainer: How Do Scientists Measure Global Temperature?, from CarbonBrief.org. To get a complete picture of the earth’s temperature, scientists combine measurements from the air above land with measurements of the ocean’s surface collected by ships, buoys and sometimes satellites. The temperature at each land and ocean station is compared daily to what is ‘normal’ for that location and time of day, normal typically being the long-term average over a 30-year period. The differences are called ‘anomalies’ and they help scientists evaluate how temperature is changing over time. A positive anomaly means the temperature is warmer than the long-term average; a negative anomaly means it’s cooler. Daily anomalies are averaged together over a whole month. These, in turn, are used to work out temperature anomalies from season to season and year to year.

See a Climate & Environment at Imperial post by the Grantham Institute, 2015, Taking the Planet’s Temperature: How Are Global Temperatures Calculated?, at granthaminstitute.com. The World Meteorological Organization recommends defining the temperature of a location for a 24-hour period as the average of the maximum and minimum temperatures recorded during that period. This is practiced in many countries. Although not the best calculation now available, it is the easiest to apply consistently for the calculation of temperature anomalies. For a better method, see Ma and Guttorp, University of Washington, Seattle, WA, USA and Norwegian Computing Centre, Oslo, Norway. Some countries use a linear combination of measurements taken at different times during the day.

That same Grantham Institute website indicates that land and sea surface temperature data are quality-checked and adjusted to remove biases from each different measurement process. On land, these adjustments include changes in the time of day of observations and moves or changes to measurement locations. Observations from modern, well-sited, automated equipment are treated as accurate and historical data are adjusted to use the baseline set by these modern observations. For sea surface temperatures from ships, one of the checks is that consecutive readings recreate a sensible ship’s course, allowing time or location errors to be spotted. Generally, land surface temperature adjustments increase the global land temperature trend slightly (discussed in detail below). Sea surface temperature adjustments decrease the sea temperature trend considerably (discussed in detail below). Overall, the surface temperature adjustments cause a significant reduction in trends over a century or more, while making little difference to the conclusion that global warming is real. The surface temperature adjustments make the calculated extent of global warming less, not more.

We cannot measure the earth’s absolute temperature with high accuracy, because that would require an extremely dense network of temperature measurements. But we can measure the temperature anomaly with high accuracy, high enough to see that global warming is happening. Particular regions can experience climate extremes (e.g. heat waves) that are completely independent of climate change. These local variations are usually balanced by an opposite extreme somewhere else. By averaging over the globe, we rid ourselves of most of this more-local weather variability and more clearly isolate the smaller climate signal.
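
A toy simulation makes the averaging point vivid: give many regions the same small warming trend buried under large local weather noise, and the noise largely cancels in the wide average. The trend and noise sizes below are invented.

```python
# Local weather noise mostly cancels in a wide average, exposing a much
# smaller climate signal. Numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(40)
signal = 0.02 * years                              # 0.2 deg C per decade trend
noise = rng.normal(0.0, 2.0, (1000, years.size))   # big local variability

regional = signal + noise             # one region: trend buried in noise
global_mean = regional.mean(axis=0)   # noise shrinks roughly as 1/sqrt(1000)

print(np.std(regional[0] - signal))   # ~2.0 deg C of leftover noise
print(np.std(global_mean - signal))   # ~0.06 deg C of leftover noise
```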

Working Up the Data into Temperature Anomalies

After working out the average annual temperature anomalies for each land and ocean station, the next step is to divide the earth’s surface into grid boxes. Scientists work out the average temperature for each grid box by combining the data from all available stations in that grid box. The smaller the grid box, the better the determined temperature of the box will reflect the actual temperature at any given point, leading to a more accurate estimate of the global temperature when you add them all together. The greater the number of temperature measurements within a grid box, the better the determined temperature of the box will reflect the actual average temperature for that grid box.

There are four main data sets available for global temperatures, discussed in more detail below. The NASA GISTEMP record is the most detailed of the four data sets, with each grid box two degrees longitude by two degrees latitude. The other three data sets have grid boxes which are each five by five degrees.

The four data sets differ in the number of land stations they have around the world. HadCRUT4 has about 5500 stations, GISTEMP about 6300, MLOST has 7000 stations. The number of land stations for JMA was not given.

HadCRUT4 stretches back the furthest in history, to 1850. GISTEMP and MLOST both begin in 1880. JMA starts in 1891.

By combining the results for all the grid boxes, scientists calculate the average temperatures for the northern and southern hemispheres. The contribution of each grid box to the global average temperature is adjusted to account for the fact that a degree of longitude is bigger at the equator than at the poles. Taken together, the two hemispherical values provide an estimate of the global average temperature. It’s not as simple as adding the two hemispheres together, however. To avoid the better sampled northern hemisphere dominating the temperature record, scientists take the average of the two hemispheric values.

Satellites are used as a quality check. As well as measuring the temperature of the earth’s surface, satellites can collect data from the bottom 10 kilometres of the earth’s atmosphere, the lower troposphere. Unlike the surface temperature record, tropospheric temperatures only extend back to the start of the satellite era in 1979. Lower troposphere temperature differs somewhat from the temperature at the surface of the earth; for example, the influence of the El Niño weather phenomenon is much larger in the troposphere. Scientists can use lower troposphere measurements as further evidence of a changing climate.

The four main groups listed above all keep track of tropospheric temperature and all four show a warming trend over the last 30 years.

The Global Temperature Record

The most detailed temperature information exists since 1850, when methodical thermometer-based records began.

The web post Global Temperature Record on en.m.wikipedia.org (last edited Nov 3, 2019) indicates that proxy methods can be used to reconstruct the temperature record for the period before instrumental records. Quantities such as tree ring widths, coral growth, glacial length variations, borehole temperatures, and isotope variations in ice cores, ocean and lake sediments, cave deposits and fossils are correlated with climate fluctuations.

But hey, a proxy is like kissing a picture of your sister.

The website indicates that proxy reconstructions extending back 2000 years have been performed, but reconstructions for the last 1000 years are supported by more and higher quality independent data sets. The reconstructions indicate:

·        Global mean surface temperatures over the last 25 years have been higher than any comparable period since AD 1600, probably since AD 900.

·        There was a Little Ice Age centred in AD 1700.

·        There was a Medieval Warm Period centred on AD 1000; the exact timing and magnitude are uncertain and have regional variation.

Early Temperature Measurements in the Arctic

I started by trying to find some good information on the temperature at the north pole, to use in my musings on the temperature driving force for the Polar Jet Stream. I looked at the website Climate4you, Polar Temperatures. It has information published by the NASA Goddard Institute for Space Studies on their GISS website, which also provides a limited historical background. It also has data published by the Climatic Research Unit (CRU), which provides the mean annual surface air temperature (MAAT) anomaly for the global region 70 degrees north to 90 degrees north.

The GISS website indicates that the first effort at measuring and recording temperatures in the Arctic was by the Russians in 1923, through their Development of Russian Arctic Research Stations. The GISS website states that the total number of stations in the Russian Arctic remained small until 1929 and the quality of the equipment was low; the GISS website does not state the number of stations, or comment on the amount of data. The Russians also collected air temperature data on a few of their icebreakers. By 1933, six ships were deployed with weather stations in the Russian Arctic. In 1933, Russia added 15 new Arctic weather stations, but again, the total number of stations at that time was not given in the GISS website. In 1934, Russia added 26 more weather stations and another 10 in 1935. No more history was given for the Arctic.

The GISS website indicates that widespread meteorological observations in Antarctica began in 1957.

How We Measure Ocean Temperatures

This is discussed in the web post Why Do Scientists Measure Sea Surface Temperature, by the National Ocean Service, National Oceanic and Atmospheric Administration, US Department of Commerce, oceanservice.noaa.gov, last updated Nov 15, 2019.

Oceans cover 71% of the earth’s surface. To measure sea surface temperature (SST), scientists deploy temperature sensors on buoys, ships and ocean reference stations. They also use satellites and marine telemetry. The NOAA-led US Integrated Ocean Observing System (IOOS) and NOAA’s Center for Satellite Applications and Research (STAR) merge their data to provide SSTs worldwide.

Marine telemetry involves attaching tags to a wide range of marine species, from salmon smolts to 150-ton whales. The tags allow the marine animals to be tracked: where they are and where they go. Some of these tags measure water temperature. The signals from these tags are picked up by research vessels, buoys, satellites and other tracking networks.

Historically, ocean temperatures were measured off ships by dipping a bucket in the water near the surface, hauling the bucket up, sticking a glass thermometer in the bucket of water, waiting and then noting the observed temperature. This method gave temperatures that differ from the actual temperatures. The difference is caused by warming of the water in the bucket by warmer ambient air, or cooling of the water in the bucket by cooler ambient air. See some rigorous work by Carella et al., 23 May 2017, Measurements and Models of the Temperature Change of Water Samples in Sea-Surface Temperature Buckets, Quarterly Journal of the Royal Meteorological Society, Volume 143, Issue 706. The wet bulb temperature of the ambient air was the biggest factor in the errors, followed by the wait time between sampling and observing the temperature. The difference between the reported temperature and the actual temperature at the time could be as much as 5°C. Furthermore, there was little standardization of the time of day that these temperatures were measured. Beware of these historical data; they could well be inaccurate.
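
To see how those two factors interact, here is a toy model (my assumption, not Carella et al.’s actual model): the bucket water relaxes toward the ambient wet bulb temperature while the observer waits, following Newton’s law of cooling. The starting temperatures and time constant are invented.

```python
# Toy bucket model: water relaxes toward the wet bulb temperature while
# the observer waits. All numbers are invented for illustration.
import math

t_water0 = 18.0    # deg C, true sea surface temperature when sampled
t_wetbulb = 12.0   # deg C, ambient wet bulb temperature
tau = 10.0         # assumed bucket time constant, minutes

def bucket_reading(wait_minutes):
    return t_wetbulb + (t_water0 - t_wetbulb) * math.exp(-wait_minutes / tau)

for wait in (0, 2, 5, 10):
    err = bucket_reading(wait) - t_water0
    print(f"wait {wait:2d} min -> reads {bucket_reading(wait):.1f} deg C "
          f"(error {err:+.1f} deg C)")
```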

Broadly, the evolution over time of the types of buckets used to measure SST on ships was from wooden buckets (partly insulated) to canvas buckets (uninsulated) and then to plastic or rubber buckets (typically well insulated).

The proportion of ships making bucket observations has decreased over time with the introduction of engine room intake and hull manifold measurements.

Starting about 1993, the bucket method was replaced by installing dial thermometers in the water intake piping of ships. The dial thermometers were typically installed downstream of the intake pump; pumps inherently heat the water. Dial thermometers typically have 1°C accuracy and are next-to-never calibrated. Because of the heating introduced by the upstream pump and heat from the warm engine room, these measured temperatures are typically 0.6°C warmer than the actual temperature. Using these data as an historical reference hides some of the apparent global warming. If these historical data are compared with recent information, then today’s apparent global warming is overstated. See Emery and Thomson, 2001, Data Analysis Methods in Physical Oceanography, Gulf Professional Publishing, pages 24 and 25. The water intake method also has the shortcoming that the water depth of the intake port varies from ship to ship; in a stratified ocean, these different depths can have different temperatures.

See a paper by Saur, 14 January 1963, A Study of the Quality of Sea Water Temperatures Reported in Logs of Ships’ Weather Observations, US Bureau of Commercial Fisheries, Biological Laboratory, Stanford, California. He found biases ranging from minus 0.5°F to plus 3.0°F in the measurements from each of the 15 ships examined.

In some ships, thermocouples are installed in the hull of the ship, just below the water line, away from any heat sources. These sensors provide reliable and accurate continuous sea surface temperature data. See the web post, Sea Surface Temperature Sensors for Australian Vessels, imos.org.au.

Ship-based measurements have provided rigorous sampling along major shipping routes but these routes cover only a small fraction of the surface of the world’s oceans. There is a dearth of ship-based information for the vast majority of the world’s oceans.

The sea surface temperature can now be measured by special satellites, tuned to measure the temperature of the water surface. SSTs are measured at depths from approximately 10 microns below the surface (infrared bands) to 1 mm (microwave bands), using radiometers. I wonder how the whitecaps are allowed for in the pyrometry.

Since the 1980s, most of the information about global SST has come from satellite observations. Instruments like the Moderate Resolution Imaging Spectroradiometer (MODIS) on board NASA’s Terra and Aqua satellites orbit the earth approximately 14 times per day, enabling these satellites to gather more SST data in three months than all other combined SST measurements taken before the advent of satellites. See Sea Surface Temperature, NASA Jet Propulsion Laboratory, California Institute of Technology, earthobservatory.nasa.gov.

See Satellite Temperature, on the web at sciencedirect.com. It has a lengthy section of a chapter from the book Taking the Earth’s Temperature by Schneider et al., 2019. They indicate that a number of land surface and sea surface data sets are available. Weather satellites do not measure temperature directly. They measure radiances in various wavelength bands. The measured radiance data have to be converted to temperature using a radiative model that accounts for atmospheric effects on the signal acquired by the satellite’s sensor at the top of the earth’s atmosphere. Besides radiometric calibration, there are three other factors that are critical to proper temperature derivation: emissivity, atmospheric properties at the time and topography. Land cover, snow cover, soil moisture, rocks, surface water, etc. all affect the emissivity of the surface the satellite is looking at.
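
As a sketch of just the first step of that conversion, the snippet below inverts Planck’s law to turn a single-channel infrared radiance into a “brightness temperature”. Real retrievals then apply the emissivity and atmospheric corrections the chapter says are critical; this sketch deliberately omits them, and the radiance value is invented.

```python
# Invert Planck's law: one-channel radiance -> brightness temperature.
# Emissivity and atmospheric corrections are deliberately omitted.
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def brightness_temperature(radiance, wavelength):
    """Radiance in W m^-2 sr^-1 m^-1 at one wavelength (metres) -> kelvin."""
    a = H * C / (wavelength * KB)
    b = 2.0 * H * C**2 / (wavelength**5 * radiance)
    return a / math.log(b + 1.0)

# Invented radiance at 11 microns, a typical infrared SST channel
print(brightness_temperature(8.0e6, 11e-6))   # roughly 288 K, about 15 deg C
```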

Today, in addition to satellite and shipboard measurements, there are thousands of floats in the oceans measuring temperature and salinity. These floats are used to validate satellite measurements, in addition to sampling at depth in the water. The floats include both larger anchored buoys and smaller free-floating buoys. The surface drifters from the Global Drifter Program (GDP) provide about 60,000 night-time SST measurements per month at a shallow depth of 0.2 metres, making the GDP the biggest contributor of in-situ global SST, ocean current and salinity measurements.

A major accomplishment in the distribution of satellite derived SSTs occurred with the Group for High Resolution Sea Surface Temperature (GHRSST) project. The project provides all SST data sets in a common format that allows easy accessibility across different computer platforms and operating systems.

How We Measure Land Temperatures

Thermometers for measuring land temperature are not in contact with the ground, but in the air about 1.5 metres above the ground, in a shaded weather station. Thus, strictly, land temperatures should be referred to as “near-surface temperatures”. Readings are now automated, but previously were taken manually.

Satellites are also used to estimate air temperatures just above the ground.

See Urban et al., May 2013, Comparison of Satellite-Derived Land Surface Temperature and Air Temperature from Meteorological Stations on the Pan-Arctic Scale, Remote Sensing, on the web at remotesensing-05-0. Land surface temperature (LST) information from AVHRR (Advanced Very High Resolution Radiometer), MODIS (Moderate Resolution Imaging Spectroradiometer) and (A)ATSR (Advanced Along Track Scanning Radiometer) was compared to in situ air temperatures (Tair) from the National Climatic Data Center (NCDC). MODIS agreed best with ground level air temperatures, Tair. (A)ATSR and AVHRR tended to indicate warmer temperatures than Tair in the positive temperature range and colder in the negative range. MODIS slightly overestimated temperatures in the positive range. (A)ATSR had outliers around the freezing point. The errors were associated with differentiating between clouds and snow, as well as ice-covered regions. Generally, LST indications from all three systems were in good agreement with the Tair data for the winter months, with deteriorating agreement towards the summer months. (A)ATSR had the highest inter-annual variability.

See Tomlinson, 2012, Comparing Night-time Satellite Land Surface Temperature from MODIS and Ground Measured Air Temperature Across a Conurbation, Remote Sensing Letters, Volume 3, Issue 8. The article describes a pilot project over the summer of 2010 using the Moderate Resolution Imaging Spectroradiometer (MODIS) to compare land surface temperature data with air temperature data measured by a custom network of data loggers across the conurbation of Birmingham, UK. Their results showed that the night-time air temperature measured at meteorological stations was consistently higher than the satellite-indicated land surface temperature, but there was significant station-specific variability. The web did not provide the full paper, so I could not look at the extent of the discrepancies.

See Kenawy et al., 2019, An Assessment of the Accuracy of MODIS Land Surface Temperature over Egypt Using Ground-Based Measurements, Remote Sensing, 2019, 11, 2369, on the web at remotesensing-11-0. During the night, MODIS tended to underestimate the minimum ground level air temperature, by 1.3°C during the winter, 1.2°C during the spring and 1.4°C in the fall. During the daytime, MODIS markedly overestimated the maximum temperature in all seasons, with discrepancies mostly above 5°C.

Satellite-Estimated Temperatures Versus Surface Temperature Sensors

Satellites actually measure the average temperature over the lowest 8 km of the atmosphere.

Climate change deniers like Ted Cruz have said that satellite-based temperature measurements are the best we have, better than surface temperature sensors.

The web blog at tamino.wordpress.com, Surface Temperature and Satellite Temperature, 2 February 2018, addresses this notion and provides information to show that the notion is wrong. They concluded, “Surface temperature data are more reliable than satellite atmospheric temperature data. For satellite data, RSS is far more reliable than UAH and the notion that satellite data are ‘better’ is just nonsense.” RSS TLTv4 is lower troposphere data from Remote Sensing Systems. UAH TLTv6 is lower troposphere data from the University of Alabama at Huntsville.

From what I have looked at, summarized above, land and sea temperatures derived from satellite measurements are not as good as sensors which sit directly in the air above the land and in the ocean. Satellites do not measure temperature directly. They measure radiance. The measured radiance is converted into a temperature via a horrendously complicated computer algorithm, which few humans understand. Why get complicated when it can be so simple with a sensor? Why go indirect, through a vague black box, when you can go direct? All that said, I see a place for satellite-based temperature measurements when it’s all you got.

Use of Satellites to Measure Temperature in the Troposphere

See the web, Wikipedia, en.m.wikipedia.org, last updated Nov 3, 2019, Satellite Temperature Measurements. Since 1978, microwave sounding units (MSUs) on National Oceanic and Atmospheric Administration polar orbiting satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen, which is related to the temperature of broad vertical layers of the atmosphere. Measurements of infrared radiation from the ocean surface have been used to infer sea surface temperature since 1967. This website indicates that over the past four decades, the troposphere has warmed and the stratosphere has cooled. They say that both these trends are consistent with the increasing atmospheric concentrations of greenhouse gases.

Both the instrumental temperature record and satellites show global warming.

Some Hiccups With Satellite-Derived Temperatures

It is sometimes argued in parts of the media (for example, see comments by Bob Carter presented in Skeptical Science) that the opposite is true – that the troposphere is warming more slowly than the earth’s surface, even slightly cooling. This argument stems from research published in 1990 by Christy and Spencer from the University of Alabama. Other scientists investigated and found errors in the methods used by Christy and Spencer to adjust the data. Also, the satellite must pass over the same spot on the earth at the same time every day in order to get a reliable temperature average. In reality, the time that the satellite passes a certain spot drifts slightly as the orbit of the satellite decays. To compensate for that, the data had to be adjusted. The MSU data are collected from a number of satellites which provide daily coverage of about 80% of the earth’s surface. Each day, the orbit shifts, so that 100% coverage is obtained every three to four days. The microwave sensors in the satellites do not measure temperature directly; rather, they measure the radiation given off by the oxygen in the earth’s atmosphere. The intensity of this radiation is directly proportional to the temperature of the air and is therefore used to estimate global temperatures. There are also differences between the sensors that were onboard each satellite, and merging all the data into one continuous record is not easily done.
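
The drift problem is easy to demonstrate with a toy calculation (this is only an illustration of the principle, not the RSS or UAH method). If the overpass time drifts later each year, part of the day’s natural diurnal cycle leaks into the raw record as a spurious trend; the real difficulty, and where the errors crept in, is that the cycle used for the correction has to be estimated.

```python
# Toy illustration of diurnal drift: an overpass time that drifts later
# contaminates the raw record until a modelled cycle is removed.
import math

def diurnal_cycle(local_hour, amplitude=1.0, warmest_hour=15.0):
    """Assumed sinusoidal diurnal cycle, deg C about the daily mean."""
    return amplitude * math.cos(2 * math.pi * (local_hour - warmest_hour) / 24)

true_anomaly = 0.3    # deg C, the climate signal we want to recover
for year, overpass in enumerate([6.0, 8.0, 10.0, 12.0, 14.0]):
    raw = true_anomaly + diurnal_cycle(overpass)   # drift adds a fake trend
    corrected = raw - diurnal_cycle(overpass)      # subtract the modelled cycle
    print(f"year {year}: overpass {overpass:4.1f} h, "
          f"raw {raw:+.2f}, corrected {corrected:+.2f}")
```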

It took 13 years after the original papers before the adjustments that Christy and Spencer applied were found to be incorrect. See Mears et al. (2003) and Mears et al. (2005). When the correct adjustments to the measurements were applied, the data matched much more closely the trends expected by climate models. The corrected data were also more consistent with the historical record of troposphere measurements obtained from weather balloons. Once corrected, the differences between the tropospheric and surface temperatures diminished – and a warming trend was then clear for the troposphere.

Deniers often disdain the concept of adjusting the data. But the global warming game is so complex that adjustments are indeed valid and needed from time to time. We all make mistakes. The MSU people made some mistakes, likely unintentionally. Nonetheless, this experience is an example of deniers latching onto something that fits their mindset, despite their disdain for data adjustments. And then we find the very thing the deniers ranted about, people adjusting data, had inherent mistakes that created a false impression of the truth.

See a more recent evaluation by Mears et al., October 2017, A Satellite-Derived Lower-Tropospheric Atmospheric Temperature Dataset Using an Optimized Adjustment for Diurnal Effects, Journal of Climate, Volume 30, Issue 19, pages 7695 to 7718. They indicated that previous versions of the dataset used general circulation model output to remove the effects of drifting local measurement time on the measured temperature. The cited paper presented a method to optimize these adjustments using information from the satellite measurements themselves. The newer method found a global-mean land diurnal cycle that peaks later in the afternoon, leading to improved agreement with measurements made by co-orbiting satellites. The changes result in global-scale warming (global trend, 70 degrees south to 80 degrees north, 1979 to 2016) of 0.174°C per decade, about 30% larger than their previous version of the data set, which showed 0.134°C per decade. They said the new dataset shows more warming than most similar datasets constructed from satellite or radiosonde data. However, comparisons with total column water vapour over the oceans suggest that the new dataset may not show enough warming over the tropics.

We see that the satellite record needed two rounds of corrections. The first round changed a cooling impression into a warming trend. The second round increased the extent of the warming. I wonder what deniers are saying about Mears now.

The Four Major Temperature Data Sets

Scientists use four major data sets to study global temperature:

·        The United Kingdom Meteorological Office Hadley Centre and the University of East Anglia’s Climatic Research Unit jointly produce HadCRUT4.

·        In the USA, the GISTEMP series comes via the NASA Goddard Institute for Space Studies (GISS).

·        In the USA, the National Oceanic and Atmospheric Administration (NOAA) creates the MLOST record.

·        The Japan Meteorological Agency (JMA) produces its own record.

The internet publication Explainer: How Do Scientists Measure Global Temperature?, from CarbonBrief.org, January 2015, presents a graph which compares the results for the average annual global temperature over the past 130 years. The results from all four data sets are quite consistent. They all show a warming trend, albeit with some year-to-year variability. Generally, they all show 0.5°C of warming, globally, by 2014, compared to the average temperature for the period 1951 to 1980.

For further discussion of the consistency of the various data sets (referred to as “reconstructions”) see Skeptical Science, Are Surface Temperatures Reliable? updated July 2015.

The GISTEMP data set shows the fastest warming. JMA tracks slightly lower than the others, about 0.2°C lower than GISTEMP.

The main reason the four data sets differ somewhat lies in how the different data sets deal with having little or no data in remote parts of the world. Measurement errors, changes in instrumentation over time and other factors make capturing global temperature less than a straight-forward task.

Data coverage likely has the biggest influence. NASA GISTEMP has the most comprehensive coverage, with measurements over 99% of the world. By contrast, JMA covers just 85% of the globe, with particularly poor data in the poles, Africa and Asia.

NASA’s GISTEMP uses statistical methods to fill in gaps, using surrounding measurements. How much each measurement influences the final value depends on how close it is geographically to the missing point. NOAA follows a similar process for the MLOST data set.
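
The “how close it is geographically” idea is essentially distance weighting. Below is a sketch of one simple flavour, inverse-distance weighting; GISTEMP’s actual interpolation scheme is more sophisticated, and the neighbour values here are invented.

```python
# Fill a missing grid box from neighbours, weighted by inverse distance.
# (distance in km, anomaly in deg C) - invented neighbouring boxes
neighbours = [(300.0, 1.2), (800.0, 0.9), (1100.0, 0.7)]

weights = [1.0 / d for d, _ in neighbours]     # closer -> more influence
filled = sum(w * a for w, (_, a) in zip(weights, neighbours)) / sum(weights)
print(f"interpolated anomaly: {filled:+.2f} deg C")
```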

HadCRUT4 is the only data set to leave regions with missing data blank, rather than trying to fill them in. This effectively assumes temperatures in the blank spots are in line with the global average. This would not be an issue if the world were warming at the same rate everywhere. But data suggest that the Arctic, for example, is warming twice as fast as the global average. A missing Arctic data point could lead to a global temperature that’s lower than the real world. For example, updating an old version of the temperature record (HadCRUT3) to include better Arctic data raised the determined northern hemisphere temperature by 0.1°C.

Data gaps still exist, mainly most of Greenland, the Amazon basin, parts of Central Africa, and Antarctica. This is a good place for satellite-measured temperatures.

That same CarbonBrief.org explainer from January 2015 discusses a 2013 paper that describes an attempted fix using satellite data to fill the holes in the surface temperature record. Doing so suggested that the earth’s surface warmed twice as much over the past 15 years as HadCRUT4 suggests. The 2013 paper was by Cowtan and Way, Coverage Bias in the HadCRUT4 Temperature Series and its Impact on Recent Temperature Trends, 12 November 2013, Quarterly Journal of the Royal Meteorological Society, Volume 140, Issue 683.

HadCRUT4 provides gridded temperature anomalies across the world, as well as averages for each hemisphere and for the globe as a whole; data are available as monthly and annual values. HadCRUT4 is a combination of CRUTEM4 (a land temperature data set) and HadSST3 (an ocean temperature data set). See Met Office Hadley Centre Observations: crudata.uea.ac.uk, Climatic Research Unit Data. The data sets were developed by the Climatic Research Unit (CRU) at the University of East Anglia, in conjunction with the Hadley Centre of the United Kingdom Meteorological Office, although the sea surface temperature data set was developed solely by Hadley. Data sets are updated monthly.

Temperature Results

Polar Region Temperatures

See North Pole Climate: Average Temperature, Weather by Month, en.climate-data.org. Some monthly average temperatures for the north pole: minus 24.2°C for January, minus 1°C for April, plus 15°C for June, plus 15.9°C for July, plus 6.9°C for September, minus 4.1°C for October, minus 23.3°C for December. The average annual temperature for the North Pole region is minus 3.4°C.

See Wikipedia, Climate of the Arctic, updated November 11, 2019. The coldest location in the northern hemisphere is not in the Arctic, but rather in Russia’s far eastern interior. This is due to the region’s continental climate, far away from the moderating influence of the ocean. The coldest recorded temperature is minus 67.7°C.

At the South Pole, the mean annual temperature is minus 49.5°C. Some daily average monthly data: minus 28.4°C for January, minus 53.7°C for March, minus 58°C for May, minus 59.8°C for July, minus 59.1°C for September, minus 38.2°C for November, minus 28.0°C for December. See Wikipedia, South Pole, updated November 2019.

Why so much colder at the South Pole? It is at the centre of a large landmass, away from the moderating influence of the ocean, and at an elevation of 2835 metres (9301 feet). Using the standard dry adiabatic lapse rate of 9.8°C per km, we have to add 27.8°C (50.0°F) to get the equivalent temperature at sea level. Taking the mid-winter temperature of minus 59.8°C and adding 27.8°C, we get minus 32.0°C, about 8°C colder than the North Pole in its mid-winter. The annual average temperature of the South Pole, corrected to sea level, is minus 21.7°C, much colder than the minus 3.4°C for the North Pole.
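
For anyone who wants to check the arithmetic, here is the same correction in a few lines of Python (lapse rate and elevation as quoted above):

```python
# Sea-level correction for the South Pole using the dry adiabatic lapse rate.
LAPSE_RATE = 9.8      # deg C per km, as used in the text
ELEVATION_KM = 2.835  # South Pole elevation

correction = LAPSE_RATE * ELEVATION_KM       # about 27.8 deg C
print(round(-59.8 + correction, 1))          # mid-winter at sea level: -32.0
print(round(-49.5 + correction, 1))          # annual mean at sea level: -21.7
```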

While we are here, some interesting information on Antarctica. It is the coldest, driest, windiest place on earth. It has the highest average elevation of all continents. It is the 5th largest continent, bigger than Australia.

Polar Regions Temperature Anomalies

The GISS website provides a graph with temperatures for the region 70 degrees north to 90 degrees north latitude, the Arctic region, for the period 1910 to 2010. In looking at my discussion of the data below, keep in mind that the GISS website indicates that the early data are sparse and of questionable accuracy.

·        The data for both the annual average temperature and the average autumn to early winter temperature show similar trends, with similar yearly variabilities. The annual average temperature is more or less constant at minus 6°C, the autumn and early winter temperature is more or less constant at minus 4°C, but both have a noticeable upswing that begins about 2000. By 2010, both periods sit at about minus 3°C. These are actual temperatures, not anomalies.

·        The data for late winter and spring show an increasing trend from about minus 12°C in 1910 to about minus 9°C in 2010.

·        The data for mid-winter show a downward trend from about minus 12°C in 1910 to about minus 14°C in 2000, followed by an upswing to about minus 10°C by 2010.

These data show temperatures increasing in the Arctic beginning about 2000.

On February 20, 2019, the Climatic Research Unit (CRU) updated temperature data under the tag CRU4. That tag provides the mean annual surface air temperature anomaly for the region 70 to 90 degrees north latitude, the Arctic region. The data span 1910 to 2018. In this case, the anomaly is the temperature at each point, relative to the selected WMO normal period of 1961 to 1990. WMO stands for the World Meteorological Organization, an intergovernmental organization with a membership of 193 States and Territories.

·        Between 1910 and 1920, the temperature anomaly was about 1°C below the WMO baseline. It was a bit colder then.

·        From 1920 to 1960, the temperature anomaly was variable but generally about 0.8°C warmer than the reference period. The temperature anomaly was consistently above zero.

·        1961 to 1990 is the WMO baseline period, where temperature anomalies are consistently close to zero.

·        From 1990 to 2018, all the temperature anomalies are above the WMO baseline period, and there is a definite trend of increasing temperature anomaly, to about plus 1.8°C in 2018.

There is also CRU4 information for Antarctica. Between 1955 and 2010, the temperature anomaly has been essentially constant, with little variability. But from 2010 onwards, there has been a slight increasing trend, to about plus 0.2°C in 2018.

The GISS website also provides data from remote sensing systems. The National Oceanic and Atmospheric Administration provided data from their NOAA TIROS-N satellite for the monthly average temperature anomaly of the lower troposphere since 1979. The GISS website displays these data and includes a 27-month rolling average trend line.

For the Arctic region, 60 degrees to 82.5 degrees north latitude, the temperature anomalies were fairly constant between 1979 and 1993. Since 1993, there has been an increasing trend, up to about plus 1.3°C in 2018.

For the Antarctica region, 60 degrees to 70 degrees south, the temperature anomaly has been more-or-less constant between 1979 and 2018.

Global Average Temperature Anomalies

See Met Office Hadley Centre Observations: crudata.uea.ac.uk, Climatic Research Unit Data.

·        1850 to 1910, global average temperatures were about 0.3°C below the WMO baseline period of 1961 to 1990.

·        1910 to 1940, the temperature anomaly rose from minus 0.3°C to zero.

·        1940 to 1975, the temperature anomaly dropped to minus 0.1°C and then came back to zero.

·        1975 onwards, the temperature anomaly rose steadily to plus 0.7°C in 2019.

Northern Hemisphere Average Temperature Anomalies

See Met Office Hadley Centre Observations: crudata.uea.ac.uk, Climatic Research Unit Data.

·        1850 to 1920, northern hemisphere average temperatures were about 0.2°C below the WMO baseline period of 1961 to 1990.

·        1920 to 1940, the temperature anomaly rose from minus 0.2°C to zero.

·        1940 to 1970, the temperature anomaly was zero, plus / minus a bit.

·        1970 to 1975, the temperature anomaly dropped to minus 0.2°C.

·        1975 to 1980, the temperature anomaly rose to zero.

·        1980 to 1985, the temperature anomaly was zero, plus / minus.

·        1985 to 2019, the temperature anomaly rose steadily to plus 0.9°C in 2019.

Southern Hemisphere Average Temperature Anomalies

See Met Office Hadley Centre Observations: crudata.uea.ac.uk, Climatic Research Unit Data.

·        1850 to 1920, southern hemisphere average temperatures were about 0.3°C below the WMO baseline period of 1961 to 1990.

·        1920 to 1940, the temperature anomaly rose from minus 0.3°C to zero.

·        1940 to 1960, the temperature anomaly was zero, plus / minus a bit.

·        1960 to 1980, the temperature anomaly dropped to minus 0.2°C and came back to zero.

·        1980 to 1985, the temperature anomaly was zero, plus / minus.

·        1985 to 2019, the temperature anomaly rose steadily to plus 0.5°C in 2019.

The Impact of Urban Areas on Global Temperatures

Three percent of the world’s land area is urban. See List of Urban Areas by Population, Wikipedia, 2019.

See the EPA website, Learn About Heat Islands, 2019, epa.gov. Buildings, roads and other infrastructure replace open land and vegetation. Surfaces that were once permeable and moist become impermeable and dry. These changes cause urban regions to become warmer than rural surroundings, forming an island of higher temperatures in the landscape. Heat islands occur on the surface and in the lower atmosphere. On hot sunny days, the sun can heat dry exposed urban surfaces such as roofs and pavement to temperatures 27°C to 50°C hotter than the air. Meanwhile, shaded or moist surfaces in rural surroundings remain close to the air temperature. Surface urban heat islands are typically present day and night, but tend to be strongest during the day, when the sun is shining. The annual mean air temperature of a city of a million people or more can be 1°C to 3°C warmer than its rural surroundings. On a calm clear night, the temperature difference can be as high as 12°C.

See Urban Heat Island, 2019, Wikipedia, en.m.wikipedia.org. The main cause of the urban heat island effect is the modification of land surfaces. Waste heat generated by energy usage is a secondary contributor.

See Skeptical Science, July 2015, Global Warming & Climate Change Myths, Does Urban Heat Island Effect Exaggerate Global Warming Trends? Also, see another similar post from the same month. They first present a ‘Climate Myth’: a paper by Ross McKitrick and Patrick Michaels concludes that half of the global warming trend from 1980 to 2002 was caused by urban heat islands. They then comment on this claim. They say that when compiling temperature records, NASA GISS goes to great lengths to remove influences from urban heat islands. Scientists have compared the temperature data from remote stations (nowhere near human activity) to data from urban sites. The process is described in detail on the NASA website (Hansen et al.). Hansen et al. found that in most cases, urban warming was small and fell within uncertainty ranges. Forty-two percent of city trends investigated were cooler relative to their country surroundings, as weather stations were often sited in cool islands within the city, say a park, rather than in warmer industrial areas. NASA is aware of the urban heat island effect and rigorously adjusts for it when analysing temperature records.

Continuing from the July 2015 Skeptical Science posts: Jones et al. 2008 looked at sites across rural and urban China, which has experienced rapid growth in urbanization over the past 30 years and therefore should show the urban heat island effect. One of the studies had 40 / 42 sites, the other had 728 urban and rural sites. The differences between urban and rural areas were very small, about 0.1°C. They continue by looking at where the majority of global warming has occurred across the globe, using the 2006 global temperature anomaly. The greatest differences in temperatures from the long-term averages occurred across Russia, Alaska, far north Canada and Greenland, where there is very little urbanization, not where major urbanization has occurred. They conclude that the urban heat island effect has had no significant influence on the record of global temperature trends.

More from the July 2015 Skeptical Science posts: Parker 2006 plotted 50-year records of temperatures observed on calm nights versus windy nights. He found temperatures over the land rose as much on calm nights as on windy nights. From that, he concluded that the observed warming was not a consequence of urban development.

Think about how the average global temperature is calculated: grid by grid. With only three percent of the world’s land area covered by urban centres, how can elevated temperatures in these areas make much difference? The representative temperature for each grid is a combination of the available temperatures throughout that grid. Even the grid box that contains New York City likely has a large portion of rural area in it.
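
A back-of-envelope bound makes the same point. Even granting every urban grid a generous warm bias, three percent of the land can only move the average a little (the bias value below is my assumption, not a measured figure):

```python
# Upper bound on how much urban heat islands could move the mean.
urban_fraction = 0.03   # share of land area that is urban (from the text)
uhi_bias = 3.0          # deg C, generous assumed urban warm bias

print(urban_fraction * uhi_bias)   # 0.09 deg C at most over land; the global
# effect is smaller still, since land is under a third of the earth's surface
```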

Challenges to the Adjustment of Raw Temperature Data

See Skeptical Science, Explainer: How Data Adjustments Affect Global Temperatures, posted July 25, 2017, skepticalscience.com. This is a repost from Carbon Brief by Zeke Hausfather.

I am just going to quote most of the article:

“Over the past two centuries, the times of day, locations and methods of measuring temperature have changed dramatically. For example, where once researchers lowered buckets over the side of ships to collect water for measuring, we now have a global network of automated buoys floating around the oceans to measure the water directly.

This complicates matters for scientists putting together a long-term, consistent estimate of how global temperatures are changing. Scientists must adjust the raw data to take into account all the differences in how, when and where measurements were taken.

These adjustments have long been a heated point of debate. Many climate skeptics like to argue the scientists “exaggerate” warming by lowering past temperatures and raising present ones.

Christopher Booker, a climate skeptic writing in The Sunday Telegraph in 2015, called them “the greatest scientific scandal in history”. A new report (see Wallace et al., June 2017, On the Validity of NOAA, NASA and Hadley CRU Global Average Surface Temperature Data & The Validity of EPA’s CO2 Endangerment Finding, Abridged Research Report, ef-gast-data-research) from the right-wing US think-tank, The Cato Institute, even claims that adjustments account for “nearly all the global warming” in the recent historical record.

But analysis by Carbon Brief comparing raw global temperature records to the adjusted data finds that the truth is more mundane: adjustments have relatively little impact on global temperatures, particularly over the last 50 years.

In fact, over the full period when measurements are available, adjustments actually have the net effect of reducing the impact of long-term warming that the world has experienced.

Land and ocean temperatures are adjusted separately to correct for changes in measurement methods over time. All the original temperature readings from both land-based weather stations and ocean-going ships and buoys are publicly available and can be used to create a “raw” global temperature record.”

The post then presents a figure which shows the global surface temperature record created from only raw temperatures with no adjustments applied. It also provides the adjusted land and ocean temperature record produced using adjusted data from the US National Oceanic and Atmospheric Administration (NOAA). The figure also shows the difference between the adjusted data and the raw data. The data span the period 1880 to 2016.

The figure illustrates that adjustments to the data have little effect on global temperatures after about 1950. The rate of warming between 1950 and 2016 in the adjusted data is just under 10% faster than the raw data. And only 4% faster since the start of the modern warming period in 1970.

The adjustments that have a big impact on the surface temperature record all occur before 1950. Here, past temperatures are all adjusted up, significantly reducing the apparent warming over the last century. Over the 1880 to 2016 period, the adjusted data actually warm more than 20% slower than the raw data. The large adjustments before 1950 are almost entirely due to the way ships measured temperatures. Between 1880 and 1950, the raw data temperatures were about 0.2°C colder than the adjusted data. By using the raw data, the apparent extent of global warming is increased, not decreased. In other words, using unadjusted temperature data indicates more global warming than the adjusted data do.
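
The comparison Carbon Brief describes is just two straight-line fits. Here is a sketch with synthetic series standing in for the raw and adjusted records (the trend numbers are invented to mimic the roughly 10% difference, not the actual NOAA data):

```python
# Fit linear trends to a "raw" and an "adjusted" series and compare slopes.
import numpy as np

years = np.arange(1950, 2017)
rng = np.random.default_rng(0)
raw = 0.011 * (years - 1950) + rng.normal(0, 0.08, years.size)  # deg C
adjusted = raw + 0.001 * (years - 1950)   # adjusted warms ~10% faster

raw_trend = np.polyfit(years, raw, 1)[0] * 10        # deg C per decade
adj_trend = np.polyfit(years, adjusted, 1)[0] * 10
print(f"raw {raw_trend:.3f}, adjusted {adj_trend:.3f} deg C per decade")
```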

Challenges to the Reliability of Surface Temperature Records

Example 1 – An Overview

See Skeptical Science, 15 August 2017 Are Surface Temperature Records Reliable?

They start with a ‘Climate Myth’: Temp Record is Unreliable, from a 2009 post by Watts:

We found (US weather) stations located next to exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering hot rooftops, and near sidewalks and buildings that absorb heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than surrounding areas. In fact, we found that 89% of the stations, nearly 9 out of 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 metres (about 100 feet) or more away from an artificial heating or radiating / reflecting heat source.

Before I give the rebuttal published by Skeptical Science, I will say that I have been in sewage treatment plants. The anaerobic tanks used to digest the sewage sludge are normally well insulated to help maintain the process temperature. Once you are a few feet away from these tanks, no radiant heat is noticeable. So, I find the claim that waste digestion causes higher temperatures in surrounding areas quite far-fetched bullshit. Regarding the 30-metre criterion, normally when criteria / guidelines are set, safety factors are included, perhaps a doubling in instances like this. I wonder just how far away the stations were from artificial heating or radiating / reflecting heat sources; not 30 metres, but what?

Skeptical Science replied with the following:

“Temperature data is essential for predicting the weather. So, the U.S. National Weather Service, and every other weather service around the world, wants temperatures to be measured as accurately as possible. To understand climate change, we also need to be sure we can trust historical measurements. A group called the International Surface Temperature Initiative is dedicated to making global land temperature data available in a transparent manner. Surface temperature measurements are collected from about 30,000 stations around the world (Rennie et al. 2014). About 7,000 of these have long, consistent monthly records. As technology gets better, stations are updated with newer equipment. When equipment is updated or stations are moved, the new data is compared to the old record to be sure measurements are consistent over time.”

“In 2009, some people worried that weather stations placed in poor locations could make the temperature record unreliable. Scientists at the National Climatic Data Center took those criticisms seriously and did a careful study of the possible problem. Their article, “On the reliability of the U.S. surface temperature record” (Menne et al. 2010), had a surprising conclusion. The temperatures from stations that critics claimed were “poorly sited” actually showed slightly cooler maximum daily temperatures compared to the average.”

“In 2010, Dr. Richard Muller criticized the “hockey stick” graph and decided to do his own temperature analysis. He organized a group called Berkeley Earth to do an independent study of the temperature record. They specifically wanted to answer the question: “Is the temperature on land improperly affected by the four key biases (station quality, homogenization, urban heat island and station selection)?” Their conclusion was NO. None of those factors bias the temperature record. The Berkeley conclusions about urban heat effect were nicely explained by Andy Skuce in a SkS post in 2011.”

The Skeptical Science post included a figure from Hausfather et al. 2013, covering 1890 to 2010, which shows that the USA land temperature network does not show differences between rural and urban sites.

The Skeptical Science site ended with:

“Temperatures measured on land are only one part of understanding the climate. We track many indicators of climate change to get the big picture. All indicators point to the same conclusion: global temperature is increasing.”

The adjustments NASA makes to temperature data are fully documented and available online. See GISS Surface Temperature Analysis, National Aeronautics and Space Administration, Goddard Institute for Space Studies, at data.giss.nasa.gov. Tamino explains it in a more digestible form in “Best Estimates” (looks pretty complicated and opaque to me).

The Skeptical Science site includes a figure with red dots on a map of the world, showing land temperature stations with at least one month of data in the Global Historical Climatology Network (GHCN-M). The figure is from Rennie et al. 2014. It shows the 7,280 stations used during the period 1991 to 2013 in the global surface temperature data bank. There are extremely high concentrations of stations in the USA, south-eastern Australia and Japan; lesser concentrations in Central Europe, Western Australia and Central Africa; and sparse coverage in Canada, South America and most of the rest of Africa. There are only a few stations in Antarctica and Greenland. This figure is well worth a good look.

Example 2 – Poorly Located Temperature Stations

See Skeptical Science, 22 January 2010. I am going to quote most of it.

“The website surfacestations.org enlisted an army of volunteers who travelled across the U.S. photographing weather stations. The point of this effort was to document cases of microsite influence – weather stations located next to car parks, air conditioners, airport tarmacs and anything else that might impose a warming bias. While photos can be compelling, the only way to quantify microsite influence is through analysis of data. This has been done in On the Reliability of the U.S. Surface Temperature Record (Menne 2010), published in the Journal of Geophysical Research. The trends from poorly sited weather stations are compared to well-sited stations. The results indicate that yes, there is a bias associated with poor exposure sites. However, the bias is not what you expect.”

“Weather stations are split into categories: good (rating 1 or 2) and bad (ratings 3, 4 or 5). Each day, the maximum and minimum temperatures are recorded. All data goes through a process of homogenization, removing non-climatic influences such as relocation of the weather station or a change in the time of observation. In this analysis, both the raw, unadjusted data and the homogenized, adjusted data are compared.”

The post presents a figure that compares the annual-average unadjusted maximum temperature from the good and bad sites. There is another figure comparing the annual-average unadjusted minimum temperature for good and bad sites. The time scale is 1980 to 2010.

“Poor sites showed a cooler maximum temperature compared to good sites. For minimum temperature, the poor sites are slightly warmer. The net effect is a cool bias in poorly sited stations. Considering all the air conditioners, BBQs, car parks and tarmacs, the result is somewhat of a surprise. Why are poor sites showing a cooler trend than good sites?”

“The cool bias occurs primarily during the mid and late 1980s. Over this period, about 60% of USHCN sites converted from Cotton Region Shelters (CRS otherwise known as Stevenson Screens) to electronic Maximum / Minimum Temperature Systems (MMTS). MMTS sensors are attached by cable to an indoor readout device. Consequently, limited by cable length, they are often located closer to heated buildings, paved surfaces and other artificial sources of heat.”

“Investigations into the impact of the MMTS on temperature data have found that on average, MMTS sensors record lower daily maximums than their CRS counterparts and conversely, slightly higher daily minimums (Menne 2009). Only about 30% of the good sites currently have the newer MMTS-type sensors compared to about 75% of the poor exposure locations. Thus, it’s the MMTS sensors that are responsible for the cool bias imposed on poor sites.”

“When the changes from CRS to MMTS are taken into account, as well as other biases such as station relocation and Time of Observation, the trend from good sites shows close agreement with poor sites.”

The close agreement is shown in two figures, one for maximum temperature and the other for minimum temperature over the period 1980 to 2010.
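To illustrate in the simplest possible terms what homogenization does, here is a toy Python sketch that removes a known step change (such as a CRS-to-MMTS equipment swap) by comparing a station to a well-sited neighbouring reference series. This is only a sketch of the idea under idealized assumptions: real algorithms, such as NOAA's pairwise homogenization, must also detect the breakpoints and weigh many neighbours at once.

```python
import numpy as np

def adjust_step(station, reference, break_index):
    """Toy homogenization: remove a known step change (e.g. an equipment
    swap) by aligning the station's offset from a reference series before
    and after the break."""
    diff = station - reference                 # non-climatic signal lives here
    offset = diff[break_index:].mean() - diff[:break_index].mean()
    corrected = station.copy()
    corrected[break_index:] -= offset          # splice the two segments
    return corrected

# Hypothetical example: a 0.3 C artificial drop when a sensor is replaced.
rng = np.random.default_rng(1)
climate = np.linspace(0.0, 0.6, 40)            # shared warming signal
reference = climate + rng.normal(0.0, 0.05, 40)
station = climate + rng.normal(0.0, 0.05, 40)
station[25:] -= 0.3                            # the instrument-change bias
corrected = adjust_step(station, reference, 25)
print(round(float(np.mean(station[25:] - climate[25:])), 2))    # about -0.3
print(round(float(np.mean(corrected[25:] - climate[25:])), 2))  # near 0.0
```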

“Does the latest analysis mean that all the work at surfacestations.org has been a waste of time? On the contrary, the laborious task of rating each individual weather station enabled Menne 2010 to identify a cool bias in poor sites and isolate the cause. The role of surfacestations.org is recognized in the paper’s acknowledgements, in which they “wish to thank Anthony Watts and the many volunteers at surfacestations.org for their considerable efforts in documenting the current site characteristics of USHCN stations.” A net cooling bias was perhaps not the result surfacestations.org volunteers were hoping for, but improving the quality of the surface temperature record is surely a result we all should appreciate.”

So there we go again. After due consideration and site-specific corrections, the rate of global warming based on the unadjusted data is higher, not lower. When the adjusted data are used, the calculated rate of global warming is lower.

Recent Trends in Global Temperatures

Temperatures get measured at various places around the world, some land-based, some ocean water temperatures. These measurements, certainly the historical data, tend to be more numerous in developed countries. Some of the land-based measurement points are in or near big cities. Cities tend to create heat islands, resulting in falsely high temperature readings. Nonetheless, I think an ocean temperature measurement is relatively simple and realistic.

In about 1970, Environment Canada established a temperature measuring station at the Alert airport, the closest point in Canada to the North Pole.

Satellite sea surface temperatures are measured in the top millimetre or so of the water (the ‘skin’), while ships and buoys sample water somewhat deeper.

Most world temperature data are displayed as anomalies, which are departures from historic average conditions.
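As a minimal illustration of the idea, the anomaly calculation looks something like the following Python sketch; the function name and the readings are hypothetical, not any agency's actual code or data.

```python
def monthly_anomaly(temps_by_year, baseline_years):
    """Anomaly = observation minus the long-term 'normal' average.
    temps_by_year: {year: mean temperature for one month at one station}.
    baseline_years: the reference period, conventionally 30 years long."""
    normal = sum(temps_by_year[y] for y in baseline_years) / len(baseline_years)
    return {year: round(t - normal, 2) for year, t in temps_by_year.items()}

# Hypothetical July means for one station:
julys = {1985: 21.1, 1990: 21.4, 2000: 21.9, 2015: 22.3}
print(monthly_anomaly(julys, baseline_years=[1985, 1990]))
# -> {1985: -0.15, 1990: 0.15, 2000: 0.65, 2015: 1.05}
```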

Global Science has a bar chart of the yearly average global temperature anomaly between 1880 and 2015. These data show a more or less steadily increasing trend; there is an upward blip in the early 1940s, but those temperatures are lower than the temperatures observed in recent years. These are long-term data.

Wikipedia has a chart of the temperature anomaly of the land and the sea. The temperature of the oceans was more or less constant between 1950 and 1970; since then it has increased steadily and was +0.7°C in 2018. The land-based data show the same trend, reaching +1.3°C in 2018. The warm anomaly is stronger in the Northern Hemisphere than in the Southern Hemisphere, and the northern polar area has a higher and more extensive temperature anomaly than the southern polar region.

The Intergovernmental Panel on Climate Change reports that at the equator, the upper ocean has warmed by 0.09°C to 0.13°C per decade over the past 40 years. Superimposed on this are the El Niño and La Niña climate cycles, which can influence weather patterns across the globe. These cycles happen at irregular intervals, roughly every three to six years, causing sea surface temperatures in the Pacific Ocean along the equator to be cooler or warmer than normal. El Niño can increase sea surface temperatures by 2 to 3°C.

It is hard to find a number for the annual average temperature of ocean water at the equator, but 30°C (86°F) is typical for ocean surface water in the tropics. See NOAA Office of Ocean Exploration and Research, How Does the Temperature of Ocean Water Vary?, oceanexplorer.noaa.gov.

Since 1980, the Northern Hemisphere has warmed faster than the Southern Hemisphere. There are a few possible contributors:

·        The Northern Hemisphere has more land area and less ocean surface than the Southern Hemisphere. Ocean warms more slowly than land, given its great depth; heat can be mixed downward.

·        Global ocean currents tend to transport heat from southern waters into the North Atlantic and North Pacific.

·        Starting in the 1970s, there has been a significant reduction in aerosol emissions from countries in the Northern Hemisphere, especially Europe and North America. See Smith et al. 2011 (9), Wang et al. 2015 (10), and Crippa et al. 2016 (11). The Southern Hemisphere never had significant aerosol emissions, so decreasing them does not have the same impact as in the Northern Hemisphere. The aerosols are formed from sulphur dioxide, as discussed above in the section on volcanoes. Sulphur dioxide is formed by burning fossil fuels, especially coal and high-sulphur diesel fuel. The reduction in aerosol emissions from countries in the Northern Hemisphere is due to a move away from high-sulphur fuels to reduce acidification of water, especially freshwater lakes.

All considered, there is a growing concern that tropical rainfall patterns are shifting northwards.

The temperature anomalies are higher in the Arctic region than in the Antarctic.

The ten warmest years on record, in descending order, are: 2016, 2015, 2017, 2018, 2014, 2010, 2013, 2005, 2009 and 1998.

I looked at a report on weather in the Midwestern USA (US National Assessment: Midwest Technical Input Report: Historical Climate Sector White Paper, 2012). It provides data on the length of the growing season, bracketed by frosts. Prior to the 1930s, the growing season was 155 to 160 days. Between 1930 and 1980, it was 160 days. In recent years (as of 2012), the growing season has been 167 days. The report also provides data showing that while average temperatures are higher now, the frequency of extreme high temperature periods (4 days and longer) and extreme cold periods decreased between 1930 and recent times (2012); currently there are fewer temperature extremes, not more.

In the USA, the 1930s had very high temperatures, with a big drought in the Midwest. 1934 ranked 6th, behind years including 2012, 2016, 2015 and 1998, for the highest annual temperature in the contiguous 48 states. But the associated land area is only about 2% of the earth’s surface. Globally, the annual average temperature for the 1930s was cooler than the average for the 20th century. The 1930s drought in the USA was a geographically isolated event, not representative of the whole world. People who hold up the USA temperatures of the 1930s are myopically cherry-picking.

The world’s oceans contain about 1,351,000,000 cubic kilometres of water. If all of the incoming solar energy, 160 watts per m² of earth surface, went into the oceans on a completely mixed basis, the temperature of the oceans would increase by about 0.5°C per year. This is an extreme, unrealistic calculation. Nonetheless, it illustrates the temperature buffering power of the oceans; they are finite in their ability to moderate temperature rise.
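Here is a quick Python check of that arithmetic, using round textbook figures (total earth surface area, a heat capacity close to that of fresh water):

```python
# Back-of-envelope check of the ~0.5 C/year figure above.
ocean_volume_m3 = 1.351e18        # ~1.351 billion cubic kilometres of water
ocean_mass_kg = ocean_volume_m3 * 1000.0   # density taken as ~1000 kg/m3
heat_capacity = 4186.0            # J per kg per degree C (fresh-water value)
earth_area_m2 = 5.1e14            # total surface area of the earth
solar_flux = 160.0                # absorbed solar energy, W per m2
seconds_per_year = 3.156e7

energy_per_year = solar_flux * earth_area_m2 * seconds_per_year   # joules
delta_T = energy_per_year / (ocean_mass_kg * heat_capacity)
print(f"{delta_T:.2f} C per year")  # prints 0.46, i.e. roughly 0.5 C/year
```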

End of This Blog

Blackie Manana

 
