
The seasons

October 7, 2011

…or, how not to analyze temperature and cloud cover.

A recent post on Watts Up With That by Erl Happ, titled “High clouds and surface temperature”, attempts to use the NCEP reanalysis data set to examine the relationship between high cloud cover and surface temperature.

First, as Erl is (presumably) not a scientist or researcher, I cannot fault him for sourcing his cloud data from a reanalysis product. Reanalysis products are quite appealing because they offer all sorts of variables – temperature, wind, moisture, and “value added” items like cloud fraction – in a gridded data set. The problem is that reanalysis data are not “real” data.

In general, most reanalysis products are assimilations of data from many sources – weather balloons, satellites for the period after 1979, surface stations, etc. – into the first time step of a sophisticated, high-resolution numerical weather model.

Don’t get me wrong. Reanalysis products are extremely useful if they are used correctly. I have used them in a paper I co-authored and they are a major part of my current research. For “basic” variables, like temperature and the horizontal wind field, they are fairly good representations of the “real world”. The problem comes when you try to use the value-added fields like cloud cover. I will not go into great detail, but suffice it to say, you are better off using actual cloud observations from, for example, MODIS.

Okay, moving on to the real heart of the matter, Erl’s basic misunderstanding of the earth’s seasonality cripples his ability to perform any valuable analysis. I will first quote the relevant text:

“The minimum [in surface temperature] is experienced when the Earth is closest to the sun. The Earth is coolest at this time because the atmosphere is cloudier in January. January is characterized by a relative abundance of high ice cloud in the southern hemisphere. Relative humidity peaks in April (figure 6) when tropical waters are warmest. I suggest the variation in the minimum global temperature is due to change in high altitude cloud…”

A schematic of earth's orbit. Perihelion, the time when earth is closest to the sun, occurs in January - NH winter and SH summer.

Earth is closest to the sun in January and farthest in July – absolutely true. Now, the axial tilt of the earth determines the season each hemisphere is in at a particular time of year by directly altering the distribution of solar radiation. In northern hemisphere winter, say January, the northern hemisphere is tilted away from the sun while the southern hemisphere is tilted toward the sun, relative to earth’s orbital plane. This delivers more total radiation (and more radiation per unit area) to the southern hemisphere, and less to the northern hemisphere. However, in January the earth is also closer to the sun, and hence receives more total solar radiation than it does in, say, northern hemisphere summer (July).

What we would expect then, all things being equal, is that the globally-averaged surface temperature would be highest in January when the earth is closest to the sun and receiving the most solar radiation.
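Just to put a number on “the most solar radiation”, here is a quick back-of-the-envelope sketch using the inverse-square law. The orbital distances are round textbook values, so treat the output as approximate:

```python
# Back-of-the-envelope: how much more sunlight does the earth receive
# at perihelion (January) than at aphelion (July)? Flux falls off as 1/r^2.
S0 = 1361.0            # solar constant at 1 AU, W/m^2 (approximate)
r_perihelion = 0.9833  # earth-sun distance in January, AU (approximate)
r_aphelion = 1.0167    # earth-sun distance in July, AU (approximate)

flux_jan = S0 / r_perihelion**2
flux_jul = S0 / r_aphelion**2

print(f"TOA flux in January: {flux_jan:.0f} W/m^2")   # ~1408
print(f"TOA flux in July:    {flux_jul:.0f} W/m^2")   # ~1317
print(f"January exceeds July by {100 * (flux_jan / flux_jul - 1):.1f}%")  # ~6.9%
```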

Erl just asserts that the temperature actually peaks in July, without providing an illustrative figure. I hate assertions, so let’s look at some data. I’ve used the 2-meter temperature field from NASA’s Modern-Era Retrospective Analysis for Research and Applications (MERRA) to generate the following plot of the global average surface temperature. As with all reanalysis data from US government agencies, it is free for anyone to download – atmospheric science enjoys a degree of data sharing that many fields do not, and hey, your tax dollars paid for this, so you should be able to use it! The time period is 1979-2010, and I’ve used area-weighting by the cosine of latitude.
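If you’d like to reproduce the plot, the weighting itself is simple. Here is a minimal sketch, assuming the data have already been read into a 2-D (latitude, longitude) array – the variable names are mine, not MERRA’s:

```python
import numpy as np

def global_mean(temp, lats):
    """Area-weighted global mean of a (lat, lon) field.

    Grid cells shrink toward the poles, so each latitude row is
    weighted by the cosine of its latitude before averaging.
    """
    weights = np.cos(np.deg2rad(lats))   # one weight per latitude row
    zonal_mean = temp.mean(axis=1)       # average around each latitude circle
    return np.sum(zonal_mean * weights) / np.sum(weights)

# Hypothetical usage with a regular 1-degree grid:
lats = np.linspace(-89.5, 89.5, 180)
temp = 288.0 + np.random.randn(180, 360)  # stand-in for a 2-m temperature field
print(global_mean(temp, lats))
```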

Global average 2-meter temperature. MERRA data is courtesy of NASA's GMAO.

Whoa, it’s higher in July, when the earth is farthest from the sun! Erl was right – what gives?

Well, there is a crucial difference between the hemispheres – there is very little land in the southern hemisphere, and quite a lot of land in the northern hemisphere, especially in the midlatitudes. I’m going to claim that the northern hemisphere land masses are dominating this signal – despite the fact that the earth is 70% ocean! Why? Because they have such a high-amplitude seasonal cycle of temperature. This is because the land surface has a much smaller heat capacity per unit area than the oceans, so for a given amount of radiation absorbed, the land surface will increase its temperature more than the ocean.

Let’s apply a land/ocean mask to the same 2-meter temperature data to isolate the two surfaces. To compare them, I’ll plot each surface’s anomalies relative to its own annual average.
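In case you want to try this at home, here is a minimal sketch of the masking and anomaly step, assuming a monthly (12, lat, lon) climatology and a boolean land mask – both hypothetical names, since any gridded source will do:

```python
import numpy as np

def annual_cycle_anomaly(monthly_temp, mask, lats):
    """Seasonal cycle over masked points, as anomalies from the annual mean.

    monthly_temp: (12, lat, lon) climatology; mask: boolean (lat, lon) array,
    True where points are kept (e.g., land); lats: latitude values in degrees.
    """
    w = np.cos(np.deg2rad(lats))[:, None] * mask   # area weights, zeroed where masked out
    means = np.array([np.sum(m * w) / np.sum(w) for m in monthly_temp])
    return means - means.mean()                    # anomalies relative to the annual average

# Hypothetical usage:
#   land_cycle  = annual_cycle_anomaly(t2m_clim, land_mask, lats)
#   ocean_cycle = annual_cycle_anomaly(t2m_clim, ~land_mask, lats)
```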

2-meter temperature annual cycle for land and ocean. MERRA data courtesy of NASA's GMAO.

Bingo. As you can see, the amplitude of the seasonal cycle is dominated by the land surfaces. Indeed, the average temperature over the oceans barely changes at all, peaking in late southern hemisphere summer/early fall, as we would expect from the thermal inertia of the oceans. This part of the signal is completely lost when you examine the global average temperature without separating land and ocean data – the global annual cycle has an amplitude of around 4C, while the ocean surfaces have an amplitude of less than half a degree. Compare that to a nearly 12C amplitude for the land surfaces!

Erl could have saved himself a lot of work, and the rest of us a lot of trouble, had he just used a land/ocean mask, as suggested in the comments at WUWT, to prove to himself that the “continentality” he so derides is in fact at work. Continents heat up a lot in summer and cool down a lot in winter because of their low heat capacity relative to the oceans. Because there is much more land in the northern hemisphere, the seasonal cycle of global average temperature is dominated by the northern hemisphere seasonality over land. That is, global average temperature tends to peak in July and reach a minimum in January because the northern hemisphere land surface reaches its maximum and minimum temperatures in those same months. This is not a surprise to anyone who has taken a basic course on earth’s climate.

Yes, it’s counter-intuitive that the earth’s global average temperature is highest when it is farthest from the sun, but the explanation is very simple.

“Variation in cloud cover should be the first hypothesis to explore when the Earth warms or cools over time. You would have to be very naive to think that the inter-annual change in temperature that is most obvious between November and March could be due to something other than a change in cloud cover.”

Naive, indeed. This is a striking example of the Dunning-Kruger effect.

Categories: Uncategorized

The Arctic death spiral

October 1, 2011

Those following the observations of Arctic sea ice extent and volume were probably not surprised when the summer minimum numbers rolled in and 2011 had the lowest or second-lowest sea ice extent since monitoring began in the 1970s. The downward trend shows no sign of stopping, and the distinction between lowest and second-lowest is unimportant, as year-to-year extent is influenced by surface wind patterns. No, what is important is this graph from the National Snow and Ice Data Center:

Sea ice extent from the NSIDC

The greyed region is ±2 standard deviations, with the central line the 1979-2000 average. The current extent values, especially at the summer minimum, are strikingly low. Just eye-balling the chart suggests that 2007 and 2011 are approaching a deviation of more than 3-sigma from the average. The other years in the past decade are almost as low. What about the volume of sea ice in the Arctic?

Sea ice volume from PIOMAS

While the summer minimum sea ice extent has approximately halved, volume has decreased by almost 75%. This is especially troubling – thin ice responds more rapidly to variations in temperature and weather patterns, and the volume of multi-year ice is in rapid decline.

But sea ice is not the only victim in the Arctic. Last week The Conversation posted a stunning article, “Canadian ice shelves halve in six years”. From the article:

Half of Canada’s ancient ice shelves have disappeared in the last six years, researchers have said, with new data showing significant portions melted in the last year alone.

“Since the end of July, pieces equaling one and a half times the size of Manhattan Island have broken off,” said Luke Copland, researcher in the Department of Geography at the University of Ottawa.

These are shelves that have existed since long before Europeans arrived. Let that sink in.

From a related article from ABC News:

Between 1906 and 1982, there has been a 90 percent reduction in the areal extent of ice shelves along the entire coastline, according to data published by W.F. Vincent at Quebec’s Laval University. The former extensive “Ellesmere Island Ice Sheet” was reduced to six smaller, separate ice shelves: Serson, Petersen, Milne, Ayles, Ward Hunt and Markham. In 2005, the Ayles Ice Shelf whittled almost completely away, as did the Markham Ice Shelf in 2008 and the Serson this year.

Ice shelves are massive, floating platforms of ice, often at the terminus of marine glaciers. Unlike sea ice, which thins and thickens with the seasons and is constantly jostled around by winds, these shelves are more permanent, though still dynamic, features. They are native only to polar regions, where sub-freezing annual-average temperatures keep the ice from melting long before it reaches the sea. The primary mechanism for ice loss from these shelves is calving – when the ice reaches a certain distance beyond the grounding line, where it is anchored to the seabed, chunks mechanically shear off to form icebergs.

However, ice shelves across the world have been losing mass over the past decades, many at an ever-accelerating pace, including the dramatic collapse of the Larsen B Ice Shelf in Antarctica in 2002. Glaciologists have pinpointed two major sources for this acceleration – warming ocean waters that undermine the shelf from below, and surface melt-water pools that chisel vertical fractures into the shelf, greatly reducing its structural integrity.

The accelerating decline of sea ice, ice shelves, and glaciers is but one line of evidence demonstrating that the world is warming. Unfortunately, the loss of ice contributes to the ice-albedo feedback and is set not only to disrupt ecosystems, but to threaten water supplies for the millions who rely on glacial meltwater. Perhaps, though, the visibility of this phenomenon will finally start to resonate with people.

The great retreat of Jakobshavn Isbrae in Greenland, via GRID-Arendal

We live in a changing world – we clearly have the power to disrupt it for the worse, but that also means we have the ability to shape it for the better.

Categories: Arctic, Ice and snow

Cloud chamber demonstration

September 27, 2011

This is just too cool to pass up.

Despite it being common knowledge in the scientific community, many people don’t know that there is radioactive decay going on all around them – and it’s perfectly safe. Background radiation comes from many natural sources, including the radioactive potassium-40 in the bananas you eat.

Radioactivity has three common “flavors”, though there are a number of other decay processes. Alpha decay occurs when an atom expels a helium nucleus (two protons and two neutrons) and decays to a lighter element, beta decay is the release of an electron, and gamma radiation is the release of a highly energetic photon. Alpha particles can be blocked by a piece of paper, while gamma radiation takes a thick slab of lead to attenuate. Luckily, background radiation tends to be mostly of the alpha variety.

In the video below, a cloud chamber is used to detect background radiation, as well as to illustrate the radioactivity of Americium and radon gas.

A cloud chamber is a sealed container which usually contains supersaturated alcohol vapor. Supersaturation means that the relative humidity of the vapor is greater than 100%. While this seems impossible, it’s in fact a common occurrence in clouds (you can get supersaturations in excess of a few percent in the vigorous updrafts of a thunderstorm). Water cannot spontaneously condense without the aid of a condensation nucleus – a particle like sodium chloride, for example – due to the energy required to overcome surface tension, among other things. If you supersaturate a chamber of water vapor and then inject condensation nuclei, a cloud will form instantly.

A cloud chamber operates on a similar principle. Without condensation nuclei in the isolated chamber, you can reach high supersaturations without producing condensation droplets. As an atom undergoes radioactive decay, the radiation ionizes the supersaturated vapor. These ions act as condensation nuclei and essentially trace the path of the emitted particle through the chamber with little cloud streaks.
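To see why the nuclei matter so much, here is a rough sketch of the Kelvin effect – the equilibrium saturation ratio over a tiny pure-water droplet. The constants are approximate textbook values:

```python
import math

# Kelvin equation: e_s(r) / e_s(flat) = exp(2*sigma / (rho_w * R_v * T * r)).
# The smaller the droplet, the higher the supersaturation it needs to survive,
# which is why condensation "wants" a pre-existing nucleus to start from.
sigma = 0.072   # surface tension of water, N/m (approximate)
rho_w = 1000.0  # density of liquid water, kg/m^3
R_v = 461.5     # gas constant for water vapor, J/(kg K)
T = 283.0       # temperature, K

for r in (1e-9, 1e-8, 1e-7, 1e-6):  # droplet radius in meters
    S = math.exp(2 * sigma / (rho_w * R_v * T * r))
    print(f"r = {r:.0e} m -> equilibrium saturation ratio {S:.3f}")
# A 10-nm droplet needs ~12% supersaturation to persist; a 1-micron droplet
# needs almost none. Without nuclei, tiny embryonic droplets simply evaporate.
```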

Very cool.

Categories: Uncategorized

Defending Science, Part 1

September 5, 2011

As scientists, it’s our job to pose important questions, investigate them thoroughly, and analyze them honestly. But I don’t believe it’s enough to let the fruits of our discovery lie fallow in a journal, hoping to be picked up by the media – which they won’t be, unless they can be twisted into headlines that run counter to the evidence-based narrative, facts be damned.

This latest scandal, involving a paper coauthored by Dr. Roy Spencer, of UAH satellite infamy*, is a textbook example of how dysfunctional our media have become and of the state of the “controversy” in climate science. For those who are out of the loop, courtesy of Prof. Michael Ashley via Prof. Scott Mandia’s blog:

Have Spencer & Braswell found a significant difference between observations and the IPCC models?

No. Their article contains a number of errors that have since been identified by climate scientists. These errors range from the trivial (using the wrong units for the radiative flux anomaly), to the serious (treating clouds as the cause of climate change, rather than resulting from day-to-day weather; comparing a 10 year observational period with a 100 year model period and not allowing for the spread in model outputs).

Within three days of the publication of Spencer & Braswell 2011, two climate scientists (Kevin Trenberth & John Fasullo) repeated the analysis and showed that the IPCC models are in agreement with the observations, thus refuting Spencer & Braswell’s claims. An independent analysis by Andrew Dessler also confirms the Trenberth & Fasullo result.

Furthermore, Trenberth and Fasullo showed that the better-performing IPCC models were distinguished by their ability to track the El Niño-Southern Oscillation, not by their climate sensitivity as claimed by Spencer & Braswell.

In other words, there is no evidence from the 10 years of satellite data that forecasts of global warming are too high. There are additional problems with the article, but these new analyses are sufficient to invalidate the conclusions made by Spencer & Braswell.

This paper was published in Remote Sensing, a journal primarily for geographers that does not deal in climate or atmospheric science. You may ask yourself – why would someone with a climate science paper choose a non-climate science journal? Because, as Kevin Trenberth points out at RealClimate, it would probably not even make it to peer review. Solution: choose a journal with little or no expertise in the subject and hope it gets published.

It does, and the right-wing media feeding frenzy commences.

Christian Post: Scientist Says His Study May Disprove Global Warming

Fox Nation: New NASA Data Blow Gaping Hole in Global Warming Alarmism

Investor's Business Daily: Junk Science Unravels

FoxNews.com: Does NASA Data Show Global Warming Lost in Space?

Newsmax: NASA Study: Global Warming Alarmists Wrong

Hot Air: Sky-high hole blown in AGW theory?

Daily Mail: Climate change far less serious than 'alarmists' predict says NASA scientist

Let me note that Roy Spencer is not a NASA scientist. None of these articles brought into question his credibility, either.

*Roy Spencer is most famous for the UAH satellite controversy, which for years brought into question the reliability of our surface temperature records. Qiang Fu and others from the University of Washington discovered serious flaws in Spencer’s analysis, reported in Fu et al., 2004. The analysis was updated accordingly and is now consistent with the other temperature records. Roy also recently stated that he views his job “a little like a legislator, supported by the taxpayer, to protect the interests of the taxpayer and to minimize the role of government.” If I were a journalist I would consider these noteworthy details, but then again, none of these news outlets seem to consider themselves journalists.

But those are small details, as the rest is like mistaking the pile of pebbles in your backyard for the Rockies.

There are so many things wrong with these stories, but the most egregious is the media’s complete misunderstanding of climate change, and of science in general. A single study in a single journal (especially an obscure one unrelated to the subject matter of the study) does not unravel what is now a mountain of evidence. Arrhenius, over 100 years ago, had already discovered the answer to why the earth was warmer than it should have been, given the amount of radiation it receives from the sun: greenhouse gases. They are the reason we are a temperate planet with a mild diurnal temperature range; why the moon, devoid of this protective layer, swings between hellishly hot and insufferably cold; and why Venus is a sweltering inferno.

In the late 1980s, computing technology and our understanding of the climate system had progressed to the point that we could model it, albeit crudely, and see our future unfolding before us. A long-ago debunked talking point says that climate models do not replicate reality – we know, in fact, that this is complete and utter B.S. Even the simplistic model that James Hansen used in his 1988 study shows decadal warming similar to that which has been observed.

Climate model simulations over the instrumental record, from the IPCC Third Assessment Report (2001).

That is just one of the many lines of inquiry into climate change attribution that we have, but the key principle is basic radiation physics – gases absorb and emit radiation in specific wavelengths. Greenhouse gases absorb and emit radiation in the peak emission wavelengths of the earth. They absorb and re-emit this radiation both back to the surface and out to space, raising the surface temperature of the earth to an equilibrium temperature higher than it would be in their absence. You can, with as simple or as complex a model as you like, see what happens when you increase the concentration of these gases. Hint: things get warmer, and they have not stopped getting warmer according to all 5 major surface temperature and satellite records. In fact, I will show in an upcoming post that models are underestimating major changes in the climate system.
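As an illustration – and only an illustration – here is roughly the simplest such model: a zero-dimensional energy balance with one partially absorbing atmospheric layer. The emissivity is a crude tuning knob I chose for the sketch, not a measured constant:

```python
# Zero-dimensional energy balance: absorbed sunlight in, infrared out.
# With no greenhouse layer, the surface sits at the effective temperature;
# adding a partially absorbing layer (emissivity eps) warms the surface.
S0 = 1361.0      # solar constant, W/m^2 (approximate)
albedo = 0.3     # planetary albedo (approximate)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

T_eff = (S0 * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"No greenhouse:   {T_eff:.0f} K")  # ~255 K, well below freezing

eps = 0.78  # crude stand-in for greenhouse absorption; increase it and re-run
T_surf = T_eff * (2 / (2 - eps)) ** 0.25  # standard single-layer "leaky greenhouse" result
print(f"With greenhouse: {T_surf:.0f} K")  # ~288 K, close to the observed average
```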

The idea that this one study, with its simplistic model that was tuned to give an answer, could topple decades and millions of hours of research is absurd, but it sells a hell of a lot more headlines and placates everyone’s latent hope that global warming isn’t happening and we really don’t have anything to worry about.

97.5% of climate scientists agree that human activities are increasing the planet’s temperatures.

Naomi Oreskes’ groundbreaking 2004 survey of all published, peer-reviewed studies of climate change between 1993 and 2003 found that not one…single…paper…rejected the fact that humans are causing global warming.

What has happened here?

The level of abuse that climate science has suffered, at the hands of the media and a certain political party, is nothing compared to the abuse that this planet has taken and is going to suffer in the coming century. Even now, as top presidential candidates call scientists frauds, deride global warming “alarmists”, and pray for rain that never comes, we are being greeted with ever more graphic images and disturbing details of the state of our climate system.

What has happened, and what can we do to fix it?

In this coming series of posts, I will dissect the political landscape and psychology of denial, and examine the current state of our climate and where it is headed. In doing so, I hope to gain insight into ways that scientists and science advocates can engage and inform the populace and turn the tide against misinformation.

Brace yourself, because it’s going to be a hard landing.

Turtles all the way down

September 5, 2011

In light of recent and ongoing events – the Spencer and Braswell 2011 debacle in Remote Sensing and everyone’s continued misunderstanding of climate models – I’d like to kick off this new blog with some thoughts about modelling in science.

Whether one is a chemist or a climate scientist, a soil engineer or an astrophysicist, we all use models to understand the world around us. The ideal gas law, which is sufficiently accurate for gases close to standard temperature and pressure, assumes gas molecules are point masses with no volume and treats all collisions as elastic. My thesis advisor describes this best as “volume-less tennis balls flying around”. It’s an elegantly-derived model that is incredibly accurate for the atmosphere, but even though it’s a law, it’s still a model.
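To show the tennis balls earning their keep, here is a one-liner of a sketch: the ideal gas law, in its standard dry-air form, computing the air density at sea level:

```python
# Ideal gas law for dry air: p = rho * R_d * T, so rho = p / (R_d * T).
R_d = 287.05  # specific gas constant for dry air, J/(kg K)
p = 101325.0  # standard sea-level pressure, Pa
T = 288.15    # standard sea-level temperature, K

rho = p / (R_d * T)
print(f"Air density: {rho:.3f} kg/m^3")  # ~1.225 kg/m^3, the textbook value
```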

As we try to understand more complex systems, we must build more complex models. The current state-of-the-art climate models are some of the most complex and computationally demanding model simulations produced by humanity, and they integrate countless hours of scientific research and understanding – from aerosol processes used to model cloud physics to radiation subroutines handling absorption, scattering, and emission across so many wavelengths that they consume half of the computing time. These models, like our most simple models, are derived from basic physics – the laws of thermodynamics, conservation of momentum and mass, etc. – and empirical measurements.

But all models, whether a simple one-dimensional climate model or a state-of-the-art simulation, have some utility. The big, complex models are hard for scientists to analyze – the more processes you include, the higher the resolution, and the fewer simplifying assumptions you make, the more difficult it is to figure out what is going on and what is important. Not impossible, just very hard.

But the simpler the model, the less likely it is to capture the details. A state-of-the-art simulation can represent internal variability and produce ENSO signals, while a one-dimensional model cannot. However, that does not mean that the one-dimensional model is “wrong”. Indeed, both models will tell you that as you increase the concentrations of greenhouse gases in the atmosphere, you will raise the average surface temperature of the earth.

There is nothing wrong with simple models. As Einstein said, “Make everything as simple as possible, but not simpler”. The simpler your model is, the easier it is to understand, as fundamental relationships will become more obvious. Simplify too much, though, and the model loses all utility.

Consider three models of the earth: the earth is flat, the earth is spherical, and the earth is an oblate spheroid whose equatorial diameter is greater than its polar diameter.

The “earth is flat” model is wrong. It is too simple a model, based on sparse and simple observations and too many wrong assumptions. It is common-sense to our eyes, so long as we don’t question our logic too deeply. This model still states we exist on the surface of something, perhaps its only redeeming aspect, but it’s going to prevent us from understanding physics, especially gravity and astronomy. A bad model, a model too simplistic, is actually a detriment to our understanding. See: geocentrism.

What about “earth is a sphere”? Well, it isn’t really spherical. It’s actually a little wider at the equator than from pole to pole. Is this model of the earth “wrong”?

I don’t think so. As Isaac Asimov described in “The Relativity of Wrong”, there is a spectrum of “right” and “wrong”, and as humans, our models are going to fall somewhere on this spectrum.

Wrong |----(earth is flat)--------------------(earth is spherical)--(earth is an oblate spheroid)--| Right

The “earth is spherical” model is mostly right, and is very close to “earth is an oblate spheroid”. The latter captures reality much better – the distance between lines of latitude on a sphere is constant (about 111 km per degree), but on the real earth, it varies with latitude because of the equatorial bulge. There are implications for gravity as well. Depending upon your application, “earth is a sphere” may be a perfectly sufficient model, as it simplifies calculations.
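To put numbers on the bulge, here is a sketch comparing one degree of latitude on a sphere with one degree on an oblate spheroid, using WGS84-like constants and the standard meridional radius of curvature:

```python
import math

a = 6378.137      # equatorial radius, km (WGS84)
f = 1 / 298.257   # flattening (approximate)
e2 = f * (2 - f)  # eccentricity squared

def deg_of_latitude(lat_deg):
    """Length of one degree of latitude on the spheroid, in km."""
    phi = math.radians(lat_deg)
    # Meridional radius of curvature: larger near the poles, smaller at the equator.
    M = a * (1 - e2) / (1 - e2 * math.sin(phi) ** 2) ** 1.5
    return M * math.pi / 180

print(f"Sphere (mean radius): {6371.0 * math.pi / 180:.2f} km per degree")
for lat in (0, 45, 90):
    print(f"Spheroid at {lat:2d} deg: {deg_of_latitude(lat):.2f} km per degree")
# ~110.57 km at the equator vs ~111.69 km at the poles, a spread of about 1%
```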

For my research, the error introduced by assuming a constant distance between latitudes is negligible compared to the orders of magnitude of the processes I want to understand. For an engineering team managing remote sensing satellites, such as GRACE, which relates satellite drift to gravitational differences, the sphere assumption is too simple. It introduces enough error to compromise the data from the mission and hinder our understanding.

Spencer and Braswell 2011 is an example of using too simple a model with dubious assumptions based on poor evidence.

Give me a model of the earth with more than a few free parameters, and I will demonstrate that it’s turtles all the way down.