Validation & attribution experiment
This experiment used a model of the atmosphere and land surface only, with sea surface conditions, atmospheric greenhouse gas concentrations, and other external factors imposed on the model. This allowed us to study the range of weather possible given these conditions.
Recent extreme weather events include the flooding in the UK in Autumn 2000 and Summer 2007 and the European heat-wave of 2006. The severity of these events highlights the need for an improved capability to determine whether the chances of such events are influenced by human-induced climate change, associated with raised levels of greenhouse gases in the atmosphere.
When these events occur, the public currently receives, via the media, a series of unfounded and conflicting claims and counter-claims as to whether climate change has had an influence on the extreme weather.
This experiment aims to provide a scientific method of attributing the chance of extreme weather events to increases in greenhouse gas emissions.
The previous climateprediction.net Seasonal Attribution project pioneered a method of attributing the risk of an extreme weather event occurring to the increase in greenhouse gas concentrations in the atmosphere. This is done by computing very large ensembles of climate models under two different climate scenarios. The first scenario has the atmosphere resembling, as closely as possible, the observed atmospheric conditions in the period when an extreme weather event occurred. The second scenario has an atmosphere representing an estimate of what conditions would have been had no man-made greenhouse gases been emitted: in effect, as if the industrial revolution had never happened. By comparing the results from the two ensembles, the attributable risk of the event occurring can be calculated. This project followed the method of the Seasonal Attribution project but expanded it into a general framework for attribution studies. A key component of this framework is the validation of the climate model to be used.
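Attribution studies of this kind are commonly summarised as a fraction of attributable risk, FAR = 1 − P0/P1, where P1 is the probability of the event in the observed-conditions ensemble and P0 its probability in the no-emissions ensemble. The sketch below illustrates the calculation; the function name, ensemble sizes and event counts are all made-up illustrations, not results from this project:

```python
# Illustrative attributable-risk calculation for a two-ensemble comparison.
# P1: probability of exceeding the event threshold in the "actual" ensemble;
# P0: the same probability in the "no man-made emissions" ensemble.

def fraction_attributable_risk(exceed_actual, n_actual, exceed_natural, n_natural):
    """FAR = 1 - P0/P1: the fraction of the event's risk attributable
    to the difference between the two scenarios."""
    p1 = exceed_actual / n_actual
    p0 = exceed_natural / n_natural
    return 1.0 - p0 / p1

# Hypothetical counts: 120 of 2000 "actual" runs exceed the observed
# rainfall total, versus 30 of 2000 "natural" runs.
far = fraction_attributable_risk(120, 2000, 30, 2000)
print(far)  # ~0.75: three quarters of the risk attributable in this toy example
```

A FAR of 0 would mean the emissions made no difference to the event's likelihood; a FAR approaching 1 would mean the event was almost impossible without them.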
Validation of the model involves comparing the statistics of the model output with those of the observed atmosphere, using observed datasets and reanalysis products, such as the ECMWF ERA-40 reanalysis, as the benchmark. We are interested in the performance of the model in predicting rainfall, especially prolonged and heavy precipitation extremes in Europe. We are also interested in the ability of the model to predict the storm track and modes of atmospheric variability, such as the North Atlantic Oscillation, the Scandinavian Pattern and blocking activity. Comparison datasets are calculated from the observed and reanalysis data, and corresponding datasets are calculated from the ensemble output.
We also investigate the ability of the model to simulate the frequency of occurrence of precipitation extremes over a range of timescales, from 1-day to 90-day events. If necessary and appropriate, an empirical adjustment based on model output statistics is computed and used to bring model-simulated fields into consistency with observations. This correction can then be used in future attribution studies.
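The text does not specify the form of the empirical adjustment; one common model-output-statistics technique is empirical quantile mapping, which maps each model value to the observed value at the same quantile. The sketch below works under that assumption, with an illustrative function name and synthetic "rainfall" data standing in for real model output and observations:

```python
import numpy as np

# Hedged sketch of one possible model-output-statistics correction:
# empirical quantile mapping. Each model value is assigned its quantile
# within the model training sample, then replaced by the observed value
# at that same quantile.

def quantile_map(model_train, obs_train, model_new):
    """Adjust model_new so its distribution matches obs_train."""
    model_sorted = np.sort(model_train)
    obs_sorted = np.sort(obs_train)
    # Empirical quantile of each new model value in the model training sample
    q = np.searchsorted(model_sorted, model_new, side="right") / len(model_sorted)
    q = np.clip(q, 0.0, 1.0)
    # Read off the observed value at the same quantile
    return np.quantile(obs_sorted, q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, size=5000)    # synthetic "observed" daily rainfall
model = rng.gamma(2.0, 2.0, size=5000)  # synthetic model output with a dry bias
corrected = quantile_map(model, obs, model)
print(round(float(np.mean(model)), 2), round(float(np.mean(corrected)), 2))
```

Because the mapping is built on the full empirical distribution, it adjusts the extremes as well as the mean, which matters for the heavy-precipitation statistics this experiment targets.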
The model used is the Hadley Centre HadAM3P model. This is related to the model used in the Seasonal Attribution project and the regional model used in PRECIS. It is a full atmosphere general circulation model (GCM), with prescribed sea surface temperatures (SST) and sea ice concentrations (SI).
The resolution is N96: 192 grid boxes in longitude and 145 in latitude. This equates to a grid spacing of 1.875° x 1.25° and a grid box size of 208 km x 139 km at the equator.
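The quoted grid figures can be checked with back-of-the-envelope arithmetic, assuming roughly 111.19 km per degree at the equator for a spherical Earth and noting that 145 latitude points spanning pole to pole give 144 intervals:

```python
# Sanity-check of the N96 grid figures quoted above.
KM_PER_DEG = 111.19  # ~ 2*pi*6371 km / 360, spherical-Earth approximation

dlon = 360 / 192        # longitude spacing in degrees
dlat = 180 / (145 - 1)  # 145 latitude points pole to pole -> 144 intervals
print(dlon, dlat)  # 1.875 1.25
print(round(dlon * KM_PER_DEG), round(dlat * KM_PER_DEG))  # 208 139
```

Away from the equator the east-west box width shrinks with the cosine of latitude, so 208 km is the maximum width.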
Other changes from the Seasonal Attribution project include an interactive sulphur cycle, changes to the physics that improve the general atmospheric flow, an improved land surface scheme and improvements to the parameterisation of clouds and precipitation. These improvements bring the model statistics we wish to study in this experiment closer to observed values, and compensate for the lower resolution compared with the Seasonal Attribution project's model.
Experimental set-up for the validation phase
The experimental set-up is what is known as a time-slice experiment. We are running the simulations from 1959 to 2000. This period is broken into overlapping two-year periods offset by one year, e.g. there is a period from 1961-1963 and a period from 1962-1964. This allows us to examine how the model diverges from its initial starting conditions and removes any bias that may be caused by the model "spinning up" into a stable state.
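The slicing scheme described above can be sketched as follows; the helper name and the assumption that slices are labelled by calendar year are illustrative:

```python
# Sketch of the overlapping time-slice layout: two-year simulation
# periods offset by one year, covering 1959-2000.

def time_slices(start=1959, end=2000, length=2):
    """Return (first_year, last_year) pairs for overlapping slices."""
    return [(y, y + length) for y in range(start, end - length + 1)]

slices = time_slices()
print(slices[:3])  # [(1959, 1961), (1960, 1962), (1961, 1963)]
print(len(slices))
```

Each year in the middle of the record is therefore covered by two independent simulations, one freshly started and one a year into its run, which is what lets the spin-up bias be diagnosed and removed.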
There are no physics perturbations in this experiment. Each model run consists of the initial starting conditions, a perturbation that is applied to the initial conditions and the forcings for the period that is to be run. The forcings include the sea surface temperatures (SST), sea ice (SI) concentrations, emissions and constants for the sulphur cycle, ozone, natural volcanic emissions, solar variability and greenhouse gas emissions.
The initial conditions are taken from a long run of the model from 1959-2000, which was performed on University of Oxford computing facilities.
Sea surface temperatures (SST) and sea ice fractions (SI) are taken from HadISST, a global dataset of observed SST and SI from the Hadley Centre.
Ozone is taken from observations up to 1991, then the IPCC SRES A2 scenario is used to the year 2000, as observations are not available in the format needed by the model.
Natural volcanic emissions are from the Sato et al. dataset.
Solar variability is taken from the Solanki and Krivova dataset. Greenhouse gas emissions are taken from the IPCC SRES A1B scenario.
The initial condition perturbations are created by computing next day differences in the 3D field of potential temperature from the long run. There are 1741 of these perturbations. Along with 39 starting conditions, this creates an ensemble of almost 70,000 members.
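The ensemble size quoted above follows directly from combining every perturbation with every set of starting conditions:

```python
# Ensemble size: every initial-condition perturbation is paired with
# every set of starting conditions.
perturbations = 1741
starting_conditions = 39
print(perturbations * starting_conditions)  # 67899, i.e. "almost 70,000"
```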
The Validation Experiment should provide us with an evaluation of the model and, if necessary and appropriate, information on possible calibration of the model's output. If the model passes this test, we will proceed to an experiment looking at the degree to which anthropogenic emissions of greenhouse gases contributed to the risk of the Summer 2007 floods in the UK.
This experiment will have a similar set-up to the Seasonal Attribution project. A set of simulations of 2006-2007 will be run as in the Validation Experiment. Then, another set will be run for the same period, but in a hypothetical climate in which humans had never emitted greenhouse gases. This will mean both reducing the greenhouse gas concentrations in the model and decreasing the ocean temperatures and sea ice coverage according to various estimates of the contribution of the greenhouse gas emissions. The implementation will closely follow that used in the Seasonal Attribution project.
A further extension is planned: detailed simulation of the historical climate using surface conditions extracted from pre-industrial coupled-model simulations performed at a coarse numerical resolution (e.g. the Millennium Experiment).