We will be keeping this page updated with a plot showing the results of this experiment as they come in.
- Jump to the First Results, with 47 models, from 11 March 2014.
- Jump to the Second Batch of Results, with 510 models, from 18 March 2014.
- Jump to the Third Set of Results, with 5,027 models, from 20 March 2014.
- Jump to the Fourth Set of Results, with 15,816 models, from 25 March 2014.
- Jump to the Fifth Set of Results, with 29,464 models, from 1 April 2014.
- Jump to the Sixth Set of Results, with 33,071 models, from 10 April 2014.
Final Results, 30th April 2014
We are now able to announce the final results from this experiment. In the end, we received 39,726 models, which we have analysed and plotted here: The inset shows more detail around the 1-in-100-year rainfall event, where you can see more clearly the difference between the dark blue “winter as observed” and the dark green “winter in the world that might have been” curves.
Over the course of this experiment, we compared tens of thousands of simulations of possible weather in our present-day climate with tens of thousands of simulations of a hypothetical world without the influence of past greenhouse gas emissions in the atmosphere using the same climate model.
Comparing the numbers of extremely wet winters between these two groups provides estimates of the influence of climate change on UK weather. We found that what was a 1-in-100-year winter rainfall event (i.e. a 1% risk of extreme rainfall in the winter of any given year) is now estimated to be a 1-in-80-year event (i.e. a 1.25% risk in any given winter), so the risk of a very wet winter has increased by around 25%. Here is a graph showing the range of percentage increase in risk, with the mean at 25%: This change is statistically significant thanks to the number of computer simulations we were able to run – nearly 40,000 models in total – and for that we are very grateful to our volunteers, who allow us to run these simulations on their home computers.
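For readers who want to see where the "around 25%" comes from, the sketch below simply restates the return periods quoted above as annual probabilities and takes the ratio. The return periods (100 and 80 years) come from this post; everything else is just arithmetic.

```python
# The risk-ratio arithmetic behind the headline result: a 1-in-100-year
# event becoming a 1-in-80-year event is a 25% increase in annual risk.

def annual_risk(return_period_years):
    """Annual probability of an event with the given return period."""
    return 1.0 / return_period_years

p_without = annual_risk(100)  # world without past emissions: 1% per winter
p_actual = annual_risk(80)    # winter as observed: 1.25% per winter

increase = (p_actual - p_without) / p_without
print(f"Risk per winter: {p_without:.2%} -> {p_actual:.2%}")
print(f"Relative increase in risk: {increase:.0%}")
```

Running this prints a 1.00% to 1.25% change in per-winter risk, i.e. a 25% relative increase.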
However, while our findings are statistically robust, the result depends on how man-made climate change is represented in the experiment. We used different climate models to estimate the pattern of global warming, which gave a range of possible changes in risk. In several cases the models showed no change, or even a reduction in risk, but overall the simulations showed a small increase in the likelihood of extremely wet winters in the south of England.
It will never be possible to say that any specific flood was caused by human-induced climate change. We have shown, however, that the odds of getting an extremely wet winter are changing due to man-made climate change. Past greenhouse gas emissions and other forms of pollution have “loaded the weather dice” so the probability of the south of England experiencing extremely wet winters again has slightly increased.
Total winter rainfall, although useful as a benchmark, is not the direct cause of flood damage, so we are working with collaborators, such as the Centre for Ecology & Hydrology, to explore the implications of our results for river flows, flooding and ultimately property damage.
Sixth Set of Results, 10th April 2014
Here is the latest plot for the weather@home 2014 UK flooding experiment: We have now analysed over 33,000 models! The curves are quite hard to tell apart, so we’ve zoomed in on an interesting part of the plot so that you can see the blue and green curves separately. The further apart they are, the more climate change increased the risk of last winter’s flooding. However, we still have to wait for more models to come back, and do more analysis, before we can say whether this change is statistically significant.
Fifth Set of Results, 1st April 2014
We’ve now had nearly 30,000 model results in – here’s how the plot is looking: You can see that there is some “wobble” in the curve for the blue models. This makes it difficult to say anything conclusive about the difference between the blue (“winter as observed”) and green (“winter that might have been without climate change”) results. So, we are going to keep running the experiment for another week or so to get more model results back, which will hopefully give us a clearer overall result. Here is the animation of the results so far:
Fourth Set of Results, 25th March 2014
We’re now at more than 15,000 results, and we still have more coming in – we’re hoping to have tens of thousands of models back before we make a final statement about the results. Here’s a plot of the latest results: And here’s the animation, now showing all 15,000+ models: It’s really exciting to see so many results coming in! Thank you, as always, to everyone who is helping us by running the models on their computers – and if you aren’t already signed up, we still need more models run, so please join in!
Third Set of Results, 20th March 2014
We now have several thousand results back, which is fantastic! Here is the latest plot, with over 5,000 models: As you can see, with this many models the curves are really filling in at the extreme weather risks to the right of the plot. For more information about how to read this return time plot, please see the Expected Results page. We’ve also updated the animation for this plot, showing it building up from just a handful of models all the way to 5,000:
Second Batch of Results, 18th March 2014
Here is a plot of the second batch of results from the weather@home UK flooding experiment we’re currently running: Each dark blue and green dot represents one model that was run on a participant’s computer. If climate change has changed the odds of getting last winter’s severe flooding, then we should expect to see a significant gap between the blue and green dots. For more information about how to read this return time plot, please see the Expected Results page.

We’re hoping to eventually get several thousand model runs done, but we thought it would be interesting to plot the results so far, so you can see why we need so many. With only 47 model runs (see First Results below), there simply aren’t enough dots to say statistically whether there is a difference between the blue and green dots. Now, with 510, a clearer pattern is starting to emerge. As you can see in the plot, there isn’t currently any significant gap between the blue and green dots. This short video shows an animation of the plot as more and more results are added:

The lighter green dots represent the individual patterns of sea surface temperature we estimated as the response of the climate system to man-made climate change, and so they represent the uncertainty in the human influence. If the blue dots lie within the area covered by the light green dots, then we can’t attribute these results to climate change. We need even more results before we can make a statistical statement about the influence of climate change on the risk of extreme flooding. We will be posting further results in the next few days, when we should be into the thousands of model runs. So, please help us by signing up and joining in!
First Results, 11th March 2014
We have received an impressive response from the public after the launch of our experiment on the 4th of March. Our colleagues at the Oxford e-Research Centre, University of Oxford, estimate that nearly 1,000 new participants have subscribed to the project. Many thanks to all! We have now analysed the first 47 simulations that participants have run and they are plotted in the Figure below:
The blue dots represent the 2013/2014 winter as observed; for this experiment we received 20 simulations. For the “world that might have been” experiment, we received 27 simulations, shown as green dots. The smaller light green dots represent the individual patterns of sea surface temperature we estimated as the response of the climate system to man-made climate change, and so they represent the uncertainty in the human influence.

At this stage, there are not enough simulations to draw any conclusions about the role of climate change in the record wet winter of 2013/2014. However, we want to illustrate to the public why we need such large ensembles, which is why we will show the “results” as they evolve. With an ensemble of this size, the 2013/2014 winter as observed and the winter in the “world that might have been” are not distinguishable from one another. Interestingly, our current wettest simulation comes from one of the “world that might have been” simulations (the uppermost green dot) – but this could be entirely due to chance.

We will also compare the rainfall totals with observations. For this, we will need to calculate the rainfall total from a Met Office dataset for exactly the same region as the one we defined for the simulations (land rainfall for southern England and Wales). A winter this wet has never been observed in roughly 250 years of meteorological records, so we are looking at a 1-in-100-year event at least. As you can see from the figure, we are still only seeing 1-in-10-year events. We need more simulations to see any pattern, so please keep crunching!