Until recently, weather forecasts were deterministic in nature. For example, a forecast might state ``The temperature tomorrow will be $20^\circ$C.'' More recently, however, increasing attention has been paid to the uncertainty associated with such predictions. By quantifying the uncertainty of a forecast, for example with a probability distribution, users can make risk-based decisions. The uncertainty in weather forecasts is typically based upon `ensemble forecasts'. Rather than issuing a single forecast from a numerical weather prediction (NWP) model, ensemble forecasts comprise multiple model runs that differ in their model physics or initial conditions. Ideally, an ensemble forecast would provide a representative sample of the possible outcomes of the verifying observations. However, due to model biases and inadequate specification of initial conditions, ensemble forecasts are often biased and underdispersed. As a result, estimates of the most likely values of the verifying observations, and of the associated forecast uncertainty, are often inaccurate. It is therefore necessary to correct, or post-process, ensemble forecasts using statistical models known as `ensemble post-processing methods'.

To this end, this thesis is concerned with the application of statistical methodology in the field of probabilistic weather forecasting, and in particular ensemble post-processing. Using various datasets, we extend existing work and propose novel uses of statistical methodology to tackle several aspects of ensemble post-processing. Our novel contributions to the field are as follows.

In chapter~3 we present a comparison study of several post-processing methods, with a focus on probabilistic forecasts of extreme events. We find that the benefits of ensemble post-processing are greater for forecasts of extreme events than for forecasts of common events, and we show that allowing flexible corrections to biases in the ensemble location is important when forecasting extreme events.

In chapter~4 we tackle the challenging problem of post-processing ensemble forecasts without making distributional assumptions, producing recalibrated ensemble forecasts without the intermediate step of specifying a probability forecast distribution. We propose a latent variable model, making a novel application of measurement error models. We show in three case studies that our distribution-free method is competitive with a popular alternative that makes distributional assumptions, and we suggest that it could serve as a useful baseline on which forecasters should seek to improve.

In chapter~5 we address parameter uncertainty in ensemble post-processing. As in all parametric statistical models, the parameter estimates are subject to uncertainty. We approximate the distribution of the model parameters by bootstrap resampling, and demonstrate improvements in forecast skill from incorporating this additional source of uncertainty into out-of-sample probability forecasts.

In chapter~6 we use model diagnostic tools to determine how specific post-processing models may be improved, and we subsequently introduce bias correction schemes that move beyond the standard linear schemes employed in the literature and in practice, particularly for correcting ensemble underdispersion. Finally, we illustrate the difficulty of assessing the skill of ensemble forecasts whose members are dependent, or correlated, and show that dependent ensemble members can lead to surprising conclusions when standard measures of forecast skill are employed.
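To make the post-processing task concrete, the following is a minimal sketch in Python of a Gaussian post-processing model in the style of nonhomogeneous Gaussian regression, one popular method that makes distributional assumptions. The synthetic data, parametrisation, and maximum-likelihood fit are illustrative assumptions made for this sketch, not details taken from the thesis.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic training data: 500 cases, 20 ensemble members per case.
signal = rng.normal(20.0, 3.0, size=500)           # predictable component
truth = signal + rng.normal(0.0, 2.0, size=500)    # verifying observations
# Members are biased (+1.5) and cluster too tightly (underdispersed).
ens = signal[:, None] + 1.5 + rng.normal(0.0, 0.5, size=(500, 20))
ens_mean = ens.mean(axis=1)
ens_var = ens.var(axis=1, ddof=1)

# Predictive distribution N(a + b*mean, c + d*var); the log
# parametrisation keeps the variance coefficients positive.
def neg_log_lik(params):
    a, b, log_c, log_d = params
    mu = a + b * ens_mean
    sigma = np.sqrt(np.exp(log_c) + np.exp(log_d) * ens_var)
    return -norm.logpdf(truth, loc=mu, scale=sigma).sum()

fit = minimize(neg_log_lik, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
a, b = fit.x[:2]
print("fitted location correction: mu = %.2f + %.2f * ensemble mean" % (a, b))
\end{verbatim}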
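The role of parameter uncertainty (chapter~5) can be illustrated in miniature. This sketch bootstraps a simple additive bias correction fitted to a small, hypothetical training set of forecast errors, and mixes the resampled fits into a single predictive distribution, which is typically wider than the plug-in forecast that ignores parameter uncertainty. The data and model here are assumptions made for illustration only.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical training set: 60 forecast errors (observation minus
# ensemble mean); with so few cases the fitted bias is itself uncertain.
errors = rng.normal(1.5, 2.0, size=60)

# Bootstrap the bias and spread estimates.
n_boot = 1000
idx = rng.integers(0, errors.size, size=(n_boot, errors.size))
bias_hat = errors[idx].mean(axis=1)
sd_hat = errors[idx].std(axis=1, ddof=1)

# Predictive density for a new case with ensemble mean 20 degrees C:
# a mixture over bootstrap fits rather than a single plug-in Gaussian.
x = np.linspace(10.0, 35.0, 200)
plugin = norm.pdf(x, 20.0 + errors.mean(), errors.std(ddof=1))
mixture = norm.pdf(x[:, None], 20.0 + bias_hat, sd_hat).mean(axis=1)
print("peak density: plug-in %.3f, bootstrap mixture %.3f"
      % (plugin.max(), mixture.max()))   # the mixture is typically flatter
\end{verbatim}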
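Finally, to indicate why dependence between ensemble members complicates skill assessment (chapter~6), this sketch computes the standard ensemble estimator of the continuous ranked probability score (CRPS) together with its small-ensemble (`fair') adjustment, which treats the members as independent draws. The example ensembles are hypothetical, and the thesis's own experiments are not reproduced here.

\begin{verbatim}
import numpy as np

def crps_ensemble(members, obs, fair=False):
    """Ensemble CRPS: mean |member - obs| minus a spread term."""
    m = members.size
    term1 = np.abs(members - obs).mean()
    pair = np.abs(members[:, None] - members[None, :]).sum()
    denom = m * (m - 1) if fair else m * m   # the fair version assumes
    return term1 - 0.5 * pair / denom        # independent members

rng = np.random.default_rng(2)
obs = 0.0
indep = rng.normal(0.0, 1.0, size=20)              # 20 independent members
dep = np.repeat(rng.normal(0.0, 1.0, size=2), 10)  # 20 members, 2 distinct
for name, ens in [("independent", indep), ("dependent", dep)]:
    print("%s: standard %.3f, fair %.3f"
          % (name, crps_ensemble(ens, obs), crps_ensemble(ens, obs, fair=True)))
\end{verbatim}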
Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:700174
Date: January 2016
Creators: Williams, Robin Mark
Contributors: Ferro, Christopher; Kwasniok, Frank
Publisher: University of Exeter
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://hdl.handle.net/10871/21693