About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Wind forecast verification : a study in the accuracy of wind forecasts made by the Weather Channel and AccuWeather

Scheele, Kyle Fred 08 November 2011 (has links)
The Weather Channel (TWC) and AccuWeather (AWX) are leading providers of weather information to the general public. The purpose of this Master’s Report is to examine the wind speed forecasts made by these two providers and determine their reliability and accuracy. The data used within this report were collected over a 12-month period at 51 locations across the state of Texas. The locations were grouped according to wind power class, which ranged from Class 1 to Class 4. The length of the forecast period was 9 days for TWC and 14 days for AWX. It was found that the values forecast by TWC were generally not well calibrated, but were never far from perfect calibration and always demonstrated positive skill. The sharpness of TWC’s forecasts decreased consistently with lead time, allowing them to maintain a skill score greater than the climatological average throughout the forecast period. TWC tended to over-forecast wind speed in short-term forecasts, especially within the lower wind power class regions. AWX forecasts were found to have positive skill for the first 6 days of the forecasting period before becoming near zero or negative. AWX’s forecasts maintained fairly high sharpness throughout the forecast period, which contributed to increasingly uncalibrated forecast values and negative skill in longer-term forecasts. The findings within this report should provide a better understanding of the wind forecasts made by TWC and AWX and clarify the strengths and weaknesses of both companies.
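The verification vocabulary used in this abstract (calibration, sharpness, and skill relative to climatology) can be made concrete with a small sketch. The snippet below is illustrative only: it uses made-up wind-speed values and a simple mean-squared-error skill score against a climatological reference, and is not the report's actual verification procedure.

```python
import numpy as np

def skill_score(forecasts, observations):
    """Mean-squared-error skill score relative to a climatological reference.

    SS = 1 - MSE(forecast) / MSE(climatology), where the climatological
    forecast is the mean of the observations. SS > 0 means the forecast
    beats climatology; SS = 1 is a perfect forecast.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    observations = np.asarray(observations, dtype=float)
    mse_forecast = np.mean((forecasts - observations) ** 2)
    mse_climatology = np.mean((observations.mean() - observations) ** 2)
    return 1.0 - mse_forecast / mse_climatology

# Hypothetical daily wind-speed forecasts (m/s) at one site and lead time.
obs = np.array([5.1, 7.3, 4.0, 9.2, 6.5, 3.8, 8.0])
fcst = np.array([5.5, 6.9, 4.8, 8.1, 7.0, 4.5, 7.2])
print(f"skill score vs. climatology: {skill_score(fcst, obs):+.2f}")
```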
2

Review and analysis of the National Weather Service river forecasts for the June 2008 eastern Iowa floods

Hunemuller, Toby John 01 December 2010 (has links)
The accuracy and quality of river forecasts depend on the nature of each flood. Less extreme, more common floods may tolerate deviations between the forecast and the observed stage, because locals may be prepared, based on past experience, to deal with such events. For less frequent, high-flow events the flood forecasts and advance warning time are more critical, because locals need time to develop emergency response plans. The National Weather Service River Forecast Centers (NWS RFCs) develop the river forecasts and provide them to the National Weather Service Weather Forecast Offices (NWS WFOs) for dissemination. During flood events the RFCs are tasked with processing the observed data and running, reviewing, and modifying the forecast models to provide reasonable river forecasts based on observed conditions and the forecasters' experience. This thesis discusses the personal experiences of the author, analyzes the components of the National Weather Service river forecasting process, analyzes June 2008 river and precipitation forecasts for several eastern Iowa watersheds, and discusses the results of the analysis, as well as supporting current calls to action for forecast verification through the hindcasting process.
3

The Verification of Probabilistic Forecasts in Decision and Risk Analysis

Jose, Victor Richmond January 2009 (has links)
Probability forecasts play an important role in many decision and risk analysis applications. Research and practice over the years have shown that the shift towards distributional forecasts provides a more accurate and appropriate means of capturing risk in models for these applications. This means that mathematical tools for analyzing the quality of these forecasts, whether they come from experts, models, or data, become important to the decision maker. In this regard, strictly proper scoring rules have been widely studied because of their ability to encourage assessors to provide truthful reports. This dissertation contributes to the scoring rule literature in two main areas of assessment: probability forecasts and quantile assessments.

In the area of probability assessment, scoring rules typically studied in the literature, and commonly used in practice, evaluate probability assessments relative to a default uniform measure. In many applications, the uniform baseline used to represent some notion of ignorance is inappropriate. In this dissertation, we generalize the power and pseudospherical families of scoring rules, two large parametric families of commonly used scoring rules, by incorporating the notion of a non-uniform baseline distribution for both the discrete and continuous cases. With an appropriate normalization and choice of parameters, we show that these new families of scoring rules relate to various well-known divergence measures from information theory and to well-founded decision models when framed in an expected utility maximization context.

In applications where the probability space considered has an ordinal ranking between states, an important property often considered is sensitivity to distance. Scoring rules with this property provide higher scores to assessments that allocate higher probability mass to events “closer” to the one that occurs, based on some notion of distance. In this setting, we provide an approach that allows us to generate new strictly proper scoring rules that are sensitive to distance from well-known strictly proper binary scoring rules. Through the use of weighted scoring rules, we also show that these new scores can incorporate a specified baseline distribution, in addition to being strictly proper and sensitive to distance.

In the inverse problem of quantile assessment, scoring rules have not yet been well studied and well developed. We examine the differences between scoring rules for probability and quantile assessments, and demonstrate why the tools that have been developed for probability assessments no longer encourage truthful reporting when used for quantile assessments. In addition, we shed light on new properties and characterizations for some of these rules that could guide decision makers trying to choose an appropriate scoring rule.
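The property this dissertation builds on, strict propriety (an assessor maximizes expected score only by reporting their true beliefs), can be illustrated with the familiar quadratic (Brier-type) score. The sketch below is a generic illustration of that property with made-up numbers; it is not the generalized power or pseudospherical families with non-uniform baselines developed in the dissertation.

```python
import numpy as np

def quadratic_score(report, outcome):
    """Quadratic (Brier-type) score for a discrete probability report.
    Higher is better; this scoring rule is strictly proper."""
    report = np.asarray(report, dtype=float)
    return 2.0 * report[outcome] - np.sum(report ** 2)

def expected_score(report, belief):
    """Expected score of a report under the assessor's true belief."""
    return sum(p * quadratic_score(report, i) for i, p in enumerate(belief))

belief = np.array([0.6, 0.3, 0.1])              # assessor's true distribution
honest = expected_score(belief, belief)
hedged = expected_score(np.array([0.5, 0.4, 0.1]), belief)
print(f"truthful report: {honest:.4f}, distorted report: {hedged:.4f}")
# The truthful report yields the strictly higher expected score.
```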
4

Verification of the Weather Research and Forecasting Model for Alberta

Pennelly, Clark William Unknown Date
No description available.
5

Rank statistics of forecast ensembles

Siegert, Stefan 08 March 2013 (has links) (PDF)
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members. An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty.

A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like independent random draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank.

A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions. It thus provides a way to describe forecast ensembles nonparametrically, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way.

Two applications of this result to ensemble analysis are presented. Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account.

The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest or larger than the largest ensemble member. It is shown that these events are better predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
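As a rough, self-contained illustration of the rank statistics discussed in this abstract (not code from the thesis), the sketch below computes verification ranks for a synthetic, statistically consistent ensemble and recovers the flat rank histogram and the unconditional outlier frequency of 2/(K+1).

```python
import numpy as np

rng = np.random.default_rng(0)

def verification_rank(ensemble, verification):
    """Rank of the verification within the ensemble (1 .. K+1).

    Rank k means the verification exceeds exactly k-1 ensemble members;
    ranks 1 and K+1 are the outlier events."""
    return int(np.sum(np.asarray(ensemble) < verification)) + 1

# Statistically consistent toy ensemble: members and verification are
# independent draws from the same (standard normal) distribution.
K, n_forecasts = 10, 20000
ranks = np.array([
    verification_rank(rng.standard_normal(K), rng.standard_normal())
    for _ in range(n_forecasts)
])

hist = np.bincount(ranks, minlength=K + 2)[1:] / n_forecasts
print("rank frequencies:", np.round(hist, 3))          # roughly flat
outlier_freq = hist[0] + hist[-1]
print(f"outlier frequency: {outlier_freq:.3f} (expected {2 / (K + 1):.3f})")
```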
6

Statistical Post-processing of Deterministic and Ensemble Wind Speed Forecasts on a Grid / Post-traitements statistiques de prévisions de vent déterministes et d'ensemble sur une grille

Zamo, Michaël 15 December 2016 (has links)
Errors of numerical weather prediction (NWP) models can be reduced by post-processing methods (model output statistics, MOS) that build a statistical relationship between the observations and the associated forecasts. The objective of the present thesis is to build MOS for wind speed forecasts over France on the grids of several NWP models, to be applied operationally at Météo-France, while addressing two main issues. First, building MOS on the grid of an NWP model, with thousands of grid points over France, requires methods fast enough for operational delays. Second, frequent updates of NWP models require updating the MOS, but training MOS requires an NWP model unchanged for years, which is usually not possible.

A new analysis of the 10 m wind speed has been built on the grid of Météo-France's local-area, high-resolution (2.5 km) NWP model, AROME. The new analysis is the sum of two terms: a spline of AROME's most recent forecast plus a correction given by a spline of the location coordinates. The new analysis outperforms the existing analysis while displaying realistic spatio-temporal patterns. This new analysis, available at an hourly time step over 4 years, is then used as a gridded observation to build MOS in the remainder of this thesis.

MOS for wind speed over France have been built for ARPEGE, Météo-France's global NWP model. A comparative test bed identifies random forests as the most efficient MOS method. Loading the information needed to issue a forecast is slow for this MOS; the loading time is reduced by a factor of 10 by training the random forests over blocks of nearby grid points and pruning them as much as possible. This optimization does not degrade forecast performance. This block MOS approach is currently being made operational.

A preliminary study of the estimation of the continuous ranked probability score (CRPS) leads to recommendations for estimating it efficiently and generalizes existing theoretical results. Then 4 ensemble NWP models from the TIGGE database are post-processed with 6 methods and combined with the corresponding raw ensembles using several statistical methods. The best combination method is based on the theory of prediction with expert advice, which guarantees good forecast performance relative to a reference forecast. This method quickly adapts its combination weights, which is an asset when the performance of the combined forecasts changes. This part of the work highlighted contradictions between two criteria for selecting the best combination method: minimization of the CRPS and flatness of the rank histogram according to the Jolliffe-Primo tests. It is proposed to choose a model by first imposing flatness of the rank histogram.
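The CRPS mentioned above is commonly estimated directly from the ensemble members. The sketch below shows one common estimator (the so-called energy form) with made-up values; since the thesis's point is precisely that several estimators exist and differ in their properties, this is only one possible choice, not the thesis's recommendation.

```python
import numpy as np

def crps_ensemble(members, obs):
    """One common ensemble-based CRPS estimator (the 'energy' form):

        CRPS = mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|

    Lower is better; other estimators with different biases exist."""
    x = np.asarray(members, dtype=float)
    term_obs = np.mean(np.abs(x - obs))
    term_spread = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term_obs - term_spread

# Hypothetical 10 m wind-speed ensemble (m/s) and verifying analysis value.
ensemble = np.array([3.2, 4.1, 4.8, 5.0, 5.6, 6.3, 7.1, 8.0])
print(f"CRPS = {crps_ensemble(ensemble, obs=5.4):.3f} m/s")
```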
7

Identification of Hydrologic Models, Inputs, and Calibration Approaches for Enhanced Flood Forecasting

Awol, Frezer Seid January 2020 (has links)
The primary goal of this research is to evaluate and identify proper calibration approaches, skillful hydrological models, and suitable weather forecast inputs to improve the accuracy and reliability of hydrological forecasting in different types of watersheds. The research started by formulating an approach that examined single- and multi-site, and single- and multi-objective optimization methods for calibrating an event-based hydrological model to improve flood prediction in a semi-urban catchment. It then assessed whether reservoir inflow in a large complex watershed could be accurately and reliably forecasted by simple lumped, medium-level distributed, or advanced land-surface-based hydrological models. This was followed by a comparison of multiple combinations of hydrological models and weather forecast inputs to identify the best possible model-input integration for enhanced short-range flood forecasting in a semi-urban catchment. Finally, Numerical Weather Predictions (NWPs) with different spatial and temporal resolutions were evaluated across Canada’s varied geographical environments to find candidate precipitation input products for improved flood forecasting. Results indicated that aggregating the objective functions across multiple sites into a single objective function provided more representative parameter sets of a semi-distributed hydrological model for enhanced peak flow simulation. Proficient lumped hydrological models with proper forecast inputs showed better hydrological forecast performance than distributed and land-surface models in two distinct watersheds. For example, forcing the simple lumped model (SACSMA) with bias-corrected ensemble inputs provided reliable reservoir inflow forecasts in a sizeable, complex Prairie watershed, and a combination of the lumped model (MACHBV) with the high-resolution weather forecast input (HRDPS) provided skillful and economically viable short-term flood forecasts in a small semi-urban catchment. The comprehensive verification identified low-resolution NWPs (GEFSv2 and GFS) over western and central Canada and high-resolution NWPs (HRRR and HRDPS) in southern Ontario as having promising potential for forecasting the timing, intensity, and volume of floods.

Lay abstract: Accurate hydrological models and inputs play essential roles in creating a successful flood forecasting and early warning system. The main objective of this research is to identify adequately calibrated hydrological models and skillful weather forecast inputs to improve the accuracy of hydrological forecasting in various watershed landscapes. The key contributions include: (1) a finding that a combination of efficient optimization tools with a series of calibration steps is essential for obtaining representative parameter sets of hydrological models; (2) simple lumped hydrological models, if used appropriately, can provide accurate and reliable hydrological forecasts in different watershed types, besides being computationally efficient; and (3) candidate weather forecast products identified in Canada’s diverse geographical regions can be used as inputs to hydrological models for improved flood forecasting. The findings from this thesis are expected to benefit hydrological forecasting centers and researchers working on model and input improvements.
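The multi-site aggregation mentioned in this abstract (combining per-site objective functions into a single calibration target) can be sketched generically as follows. The Nash-Sutcliffe-style objective, the equal weighting, and the numbers are illustrative assumptions, not the exact formulation used in the thesis.

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency for one gauging site (1 = perfect fit)."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def aggregated_objective(sim_by_site, obs_by_site, weights=None):
    """Collapse per-site NSE values into a single objective to maximise.

    A plain (optionally weighted) average; the thesis's actual aggregation
    scheme may differ."""
    scores = [nse(s, o) for s, o in zip(sim_by_site, obs_by_site)]
    return float(np.average(scores, weights=weights))

# Hypothetical hourly flows (m^3/s) at two gauges during one event.
obs = [np.array([10., 14., 30., 55., 42., 25.]),
       np.array([5., 6., 12., 20., 16., 9.])]
sim = [np.array([11., 15., 27., 50., 45., 27.]),
       np.array([5., 7., 10., 22., 15., 10.])]
print(f"aggregated NSE = {aggregated_objective(sim, obs):.3f}")
```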
8

Design of ensemble prediction systems based on potential vorticity perturbations and multiphysics. Test for western Mediterranean heavy precipitation events

Vich Ramis, Maria del Mar 18 May 2012 (has links)
The main goal of this thesis is to improve the current prediction skill for potentially hazardous heavy precipitation weather events in the western Mediterranean region. We develop and test three different ensemble prediction systems (EPSs) that account for uncertainties present in both the numerical models and the initial conditions. To generate the EPSs we take advantage of the connection between potential vorticity (PV) structures and cyclones, and use different physical parameterization schemes. We obtain an improvement in forecast skill when using an EPS compared to a deterministic forecast. The EPSs generated by perturbing the initial conditions perform better on the statistical verification scores. The results of this thesis show the utility and suitability of forecasting methods based on perturbing the upper-level precursor PV structures present in cyclonic situations. The results and strategies discussed here aim to be a basis for future studies making use of these methods.
