11

Evaluation of probabilistic forecasts in Uppsala and its potential use in winter road maintenance / Utvärdering av probabilistiska väderprognoser i Uppsala och den potentiella användningen inom vinterväghållning

Johansson, Elisabet January 2023 (has links)
Efficient winter road maintenance is crucial for safety and societal function during the winter months in Sweden. This report aims to evaluate the MetCoOp ensemble system CMEPS and investigate its potential use as a basis for formulating criteria for snow removal that account for forecasted weather. Today the criteria for activating snow removal in Sweden are static: removal starts after a set amount of snow has fallen and should end within a set time span. The verification metrics rank histogram, continuous ranked probability score (CRPS), reliability diagram, and Brier score were used to evaluate temperature and solid precipitation. Observations used as verification were taken at the measuring station Geocentrum in Uppsala during the winters of 2020/2021, 2021/2022, and November-December 2022. The analysis shows the temperature forecast to be under-dispersive and to have a cold bias. The ensemble system is shown to be less reliable for predicting temperatures below 0 °C in the first 24 hours after the forecast is issued; still, the forecast generally performs better for short lead times. The forecast overestimates both solid and liquid precipitation, and the wet bias is greatest for short lead times and long accumulation times. Short lead times are the most reliable for solid precipitation over 1 mm and 3 mm. The first 24-30 hours are the most important for an application in winter road maintenance, and based on how the forecast system performs for these lead times in this study, it would need calibration. For larger amounts of snow, new criteria could help adjust the starting time and time limits. Before implementing such criteria, practical questions, such as whether dynamic criteria would lead to an improvement and how high the probability threshold should be, must be answered. The sample size is also found to be too small, and further analysis is required, especially with data allowing for evaluation of higher thresholds.
/ An ensemble forecast consists of several forecasts that, by being based on slightly different information, describe a number of possible future weather outcomes. Probabilistic forecasts are used in many parts of society today, as they make it possible to see the probability of a given weather event. In this work, the 30-member ensemble forecast from MetCoOp has been evaluated for temperature and snow. The report also discusses whether probabilities about future weather could be used in criteria for deciding when measures against snow and ice should begin and end. Efficient snow removal and anti-skid treatment are socially critical tasks that are costly and require extensive planning. Probabilistic forecasts are already used as an aid by those working in winter road maintenance, mainly for anti-skid treatment, but today the criteria are fixed and snow removal begins once a certain amount of snow has been measured. Observations of temperature, precipitation and precipitation type from the measuring station Geocentrum in Uppsala for the winters of 2020/2021, 2021/2022 and November-December 2022 were used as verification. The forecast was evaluated using rank histograms, CRPS, reliability diagrams and the Brier score. The temperature forecast turned out to have small and insufficient spread, especially for short lead times, and the ensemble system often showed temperatures that were too low. The analysis indicated that the amount of solid precipitation was overestimated by the forecast, especially for 24-hour accumulation. The forecast proved most reliable for predicting snow above 1 mm and 3 mm at short lead times. The study also showed that the model overestimated rain, which implies that the ensemble has difficulty estimating precipitation in general, not snow in particular. The forecast proved unreliable for predicting whether the temperature 12 and 24 hours after observed snowfall remained consistently below 0 °C. The analysis is less reliable because of the few snowfall events in Uppsala during the period.
To draw firm conclusions, additional data with more snowfall events needs to be analysed. There is, however, potential to use the MetCoOp ensemble to formulate criteria for snow removal, especially if it is calibrated. With dynamic criteria, start and end times could be adjusted to suit larger snow amounts. Further investigation is needed into the attitudes among practitioners and into what such criteria would look like in practice.
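The verification metrics named in this abstract are straightforward to compute from an archive of ensemble forecasts and observations. Below is a minimal sketch of two of them, the rank histogram and the Brier score for the event "temperature below 0 °C", on synthetic data; the array shapes, the 30-member ensemble size, and the distributions are illustrative assumptions, not the CMEPS data used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical archive: 200 forecast cases, 30-member ensemble (as in the
# MetCoOp ensemble), with observed temperatures as verification.
n_cases, n_members = 200, 30
ensemble = rng.normal(0.0, 2.0, size=(n_cases, n_members))
observed = rng.normal(-0.5, 2.5, size=n_cases)  # illustrative "truth"

# Verification rank: number of members below the observation (0..n_members).
ranks = (ensemble < observed[:, None]).sum(axis=1)
rank_hist, _ = np.histogram(ranks, bins=np.arange(n_members + 2) - 0.5)

# Brier score for the binary event "temperature below 0 °C":
# mean squared difference between forecast probability and outcome.
p_forecast = (ensemble < 0.0).mean(axis=1)  # ensemble relative frequency
outcome = (observed < 0.0).astype(float)
brier = np.mean((p_forecast - outcome) ** 2)
```

A flat rank histogram indicates a statistically consistent ensemble; a U-shape, as reported for the temperature forecast here, indicates under-dispersion.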
12

Essays on forecast evaluation and financial econometrics

Lund-Jensen, Kasper January 2013 (has links)
This thesis consists of three papers that make independent contributions to the fields of forecast evaluation and financial econometrics. As such, the papers, Chapters 1-3, can be read independently of each other. In Chapter 1, “Inferring an agent’s loss function based on a term structure of forecasts”, we provide conditions for identification, estimation and inference of an agent’s loss function based on an observed term structure of point forecasts. The loss function specification is flexible, as we allow the preferences to be both asymmetric and to vary non-linearly across the forecast horizon. In addition, we introduce a novel forecast rationality test based on the estimated loss function. We employ the approach to analyse the U.S. Government’s preferences over budget surplus forecast errors. Interestingly, we find that it is relatively more costly for the government to underestimate the budget surplus and that this asymmetry is stronger at long forecast horizons. In Chapter 2, “Monitoring Systemic Risk”, we define systemic risk as the conditional probability of a systemic banking crisis. This conditional probability is modelled in a fixed-effect binary response panel-model framework that allows for cross-sectional dependence (e.g. due to contagion effects). In the empirical application we identify several risk factors, and it is shown that the level of systemic risk contains a predictable component which varies through time. Furthermore, we illustrate how the forecasts of systemic risk map into dynamic policy thresholds in this framework. Finally, by conducting a pseudo out-of-sample exercise, we find that the systemic risk estimates provided reliable early-warning signals ahead of the recent financial crisis for several economies. Finally, in Chapter 3, “Equity Premium Predictability”, we reassess the evidence of out-of-sample equity premium predictability.
The empirical finance literature has identified several financial variables that appear to predict the equity premium in-sample. However, Welch & Goyal (2008) find that none of these variables have any predictive power out-of-sample. We show that the equity premium is predictable out-of-sample once certain shrinkage restrictions are imposed on the model parameters. The approach is motivated by the observation that many of the proposed financial variables can be characterised as 'weak predictors', and this suggests that a James-Stein type estimator will provide a substantial risk reduction. The out-of-sample explanatory power is small, but we show that it is, in fact, economically meaningful to an investor with time-invariant risk aversion. Using a shrinkage decomposition we also show that standard combination forecast techniques tend to 'overshrink' the model parameters, leading to suboptimal model forecasts.
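The James-Stein intuition invoked here can be sketched in a few lines: when many coefficients are individually small and noisily estimated, shrinking the whole vector toward zero reduces estimation risk. The positive-part estimator below is a generic illustration of this idea, not the specific estimator developed in the chapter; the coefficient values are simulated "weak predictors".

```python
import numpy as np

def james_stein_shrink(beta_ols, sigma2):
    """Positive-part James-Stein shrinkage of a coefficient vector toward
    zero. Illustrative only; the thesis's estimator may differ in detail."""
    beta = np.asarray(beta_ols, dtype=float)
    p = beta.size
    if p <= 2:
        return beta  # JS shrinkage requires at least 3 dimensions
    norm2 = beta @ beta
    # Shrinkage factor, truncated at zero (the "positive part").
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / norm2)
    return factor * beta

# Weak predictors: true effects near zero, so OLS estimates are mostly noise.
rng = np.random.default_rng(1)
beta_hat = rng.normal(0.02, 0.1, size=12)     # hypothetical OLS estimates
beta_js = james_stein_shrink(beta_hat, sigma2=0.01)
```

The closer the estimates are to pure noise relative to their signal, the harder the vector is pulled toward zero, which is exactly the risk-reduction argument for weak equity premium predictors.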
13

Rank statistics of forecast ensembles

Siegert, Stefan 08 March 2013 (has links) (PDF)
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members. An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like random independent draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank. A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions.
It thus provides a way to describe forecast ensembles in a nonparametric way, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way. Two applications of this result to ensemble analysis are presented. Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account. The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest, or larger than the largest ensemble member. It is shown that these events are better predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
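The benchmark outlier frequency of 2/(K+1) quoted above follows directly from statistical consistency: the verification is equally likely to take any of the K+1 ranks, and two of those ranks (below the minimum, above the maximum) are outliers. A small simulation illustrates this; the normal distribution is an arbitrary choice, since the result is distribution-free.

```python
import numpy as np

rng = np.random.default_rng(2)
K, n_cases = 10, 200_000

# Statistically consistent ensemble: members and verification drawn
# i.i.d. from the same distribution (standard normal, for illustration).
members = rng.normal(size=(n_cases, K))
verif = rng.normal(size=n_cases)

# Outlier: verification falls outside the range of the ensemble.
outlier = (verif < members.min(axis=1)) | (verif > members.max(axis=1))
freq = outlier.mean()
expected = 2.0 / (K + 1)   # 2 outlier ranks out of K+1 equally likely ranks
```

With K = 10 the expected outlier frequency is 2/11 ≈ 0.18, and the simulated frequency matches it to within sampling error, regardless of the forecast distribution chosen.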
14

Decision-making, uncertainty and the predictability of financial markets: Essays on interest rates, crude oil prices and exchange rates

Kunze, Frederik 17 May 2018 (has links)
No description available.
15

Rank statistics of forecast ensembles

Siegert, Stefan 21 December 2012 (has links)
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members. An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like random independent draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank. A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions.
It thus provides a way to describe forecast ensembles in a nonparametric way, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way. Two applications of this result to ensemble analysis are presented. Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account. The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest, or larger than the largest ensemble member. It is shown that these events are better predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
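The hypothesis tests discussed in this abstract build on rank uniformity: under statistical consistency, each of the K+1 ranks is equally likely, so observed rank counts can be checked against a uniform expectation. A basic chi-square version of such a check is sketched below; it is a standard goodness-of-fit test, not the stratification-aware test developed in the thesis, and the data are simulated.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(3)
K, n_cases = 20, 5_000

# A consistent ensemble: verification exchangeable with the members.
members = rng.normal(size=(n_cases, K))
verif = rng.normal(size=n_cases)
ranks = (members < verif[:, None]).sum(axis=1)  # ranks 0..K

# Chi-square test of the null "all K+1 ranks equally likely".
counts = np.bincount(ranks, minlength=K + 1)
stat, pvalue = chisquare(counts)
```

For a stratified archive, as the thesis shows, applying such a test naively to each subset can reject consistency spuriously, which is why the stratification effects must be accounted for.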
16

[en] ON THE MISSING DISINFLATION PUZZLE: A DATA-DRIVEN APPROACH / [pt] SOBRE O MISSING DISINFLATION PUZZLE: UMA ABORDAGEM COM APRENDIZADO DE MÁQUINA

23 September 2021 (has links)
[en] This paper examines the potential explanations for the Missing Disinflation Puzzle (MDP). We construct a data set containing only variables associated with the puzzle, and use Machine Learning (ML) methods to compute estimates for U.S. Consumer Price Index inflation over the period of interest. These methods can handle large data sets and perform variable selection. A model selection exercise using the Model Confidence Set over pseudo-out-of-sample forecasts is proposed to assess forecasting performance and to analyze the variable selection pattern of these models. We analyze the variable selection performed by the best models and find evidence for explanations associated with different metrics for inflation expectations, in particular those linked to consumer surveys. (The Portuguese abstract is a translation of the same text.)
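The pseudo-out-of-sample design with ML-based variable selection described here can be sketched with an expanding window and a Lasso penalty. This is a generic illustration of the exercise, on simulated data; the variable names, window length, and penalty value are assumptions, and the thesis's actual ML methods and Model Confidence Set step are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)

# Hypothetical panel: 160 months, 25 candidate inflation predictors,
# of which only the first 3 matter in this simulated setup.
T, p = 160, 25
X = rng.normal(size=(T, p))
beta = np.zeros(p)
beta[:3] = [0.5, -0.4, 0.3]
y = X @ beta + rng.normal(scale=0.5, size=T)

# Expanding-window pseudo-out-of-sample forecasts, re-selecting
# variables each period via the Lasso penalty.
errors, selected = [], []
for t in range(120, T):
    model = Lasso(alpha=0.05).fit(X[:t], y[:t])
    errors.append(y[t] - model.predict(X[t:t + 1])[0])
    selected.append(np.flatnonzero(model.coef_))
rmse = float(np.sqrt(np.mean(np.square(errors))))
```

Tracking which columns survive in `selected` across windows is the kind of variable-selection pattern the paper inspects to discriminate between explanations of the puzzle.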
17

Evaluating USDA Agricultural Forecasts

Bora, Siddhartha S. 01 September 2022 (has links)
No description available.
18

The Non-alcoholic Beverage Market in the United States: Demand Interrelationships, Dynamics, Nutrition Issues and Probability Forecast Evaluation

Dharmasena, Kalu Arachchillage Senarath 2010 May 1900 (has links)
There are many different types of non-alcoholic beverages (NAB) available in the United States today compared to a decade ago. Additionally, the needs of beverage consumers have evolved over the years, focusing attention on functionality and health dimensions. These trends in volume of consumption are a testament to the growth in the NAB industry. Our study pertains to ten NAB categories. We developed and employed a unique cross-sectional and time-series data set based on Nielsen Homescan data associated with household purchases of NAB from 1998 through 2003. First, we considered demographic and economic profiling of the consumption of NAB in a two-stage model. Race, region, age, presence of children, and gender of household head were the most important factors affecting the choice and level of consumption. Second, we used expectation-prediction success tables, calibration, resolution, the Brier score and the Yates partition of the Brier score to measure the accuracy of predictions generated from qualitative choice models used to model the purchase decision of NAB by U.S. households. The Yates partition of the Brier score outperformed all other measures. Third, we modeled demand interrelationships, dynamics and habits of NAB consumption, estimating own-price, cross-price and expenditure elasticities. The Quadratic Almost Ideal Demand System, the synthetic Barten model and the State Adjustment Model were used. Soft drinks were substitutes and fruit juices were complements for most non-alcoholic beverages. Investigation of a proposed tax on sugar-sweetened beverages revealed the importance of attending not only to the direct effects but also to the indirect effects of taxes on beverage consumption. Finally, we investigated factors affecting nutritional contributions derived from consumption of NAB, and we ascertained the impact of the USDA's year-2000 Dietary Guidelines for Americans on the consumption of NAB.
Significant factors affecting caloric and nutrient intake from NAB were price, employment status of household head, region, race, presence of children and the gender of the household food manager. Furthermore, we found that the USDA nutrition intervention program was successful in reducing caloric and caffeine intake from consumption of NAB. The away-from-home intake of beverages and potential impacts of NAB advertising are not captured in our work. In future work, we plan to address these limitations.
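The Brier score partitions used in this study decompose overall probability-forecast accuracy into interpretable components. The sketch below implements the closely related Murphy decomposition (reliability, resolution, uncertainty), binning on unique forecast values so the identity BS = REL - RES + UNC holds exactly; the Yates partition used in the thesis is a different but related covariance-based decomposition, and the data here are simulated purchase-decision forecasts.

```python
import numpy as np

def brier_decomposition(p, o):
    """Murphy decomposition of the Brier score into reliability (REL),
    resolution (RES) and uncertainty (UNC), grouping cases by their
    unique forecast probability."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n = p.size
    obar = o.mean()
    rel = res = 0.0
    for v in np.unique(p):
        mask = p == v
        ok = o[mask].mean()            # observed frequency in this bin
        rel += mask.sum() / n * (v - ok) ** 2
        res += mask.sum() / n * (ok - obar) ** 2
    unc = obar * (1.0 - obar)
    return rel, res, unc

# Hypothetical choice-model output: probabilities on a 0.1 grid,
# binary purchase outcomes drawn to be roughly calibrated.
rng = np.random.default_rng(5)
p = rng.integers(0, 11, size=500) / 10.0
o = (rng.random(500) < p).astype(float)

bs = np.mean((p - o) ** 2)
rel, res, unc = brier_decomposition(p, o)
# With exact binning, BS = REL - RES + UNC.
```

Low reliability (forecasts match observed frequencies) and high resolution (forecasts discriminate between outcomes) are what a good purchase-decision model should exhibit.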
19

Essays in International Macroeconomics and Forecasting

Bejarano Rojas, Jesus Antonio 2011 August 1900 (has links)
This dissertation contains three essays in international macroeconomics and financial time series forecasting. In the first essay, I show, numerically, that a two-country New-Keynesian sticky-prices model, driven by monetary and productivity shocks, is capable of explaining the highly positive correlation across the industrialized countries' inflation even though their cross-country correlation in money growth rate is negligible. The structure of this model generates cross-country correlations of inflation, output and consumption that appear to closely correspond to the data. Additionally, this model can explain the internal correlation between inflation and output observed in the data. The second essay presents two important results. First, gains from monetary policy cooperation are different from zero when the elasticity of substitution between domestic and imported goods consumption is different from one. Second, when monetary policy is endogenous in a two-country model, the only Nash equilibria supported by this model are those that are symmetrical. That is, all exporting firms in both countries choose to price in their own currency, or all exporting firms in both countries choose to price in the importer's currency. The last essay provides both conditional and unconditional predictive ability evaluations of aluminum futures contract prices, using five different econometric models, in forecasting the aluminum spot price monthly return 3, 15, and 27 months ahead for the sample period 1989.01-2010.10. From these evaluations, the best model in forecasting the aluminum spot price monthly return 3 and 15 months ahead is a vector autoregression (VAR) model whose variables are the aluminum futures contract price, the aluminum spot price and the risk-free interest rate, whereas for the aluminum spot price monthly return 27 months ahead it is a single-equation model in which the aluminum spot price today is explained by the aluminum futures price 27 months earlier.
Finally, it shows that iterated multiperiod-ahead time series forecasts have better conditional out-of-sample forecasting performance for the aluminum spot price monthly return when an estimated VAR model is used as the forecasting tool.
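The iterated multiperiod-ahead scheme mentioned here feeds one-step forecasts back into the model rather than estimating a separate model per horizon. For a VAR(1) this reduces to repeated application of the one-step map, x_{t+h} = A^h x_t. The sketch below illustrates the mechanics with an arbitrary stable coefficient matrix; it is not the estimated VAR from the essay, whose variables are the futures price, spot price and risk-free rate.

```python
import numpy as np

def iterate_var_forecast(A, x_t, horizon):
    """Iterated multiperiod-ahead forecast from a VAR(1): apply the
    one-step map repeatedly, so the h-step forecast is A^h x_t."""
    x = np.asarray(x_t, dtype=float)
    for _ in range(horizon):
        x = A @ x
    return x

# Illustrative stable VAR(1) in three variables (all eigenvalues inside
# the unit circle, so forecasts decay toward the zero mean).
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.6, 0.1],
              [0.1, 0.0, 0.4]])
x0 = np.array([1.0, -0.5, 0.2])
x3 = iterate_var_forecast(A, x0, horizon=3)
```

The alternative, "direct" approach regresses the h-step-ahead outcome on current variables; the essay's finding is that the iterated scheme performs better conditionally for these data.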
20

Rationalität und Qualität von Wirtschaftsprognosen / Rationality and Quality of Economic Forecasts

Scheier, Johannes 28 April 2015 (has links)
Economic forecasts are meant to reduce uncertainty about future economic developments and to support the planning processes of governments and firms. Empirical studies, however, generally attest to an unsatisfactory level of forecast quality. In the search for causes, rational expectations formation has emerged as a central requirement on forecasters: obvious and systematic errors, such as regular overestimation, should be recognised and eliminated over time. The first study of this dissertation criticises the prevailing understanding of rationality. It is too far-reaching, so that forecasters are prematurely denied rationality; using a new empirical approach, it becomes clear that, viewed from a different angle, the forecasts can well be regarded as rational. The second essay shows that publicly available information in the form of survey results exists which, used appropriately, would improve the quality of business-cycle forecasts; the rationality of these forecasts is therefore strongly limited. The third paper analyses forecast revisions and their causes, showing that there is no relationship between the rationality and the quality of the examined forecast time series. The fourth study presents the results of a forecasting game designed to compare forecasts by amateurs and experts; it turns out that their forecast errors show considerable agreement.
