11 |
Nonlinearity In Exchange Rates: Evidence From African Economies Jobe, Ndey Isatou January 2016 (has links)
In an effort to assess the predictive ability of exchange rate models when data on African countries are sampled, this paper studies nonlinear modelling and prediction of the nominal exchange rate series of the United States dollar against the currencies of thirty-eight African states using the smooth transition autoregressive (STAR) model. A three-step analysis is undertaken. First, nonlinearity in all nominal exchange rate series is investigated using a battery of credible in-sample statistical tests. Significantly, evidence of nonlinear exponential STAR (ESTAR) dynamics is detected across all series. Second, linear models are given another chance: their predictive power on the African data is tested against the tough benchmark of a random walk without drift. The linear models again fail significantly. Lastly, the predictive ability of the nonlinear models against both the random walk without drift and the corresponding linear models is investigated. The nonlinear models display useful forecasting gains over all contending models.
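The ESTAR dynamics detected in the abstract can be made concrete with a small simulation. The sketch below uses illustrative parameter values, not estimates from the thesis: the exponential transition function is near zero close to equilibrium (random-walk-like regime) and approaches one for large deviations (mean-reverting regime).

```python
import numpy as np

# Sketch of an ESTAR(1) process (illustrative parameters, not from the thesis):
#   y_t = phi1 * y_{t-1} + phi2 * y_{t-1} * G(y_{t-1}; gamma) + eps_t,
# with the exponential transition function G(z; gamma) = 1 - exp(-gamma * z^2).

def estar_transition(z, gamma):
    """Exponential transition function, bounded in [0, 1)."""
    return 1.0 - np.exp(-gamma * z**2)

def simulate_estar(n=500, phi1=1.0, phi2=-0.5, gamma=2.0, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        g = estar_transition(y[t - 1], gamma)
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 1] * g + sigma * rng.standard_normal()
    return y

y = simulate_estar()
# Near zero the process behaves like a unit root; far from zero it mean-reverts.
print(estar_transition(0.0, 2.0))   # 0.0: inner regime
print(estar_transition(2.0, 2.0))   # ~0.9997: outer regime
```

With phi1 = 1 the inner regime is a random walk, matching the empirical difficulty of beating the random walk near equilibrium, while the outer regime pulls the series back.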
|
12 |
Evaluation of probabilistic forecasts in Uppsala and its potential use in winter road maintenance / Utvärdering av probabilistiska väderprognoser i Uppsala och den potentiella användningen inom vinterväghållning Johansson, Elisabet January 2023 (has links)
Efficient winter road maintenance is crucial for safety and societal function during the winter months in Sweden. This report aims to evaluate the MetCoOp ensemble system CMEPS and investigate its potential use as a basis for formulating criteria for snow removal that account for forecasted weather. Today the criteria for activating snow removal in Sweden are static, meaning removal starts after a set amount of snow and should end within a set time span. The verification metrics rank histogram, continuous ranked probability score, reliability diagram, and Brier score were used to evaluate temperature and solid precipitation. Observations used for verification were taken at the measuring station Geocentrum in Uppsala during the winters of 2020/2021 and 2021/2022 and in November-December 2022. The analysis shows the temperature forecast to be under-dispersive and to have a cold bias. The ensemble system is shown to be less reliable for predicting temperatures below 0 °C during the first 24 hours after the forecast is issued; still, the forecast generally performs better at short lead times. The forecast overestimates both solid and liquid precipitation. The wet bias is greatest for short lead times and long accumulation times. Short lead times are the most reliable for solid precipitation over 1 mm and 3 mm. The first 24-30 hours are the most important for an application in winter road maintenance, and based on how the forecast system performs at these lead times in this study, it would need calibration. For larger amounts of snow, new criteria could help adjust starting times and time limits. Before implementing such criteria, practical questions, such as whether dynamic criteria would lead to an improvement and how high the probability threshold should be, must be answered. The sample size is also found to be too small, and further analysis is required, especially with data allowing for the evaluation of higher thresholds.
/ An ensemble forecast consists of several forecasts that, by being based on slightly different information, describe a number of possible future weather outcomes. Probabilistic forecasts are used in many parts of society today, as they make it possible to see the probability of a particular weather event. In this work, the 30-member ensemble forecast from MetCoOp has been evaluated for temperature and snow. The report also discusses whether probabilities about future weather could be used in criteria for deciding when measures against snow and ice should start and end. Efficient snow removal and anti-icing are socially critical tasks that are costly and require extensive planning. Probabilistic forecasts are already used as an aid by those working with winter road maintenance, mainly for anti-icing, but the criteria are currently fixed, and snow removal starts once a certain amount of snow has been measured. Observations of temperature, precipitation, and precipitation type from the measuring station Geocentrum in Uppsala for the winters of 2020/2021 and 2021/2022 and November-December 2022 were used as verification. The forecast was evaluated using rank histograms, CRPS, reliability diagrams, and the Brier score. The temperature forecast turned out to have too little spread, especially for short lead times, and the ensemble system often predicted temperatures that were too low. The analysis indicated that the amount of solid precipitation was overestimated by the forecast, especially for 24-hour accumulations. The forecast proved most reliable for predicting snow over 1 mm and 3 mm at short lead times. The study also showed that the model overestimated rain, which means the ensemble has difficulty estimating precipitation in general, not snow in particular. The forecast was not reliable for predicting whether the temperature 12 and 24 hours after observed snowfall stayed below 0 °C. The analysis is less reliable because of the few snowfalls during the period in Uppsala.
To draw firm conclusions, additional data with more snowfall events need to be analysed. There is, however, potential to use the MetCoOp ensemble to formulate criteria for snow removal, especially if it is calibrated. With dynamic criteria, start and end times could be adjusted to larger snow amounts. Further investigation is needed into the attitudes among practitioners and how the criteria would work in practice.
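The Brier score used in the evaluation is straightforward to compute from an ensemble. The sketch below uses synthetic numbers rather than CMEPS output and an illustrative 1 mm threshold: the event probability is taken as the fraction of members exceeding the threshold, then scored against 0/1 observations.

```python
import numpy as np

# Sketch: Brier score for the event "solid precipitation > 1 mm", with the
# event probability taken as the fraction of ensemble members exceeding the
# threshold. The numbers below are synthetic, not CMEPS output.

def ensemble_event_prob(members, threshold):
    """Event probability = fraction of members above the threshold."""
    members = np.asarray(members, dtype=float)
    return np.mean(members > threshold, axis=-1)

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and 0/1 outcome."""
    probs = np.asarray(probs, float)
    outcomes = np.asarray(outcomes, float)
    return np.mean((probs - outcomes)**2)

# 4 forecast cases, 5-member ensembles of accumulated snow (mm):
ens = np.array([[0.0, 0.2, 1.5, 2.0, 0.8],
                [3.0, 2.5, 1.2, 4.0, 1.1],
                [0.0, 0.0, 0.1, 0.3, 0.0],
                [1.4, 0.9, 2.2, 0.0, 1.8]])
obs_event = np.array([0, 1, 0, 1])      # did observed snowfall exceed 1 mm?

p = ensemble_event_prob(ens, 1.0)       # [0.4, 1.0, 0.0, 0.6]
print(brier_score(p, obs_event))        # -> 0.08
```

A score of 0 would mean perfect, fully confident forecasts; the climatological base-rate forecast provides the usual reference for a skill score.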
|
13 |
Essays on forecast evaluation and financial econometrics Lund-Jensen, Kasper January 2013 (has links)
This thesis consists of three papers that make independent contributions to the fields of forecast evaluation and financial econometrics. As such, the papers, chapters 1-3, can be read independently of each other. In Chapter 1, “Inferring an agent’s loss function based on a term structure of forecasts”, we provide conditions for identification, estimation and inference of an agent’s loss function based on an observed term structure of point forecasts. The loss function specification is flexible, as we allow the preferences to be both asymmetric and to vary non-linearly across the forecast horizon. In addition, we introduce a novel forecast rationality test based on the estimated loss function. We employ the approach to analyse the U.S. Government’s preferences over budget surplus forecast errors. Interestingly, we find that it is relatively more costly for the government to underestimate the budget surplus and that this asymmetry is stronger at long forecast horizons. In Chapter 2, “Monitoring Systemic Risk”, we define systemic risk as the conditional probability of a systemic banking crisis. This conditional probability is modelled in a fixed-effect binary response panel-model framework that allows for cross-sectional dependence (e.g. due to contagion effects). In the empirical application we identify several risk factors, and it is shown that the level of systemic risk contains a predictable component which varies through time. Furthermore, we illustrate how forecasts of systemic risk map into dynamic policy thresholds in this framework. By conducting a pseudo out-of-sample exercise, we find that the systemic risk estimates provided reliable early-warning signals ahead of the recent financial crisis for several economies. Finally, in Chapter 3, “Equity Premium Predictability”, we reassess the evidence of out-of-sample equity premium predictability.
The empirical finance literature has identified several financial variables that appear to predict the equity premium in-sample. However, Welch & Goyal (2008) find that none of these variables has any predictive power out-of-sample. We show that the equity premium is predictable out-of-sample once certain shrinkage restrictions are imposed on the model parameters. The approach is motivated by the observation that many of the proposed financial variables can be characterised as ’weak predictors’, and this suggests that a James-Stein-type estimator will provide a substantial risk reduction. The out-of-sample explanatory power is small, but we show that it is, in fact, economically meaningful to an investor with time-invariant risk aversion. Using a shrinkage decomposition, we also show that standard combination forecast techniques tend to ’overshrink’ the model parameters, leading to suboptimal model forecasts.
|
14 |
Rank statistics of forecast ensembles Siegert, Stefan 08 March 2013 (has links) (PDF)
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members.
An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like random independent draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: If the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank.
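A minimal sketch of the verification rank and the resulting rank histogram (using synthetic Gaussian data rather than an operational ensemble) looks like this:

```python
import numpy as np

# Sketch of the verification rank: the position of the observation among the
# K ensemble members. With K members there are K+1 possible ranks; under
# statistical consistency each rank is on average equally likely.

def verification_rank(ensemble, verification):
    """Rank 1 = below all members, rank K+1 = above all members."""
    return int(np.sum(np.asarray(ensemble) < verification)) + 1

def rank_histogram(ensembles, verifications):
    K = ensembles.shape[1]
    ranks = [verification_rank(e, v) for e, v in zip(ensembles, verifications)]
    return np.bincount(ranks, minlength=K + 2)[1:]   # counts for ranks 1..K+1

# Consistent case: members and verification drawn from the same distribution.
rng = np.random.default_rng(1)
N, K = 20000, 9
ens = rng.standard_normal((N, K))
ver = rng.standard_normal(N)
hist = rank_histogram(ens, ver)
print(hist / N)   # all K+1 = 10 frequencies close to 1/10
```

A U-shaped histogram would indicate an under-dispersive ensemble, a hump-shaped one an over-dispersive ensemble.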
A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions. It thus provides a way to describe forecast ensembles in a nonparametric way, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way. Two applications of this result to ensemble analysis are presented.
Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account.
The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest, or larger than the largest ensemble member. It is shown that these events are better predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
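The baseline outlier frequency of 2/(K+1) quoted above is easy to check by simulation in the statistically consistent case (synthetic data, illustrative K):

```python
import numpy as np

# Check of the benchmark outlier rate: in a statistically consistent K-member
# ensemble the verification falls outside the ensemble range with average
# probability 2/(K+1), since ranks 1 and K+1 each occur with probability
# 1/(K+1).

rng = np.random.default_rng(7)
N, K = 100000, 9
ens = rng.standard_normal((N, K))
ver = rng.standard_normal(N)
outlier = (ver < ens.min(axis=1)) | (ver > ens.max(axis=1))
print(outlier.mean())   # close to 2/(K+1) = 0.2
```

The thesis's point is that a skilful outlier forecast should beat this unconditional rate by conditioning on the state of the ensemble.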
|
15 |
Decision-making, uncertainty and the predictability of financial markets: Essays on interest rates, crude oil prices and exchange rates Kunze, Frederik 17 May 2018 (has links)
No description available.
|
16 |
Rank statistics of forecast ensembles Siegert, Stefan 21 December 2012 (has links)
|
17 |
[en] ON THE MISSING DISINFLATION PUZZLE: A DATA-DRIVEN APPROACH / [pt] SOBRE O MISSING DISINFLATION PUZZLE: UMA ABORDAGEM COM APRENDIZADO DE MÁQUINA 23 September 2021 (has links)
[en] This paper examines the potential explanations for the Missing Disinflation Puzzle (MDP). We construct a data set containing only variables associated with the puzzle, and use Machine Learning (ML) methods to compute estimates for U.S. Consumer Price Index inflation over the period of interest. These methods can handle large data sets and perform variable selection. A model selection exercise using the Model Confidence Set over pseudo-out-of-sample forecasts is proposed to assess forecasting performance and to analyze the variable selection pattern of these models. We analyze the variable selection performed by the best models and find evidence for explanations associated with different metrics for inflation expectations, in particular those linked to consumer surveys.
|
18 |
Evaluating USDA Agricultural Forecasts Bora, Siddhartha S. 01 September 2022 (has links)
No description available.
|
19 |
Essays in Agricultural Finance Megan N. Hughes (8775677) 18 July 2024 (has links)
The Farm Service Agency's (FSA) Guaranteed Loan Program supports eligible lenders' ability to provide credit to farmers who would otherwise not qualify for loans by guaranteeing up to 95% of principal and interest if the farmer defaults. The first chapter examines the degree to which bank characteristics influence the FSA guaranteed loan rates paid by farmers. We leverage the unique characteristics of a panel of FSA guaranteed loans that includes both borrower and lender information. Relative to pooled OLS, our preferred fixed-effects regression specification suggests that both time-varying and time-invariant lender effects are significant determinants of FSA guaranteed loan rates. Further, when controlling for lender effects, the significance of borrower characteristics largely diminishes. These findings are consistent with prior studies of broader lending-market interactions. This is the first study of FSA guaranteed loans that accounts for bank-level variation in lending terms. The findings may be of interest to policymakers, program administrators, lenders, and farmers.
Bankers' expectations have been shown to provide reasonable forecasts of land values. In the second chapter, we test the informativeness of bankers' expectations in predicting FSA guaranteed loan application volumes. Once again, we leverage proprietary administrative data from the FSA and, this time, pair it with survey data from the Federal Reserve Bank of Chicago to evaluate bankers' forecasts. Results show that bankers' forecasts are outperformed by naïve models, and that including bankers' expectations does not improve predictive models. These results will be of interest to FSA program administrators, lenders, and potential borrowers.
The study of risk is an important thread of farm management research, as agriculture is an industry with many sources of risk. In the third chapter, we link broad measures of policy risk, in the form of Equity Market Volatility (EMV) trackers, to farmers' perceptions of risk and uncertainty. We use disagreement in ex ante sentiment questions to measure farmer uncertainty. Through a series of pairwise VARs, we show which sources of risk emerge as concerns for farmers, measured by uncertainty in the Purdue University-CME Group Ag Economy Barometer. Increases in tax policy, trade policy, and infectious disease uncertainty are found to Granger-cause movements in farmer sentiment uncertainty.
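The pairwise Granger-causality logic behind the third chapter can be sketched with a minimal F-test: do lags of x reduce the residual sum of squares of a lag regression of y? The series below are simulated stand-ins, not the EMV trackers or the Ag Economy Barometer data.

```python
import numpy as np

# Minimal bivariate Granger-causality F-test: compare a restricted regression
# of y on its own lags with an unrestricted one that adds lags of x.

def lag_matrix(series, lags):
    n = len(series)
    return np.column_stack([series[lags - k:n - k] for k in range(1, lags + 1)])

def granger_f(y, x, lags=2):
    """F-statistic for H0: lags of x do not help predict y."""
    yy = y[lags:]
    X_r = np.column_stack([np.ones(len(yy)), lag_matrix(y, lags)])
    X_u = np.column_stack([X_r, lag_matrix(x, lags)])
    rss = lambda X: np.sum((yy - X @ np.linalg.lstsq(X, yy, rcond=None)[0])**2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    df = len(yy) - X_u.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df)

rng = np.random.default_rng(3)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                 # x drives y with a one-period lag
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.standard_normal()
print(granger_f(y, x))   # large: x Granger-causes y
print(granger_f(x, y))   # small: y does not Granger-cause x
```

In practice the F-statistic is compared with an F(lags, df) critical value, and lag length is chosen by an information criterion rather than fixed at 2 as here.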
|
20 |
The Non-alcoholic Beverage Market in the United States: Demand Interrelationships, Dynamics, Nutrition Issues and Probability Forecast Evaluation Dharmasena, Kalu Arachchillage Senarath 2010 May 1900 (has links)
There are many different types of non-alcoholic beverages (NAB) available in the United States today compared to a decade ago. Additionally, the needs of beverage consumers have evolved over the years, centering attention on functionality and health dimensions. These trends in volume of consumption are a testament to the growth of the NAB industry.
Our study pertains to ten NAB categories. We developed and employed a unique cross-sectional and time-series data set based on Nielsen Homescan data associated with household purchases of NAB from 1998 through 2003.
First, we considered demographic and economic profiling of the consumption of NAB in a two-stage model. Race, region, age, presence of children, and gender of household head were the most important factors affecting the choice and level of consumption.
Second, we used expectation-prediction success tables, calibration, resolution, the Brier score, and the Yates partition of the Brier score to measure the accuracy of predictions generated from qualitative choice models used to model the purchase decision of NAB by U.S. households. The Yates partition of the Brier score outperformed all other measures.
Third, we modeled demand interrelationships, dynamics, and habits of NAB consumption, estimating own-price, cross-price, and expenditure elasticities. The Quadratic Almost Ideal Demand System, the synthetic Barten model, and the State Adjustment Model were used. Soft drinks were substitutes for, and fruit juices complements of, most non-alcoholic beverages. Investigation of a proposed tax on sugar-sweetened beverages revealed the importance of centering attention not only on the direct effects but also on the indirect effects of taxes on beverage consumption.
Finally, we investigated factors affecting nutritional contributions derived from consumption of NAB, and we ascertained the impact of the USDA year-2000 Dietary Guidelines for Americans on the consumption of NAB. Significant factors affecting caloric and nutrient intake from NAB were price, employment status of household head, region, race, presence of children, and the gender of the household food manager. Furthermore, we found that the USDA nutrition intervention program was successful in reducing caloric and caffeine intake from consumption of NAB.
The away-from-home intake of beverages and the potential impacts of NAB advertising are not captured in our work. In future work, we plan to address these limitations.
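The Yates partition used above is one of several Brier-score decompositions. As a related sketch, the code below uses the better-known Murphy decomposition rather than the Yates partition itself, with made-up purchase-probability forecasts: it splits the Brier score into reliability, resolution, and uncertainty, and grouping by distinct forecast values makes the identity exact.

```python
import numpy as np

# Murphy decomposition of the Brier score (a related decomposition, not the
# Yates partition used in the thesis). Grouping observations by the distinct
# forecast values makes the identity exact:
#   BS = reliability - resolution + uncertainty.

def murphy_decomposition(p, o):
    p, o = np.asarray(p, float), np.asarray(o, float)
    n, obar = len(o), o.mean()
    rel = res = 0.0
    for v in np.unique(p):
        m = p == v
        ok = o[m].mean()
        rel += m.sum() / n * (v - ok)**2       # calibration error
        res += m.sum() / n * (ok - obar)**2    # sharpness vs. the base rate
    unc = obar * (1 - obar)
    return rel, res, unc

# Hypothetical purchase-probability forecasts and realized 0/1 purchases:
p = np.array([0.1, 0.1, 0.5, 0.5, 0.9, 0.9])
o = np.array([0,   0,   1,   0,   1,   1])
bs = np.mean((p - o)**2)
rel, res, unc = murphy_decomposition(p, o)
print(bs, rel - res + unc)   # the two values agree
```

Low reliability and high resolution are both desirable; the uncertainty term depends only on the base rate of the event and bounds the achievable score.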
|