About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

On the 3 M's of Epidemic Forecasting: Methods, Measures, and Metrics

Tabataba, Farzaneh Sadat 06 December 2017
Over the past few decades, various computational and mathematical methodologies have been proposed for forecasting seasonal epidemics. In recent years, the deadly effects of major pandemics such as the H1N1 influenza virus, Ebola, and Zika have compelled scientists to find new ways to improve the reliability and accuracy of epidemic forecasts. The improvement and variety of these prediction methods are undeniable. Nevertheless, many challenges remain unresolved in forecasting outbreaks from surveillance data. Obtaining clean real-time data has always been an obstacle. Moreover, surveillance data is usually noisy, and handling the uncertainty of the observed data is a major issue for forecasting algorithms. Making correct modeling assumptions about the nature of the infectious disease is another dilemma: oversimplified models can lead to inaccurate forecasts, whereas more complicated methods require additional computational resources and information, without which the model may not converge to a unique optimum solution. Over the last decade, there has been a significant effort towards better epidemic forecasting algorithms. However, the lack of standard, well-defined evaluation metrics impedes fair judgment of the proposed methods. This dissertation is divided into two parts. In the first part, we present a Bayesian particle filter calibration framework integrated with an agent-based model to forecast the epidemic trend of diseases like flu and Ebola. Our approach uses Bayesian statistics to estimate the underlying disease model parameters given the observed data and to handle the uncertainty in the reasoning. An individual-based model with different intervention strategies can involve a large number of unknown parameters that must be properly calibrated. Because the particle filter can collapse in very large-scale systems (the curse-of-dimensionality problem), reaching the optimum solution becomes more challenging. Our proposed particle filter framework uses machine learning concepts to restrain the intractable search space: it incorporates a smart analyzer in the state dynamics unit that examines the predicted and observed data using machine learning techniques to guide the direction and amount of perturbation of each parameter during the search. The second part of this dissertation focuses on providing standard measures for evaluating epidemic forecasts. We present an end-to-end framework that introduces epidemiologically relevant features (Epi-features), error measures, and a ranking schema as the main modules of the evaluation process. Lastly, we provide the evaluation framework as a software package named Epi-Evaluator and demonstrate its potential and capabilities by applying it to the output of different forecasting methods. / PHD / Epidemics impose substantial costs on societies by deteriorating public health and disrupting economic trends. In recent years, the deadly effects of widespread pandemics such as H1N1, Ebola, and Zika have compelled scientists to find new ways to improve the reliability and accuracy of epidemic forecasts. Reliable prediction of future pandemics, together with efficient intervention plans for health care providers, could prevent or control disease propagation. Over the last decade, there has been a significant effort towards better epidemic forecasting algorithms. The mission, however, is far from accomplished.
Moreover, there has been no significant leap towards standard, well-defined evaluation metrics and criteria for fair performance comparison of the proposed methods. This dissertation is divided into two parts. In the first part, we present a Bayesian particle filter calibration framework integrated with an agent-based model to forecast the epidemic trend of diseases like flu and Ebola. We model disease propagation via a large-scale agent-based model that simulates the disease spreading across a contact network of people. The contact network consists of millions of nodes and is constructed from demographic information about individuals obtained from census data. The agent-based model's configurations are mostly unknown parameters that must be properly calibrated. We present a Bayesian particle filter calibration approach to estimate the underlying disease model parameters given the observed data and to handle the uncertainty in the reasoning. Because the particle filter can collapse in very large-scale systems, reaching the optimum solution becomes more challenging. Our proposed particle filter framework uses machine learning concepts to restrain the intractable search space: it incorporates a smart analyzer unit that examines the predicted and observed data using machine learning techniques to guide the direction and amount of perturbation of each parameter during the search. The second part of this dissertation focuses on providing standard measures for evaluating and comparing epidemic forecasts. We present a framework that introduces epidemiologically relevant features (Epi-features), error measures, and a ranking schema as the main modules of the evaluation process. Lastly, we provide the evaluation framework as a software package named Epi-Evaluator and demonstrate its potential and capabilities by applying it to the output of different forecasting methods.
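The calibration loop described in this abstract can be illustrated with a minimal bootstrap particle filter. The sketch below is a toy under stated assumptions: it calibrates a single transmission parameter of a small SIR model against noisy case counts, whereas the dissertation couples the filter to a large agent-based model and steers the perturbation step with a machine-learning analyzer; none of those components are reproduced here.

```python
# Minimal bootstrap particle filter for calibrating one transmission
# parameter of a toy SIR model from noisy infection counts.
# Illustrative sketch only; the model, priors, and noise level are
# assumptions, not the dissertation's actual framework.
import numpy as np

rng = np.random.default_rng(0)

def sir_step(s, i, beta, gamma=0.25, n=10_000.0):
    """Advance a deterministic discrete-time SIR model by one step."""
    new_inf = beta * s * i / n
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec

def particle_filter(observed, n_particles=500, obs_noise=20.0):
    """Return the posterior mean of beta given observed case counts."""
    betas = rng.uniform(0.1, 1.0, n_particles)   # prior parameter particles
    s = np.full(n_particles, 9_900.0)
    i = np.full(n_particles, 100.0)
    for y in observed:
        s, i = sir_step(s, i, betas)
        # Weight each particle by the Gaussian likelihood of the observation.
        w = np.exp(-0.5 * ((y - i) / obs_noise) ** 2) + 1e-300
        w /= w.sum()
        # Resample in proportion to the weights (the bootstrap step).
        idx = rng.choice(n_particles, size=n_particles, p=w)
        betas, s, i = betas[idx], s[idx], i[idx]
        # Perturb parameters to fight degeneracy -- the collapse the
        # abstract mentions; the dissertation's ML analyzer steers this.
        betas = np.clip(betas + rng.normal(0.0, 0.01, n_particles), 0.01, 2.0)
    return betas.mean()

# Example: recover beta = 0.5 from synthetic noisy observations.
true_i, s0, i0 = [], 9_900.0, 100.0
for _ in range(20):
    s0, i0 = sir_step(s0, i0, 0.5)
    true_i.append(i0 + rng.normal(0, 20))
print(particle_filter(true_i))   # should land near 0.5
```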
2

Evaluation of electronic commerce forecasts and identification of problems affecting their evaluation

Knutsson, Mats January 1999
Businesses use forecasts to gather information about phenomena that are important to them. Since electronic commerce has grown in importance for businesses, forecasts concerning this area are becoming increasingly important. The work presented in this report aims to gather information useful for improving forecast quality. In practice, the report presents a collection of forecasts concerning business-to-consumer electronic commerce and a collection of factors affecting electronic commerce forecast outcomes. A categorisation and evaluation of the collected forecasts is performed; the evaluation compares the forecasts in each category to the actual outcomes. Problems that occur during the evaluation process, such as problems with forecast wording and scope, are described, and suggestions for how to avoid these problems are provided. Structured methods to categorise and evaluate the forecasts are also presented. Finally, the outcome of the evaluation is analysed using the compiled factors, and indications are given of how to use the results to improve future forecasting.
3

An Analysis of Passenger Demand Forecast Evaluation Methods

Larsson, Felix, Linna, Robin January 2017
In the field of aviation, forecasting is used, among other things, to determine the number of passengers to expect on each flight. This is beneficial in the practice of revenue management, as the forecast is used as a base when setting the price for each flight. In this study, a forecast evaluation has been done on seven different routes with a total of 61 different flights, using four different methods: Mean Absolute Scaled Error (MASE), Mean Absolute Percentage Error (MAPE), Tracking Signal, and a goodness-of-fit test to determine whether the forecast errors are normally distributed. The MASE has been used to determine whether the passenger forecasts are better or worse than a naïve forecast, while the MAPE provides an error value for internal comparisons between the flights. The Tracking Signal and the normal distribution test have been used to determine whether a flight's forecast is biased towards under- or overforecasting. The results point towards a general underforecast across all studied flights. A total of 89% of the forecasts perform better than the naïve forecast, with an average MASE value of 0.78. As such, the forecast accuracy is better than that of the naïve forecast. There are, however, large error values among the observed flights, affecting the MAPE average. The MAPE average is 38.53% while the median is 30.60%. The measure can be used for internal comparisons, for example by using the average value as a benchmark and focusing on improving those forecasts with a higher-than-average MAPE. The authors have found that the MASE and MAPE are useful in measuring forecast accuracy, and their recommendation is that these two error measures be used together to evaluate forecast accuracy at frequent intervals. In addition, there is value in examining the error distribution in conjunction with the Mean Error when searching for bias, as this will indicate whether systematic error is present.
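The three accuracy measures used in this study are simple to compute. Below is a sketch using standard textbook definitions; the thesis's exact formulations, data handling, and the illustrative numbers are assumptions.

```python
import numpy as np

def mase(actual, forecast):
    """Mean Absolute Scaled Error: the MAE scaled by the MAE of a naive
    previous-value forecast; values below 1 beat the naive forecast."""
    naive_mae = np.mean(np.abs(np.diff(actual)))
    return np.mean(np.abs(actual - forecast)) / naive_mae

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def tracking_signal(actual, forecast):
    """Cumulative error divided by the mean absolute deviation; values
    far above 0 flag underforecasting, far below 0 overforecasting."""
    errors = actual - forecast
    return errors.sum() / np.mean(np.abs(errors))

# Hypothetical passenger counts vs. forecasts for one flight.
passengers = np.array([132.0, 145.0, 121.0, 150.0, 138.0])
forecasts = np.array([128.0, 139.0, 125.0, 142.0, 131.0])
print(mase(passengers, forecasts))            # < 1 means better than naive
print(mape(passengers, forecasts))            # percent error for comparisons
print(tracking_signal(passengers, forecasts)) # positive: underforecast bias
```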
4

Nowcasting GDP with Machine Learning Models: Evidence from the US

Lucas Seabra Maynard da Silva 25 May 2020
This paper examines the use of Machine Learning (ML) models to compute estimates of the current-quarter US real GDP growth rate (nowcasts). These methods can handle large data sets with unsynchronized release dates, and the nowcasts are updated each time new data are released during the quarter. A pseudo-out-of-sample exercise is proposed to assess forecasting performance and to analyze the variable selection pattern of these models. The ML method that stands out is the Target Factor, which outperforms the commonly adopted dynamic factor model for some forecast vintages within the quarter. We also analyze the selected variables, which are consistent across models and with intuition.
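The target-factor idea singled out in this abstract can be sketched in a few lines: predictors are first screened for marginal relevance to GDP growth, and principal-component factors are extracted from the survivors only. The sketch below is a simplified illustration in the spirit of targeted predictors; the screening rule, factor count, and the paper's real-time vintage handling are not reproduced, and all names are ours.

```python
import numpy as np

def target_factor_nowcast(X, y, x_new, keep_frac=0.3, n_factors=2):
    """Nowcast y with 'targeted' factors: keep only the predictors most
    correlated with y, extract principal components from them, and run
    a bridge regression of y on those factors."""
    T, N = X.shape
    Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize panel
    corr = np.abs(Z.T @ (y - y.mean())) / (T * y.std())
    keep = np.argsort(corr)[-max(1, int(keep_frac * N)):]
    Zk = Z[:, keep]
    # Principal-component factors of the surviving predictors via SVD.
    U, s, Vt = np.linalg.svd(Zk, full_matrices=False)
    F = Zk @ Vt[:n_factors].T
    G = np.column_stack([np.ones(T), F])
    gamma, *_ = np.linalg.lstsq(G, y, rcond=None)
    # Project the latest cross-section onto the factor space and nowcast.
    z_new = (x_new - X.mean(axis=0)) / X.std(axis=0)
    f_new = z_new[keep] @ Vt[:n_factors].T
    return gamma[0] + f_new @ gamma[1:]

# Simulated example: 80 quarters, 50 indicators, 2 true factors.
rng = np.random.default_rng(1)
f_true = rng.normal(size=(81, 2))
loadings = rng.normal(size=(2, 50))
X_all = f_true @ loadings + rng.normal(scale=0.5, size=(81, 50))
y_all = f_true @ np.array([1.0, -0.5]) + rng.normal(scale=0.2, size=81)
print(target_factor_nowcast(X_all[:80], y_all[:80], X_all[80]))
print(y_all[80])   # the nowcast should be close to this realization
```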
5

Evaluating the USDA's Farm Balance Sheet Forecasts

Pedro Antonio Diaz Cachay 26 July 2023
The United States Department of Agriculture (USDA) forecasts the Farm Balance Sheet each year. The Farm Balance Sheet provides an estimate of the value of physical and financial assets in the United States agriculture sector over time (USDA, 2023). The forecasts evaluated in this paper relate to assets and debt in the farm sector, including total farm assets, farm real estate assets, total farm debt, farm real estate debt, and farm non-real-estate debt. These forecasts predict growth in the agricultural sector and help various stakeholders, such as policy makers, USDA program administrators, and agricultural lenders, make important decisions. Given the importance of these forecasts, the main objective of this research is to examine the degree to which the Farm Balance Sheet forecasts are optimal (unbiased and efficient). Forecasts from the Farm Balance Sheet over the 1986-2021 period are found to be unbiased using the Holden and Peel (1990) test. However, using the efficiency tests of Nordhaus (1987), the forecasts are found to be inefficient, suggesting that not all available information is efficiently incorporated when the forecasts are produced.
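Both tests reduce to simple regressions. A sketch of how they are commonly implemented follows; the exact specifications used in the thesis are assumptions here. Holden and Peel test unbiasedness by regressing the forecast error on a constant, and a Nordhaus-style weak-efficiency test regresses forecast revisions on their own lag.

```python
import numpy as np
import statsmodels.api as sm

def holden_peel(actual, forecast):
    """Unbiasedness: regress the forecast error on a constant and test
    whether the intercept is zero (in the spirit of Holden & Peel, 1990)."""
    errors = np.asarray(actual) - np.asarray(forecast)
    fit = sm.OLS(errors, np.ones_like(errors)).fit()
    return float(fit.params[0]), float(fit.pvalues[0])   # bias, p-value

def nordhaus(revisions):
    """Weak efficiency: regress the forecast revision on its own lag;
    a significant slope means revisions are predictable and the
    forecast is inefficient (in the spirit of Nordhaus, 1987)."""
    r = np.asarray(revisions)
    fit = sm.OLS(r[1:], sm.add_constant(r[:-1])).fit()
    return float(fit.params[1]), float(fit.pvalues[1])   # slope, p-value

# Hypothetical example: total farm assets, outcomes vs. forecasts.
actual = np.array([2750.0, 2890.0, 3010.0, 3170.0, 3250.0])
fcast = np.array([2700.0, 2920.0, 2980.0, 3200.0, 3230.0])
print(holden_peel(actual, fcast))
# Revisions of one forecast target across successive releases.
print(nordhaus(np.array([15.0, 12.0, 9.0, 8.0, 5.0, 4.0])))
```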
6

Three Essays Evaluating Long-term Agricultural Projections

Hari Prasad Regmi 30 May 2023
This dissertation consists of three essays that evaluate long-term agricultural projections. The first essay evaluates the Congressional Budget Office's (CBO) baseline projections of United States Department of Agriculture (USDA) mandatory farm and nutrition programs. The second essay examines USDA soybean ending stock projections, and the third essay investigates the impact of macroeconomic assumptions on USDA's baseline farm income projections. We use publicly available data from the CBO and the USDA.
7

Forecasting the term structure of volatility of crude oil price changes

Balaban, E., Lu, Shan 2016
This is a pioneering effort to test the comparative performance of two competing models for out-of-sample forecasting of the term structure of volatility of crude oil price changes, employing both symmetric and asymmetric evaluation criteria. Under symmetric error statistics, our empirical model, which uses the estimated growth factor of volatility through time, is superior overall, beating the square-root-of-time benchmark model in most cases for holding periods between one and 250 days. Under asymmetric error statistics, if over-prediction (under-prediction) of volatility is undesirable, the empirical (benchmark) model is consistently superior. The relative performance of the empirical model is much higher for holding periods of up to fifty days.
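The benchmark model scales one-day volatility by the square root of the holding period, while the empirical alternative estimates the growth rate of volatility across horizons from the data. Below is a sketch of both under our own simplifying assumptions (a log-log fit for the growth factor; the paper's exact estimator is not reproduced).

```python
import numpy as np

def realized_vol(returns, h):
    """Standard deviation of non-overlapping h-day cumulative returns."""
    r = returns[: len(returns) // h * h]
    return np.add.reduceat(r, np.arange(0, len(r), h)).std()

def term_structure_forecast(returns, horizons, empirical=False):
    """Volatility forecasts over multi-day holding periods.
    Benchmark: sigma_h = sigma_1 * sqrt(h) (square-root-of-time).
    Empirical: fit log(sigma_h) = a + b*log(h) and extrapolate with
    the estimated growth factor b."""
    if not empirical:
        sigma_1 = returns.std()
        return {h: sigma_1 * np.sqrt(h) for h in horizons}
    vols = np.array([realized_vol(returns, h) for h in horizons])
    b, a = np.polyfit(np.log(horizons), np.log(vols), 1)
    return {h: float(np.exp(a) * h ** b) for h in horizons}

# Simulated daily crude-oil log price changes (placeholder data).
rng = np.random.default_rng(2)
daily = rng.normal(0.0, 0.02, size=2_500)
horizons = [1, 5, 20, 50, 250]
print(term_structure_forecast(daily, horizons))                 # benchmark
print(term_structure_forecast(daily, horizons, empirical=True)) # empirical
```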
8

Nonlinearity In Exchange Rates : Evidence From African Economies

Jobe, Ndey Isatou January 2016
To assess the predictive ability of exchange rate models when data on African countries are sampled, this paper studies nonlinear modelling and prediction of the nominal exchange rate series of the United States dollar against the currencies of thirty-eight African states using the smooth transition autoregressive (STAR) model. A three-step analysis is undertaken. First, nonlinearity in all the examined nominal exchange rate series is investigated using a chain of credible statistical in-sample tests; significantly, evidence of nonlinear exponential STAR (ESTAR) dynamics is detected across all series. Second, linear models are given another chance: shifting to data on African countries, their predictive power is tested against the tough benchmark of a random walk without drift. The linear models again fail significantly. Lastly, the predictive ability of nonlinear models against both the random walk without drift and the corresponding linear models is investigated. The nonlinear models display useful forecasting gains over all contending models.
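An ESTAR model lets the autoregressive dynamics change smoothly with the distance of the exchange rate from its equilibrium level. Below is a sketch of the transition function and a one-step forecast, with purely illustrative parameter values rather than estimates from the paper.

```python
import numpy as np

def estar_transition(y_lag, gamma, c):
    """Exponential STAR transition function: close to 0 near the
    equilibrium c and approaching 1 for large deviations, so the
    regime shifts smoothly with the size of the misalignment."""
    return 1.0 - np.exp(-gamma * (y_lag - c) ** 2)

def estar_forecast(y_lag, phi1=0.95, phi2=-0.5, gamma=2.0, c=0.0):
    """One-step ESTAR(1) forecast. Near c the series behaves like a
    persistent AR(1) (coefficient phi1); far from c the outer regime
    (phi1 + phi2) pulls it back toward equilibrium."""
    g = estar_transition(y_lag, gamma, c)
    return (phi1 + phi2 * g) * y_lag

# Small deviations barely mean-revert; large ones revert strongly.
for dev in [0.01, 0.1, 0.5, 1.0]:
    print(dev, estar_forecast(dev))
```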
9

Essays on forecast evaluation and financial econometrics

Lund-Jensen, Kasper January 2013
This thesis consists of three papers that make independent contributions to the fields of forecast evaluation and financial econometrics. As such, the papers, Chapters 1-3, can be read independently of each other. In Chapter 1, “Inferring an agent’s loss function based on a term structure of forecasts”, we provide conditions for identification, estimation, and inference of an agent’s loss function based on an observed term structure of point forecasts. The loss function specification is flexible, as we allow the preferences to be asymmetric and to vary non-linearly across the forecast horizon. In addition, we introduce a novel forecast rationality test based on the estimated loss function. We employ the approach to analyse the U.S. Government’s preferences over budget surplus forecast errors. Interestingly, we find that it is relatively more costly for the government to underestimate the budget surplus and that this asymmetry is stronger at long forecast horizons. In Chapter 2, “Monitoring Systemic Risk”, we define systemic risk as the conditional probability of a systemic banking crisis. This conditional probability is modelled in a fixed-effect binary response panel-model framework that allows for cross-sectional dependence (e.g. due to contagion effects). In the empirical application we identify several risk factors, and it is shown that the level of systemic risk contains a predictable component which varies through time. Furthermore, we illustrate how forecasts of systemic risk map into dynamic policy thresholds in this framework. Finally, a pseudo-out-of-sample exercise shows that the systemic risk estimates provided reliable early-warning signals ahead of the recent financial crisis for several economies. In Chapter 3, “Equity Premium Predictability”, we reassess the evidence of out-of-sample equity premium predictability. The empirical finance literature has identified several financial variables that appear to predict the equity premium in-sample. However, Welch & Goyal (2008) find that none of these variables have any predictive power out-of-sample. We show that the equity premium is predictable out-of-sample once certain shrinkage restrictions are imposed on the model parameters. The approach is motivated by the observation that many of the proposed financial variables can be characterised as ‘weak predictors’, which suggests that a James-Stein-type estimator will provide a substantial risk reduction. The out-of-sample explanatory power is small, but we show that it is, in fact, economically meaningful to an investor with time-invariant risk aversion. Using a shrinkage decomposition, we also show that standard forecast combination techniques tend to ‘overshrink’ the model parameters, leading to suboptimal model forecasts.
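The shrinkage idea invoked in Chapter 3 can be illustrated with a James-Stein-type positive-part estimator that pulls weak predictive-regression coefficients toward zero. This is a generic sketch of the estimator class the abstract names, not the chapter's actual specification; the inputs and numbers are assumptions.

```python
import numpy as np

def james_stein_shrink(beta_hat, se):
    """Positive-part James-Stein-type shrinkage toward zero: the common
    shrinkage factor depends on the total squared t-statistic, so a set
    of 'weak predictors' (small t-stats) is shrunk almost entirely."""
    beta_hat, se = np.asarray(beta_hat), np.asarray(se)
    k = beta_hat.size                      # number of predictors (needs k > 2)
    t2 = np.sum((beta_hat / se) ** 2)      # total squared t-statistic
    factor = max(0.0, 1.0 - (k - 2) / t2)  # positive-part rule
    return factor * beta_hat

# Hypothetical slopes and standard errors for five weak predictors,
# e.g. dividend yield, term spread, and similar Welch-Goyal variables.
slopes = np.array([0.08, -0.05, 0.03, 0.06, -0.02])
ses = np.array([0.07, 0.06, 0.05, 0.06, 0.04])
print(james_stein_shrink(slopes, ses))   # heavily shrunk toward zero
```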
