1 |
The effect of multinationality on management earnings forecasts. Runyan, Bruce Wayne, 29 August 2005 (has links)
This study examines the relationship between a firm's degree of multinationality
and its managers' earnings forecasts. Firms with a high degree of multinationality are
subject to greater uncertainty regarding earnings forecasts due to the additional risk
resulting from the more complex multinational environment. Prior research demonstrates
that firms that fail to meet or beat market expectations experience disproportionate
market losses at earnings announcement dates. The complexities and greater uncertainty
resulting from higher levels of multinationality are expected to be negatively associated
with management earnings forecast precision, accuracy, and bias (downward versus
upward).
Results of the study are mixed. Regarding forecast precision, two measures of
multinationality (foreign sales / total sales and the number of geographic segments) are
significantly negatively related to management earnings forecast precision. This was the
expected relationship. Regarding forecast accuracy, contrary to expectations, forecast
accuracy is positively related to multinationality, with regard to the number of
geographic segments a firm discloses. Regarding forecast bias, unexpectedly, two
measures of multinationality (foreign sales / total sales and the number of countries with foreign subsidiaries) are significantly positively related to more optimistic management
earnings forecasts.
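As a rough illustration of the cross-sectional setup such a study relies on, the sketch below regresses a forecast-precision measure on the multinationality proxies named in the abstract. The variable names, data, and functional form are hypothetical, not the author's actual sample or specification.

```python
# Hypothetical sketch: regress forecast precision on multinationality proxies.
# Variable names and data are illustrative only, not the study's actual model.
import pandas as pd
import statsmodels.formula.api as smf

firms = pd.DataFrame({
    "forecast_precision":  [0.80, 0.60, 0.90, 0.40, 0.70, 0.50, 0.85, 0.45],   # e.g. point vs. range forecast score
    "foreign_sales_ratio": [0.10, 0.45, 0.05, 0.60, 0.30, 0.50, 0.15, 0.70],   # foreign sales / total sales
    "n_geo_segments":      [1, 4, 1, 6, 3, 5, 2, 7],                            # geographic segments disclosed
    "n_countries_subs":    [2, 12, 1, 20, 8, 15, 4, 25],                        # countries with foreign subsidiaries
})

# Negative coefficients on the multinationality proxies would be consistent
# with the hypothesis that multinational complexity lowers forecast precision.
model = smf.ols(
    "forecast_precision ~ foreign_sales_ratio + n_geo_segments + n_countries_subs",
    data=firms,
).fit()
print(model.params)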
|
2 |
Using the Hubbert curve to forecast oil production trends worldwide. Almulla, Jassim M., 17 September 2007 (has links)
Crude oil is by far the most important commodity to humans after water and food.
Having a continuous and affordable supply of oil is considered a basic human right in
this day and age. That is the main reason oil companies are in constant search of cost-effective methods and technologies that improve oil recovery rates, which in turn improves profitability.
What almost everyone knows, and dreads, is that oil is an exhaustible resource: as more oil is produced every day, the amount that remains to be produced shrinks. With almost all of the world's big oil fields already discovered, the challenge of finding new reserves grows ever harder.
A question that has always been asked is "When are we going to run out of oil?" Given the available technologies and techniques, no one can give an exact answer, and anyone who offers one cannot be fully certain of it. This study
tries to approximate future oil production rates to the year 2050 using the Hubbert
model. There are different models or tools to estimate future oil production rates, but the reason that the Hubbert model was chosen for this study is its simplicity and data
availability.
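For readers unfamiliar with the model, a minimal sketch of the logistic-derivative Hubbert curve is shown below. The ultimate-recovery figure is taken from the study's own estimate quoted later in this abstract, while the peak year and steepness are placeholders, not the parameters fitted in the study.

```python
# Minimal sketch of the Hubbert curve: annual production follows the derivative
# of a logistic curve in cumulative production. Peak year and steepness are
# placeholders; the 4.1 trillion barrels comes from the study's own estimate.
import numpy as np

def hubbert_production(t, q_total, peak_year, steepness):
    """Annual production rate at year t (logistic-derivative form)."""
    x = np.exp(-steepness * (t - peak_year))
    return q_total * steepness * x / (1.0 + x) ** 2

years = np.arange(1900, 2051)
rates = hubbert_production(years, q_total=4.1e12, peak_year=2015, steepness=0.035)
print(f"Modelled peak production: {rates.max():.3e} barrels/year around {years[rates.argmax()]}")
```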
Like any forecast, this study depends heavily on past trends but also factors in current conditions. It is safe to say that, like any other forecast, it will probably not mirror exactly what happens in the future. Still, forecasts must be made, especially for such an important commodity.
This study predicts that the total oil ultimately to be recovered is 4.1 trillion barrels. It also shows that most major oil-producing countries have either passed, or are about to pass, their production peaks.
|
3 |
Increasing sales forecast accuracy with technique adoption in the forecasting process. Orrebrant, Richard; Hill, Adam, January 2014 (has links)
Purpose – The purpose of this thesis is to investigate how to increase sales forecast accuracy.
Methodology – To fulfil the purpose, a case study was conducted. To collect data from the case study, the authors performed interviews and gathered documents. The empirical data were then analysed and compared with the theoretical framework.
Result – The result shows that inaccuracies in forecasts are not necessarily caused by the forecasting technique but can result from an unorganized forecasting process and an inefficient information flow. The result further shows that it is important to review the information flow not only within the company but across the supply chain as a whole to improve a forecast's accuracy. The result also shows that time series can generate more accurate sales forecasts than qualitative techniques alone. It is, however, necessary to use a qualitative technique when creating time series. Time series take only time and sales history into account, so expertise regarding consumer behaviour, promotion activity, and so on is still needed. It is also crucial to use qualitative techniques when selecting a time series technique to achieve higher sales forecast accuracy. Personal expertise and experience are needed to judge whether there is enough sales history, how much sales fluctuate, and whether there will be any seasonality in the forecast. If companies gain knowledge about the benefits of each technique, the combination can improve the forecasting process and increase the accuracy of the sales forecast (a sketch of such a combination is given after this abstract).
Conclusions – This thesis, with support from a case study, shows how time series and qualitative techniques can be combined to achieve higher accuracy. Companies that want to achieve higher accuracy need to know how the different techniques work and what needs to be taken into account when creating a sales forecast. It is also important to understand the benefits of a well-designed forecasting process, and to achieve that, improving the information flow both within the company and across the supply chain is a necessity.
Research limitations – Because there are several different techniques for creating a sales forecast, the authors could have included more techniques in the investigation. The thesis could also have used multiple case study objects to increase its external validity.
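The sketch referenced above pairs a simple time-series baseline with a judgmental adjustment. The smoothing parameter, sales figures, and promotion uplift are assumed values, not figures from the case study.

```python
# Sketch: simple exponential smoothing baseline plus a judgmental (qualitative)
# adjustment for a known promotion. Alpha and the uplift factor are assumed.
def exponential_smoothing(history, alpha=0.3):
    """One-step-ahead forecast from simple exponential smoothing."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

sales_history = [120, 135, 128, 140, 150, 138]   # illustrative monthly sales
baseline = exponential_smoothing(sales_history)

promotion_uplift = 1.25   # judgmental estimate from sales/marketing expertise
forecast = baseline * promotion_uplift
print(f"Baseline: {baseline:.1f}, adjusted for promotion: {forecast:.1f}")
```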
|
4 |
Short-term electricity price point and probabilistic forecasts. Zhang, Chenxu, 09 August 2022 (has links) (PDF)
Accurate short-term electricity price forecasts are essential to all electricity market participants. Generation companies use price forecasts to hedge generation shortage risks; load-serving entities use them to purchase energy at low cost; and trading companies use them to arbitrage between markets.
Current research on point forecasting mainly focuses on exploring periodic patterns of electricity prices in the time domain. The frequency domain, however, reveals additional information within the price data that can aid forecasting. In addition, price spike forecasting has not been fully studied in existing work. We therefore propose a short-term electricity price forecasting framework that analyzes price data in the frequency domain and includes price spike prediction. First, variational mode decomposition is adopted to decompose the price data into multiple band-limited modes. Then, the extended discrete Fourier transform is used to transform each decomposed price mode into the frequency domain and perform normal-price forecasts. In addition, we use the enhanced structure-preserving oversampling and synthetic minority oversampling techniques to oversample price spike cases and improve price spike forecast accuracy.
In addition to point forecasts, market participants also need probabilistic forecasts to quantify prediction uncertainties. However, current research has several shortcomings. Although wide prediction intervals satisfy the reliability requirement, overly wide intervals push market participants toward conservative decisions. Moreover, although electricity price data are heteroscedastic, many researchers assume a normal distribution to reduce the computational burden. To address these deficiencies, we propose an optimal prediction interval method. 1) By considering both reliability and sharpness, we ensure the prediction interval is narrow without sacrificing reliability. 2) To avoid distributional assumptions, we use quantile regression to estimate the bounds of the prediction intervals. 3) Exploiting its versatility, the extreme learning machine is adopted to forecast the prediction intervals.
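A minimal sketch of the quantile-regression idea for distribution-free interval bounds is shown below, using gradient boosting with a pinball loss as a stand-in; it is not the extreme-learning-machine implementation developed in the dissertation, and the synthetic price series and lag features are assumptions for illustration.

```python
# Sketch of distribution-free prediction intervals via quantile regression.
# Gradient boosting with pinball loss stands in for the dissertation's
# extreme learning machine; the price data and lag features are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
price = 40 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Simple lag features: price 24 h and 1 h before the target hour.
X = np.column_stack([price[:-24], price[23:-1]])
y = price[24:]

lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

bounds = np.column_stack([lower.predict(X[-24:]), upper.predict(X[-24:])])
mean_width = (bounds[:, 1] - bounds[:, 0]).mean()
print(f"Mean 90% interval width over the last day: {mean_width:.2f} $/MWh")
```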
The effectiveness of the proposed point and probabilistic forecast methods is demonstrated using actual price data from various electricity markets. Compared with predictions from other studies, numerical results show that our methods provide accurate and stable forecasts under different market situations.
|
5 |
How to calculate forecast accuracy for stocked items with a lumpy demand: A case study at Alfa Laval. Ragnerstam, Elsa, January 2016 (has links)
Inventory management is an important part of a well-functioning logistics operation. Nearly all the literature on optimal inventory management uses criteria of cost minimization and profit maximization. A well-functioning forecasting system requires a balanced inventory, but several factors create uncertainty and make this balance difficult to maintain. One important factor is customer demand. Over half of the stocked items are held in stock to cover irregular orders and uncertain demand. Customer demand can be categorized into four categories: Smooth, Erratic, Intermittent and Lumpy. Items with a lumpy demand, i.e. items that are both intermittent and erratic, are the hardest to manage and to forecast, because both the quantity and the timing of demand vary widely and many periods may have zero demand. Such items are a challenge for companies to forecast: random quantities appear at irregular intervals and leave many periods with zero demand.
Because of the lumpy demand, forecast inaccuracy is an ongoing problem for most organizations. It is almost impossible to produce exact forecasts. No matter how good the forecasts are or how sophisticated the forecasting techniques, the instability of the markets means that forecasts will always be wrong and errors will always exist. This has to be accepted, but the errors should still be kept as small as possible. The purpose of measuring forecast errors is to identify single random errors and systematic errors that show whether the forecast is systematically too high or too low. Calculating forecast errors and measuring forecast accuracy also helps in dimensioning the safety stock and in checking that forecast errors stay within acceptable margins.
The research questions answered in this master thesis are: How should one calculate forecast accuracy for stocked items with a lumpy demand? How do companies measure forecast accuracy for stocked items with a lumpy demand, and what are the differences between the methods? What information is needed to apply these methods?
To collect data and answer the research questions, a literature study was conducted to compare how different researchers and authors treat this specific topic. Two case studies were also carried out: first, a benchmarking process to compare how different companies work with this issue, and second, a hypothesis test based on the analysis from the literature review and the benchmarking process. The analysis of the hypothesis test led to the conclusion that a combination of the measures WAPE, Weighted Absolute Forecast Error, and CFE, Cumulative Forecast Error, is a suitable way to calculate forecast accuracy for items with a lumpy demand. The keywords used to search for scientific papers are: lumpy demand, forecast accuracy, forecasting, forecast error.
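For concreteness, a minimal sketch of the two recommended measures follows. The WAPE formula below uses the common sum-of-absolute-errors over sum-of-demand definition, and the lumpy demand series is illustrative; both are assumptions, not data from the Alfa Laval case.

```python
# Sketch of the two measures the thesis recommends combining for lumpy demand:
# WAPE (scale-free accuracy) and CFE (running bias). WAPE here is
# sum(|error|) / sum(actual), a common definition; the series is illustrative.
def wape(actual, forecast):
    abs_errors = sum(abs(a - f) for a, f in zip(actual, forecast))
    return abs_errors / sum(actual)

def cfe(actual, forecast):
    """Cumulative forecast error; values far from zero signal systematic bias."""
    return sum(a - f for a, f in zip(actual, forecast))

actual   = [0, 0, 14, 0, 0, 0, 9, 0, 0, 22]   # lumpy demand: many zero periods
forecast = [2, 2, 6, 2, 2, 2, 5, 2, 2, 8]

print(f"WAPE: {wape(actual, forecast):.2f}")   # relative accuracy despite zeros
print(f"CFE:  {cfe(actual, forecast):+.1f}")   # positive => forecasts too low
```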
|
6 |
Market perceptions of efficiency and news in analyst forecast errors. Chevis, Gia Marie, 15 November 2004 (has links)
Financial analysts are considered inefficient when they do not fully incorporate relevant information into their forecasts. In this dissertation, I investigate differences in the observable efficiency of analysts' earnings forecasts between firms that consistently meet or exceed analysts' earnings expectations and those that do not. I then analyze the extent to which the market incorporates this (in)efficiency into its earnings expectations. Consistent with my hypotheses, I find that analysts are relatively less efficient with respect to prior returns for firms that do not consistently meet expectations than for firms that do follow such a strategy, especially when prior returns convey bad news. However, forecast errors for firms that consistently meet expectations do not appear to be serially correlated to a greater extent than those for firms that do not. It is not clear whether the market considers such inefficiency when setting its own expectations. While the evidence suggests it may do so in the context of a shorter historical pattern of realized forecast errors, other evidence suggests it may not distinguish between the predictable and surprise components of forecast error when the historical forecast-error pattern is more established.
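As a purely illustrative aside on the serial-correlation idea, the sketch below computes lag-1 autocorrelation of quarterly forecast errors for two made-up groups of firms; the dissertation's tests and samples are far more involved.

```python
# Illustrative sketch only: lag-1 serial correlation of quarterly analyst
# forecast errors for two hypothetical groups of firms. Data are made up.
import numpy as np

def lag1_autocorr(errors):
    e = np.asarray(errors, dtype=float)
    return np.corrcoef(e[:-1], e[1:])[0, 1]

meets_expectations  = [0.01, 0.02, 0.01, 0.03, 0.02, 0.01, 0.02, 0.02]     # small, persistent "beats"
misses_expectations = [-0.05, 0.04, -0.02, 0.06, -0.04, 0.03, -0.06, 0.05]

print(f"lag-1 autocorr (meet/beat firms): {lag1_autocorr(meets_expectations):+.2f}")
print(f"lag-1 autocorr (other firms):     {lag1_autocorr(misses_expectations):+.2f}")
```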
|
7 |
Improving hydrometeorologic numerical weather prediction forecast value via bias correction and ensemble analysis. McCollor, Douglas, 11 1900 (has links)
This dissertation describes research designed to enhance hydrometeorological forecasts. The objective of the research is to deliver an optimal methodology to produce reliable, skillful and economically valuable probabilistic temperature and precipitation forecasts.
Weather plays a dominant role for energy companies relying on forecasts of watershed precipitation and temperature to drive reservoir models, and forecasts of temperatures to meet energy demand requirements. Extraordinary precipitation events and temperature extremes involve consequential water- and power-management decisions.
This research compared weighted-average, recursive, and model output statistics bias-correction methods and determined optimal window-length to calibrate temperature and precipitation forecasts. The research evaluated seven different methods for daily maximum and minimum temperature forecasts, and three different methods for daily quantitative precipitation forecasts, within a region of complex terrain in southwestern British Columbia, Canada.
This research then examined ensemble prediction system design by assessing a three-model suite of multi-resolution limited area mesoscale models. The research employed two different economic models to investigate the ensemble design that produced the highest-quality, most valuable forecasts.
The best post-processing methods for temperature forecasts included moving-weighted average methods and a Kalman filter method. The optimal window-length proved to be 14 days. The best post-processing methods for achieving mass balance in quantitative precipitation forecasts were a moving-average method and the best easy systematic estimator method. The optimal window-length for moving-average quantitative precipitation forecasts was 40 days. The best ensemble configuration incorporated all resolution members from all three models.
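As a rough sketch of the moving-average flavour of bias correction discussed here, the example below subtracts the mean forecast error over a 14-day window (the window length reported above). It uses a plain unweighted average rather than the dissertation's exact moving-weighted or Kalman-filter schemes, and the temperature data are illustrative.

```python
# Sketch of a simple moving-average bias correction for temperature forecasts:
# subtract the mean forecast error over the last N days from today's forecast.
# Plain (unweighted) average, not the dissertation's exact weighting; data are
# illustrative.
def bias_corrected_forecast(raw_forecast, past_forecasts, past_observations, window=14):
    recent_f = past_forecasts[-window:]
    recent_o = past_observations[-window:]
    mean_bias = sum(f - o for f, o in zip(recent_f, recent_o)) / len(recent_f)
    return raw_forecast - mean_bias

past_forecasts    = [5.1, 6.0, 4.8, 7.2, 6.5, 5.9, 6.1, 5.4, 6.8, 7.0, 5.5, 6.3, 6.7, 5.8]
past_observations = [4.0, 5.2, 4.1, 6.0, 5.8, 5.0, 5.5, 4.6, 6.1, 6.2, 4.9, 5.7, 6.0, 5.1]

print(f"Corrected Tmax: {bias_corrected_forecast(6.4, past_forecasts, past_observations):.1f} C")
```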
A cost/loss model adapted specifically for the hydro-electric energy sector indicated that operators managing rainfall-dominated, high-head reservoirs should lower their reservoirs even at relatively low forecast probabilities of precipitation. A reservoir-operation model based on decision theory and variable energy pricing showed that applying an ensemble-average or full-ensemble precipitation forecast provided a much greater profit than using only a single deterministic high-resolution forecast.
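The cost/loss reasoning can be illustrated with the standard static decision rule: take protective action whenever the forecast probability of the event exceeds the cost/loss ratio C/L. The numbers below are illustrative, not the dissertation's calibrated values.

```python
# Sketch of the standard cost/loss decision rule: pre-release water (cost C)
# whenever the forecast probability of heavy precipitation exceeds C/L, where
# L is the loss (e.g., spilled energy) if the event hits an unprepared
# reservoir. Numbers are illustrative only.
def should_lower_reservoir(prob_heavy_precip, cost, loss):
    return prob_heavy_precip > cost / loss

cost = 1.0    # cost of pre-releasing water that may not have been needed
loss = 20.0   # loss if a heavy-precipitation event forces spilling
threshold = cost / loss   # 0.05: a low C/L ratio means acting on low probabilities

for p in (0.02, 0.05, 0.10, 0.30):
    action = "lower reservoir" if should_lower_reservoir(p, cost, loss) else "hold"
    print(f"P(heavy precip) = {p:.2f} -> {action} (threshold {threshold:.2f})")
```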
Finally, a bias-corrected super-ensemble prediction system was designed to produce probabilistic temperature forecasts for ten cities in western North America. The system exhibited skill and value nine days into the future when using the ensemble average, and 12 days into the future when employing the full ensemble forecast.
|