121

Forecast new home sales and prices: a case study for the United States

LIU, JING 24 August 2010 (has links)
A nation’s housing sector has been a cornerstone of economic activity over the past several years. At present, the economy of the United States is in a recession. To help the economy recover, activity in the U.S. housing market deserves more attention, especially the new home market. Economists in the United States believe that if new home sales keep increasing, a recovery of the whole housing market, and even of the whole U.S. economy, becomes realistic. To bring that hope closer to reality, forecasting changes in the new home market is important. An accurate forecast provides useful information about the future so that proper planning can take place. The purpose of this thesis is to identify a method that can accurately forecast changes in new home sales and prices in the U.S. housing market, so that policy makers can base their decisions on reliable information.
122

Three Essays on Updating Forecasts in Vector Autoregression Models

Zhu, Hui 30 April 2010 (has links)
Forecasting firms' earnings has long been an interest of market participants and academics. Traditional forecasting studies in a multivariate time series setting do not take into account that the timing of market data release for a specific time period of observation is often spread over several days or weeks. This thesis focuses on the separation of announcement timing or data release and the use of econometric real-time methods, which we refer to as an updated vector autoregression (VAR) forecast, to predict data that have yet to be released. In comparison to standard time series forecasting, we show that the updated forecasts will be more accurate the higher the correlation coefficients among the standard VAR innovations are. Forecasting with the sequential release of information has not been studied in the VAR framework, and our approach to U.S. nonfarm payroll employment and the six Canadian banks shows its value. By using the updated VAR forecast, we conclude that there are relative efficiency gains in the one-step-ahead forecast compared to the ordinary VAR forecast, and compared to professional consensus forecasts. Thought experiments emphasize that the release ordering is crucial in determining forecast accuracy. / Thesis (Ph.D, Economics) -- Queen's University, 2010-04-30 12:34:42.629
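As a rough illustration of the updating idea, the sketch below conditions a standard VAR(1) forecast on an early-released variable using the estimated innovation covariance; the data, coefficient matrix, and the size of the release surprise are invented for illustration and are not the thesis's code or data.

```python
# Minimal sketch of an "updated" VAR(1) forecast: once variable 1 for period
# T+1 is released, the forecast of variable 2 is revised using the conditional
# expectation implied by the residual covariance. The gain grows with the
# correlation of the VAR innovations.
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a bivariate VAR(1) with correlated innovations (assumed values) ---
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])                 # hypothetical coefficient matrix
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])             # innovation covariance (corr = 0.8)
T = 500
y = np.zeros((T, 2))
shocks = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
for t in range(1, T):
    y[t] = A @ y[t - 1] + shocks[t]

# --- estimate the VAR(1) by equation-by-equation OLS ---
X = np.column_stack([np.ones(T - 1), y[:-1]])     # constant + lagged values
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)     # (3 x 2) coefficient matrix
U = y[1:] - X @ B                                 # residuals
S = U.T @ U / (T - 1 - X.shape[1])                # residual covariance

# --- standard one-step-ahead forecast for period T+1 ---
x_T = np.concatenate(([1.0], y[-1]))
f_std = x_T @ B                                   # forecast of (y1, y2)

# --- updated forecast: y1 for T+1 has already been released ---
y1_released = f_std[0] + 1.2                      # pretend the release surprises by +1.2
gain = S[1, 0] / S[0, 0]                          # regression of u2 on u1
f_upd_y2 = f_std[1] + gain * (y1_released - f_std[0])

print("standard forecast of y2:", round(f_std[1], 3))
print("updated  forecast of y2:", round(f_upd_y2, 3))
```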
123

Inverse modelling to forecast enclosure fire dynamics

Jahn, Wolfram January 2010 (has links)
Despite advances in the understanding of fire dynamics over the past decades and despite the advances in computational capacity, our ability to predict the behaviour of fires in general and building fires in particular remains very limited. This thesis proposes and studies a method to use measurements of the real event in order to steer and accelerate fire simulations. This technology aims at providing forecasts of the fire development with a positive lead time, i.e. the forecast of future events is ready before those events take place. A simplified fire spread model is implemented, and sensor data are assimilated into the model in order to estimate the parameters that characterize the spread model and thus recover information lost by approximations. The assimilation process is posed as an inverse problem, which is solved by minimizing a non-linear cost function that measures the distance between sensor data and the forward model. In order to accelerate the optimization procedure, the ‘tangent linear model’ is implemented, i.e. the forward model is linearized around the initial guess of the governing parameters that are to be estimated, thus approximating the cost function by a quadratic function. The methodology was tested first with a simple two-zone forward model, and then with a coarse grid Computational Fluid Dynamics (CFD) fire model as forward model. Observations for the inverse modelling were generated using a fine grid CFD simulation in order to illustrate the methodology. A test case with observations from a real scale fire test is presented at the end of this document.
In the two-zone model approach the spread rate, entrainment coefficient and gas transport time are the governing invariant parameters that are estimated. The parameters could be estimated correctly, and the temperature and the height of the hot layer were reproduced satisfactorily. Moreover, the heat release rate and growth rate were estimated correctly with a positive lead time of up to 30 s. The results showed that the simple mass and heat balances and plume correlation of the zone model were enough to satisfactorily forecast the main features of the fire, and that positive lead times are possible.
With the CFD forward model the growth rate, fuel mass loss rate and other parameters of a fire were estimated by assimilating measurements from the fire into the model. It was shown that with a field type forward model it is possible to estimate the growth rates of several different spread rates simultaneously. A coarse grid CFD model with very short computation times was used to assimilate measurements, and it was shown that spatially resolved forecasts can be obtained in reasonable time when combined with observations from the fire. The assimilation of observations from a real scale fire test into a coarse grid CFD model showed that the estimation of a fire growth parameter is possible in complicated scenarios in reasonable time, and that the resulting forecasts at a localized level present good levels of accuracy.
The proposed methodology is still subject to ongoing research. The limited capability of the forward model to represent the true fire has to be addressed in more detail, and the additional information that has to be provided in order to run the simulations has to be investigated. When using a CFD type forward model, in addition to the detailed geometry, it is necessary to establish the location of the fire origin and the potential fuel load before starting the assimilation cycle. While the fire origin can be located easily (as a first approximation the location of the highest temperature reading can be used), the fuel load is potentially very variable and its exact distribution might be impractical to keep track of continually. It was shown, however, that for relatively small compartments the exact fuel distribution is not essential in order to produce an adequate forecast, and the fuel load could, for example, be established based on a statistical analysis of typical compartment layouts.
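As a rough, much-simplified sketch of the assimilation idea (not the thesis's zone or CFD model), the example below fits a single growth parameter of a t-squared fire curve to noisy sensor readings by minimizing a least-squares cost function, then runs the calibrated model ahead of the data to obtain a forecast with positive lead time. The forward model and all parameter values are assumptions made for illustration.

```python
# Toy data assimilation for fire forecasting: invert one growth parameter of a
# t-squared heat release rate model Q(t) = alpha * t^2 from noisy observations,
# then forecast beyond the last observation.
import numpy as np
from scipy.optimize import minimize_scalar

alpha_true = 0.047                        # kW/s^2, "true" (unknown) growth coefficient
t_obs = np.arange(0, 120, 5.0)            # sensor readings every 5 s for 2 min
rng = np.random.default_rng(1)
q_obs = alpha_true * t_obs**2 + rng.normal(0, 15, t_obs.size)   # noisy HRR proxy

def forward(alpha, t):
    """Simplified forward fire model: t-squared heat release rate growth."""
    return alpha * t**2

def cost(alpha):
    """Non-linear least-squares distance between sensor data and the forward model."""
    return np.sum((q_obs - forward(alpha, t_obs))**2)

# Assimilation step: invert for the governing parameter.
res = minimize_scalar(cost, bounds=(1e-4, 1.0), method="bounded")
alpha_hat = res.x

# Forecast step: run the calibrated model beyond the last observation.
t_future = np.arange(120, 180, 10.0)      # 30-60 s lead time
q_forecast = forward(alpha_hat, t_future)

print(f"estimated growth coefficient: {alpha_hat:.4f}  (true value {alpha_true})")
for t, q in zip(t_future, q_forecast):
    print(f"t = {t:5.0f} s   forecast heat release rate = {q:7.1f} kW")
```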
124

Vehicle Demand Forecasting with Discrete Choice Models: 2 Logit 2 Quit

Haaf, Christine Grace 01 December 2014 (has links)
Discrete choice models (DCMs) are used to forecast demand in a variety of engineering, marketing, and policy contexts, and understanding the uncertainty associated with model forecasts is crucial to inform decision-making. This thesis evaluates the suitability of DCMs for forecasting automotive demand. The entire scope of this investigation is too broad to be covered here, but I explore several elements with a focus on three themes: defining how to measure forecast accuracy, comparing model specifications and forecasting methods in terms of prediction accuracy, and comparing the implications of model specifications and forecasting methods for vehicle design. Specifically, I address several questions regarding the accuracy and uncertainty of market share predictions resulting from the choice of utility function and structural specification, estimation method, and data structure assumptions. I compare more than 9,000 models based on those used in peer-reviewed literature and academic and government studies. Firstly, I find that including more model covariates generally improves predictive accuracy, but that the form those covariates take in the utility function is less important. Secondly, better model fit correlates well with better predictive accuracy; however, the models I construct, representative of those in the extant literature, exhibit substantial prediction error stemming largely from limited model fit due to unobserved attributes. Lastly, accuracy of predictions in existing markets is neither a necessary nor a sufficient condition for use in design. Much of the econometrics literature on vehicle market modeling has presumed that biased coefficients make for bad models. For purely predictive purposes, the drawbacks of potentially mitigating bias using generalized method of moments estimation coupled with instrumental variables outweigh the expected benefits in the experiments conducted in this dissertation. The risk of specifying invalid instruments is high, and my results suggest that the instruments frequently used in the automotive demand literature are likely invalid. Furthermore, biased coefficients are not necessarily bad for maximizing the predictive power of the model. Bias can even aid predictions by implicitly capturing persistent unobserved effects in some circumstances. Including alternative-specific constants (ASCs) in DCM utility functions improves model fit but not necessarily forecast accuracy. For frequentist-estimated models, all tested methods of forecasting ASCs improved share predictions for the whole midsize sedan market over excluding ASCs from predictions, but only one method results in improved long-term new vehicle, or entrant, forecasts. As seen in a synthetic data study, assuming an incorrect relationship between observed attributes and the ASC for forecasting risks making worse forecasts than would be made by a model that excludes ASCs entirely. Treating the ASCs as model parameters with full distributions of uncertainty via Bayesian estimation is more robust to the selection of the ASC forecasting method and less reliant on persistent market structures; however, it comes at increased computational cost. Additionally, the best long-term forecasts are made by the frequentist model that treats ASCs as calibration constants fit to the model after estimation of the other parameters.
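As a rough illustration of how a DCM turns an estimated utility function into market-share forecasts, the sketch below computes multinomial logit shares with and without alternative-specific constants. The attributes, coefficients, and ASC values are invented for illustration and are not estimates from the thesis.

```python
# Multinomial logit share prediction: shares are the softmax of the systematic
# utilities V_j = ASC_j + X_j @ beta for each alternative.
import numpy as np

# Vehicle attributes: price ($1000s), fuel cost (cents/mile), 0-60 time (s)
X = np.array([
    [24.0, 10.0, 8.5],    # vehicle A
    [28.0,  7.0, 7.9],    # vehicle B
    [22.0, 12.0, 9.3],    # vehicle C
])

beta = np.array([-0.12, -0.08, -0.30])   # assumed utility coefficients
asc = np.array([0.40, 0.00, 0.25])       # assumed alternative-specific constants

def logit_shares(X, beta, asc):
    """Predicted market shares under a multinomial logit model."""
    v = asc + X @ beta
    expv = np.exp(v - v.max())           # subtract max for numerical stability
    return expv / expv.sum()

shares_asc = logit_shares(X, beta, asc)
shares_no_asc = logit_shares(X, beta, np.zeros(3))   # forecast that excludes ASCs
print("predicted shares with ASCs:   ", np.round(shares_asc, 3))
print("predicted shares without ASCs:", np.round(shares_no_asc, 3))
```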
125

What's in a map? Communicating natural hazard forecasts.

Baird, Nathanael Lloyd January 2014 (has links)
The number of people suffering from natural disasters, and the economic impact of those disasters, continue to increase as the years go by. Better preparation and risk management strategies can help lessen the impacts of these disasters. One important aspect of risk management is risk assessment, which can be accomplished with a hazard map. One application of hazard maps is to forecast volcanic ashfall following an eruption, to help people and organisations prepare themselves for, and mitigate the detrimental impacts of, volcanic ashfall. This research evaluated the key elements of a hazard map and how to make a hazard map most effective through the study of short-term ashfall forecast maps in New Zealand. A mixed-methods approach was taken for this research. Interviews were conducted with scientists at GNS and with stakeholders who use the ashfall forecast maps. After the data from the interviews were analysed, an internet-based survey was created and sent out to anyone interested in participating. The survey served as a low-resolution verification of the high-resolution data gathered in the interviews. After each stage of information gathering, the ashfall forecast map design was updated. This research found that there are seven basic elements which should be considered when creating a hazard map: simplicity of the map, the base map, map scale, the use of colour, geographical information, the inclusion of uncertainty, and time. This research also found key lessons which can be applied to any hazard map creation process: established practices should be re-evaluated periodically, communication between the information provider and the end-user is critical, the information provider must decide between satisfying the individual or the group, education and outreach are important, audience feedback is necessary for an effective map, and hazard maps are just one step in the risk mitigation process.
126

Three essays on initial public offerings

Jin, Chuntai 10 April 2014 (has links)
This dissertation consists of three essays. In the first essay, we attempt to answer three questions about the new capital raised in IPOs: Why do some IPO companies raise a lot of new capital while others do not? Where do IPO companies use the new capital they raise? How does the use of new capital affect the operating performance of IPO companies? We find that companies with higher R&D spending, higher capital expenditure, lower working capital and more long-term debt tend to raise more capital in IPOs. These firms also spend more on R&D and capital expenditure. The more new capital firms raise in IPOs, the lower their sales growth rate. However, firms spending a higher proportion of the new capital on R&D seem to have higher sales growth rates. In the second essay, we examine the relation between IPO valuation and offering size. Using a sample of 3,885 IPOs from the US, we find that IPO firms with larger offering size have lower valuation. Both primary share offerings and secondary share offerings are negatively related to IPO firm valuation. The valuation measures are positively related to the levels of capital expenditure and R&D before the IPO, lending support to explanations based on Jensen's (1986) free cash flow hypothesis. We also find evidence consistent with negative signals from larger secondary share offering size. Results of tests on long-run IPO stock performance do not support the hypothesis that the IPO stock demand curve is downward sloping. In the third essay, we examine how analysts react to IPO offering size. We find that analysts predict lower long-term growth rates for IPOs with larger offering size. The sizes of both primary and secondary offerings are negatively related to long-term growth rate forecasts. We find evidence that the free cash flow effect may be related to the negative relation between primary offering size and growth forecasts.
127

Improving statistical seismicity models

Bach, Christoph January 2013 (has links)
Several mechanisms are proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences between these mechanisms are particularly expected in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires the consideration of the large uncertainties involved in stress calculations as well as the appropriate consideration of secondary aftershock triggering, which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of the different mechanisms, I take the effect of smaller-magnitude earthquakes into account by using the epidemic type aftershock sequence (ETAS) model, where the spatial probability distribution of direct aftershocks, if available, is correlated with alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used, which are available in near real-time after a mainshock and thus could be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to minimum fault distance is tested, as well as Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed for several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve the forecasts of off-fault aftershock locations, while the integration of ground shaking data could not improve the results significantly.
In the second part of this work, I focus on a procedure to test the information content of inverted slip models. This makes it possible to quantify the information gain if this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes, which is introduced in part one, is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions. The influence of subfault resolution and of segment strike and dip is tested. Some of the tested slip models perform very well; in those cases almost no random slip models are found to perform better. In contrast, for some of the published slip models, almost all random slip models perform better than the published slip model. Choosing a different subfault resolution hardly influences the result, as long as the general slip pattern is still reproducible, i.e. one to two larger slip maxima per segment. Randomly varying the strike and dip of the segments, however, strongly influences the results, depending on the standard deviation chosen for the random perturbation.
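As a rough illustration of the statistical backbone used here, the sketch below evaluates a purely temporal ETAS conditional intensity for a small synthetic catalogue. The parameter values are assumptions made for illustration; the thesis additionally correlates the spatial aftershock kernel with fault geometry, Coulomb stress changes, or ShakeMaps.

```python
# Temporal ETAS conditional intensity:
# lambda(t) = mu + sum over past events of K * exp(alpha*(M_i - M0)) * (t - t_i + c)^(-p)
import numpy as np

mu, K, alpha, c, p, M0 = 0.02, 0.05, 1.0, 0.01, 1.1, 3.0   # assumed ETAS parameters

# Synthetic catalogue: occurrence times (days) and magnitudes.
t_events = np.array([0.0, 0.3, 1.2, 2.5])
m_events = np.array([6.5, 4.2, 4.8, 3.9])

def etas_intensity(t, t_events, m_events):
    """ETAS conditional intensity (expected event rate) at time t."""
    past = t_events < t
    dt = t - t_events[past]
    productivity = K * np.exp(alpha * (m_events[past] - M0))  # aftershock productivity
    omori = (dt + c) ** (-p)                                  # modified Omori decay
    return mu + np.sum(productivity * omori)

for t in [0.5, 1.0, 3.0, 10.0]:
    rate = etas_intensity(t, t_events, m_events)
    print(f"day {t:5.1f}: expected rate = {rate:.3f} events/day")
```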
128

Forecasting GDP Growth, or How Can Random Forests Improve Predictions in Economics?

Adriansson, Nils, Mattsson, Ingrid January 2015 (has links)
GDP is used to measure the economic state of a country, and accurate forecasts of it are therefore important. Using the Economic Tendency Survey, we investigate forecasting quarterly GDP growth with the data mining technique Random Forest. Comparisons are made with a benchmark AR(1) model and an ad hoc linear model built on the most important variables suggested by the Random Forest. Evaluation by forecasting shows that the Random Forest makes the most accurate forecasts, supporting the view that there are benefits to using Random Forests on economic time series.
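As a rough illustration of the comparison described here, the sketch below fits a Random Forest on lagged synthetic survey indicators and benchmarks it against an AR(1) model by out-of-sample RMSE. The data-generating process and all settings are invented stand-ins, not the Economic Tendency Survey.

```python
# Random Forest vs. AR(1) benchmark for one-step-ahead quarterly GDP growth.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
T = 120                                           # quarters
survey = rng.normal(size=(T, 3))                  # stand-ins for survey indicators
gdp = np.zeros(T)
for t in range(1, T):
    gdp[t] = (0.3 * gdp[t - 1] + 0.5 * survey[t - 1, 0]
              - 0.2 * survey[t - 1, 1] + rng.normal(scale=0.5))

split = 90
X = np.column_stack([gdp[:-1], survey[:-1]])      # lagged GDP + lagged survey data
y = gdp[1:]
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# Random Forest forecast
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rf_pred = rf.predict(X_te)

# AR(1) benchmark by OLS: gdp_t = a + b * gdp_{t-1}
Z = np.column_stack([np.ones(split), X_tr[:, 0]])
coef, *_ = np.linalg.lstsq(Z, y_tr, rcond=None)
ar_pred = coef[0] + coef[1] * X_te[:, 0]

rmse = lambda e: float(np.sqrt(np.mean(e**2)))
print("Random Forest RMSE:", round(rmse(y_te - rf_pred), 3))
print("AR(1)         RMSE:", round(rmse(y_te - ar_pred), 3))
```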
129

A Framework of Incorporating Spatio-temporal Forecast in Look-ahead Grid Dispatch with Photovoltaic Generation

Yang, Chen 03 October 2013 (has links)
Increasing penetration of stochastic photovoltaic (PV) generation into the electric power system poses significant challenges to system operators. In this thesis, we evaluate the spatial and temporal correlations of stochastic PV generation at multiple sites. Given the unique spatial and temporal correlation of PV generation, an optimal data-driven forecast model for short-term PV power is proposed. This model leverages both spatial and temporal correlations among neighboring solar sites and is shown to have improved performance compared with a conventional persistence model. The tradeoff between communication cost and improved forecast quality is studied using realistic data sets collected from California and Colorado. An IEEE 14-bus system test case is used to quantify the value of improved forecast quality through the reduction of system dispatch cost. The modified spatio-temporal forecast model, which has the lowest percentage of PV forecast overestimation, shows the best performance in reducing dispatch cost.
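As a rough illustration of why neighboring sites help, the sketch below regresses the next-hour output of a target site on the current outputs of all sites and compares the result with a persistence forecast. The synthetic "cloud field" and site layout are assumptions made for illustration, not the California or Colorado data.

```python
# Spatio-temporal PV forecast sketch: neighbours see the cloud field one step
# before the target site, so their current output predicts the target's next value.
import numpy as np

rng = np.random.default_rng(7)
T, n_sites = 1000, 4                  # hourly samples, target site = column 0
cloud = rng.normal(size=T + 1)        # common cloud signal moving across sites
pv = np.zeros((T, n_sites))
pv[:, 0] = np.clip(0.7 + 0.3 * cloud[:-1], 0, 1)   # target lags the cloud field
pv[:, 1:] = np.clip(0.7 + 0.3 * cloud[1:, None]
                    + rng.normal(scale=0.05, size=(T, 3)), 0, 1)

# One-step-ahead regression: target(t+1) ~ all sites(t)
X, y = pv[:-1], pv[1:, 0]
split = 800
design = np.column_stack([np.ones(split), X[:split]])
coef, *_ = np.linalg.lstsq(design, y[:split], rcond=None)
st_pred = np.column_stack([np.ones(X.shape[0] - split), X[split:]]) @ coef

persist_pred = X[split:, 0]           # persistence: next value = current value

rmse = lambda e: float(np.sqrt(np.mean(e**2)))
print("spatio-temporal RMSE:", round(rmse(y[split:] - st_pred), 4))
print("persistence     RMSE:", round(rmse(y[split:] - persist_pred), 4))
```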
130

Analysis Of Safety Stock For Production-Inventory Problem Of A Company Under Multiplicative Form Of Forecast Evolution

Kayhan, Mehmet 01 January 2003 (has links) (PDF)
In this thesis, we focus on the integration of the manufacturing and sales functions from the perspective of aggregate production planning. The manufacturing and sales functions are performed by separate affiliated companies of the same business group, which operate as an integrated supplier-buyer system. In particular, this study provides theoretical and practical insight into the use of a forecast volatility measure to better match supply with demand, so as to reduce the costs of inventory and stock-outs in the manufacturer-buyer relationship under the described master production scheduling environment. The nature of the forecast modifications provided by the buyer lays the foundation for the study. We modify the existing aggregate production planning model to accommodate a measure of historical forecast evolution. The overall objective of the thesis is to provide management with a forecast evolution modeling framework to examine the performance characteristics of the manufacturer-buyer interaction.
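As a rough illustration of how historical forecast evolution can feed a safety stock calculation, the sketch below measures the volatility of multiplicative forecast revisions from a toy forecast history and converts it into a safety stock at a chosen service level. The numbers, the lognormal assumption, and the three-month horizon are illustrative assumptions, not the company's data or the thesis's exact model.

```python
# Safety stock from multiplicative forecast evolution: log ratios of successive
# forecast vintages for the same delivery period measure the uncertainty still
# to be resolved; safety stock covers that uncertainty at a target service level.
import numpy as np
from scipy.stats import norm

# Rows: delivery periods; columns: forecasts made 3, 2, 1 months ahead, then
# realized demand. Each row is the forecast history for one delivery month.
history = np.array([
    [100.0, 108.0, 112.0, 115.0],
    [ 90.0,  84.0,  88.0,  86.0],
    [120.0, 126.0, 121.0, 118.0],
    [110.0, 104.0, 107.0, 109.0],
    [ 95.0, 101.0,  99.0, 103.0],
])

# Multiplicative revisions between successive forecast vintages.
log_ratios = np.diff(np.log(history), axis=1)        # shape (periods, 3)
sigma_step = log_ratios.std(axis=0, ddof=1)          # revision volatility per step

# Uncertainty still unresolved when ordering 3 months ahead.
sigma_total = np.sqrt(np.sum(sigma_step**2))

service_level = 0.95
z = norm.ppf(service_level)
current_forecast = 105.0                             # 3-months-ahead forecast
safety_stock = current_forecast * (np.exp(z * sigma_total) - 1.0)

print("per-step revision volatility:", np.round(sigma_step, 3))
print(f"safety stock at {service_level:.0%} service level: {safety_stock:.1f} units")
```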
