851 |
THEORY AND PRACTICE OF THE INTENSITY OF USE METHOD OF MINERAL CONSUMPTION FORECASTING (MINERAL, ECONOMICS). ROBERTS, MARK CULMER. January 1985
The intensity of use of a mineral is traditionally defined as the consumption (production plus net imports) of the mineral divided by gross national product. It has been proposed that this ratio of raw material input to gross economic output is a predictable function of per capita income and that the relationship is based on economic theory. Though the theory has never been clearly defined, the intensity of use method has been used to make long-term forecasts. This dissertation formulates a theoretical model of the consumption of minerals and the resulting intensity of use, which is used to test the validity of the traditional intensity of use measure and its forecasting ability. Previous justifications of the intensity of use hypothesis state that changes in technical efficiency, substitution rates among inputs, and demands are explained by per capita income, which, as it grows, produces a regular intensity of use pattern. The model developed in this research shows that the life of the goods in use, foreign trade in raw and final goods, prices, consumer preferences, and technical innovations, together with the factors above, fully explain mineral use, which is not simply a function of per capita income. The complete model is used to restate the traditional theory of intensity of use and to examine the sensitivity of traditional measures to changes in the explanatory variables that are commonly omitted. The full model identifies the parameters that must be examined when making a long-term forecast. Regular intensity of use patterns are observed for many minerals in many nations. Setting aside the theoretical questions, the intensity of use method is often used to make long-term projections based on these trends in intensity of use as well as the trends in population and gross national product. This dissertation examines the forecasting ability of the traditional intensity of use method and finds that it is not necessarily an improvement over naive consumption time-trend forecasts. Furthermore, it is unstable for very long-term projections.
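The mechanics of the traditional method that the dissertation critiques can be sketched in a few lines. The snippet below is an illustration, not code from the dissertation: the GNP, population, and consumption series are synthetic, and a log-linear link between intensity of use and per capita income is simply assumed so that the IU projection can be compared with a naive time-trend forecast.

```python
import numpy as np

# Illustrative sketch of the traditional intensity-of-use (IU) forecast.
# All series here are synthetic, and a log-linear link between IU and per
# capita income is simply assumed; a real application would use historical
# mineral consumption, GNP, and population data.
years = np.arange(1950, 1985)
gnp = 1000.0 * 1.03 ** (years - 1950)                    # hypothetical GNP index
pop = 100.0 * 1.01 ** (years - 1950)                     # hypothetical population
consumption = 0.5 * gnp * np.exp(-0.0005 * gnp / pop)    # synthetic mineral consumption

iu = consumption / gnp                                   # intensity of use
income = gnp / pop                                       # per capita income

# Fit the assumed IU-versus-income relationship.
slope, intercept = np.polyfit(np.log(income), np.log(iu), 1)

# Project GNP and population forward, then forecast consumption = IU * GNP.
future = np.arange(1985, 2005)
gnp_f = gnp[-1] * 1.03 ** (future - years[-1])
pop_f = pop[-1] * 1.01 ** (future - years[-1])
iu_f = np.exp(slope * np.log(gnp_f / pop_f) + intercept)
iu_forecast = iu_f * gnp_f

# Naive alternative: extrapolate a linear time trend of consumption itself.
trend = np.poly1d(np.polyfit(years, consumption, 1))
naive_forecast = trend(future)

print(np.round(iu_forecast[:3], 1), np.round(naive_forecast[:3], 1))
```

Because the fitted IU relation omits prices, trade, product lifetimes, and technological change, projections of this kind can drift at long horizons, which is the instability the abstract refers to.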
852 |
Predicting runoff and salinity intrusion using stochastic precipitation inputs. Risley, John. January 1989
A methodology is presented for forecasting the probabilistic response of salinity movement in an estuary to seasonal rainfall and freshwater inflows. The Gambia River basin in West Africa is used as a case study in the research. The rainy season lasts from approximately July to October, and the highest flows occur in late September and early October. Agriculturalists are interested in a forecast of the minimum distance between the mouth of the river and the 1 part per thousand (ppt) salinity level that occurs each year at the conclusion of the wet season, as well as the approximate date on which that minimum distance will occur. The forecasting procedure uses two approaches. The first uses a multisite stochastic process to generate long-term synthetic records (100 to 200 years) of 10-day rainfall for two stations in the upper basin. A long-term record of 10-day average flow is then computed from multiple regression models that use the generated rainfall records and real-time initial flow data on the forecast date as inputs. The flow series is then entered into a one-dimensional finite element salt-intrusion model to compute the movement of the 1 ppt salinity level for each season. The minimum distances between the mouth of the river and the 1 ppt salinity front for each season in the long-term record are represented in a cumulative probability distribution curve, which is then used to assign probability values for the occurrence of the 1 ppt salinity level to various points along the river. In the second approach, instead of generating a rainfall series and computing flow from regression models, a long-term flow record is generated using a stochastic first-order Markov process. Probability curves were made for three forecast dates (mid-July, mid-August, and mid-September) using both approaches. With the first approach, the initial conditions at the time of the forecast had a greater influence on the flow series than with the second approach.
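A minimal sketch of the second approach is given below. It generates a long synthetic record of 10-day flows with a lag-one (first-order Markov) process and summarizes the seasonal outcomes as percentiles; the flow statistics and the simple flow-to-distance relation are invented placeholders, whereas the actual study drives a one-dimensional finite element salt-intrusion model with the generated flows.

```python
import numpy as np

# Sketch of the second approach: generate a long synthetic record of 10-day
# flows with a first-order Markov (lag-one autoregressive) process and
# summarize the seasonal outcomes.  The flow statistics and the simple
# flow-to-distance relation below are invented placeholders; the actual
# study feeds the generated flows into a one-dimensional finite element
# salt-intrusion model.
rng = np.random.default_rng(0)
mean_q, sd_q, rho = 300.0, 120.0, 0.7      # hypothetical 10-day flow statistics
n_years, steps = 200, 12                   # 200 seasons of twelve 10-day periods

min_distance = np.empty(n_years)
for y in range(n_years):
    q = np.empty(steps)
    q[0] = mean_q
    for t in range(1, steps):
        # Lag-one Markov process: persistence plus a random innovation.
        q[t] = max(0.0, mean_q + rho * (q[t - 1] - mean_q)
                   + sd_q * np.sqrt(1.0 - rho ** 2) * rng.standard_normal())
    # Placeholder for the salt-intrusion model: higher seasonal flow keeps
    # the 1 ppt front closer to the mouth of the river.
    min_distance[y] = 250.0 - 0.3 * q.mean()

# Cumulative probability statement of the kind used in the forecast.
for p in (0.1, 0.5, 0.9):
    print(f"{int(p * 100)}th percentile of minimum distance: "
          f"{np.quantile(min_distance, p):.0f} km")
```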
853 |
Advances in Dendrochronology, 1943. Douglass, A. E. 01 1900
No description available.
854 |
A linear catchment model for real time flood forecasting. Sinclair, D S. January 2001
A linear reservoir cell model is presented which is proposed as a good candidate for real-time flood forecasting applications. The model is designed to be computationally efficient, since it should be able to run on a PC and must operate online in real time. The model parameters and forecasts can be easily updated in order to allow for a more accurate forecast based on real-time observations of streamflow and rainfall. The final model, once calibrated, should be able to operate effectively without requiring highly skilled and knowledgeable operators. Thus it is hoped to provide a tool which can be incorporated into an early warning system for mitigation of flood damage, giving water resources managers the extra lead-time to implement any contingency plans which may be necessary to ensure the safety of people and prevent damage to property. The use of linear models for describing hydrological systems is not new; however, the model presented in this thesis departs from previous implementations. A particular departure is the novel method used in the conversion of observed to effective rainfall. The physical processes that convert rainfall to runoff are non-linear in nature. Most of the significant non-linearity results from rainfall losses, which occur largely due to evaporation and human extraction; the remaining rainfall is converted to runoff. These losses are particularly significant in the South African climate and in some regions may be as much as 70-90% of the total observed rainfall. Loss parameters are an integral part of the model formulation and allow for losses to be dealt with directly. Thus, input to the model is observed rainfall and not the "effective" rainfall normally associated with conceptual catchment models. The model is formulated in finite difference form similar to an Auto Regressive Moving Average (ARMA) model; it is this formulation which provides the required computational efficiency. The ARMA equation is a discretely coincident form of the state-space equations that govern the response of an arrangement of linear reservoirs. This results in a functional relationship between the reservoir response constants and the ARMA coefficients, which guarantees stationarity of the ARMA model. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001.
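A single reservoir cell of the kind described above can be written down in a few lines. The sketch below assumes one cell, a constant loss fraction, and an arbitrary response constant, none of which come from the thesis; it is meant only to show how the discretely coincident (ARMA-like) recursion turns observed rainfall into flow.

```python
import numpy as np

# Minimal sketch of one linear reservoir cell in its discretely coincident
# (ARMA-like) form: Q[t] = a * Q[t-1] + (1 - a) * effective_rain[t],
# where a = exp(-dt / k) and k is the reservoir response constant.  The
# loss fraction and k below are hypothetical; the thesis uses an
# arrangement of several cells with the losses as calibrated parameters.
def linear_reservoir(rain, k=3.0, dt=1.0, loss_fraction=0.8, q0=0.0):
    a = np.exp(-dt / k)                          # AR coefficient from the response constant
    effective = (1.0 - loss_fraction) * np.asarray(rain, dtype=float)
    q = np.empty(len(effective))
    q_prev = q0
    for t, r in enumerate(effective):
        q_prev = a * q_prev + (1.0 - a) * r      # ARMA-style recursion
        q[t] = q_prev
    return q

observed_rain = [0, 12, 30, 5, 0, 0, 18, 2, 0, 0]   # mm per time step (made up)
print(np.round(linear_reservoir(observed_rain), 2))
```

Because each new flow value depends only on the previous flow and the latest rainfall, re-running or updating the forecast when new observations arrive costs almost nothing, which is the computational argument made in the abstract.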
855 |
Combined Use of Models and Measurements for Spatial Mapping of Concentrations and Deposition of Pollutants. Ambachtsheer, Pamela. January 2004
When modelling pollutants in the atmosphere, it is nearly impossible to get perfect results, as the chemical and mechanical processes that govern pollutant concentrations are complex. Results are dependent on the quality of the meteorological input as well as the emissions inventory used to run the model. Also, models cannot currently take every process into consideration. Therefore, the model may produce results that are close to, or show the general trend of, the observed values, but are not perfect. However, due to the lack of observation stations, the resolution of the observational data is poor. Furthermore, the chemistry over large bodies of water is different from land chemistry, and in North America there are no stations located over the Great Lakes or the ocean. Consequently, the observed values cannot accurately cover these regions. Therefore, we have combined model output and observational data when studying ozone concentrations in northeastern North America. We did this by correcting model output at observational sites with local data. We then interpolated those corrections across the model grid, using a Kriging procedure, to produce results that have the resolution of model results with the local accuracy of the observed values. Results showed that the corrected model output is much improved over either model results or observed values alone. This improvement was observed both for sites that were used in the correction process and for sites that were omitted from it.
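The correct-then-interpolate idea can be illustrated with a short sketch. In the code below the ozone field, station locations, and observations are all synthetic, and a simple Gaussian distance weighting stands in for the Kriging procedure used in the thesis; the structure (compute observation-minus-model residuals at stations, interpolate them over the grid, add them back to the model field) is the same.

```python
import numpy as np

# Sketch of the correct-then-interpolate idea: compute observation-minus-
# model residuals at station locations, spread them across the model grid,
# and add them back to the model field.  A Gaussian distance weighting
# stands in for the Kriging step used in the thesis; the ozone field,
# stations, and observations are all synthetic.
rng = np.random.default_rng(1)
ny, nx = 40, 60
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
model_field = 40.0 + 10.0 * np.sin(xx / 12.0)              # fake modelled ozone (ppb)

# A handful of stations whose observations differ from the model.
stations = np.column_stack([rng.integers(0, ny, 8), rng.integers(0, nx, 8)])
obs = model_field[stations[:, 0], stations[:, 1]] + rng.normal(5.0, 2.0, 8)
residuals = obs - model_field[stations[:, 0], stations[:, 1]]

# Interpolate the residual surface with Gaussian distance weights.
length_scale = 10.0
correction = np.zeros((ny, nx))
weights = np.zeros((ny, nx))
for (sy, sx), r in zip(stations, residuals):
    w = np.exp(-((yy - sy) ** 2 + (xx - sx) ** 2) / (2.0 * length_scale ** 2))
    correction += w * r
    weights += w
corrected = model_field + correction / np.maximum(weights, 1e-12)

# Corrected value at the first station should sit close to its observation.
print(corrected[stations[0, 0], stations[0, 1]], obs[0])
```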
856 |
Eliciting and Aggregating Forecasts When Information is Shared. Palley, Asa. January 2016
Using the wisdom of crowds (combining many individual forecasts to obtain an aggregate estimate) can be an effective technique for improving forecast accuracy. When individual forecasts are drawn from independent and identical information sources, a simple average provides the optimal crowd forecast. However, correlated forecast errors greatly limit the ability of the wisdom of crowds to recover the truth. In practice, this dependence often emerges because information is shared: forecasters may to a large extent draw on the same data when formulating their responses. To address this problem, I propose an elicitation procedure in which each respondent is asked to provide both their own best forecast and a guess of the average forecast that will be given by all other respondents. I study optimal responses in a stylized information setting and develop an aggregation method, called pivoting, which separates individual forecasts into shared and private information and then recombines these results in the optimal manner. I develop a tailored pivoting procedure for each of three information models, and introduce a simple and robust variant that outperforms the simple average across a variety of settings. In three experiments, I investigate the method and the accuracy of the crowd forecasts. In the first study, I vary the shared and private information in a controlled environment, while the latter two studies examine forecasts in real-world contexts. Overall, the data suggest that a simple minimal pivoting procedure provides an effective aggregation technique that can significantly outperform the crowd average. / Dissertation
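The simple variant mentioned at the end of the abstract can be sketched as follows. The weighting below follows a minimal "double the crowd mean and subtract the mean guess of others" form; the exact weights of the tailored procedures depend on the assumed information model, and the numbers in the example are hypothetical.

```python
import numpy as np

# Sketch of a minimal pivoting aggregator.  Each respondent i reports a
# forecast x[i] and a guess y[i] of the average forecast of everyone else.
# The average guess of others leans on the shared information, so pushing
# the crowd mean away from it recovers some of the private signal.  The
# "double the mean forecast and subtract the mean guess" weighting below is
# one simple variant; the tailored procedures in the dissertation adjust
# the weights to the assumed information model.
def minimal_pivot(own_forecasts, guesses_of_others):
    x = np.asarray(own_forecasts, dtype=float)
    y = np.asarray(guesses_of_others, dtype=float)
    return 2.0 * x.mean() - y.mean()

# Hypothetical elicitation: five respondents forecasting a quantity.
own = [102.0, 97.0, 110.0, 95.0, 104.0]
others = [100.0, 99.0, 103.0, 98.0, 101.0]
print("simple average:", np.mean(own))
print("minimal pivot: ", minimal_pivot(own, others))
```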
857 |
MAINFRAME: Military acquisition inspired framework for architectural modeling and evaluation. Zellers, Eric M. 27 May 2016
Military acquisition programs have long been criticized for the exponential growth in program costs required to generate modest improvements in capability. One of the most promising reform efforts to address this trend is the open system architecture initiative, which uses modular design principles and commercial interface standards as a means to reduce the cost and complexity of upgrading systems over time. While conceptually simple, this effort has proven to be exceptionally difficult to implement in practice. This difficulty stems, in large part, from the fact that open systems trade additional cost and risk in the early phases of development for the option to infuse technology at a later date, but the benefits provided by this option are inherently uncertain. Practical implementation therefore requires a decision support framework to determine when these uncertain, future benefits are worth the cost and risk assumed in the present. The objective of this research is to address this gap by developing a method to measure the expected costs, benefits and risks associated with open systems. This work is predicated on three assumptions: (1) the purpose of future technology infusions is to keep pace with the uncertain evolution of operational requirements, (2) successful designs must justify how future upgrades will be used to satisfy these requirements, and (3) program managers retain the flexibility to adapt prior decisions as new information is made available over time. The analytical method developed in this work is then applied to an example scenario for an aerial Intelligence, Surveillance, and Reconnaissance platform with the potential to upgrade its sensor suite in future increments. Final results demonstrate that the relative advantages and drawbacks between open and integrated system architectures can be presented in the context of a cost-effectiveness framework that is currently used by acquisition professionals to manage complex design decisions.
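The kind of cost-effectiveness comparison the abstract describes can be caricatured with a small Monte Carlo sketch. Every number below (acquisition premium, upgrade and redesign costs, requirement growth rates, the shortfall trigger) is invented for illustration and does not come from the thesis; the point is only the structure of the trade-off: an open architecture pays more up front in exchange for cheaper responses to uncertain requirement growth.

```python
import numpy as np

# Illustrative Monte Carlo comparison of open vs. integrated architectures
# under uncertain requirement growth.  Every number here (acquisition
# premium, upgrade and redesign costs, growth rates, the shortfall trigger)
# is a made-up placeholder intended only to show the shape of the trade-off.
rng = np.random.default_rng(2)
n_runs, years = 2000, 20

def lifecycle_cost(open_architecture):
    acquisition = 120.0 if open_architecture else 100.0     # open pays a premium up front
    capability, requirement, total = 1.0, 1.0, acquisition
    for _ in range(years):
        requirement *= 1.0 + rng.normal(0.03, 0.02)          # requirements drift upward
        if requirement > 1.1 * capability:                    # shortfall large enough to act on
            # Restore capability: cheap modular insertion vs. costly redesign.
            total += 15.0 if open_architecture else 45.0
            capability = requirement
    return total

open_costs = [lifecycle_cost(True) for _ in range(n_runs)]
integrated_costs = [lifecycle_cost(False) for _ in range(n_runs)]
print("mean lifecycle cost, open architecture:      ", round(np.mean(open_costs), 1))
print("mean lifecycle cost, integrated architecture:", round(np.mean(integrated_costs), 1))
```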
858 |
Perception or fact: measuring the performance of the Terrorism Early Warning (TEW) group. Grossman, Michael. 09 1900
CHDS State/Local / This thesis examines the structure and intelligence process of the Los Angeles Terrorism Early Warning (TEW) Group to assess its effectiveness as measured through the application of a Program Logic Model. This model verifies the links between the assumptions on which the program is based and actual program activities. It further assesses its status as a "smart practice" based on measurable criteria that are beyond perception or peer approval alone. The TEW is a regional, multi-agency and multi-disciplinary network that functions as a focal point for analyzing the strategic and operational information needed to prevent, mitigate, disrupt and respond to threats and acts of terrorism. Although efforts toward prevention are difficult to measure in any program, input and outcome are assessable. This method provides an effective means to evaluate a program while documenting what works and why. Effectiveness should not be based solely on outputs; a structure that produces them is also an indicator. The objective of this thesis is to establish a benchmark of practical standards for collaborative intelligence sharing operations that can be replicated by other regions and that will establish a common nationwide homeland security intelligence network. Based on these criteria, it is reasonable to conclude that the TEW is in fact a "smart practice." It meets its intended goals and objectives when measured according to the parameters of the Program Logic Model, and has a structured process and system that leads to preferred outcomes. / Commander, Los Angeles County (California) Sheriff's Department
859 |
Objective identification of environmental patterns related to tropical cyclone track forecast errors. Sanabia, Elizabeth R. 09 1900
The increase in skill of numerical model guidance and the use of consensus forecast techniques have led to significant improvements in the accuracy of tropical cyclone track forecasts at ranges beyond 72 h. Identification of instances when the forecast track from an individual numerical model may be in error could lead to additional improvement in the accuracy of tropical cyclone track forecasts. An objective methodology is tested to characterize the spread among the three primary global numerical model forecast tracks used as guidance by the Joint Typhoon Warning Center. Statistically significant principal components derived from empirical orthogonal functions of mid-tropospheric height and vorticity forecast fields identify cases of large spread among model forecasts. Cases in which the three-model average forecast track resulted in a large error were characterized by a distribution of principal components such that one component was significantly different from the other two. Removal of the forecast track associated with the outlying principal component resulted in a reduced forecast error. Therefore, the objective methodology may be utilized to define a selective consensus by removing forecast tracks from consideration based on the projection of forecast fields onto empirical orthogonal functions and inspecting the distribution of the resulting principal components.
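The selective-consensus step can be sketched as follows. The forecast "fields" and track positions below are synthetic stand-ins, and the outlier rule (drop the model whose leading principal component sits farthest from the other two) is a simplified reading of the methodology described in the abstract.

```python
import numpy as np

# Sketch of the selective-consensus idea: compute EOFs of the three models'
# forecast fields, inspect each model's leading principal component, and
# drop the model whose component is the clear outlier before averaging the
# tracks.  The "fields" and track positions are synthetic stand-ins for
# mid-tropospheric height/vorticity forecasts.
rng = np.random.default_rng(3)
base = rng.normal(size=500)                                # flattened reference field
fields = np.vstack([base + rng.normal(0, 0.1, 500),
                    base + rng.normal(0, 0.1, 500),
                    base + 0.8 * rng.normal(size=500)])    # third model differs

anom = fields - fields.mean(axis=0)                        # anomalies about the model mean
# EOFs via SVD of the anomaly matrix: rows of vt are the EOF patterns and
# u * s gives each model's principal components.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                                                # shape (3 models, 3 components)

# Flag the model whose leading PC is farthest from the mean of the other two.
lead = pcs[:, 0]
dist = np.array([abs(lead[i] - np.delete(lead, i).mean()) for i in range(3)])
outlier = int(np.argmax(dist))

tracks = np.array([[20.0, 130.0], [20.5, 130.5], [23.0, 127.0]])  # lat/lon positions
selective = np.delete(tracks, outlier, axis=0).mean(axis=0)
print("dropped model:", outlier, " selective consensus position:", selective)
```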
860 |
Studies in forecasting upper-level turbulence. Kuhl, Christopher T. 09 1900
Encounters with turbulence generated by complex topography, convection, or mechanical forcing present a significant threat to military aircraft operations. Properly forecasting the initiation, duration, and intensity of such encounters is a tremendous challenge to forecasters, often resulting in the over-forecasting of turbulence. Over-forecasting the presence or intensity of turbulence can result in unnecessary mission delays, cancellations, and re-routing. The lack of observations, and the fact that turbulence is a microscale phenomenon which Numerical Weather Prediction (NWP) models currently cannot resolve, are what make forecasting turbulence so difficult. Progress has been made in the last several decades in both the observation of turbulence and the resolution of NWP models. A new turbulence forecast approach has been created based on recent developments in observing turbulence and on automated turbulence diagnostics. The development of an in-situ observation platform, using the Eddy Dissipation Rate (EDR), and the Graphical Turbulence Guidance (GTG) model are discussed. A turbulence forecast approach is derived that includes the synoptic patterns which create or allow the turbulent environment to exist, the use of current tools to observe turbulence, and the use of models to help form the turbulence forecast. A turbulence forecasting manual has been created to give the new forecaster improved guidance to effectively forecast turbulence.