181 |
Forecast Combination with Multiple Models and Expert Correlations. Soule, David P, 01 January 2019
Combining multiple forecasts in order to generate a single, more accurate one is a well-known approach. A simple average of forecasts has been found to be robust despite theoretically better approaches, an increasing number of available expert forecasts, and improved computational capabilities. The dominance of a simple average is related to small sample sizes and to the estimation errors associated with more complex methods. We study the role that expert correlation, the number of experts, and their relative forecasting accuracy have on the weight estimation error distribution. The distributions we find are used to identify the conditions under which a decision maker can confidently estimate weights rather than use a simple average. We also propose an improved expert weighting approach that is less sensitive to covariance estimation error while providing much of the benefit of a covariance-optimal weight. These two improvements create a new heuristic for better forecast aggregation that is simple to use. This heuristic appears new to the literature and is shown to perform better than a simple average in a simulation study and in an application to economic forecast data.
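The trade-off this abstract describes can be sketched numerically. Below is a minimal Python illustration, with made-up data, of equal-weight versus covariance-optimal forecast combination; the plain shrinkage toward equal weights at the end is a generic stabilisation device standing in for (not reproducing) the thesis's heuristic.

```python
import numpy as np

def simple_average(forecasts):
    """Equal-weight combination: robust when covariances are poorly estimated."""
    return np.mean(forecasts, axis=0)

def optimal_weights(errors):
    """Covariance-optimal weights w = S^-1 1 / (1' S^-1 1), estimated from
    a sample of past forecast errors (rows = experts, columns = periods)."""
    S = np.cov(errors)                 # sample error covariance (experts x experts)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

# Hypothetical data: 3 correlated experts, 40 past forecast periods
rng = np.random.default_rng(0)
true_cov = np.array([[1.0, 0.6, 0.3],
                     [0.6, 1.5, 0.4],
                     [0.3, 0.4, 2.0]])
errors = rng.multivariate_normal(np.zeros(3), true_cov, size=40).T

w = optimal_weights(errors)
w_shrunk = 0.5 * w + 0.5 / len(w)     # shrink toward equal weights for stability
forecasts = np.array([10.2, 9.8, 10.5])
combined = w @ forecasts
```

With few past periods, the sample covariance is noisy and the "optimal" weights can be erratic, which is exactly why the simple average is so hard to beat in practice.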
|
182 |
Terrain-Relative and Beacon-Relative Navigation for Lunar Powered Descent and Landing. Christensen, Daniel Porter, 01 May 2009
As NASA prepares to return humans to the moon and establish a long-term presence on the surface, technologies must be developed to access previously unvisited terrain regardless of conditions. Among these technologies is a guidance, navigation, and control (GNC) system capable of safely and precisely delivering a spacecraft, whether manned or robotic, to a predetermined landing area. This thesis presents detailed research into both terrain-relative navigation using a terrain-scanning instrument and beacon-relative radiometric navigation using beacons in lunar orbit or on the lunar surface. Models for these sensors are developed along with a baseline sensor suite that includes an altimeter, IMU, velocimeter, and star camera. Linear covariance analysis is used to rapidly perform the trade studies relevant to this problem and to provide the navigation performance data necessary to determine which navigation method is best suited to support a 100 m 3-σ navigation requirement for landing anytime, anywhere on the moon.
|
183 |
Linear Covariance Analysis for Gimbaled Pointing Systems. Christensen, Randall S., 01 August 2013
Linear covariance analysis has been utilized in a wide variety of applications. Historically, the theory has made significant contributions to navigation system design and analysis. More recently, the theory has been extended to capture the combined effect of navigation errors and closed-loop control on the performance of the system. These advancements have made possible rapid analysis and comprehensive trade studies of complicated systems ranging from autonomous rendezvous to vehicle ascent trajectory analysis. Comprehensive trade studies are also needed in the area of gimbaled pointing systems, where the information needs differ from previous applications. It is therefore the objective of this research to extend the capabilities of linear covariance theory to analyze the closed-loop navigation and control of a gimbaled pointing system. The extensions developed in this research include modifying the linear covariance equations to accommodate a wider variety of controllers. This enables the analysis of controllers common to gimbaled pointing systems, with internal states and associated dynamics as well as actuator command filtering and auxiliary controller measurements. The second extension is the extraction of power spectral density estimates from information available in linear covariance analysis. This information is especially important for gimbaled pointing systems, where not just the variance but also the spectrum of the pointing error affects performance. The extended theory is applied to a model of a gimbaled pointing system which includes both flexible and rigid body elements as well as input disturbances, sensor errors, and actuator errors. The results of the analysis are validated by direct comparison to a Monte Carlo-based analysis approach. Once the developed linear covariance theory is validated, analysis techniques that are often prohibitively expensive with Monte Carlo analysis are used to gain further insight into the system.
These include the creation of conventional error budgets through sensitivity analysis and a new analysis approach that combines sensitivity analysis with power spectral density estimation. This new approach resolves not only the contribution of a particular error source, but also the spectrum of its contribution to the total error. In summary, the objective of this dissertation is to increase the utility of linear covariance analysis for systems with a wide variety of controllers and for which the spectrum of the errors is critical to performance.
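For readers unfamiliar with the technique, the core of linear covariance analysis is a single matrix recursion that replaces thousands of Monte Carlo runs. The sketch below is a generic two-state (position, velocity) example with assumed noise values, not the dissertation's gimbal model: the error covariance is propagated through the dynamics and updated with each measurement, and dispersion statistics are read directly from the matrix.

```python
import numpy as np

def lincov_step(P, F, Q, H, R):
    """One cycle of linear covariance analysis: propagate the error covariance
    through the linear dynamics, then apply a Kalman measurement update.
    Monte Carlo dispersion statistics are replaced by one matrix recursion."""
    P = F @ P @ F.T + Q                        # propagate through dynamics
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    P = (np.eye(P.shape[0]) - K @ H) @ P       # measurement update
    return P

# Hypothetical 2-state system with noisy position measurements
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity dynamics
Q = np.diag([0.0, 0.01])                       # process noise
H = np.array([[1.0, 0.0]])                     # measure position only
R = np.array([[0.5]])                          # measurement noise variance
P = np.diag([100.0, 4.0])                      # initial uncertainty

for _ in range(50):
    P = lincov_step(P, F, Q, H, R)

# 3-sigma position error after the filter converges
three_sigma_pos = 3.0 * np.sqrt(P[0, 0])
```

A full linear covariance tool augments this recursion with control-loop states and error-source partitions, which is what enables error budgets and, per this dissertation, power spectral density estimates.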
|
184 |
Data Analysis Using Experimental Design Model Factorial Analysis of Variance/Covariance (DMAOVC.BAS). Newton, Wesley E., 01 May 1985
DMAOVC.BAS is a computer program, written in the compiler version of Microsoft BASIC, which performs factorial analysis of variance/covariance with expected mean squares. The program accommodates factorial and other hierarchical experimental designs with balanced sets of data. It is written for use on most modest-sized microcomputers for which the compiler is available. The program is parameter-file driven; the parameter file consists of the response variable structure, the experimental design model expressed in a structure similar to that seen in most textbooks, information concerning the factors (i.e., fixed or random, and the number of levels), and the information necessary to perform covariance analysis. The results of the analysis are written to separate files in a format that can be used for reporting purposes and further computations if needed.
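The factorial decomposition such a program performs can be illustrated compactly. Below is a minimal Python sketch (a modern stand-in, not the BASIC source) of the sums of squares for a balanced two-factor design with replication, the simplest case of the designs the abstract describes.

```python
import numpy as np

def twoway_anova_ss(y):
    """Sums of squares for a balanced two-factor design with replication.
    y has shape (levels_A, levels_B, reps). Returns (SS_A, SS_B, SS_AB, SS_error),
    which together with the grand mean partition the total variation."""
    a, b, r = y.shape
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    mean_a = y.mean(axis=(1, 2))               # factor A level means
    mean_b = y.mean(axis=(0, 2))               # factor B level means
    cell = y.mean(axis=2)                      # cell means
    ss_a = b * r * ((mean_a - grand) ** 2).sum()
    ss_b = a * r * ((mean_b - grand) ** 2).sum()
    ss_ab = r * ((cell - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
    ss_err = ss_total - ss_a - ss_b - ss_ab    # within-cell (residual) variation
    return ss_a, ss_b, ss_ab, ss_err

# Illustrative balanced data: 2 levels of A, 3 of B, 4 replicates
rng = np.random.default_rng(2)
y = rng.normal(size=(2, 3, 4))
ss_a, ss_b, ss_ab, ss_err = twoway_anova_ss(y)
```

Dividing each sum of squares by its degrees of freedom gives the mean squares whose expectations (the "expected mean squares" of the abstract) determine the correct F-tests for fixed and random factors.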
|
185 |
Fluxes of Energy and Water Vapour from Grazed Pasture on a Mineral Soil in the Waikato. Kuske, Tehani Janelle, January 2009
The eddy covariance (EC) technique was used to measure half-hourly fluxes of energy and evaporation from 15 December 2007 to 30 November 2008 at the Scott Research Farm, located 7 km east of Hamilton. Many supporting measurements of climate and soil variables were also made. The research addressed three objectives: 1) to examine the accuracy of the eddy covariance measurement technique; 2) to understand the surface partitioning of energy and water vapour on diurnal to annual timescales; and 3) to compare measurements of evaporation with methods of estimation. Average energy balance closure at Scott Farm was deficient by 24%, comparable to published studies reporting deficits of up to 30%. Three lysimeter studies were carried out to help verify the eddy covariance data. These led to the conclusions that: 1) lysimeter pots needed to be deeper to adequately encompass vegetation rooting depths; 2) forcing energy balance closure was not supported by two of the studies (summer and winter); 3) latent heat flux (λE) gap filling of night-time EC data during winter overestimated values by about 10 W m⁻²; and 4) the spring lysimeter study verified the eddy covariance measurements, including the closure-forcing method. Some uncertainty remains as to the accuracy of both the lysimeter and EC methods of evaporation measurement because both have potential biases; however, for the purposes of this study, the data appear sufficiently accurate to have confidence in the results. Energy and water vapour fluxes varied on both diurnal and seasonal timescales. Diurnally, fluxes were small or negative at night and highest during the day, usually at solar noon. Seasonally, spring and summer had the highest energy and evaporation fluxes, while winter rates were small but tended to exceed the available energy supply. Evaporation was constrained by soil moisture availability during summer and by energy availability during winter.
Estimated annual evaporation at Scott Farm was 755 mm, 72% of precipitation. Two evaporation models were compared to eddy covariance evaporation (EEC) measurements: the FAO56 Penman-Monteith model (Eo) and the Priestley-Taylor model (EPT). Both models overestimated evaporation during dry conditions and slightly underestimated it during winter. The α coefficient applied to EPT was not constant, and a seasonally adjusted value would be most appropriate. A crop coefficient of 1.13 is needed for Eo estimates during moist conditions. Eo began overestimating evaporation when soil moisture content dropped below ~44%. A water stress adjustment applied to both models improved the evaporation estimates; however, the early onset of drying could not be adjusted for. The adjusted Eo model is the most accurate overall when compared to EEC.
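Of the two models compared, the Priestley-Taylor form is compact enough to sketch directly. The Python example below computes a latent heat flux from illustrative (made-up) input values, using the standard FAO56 expression for the slope of the saturation vapour pressure curve; the classic α = 1.26 is used, whereas the thesis finds a seasonally adjusted value works better at this site.

```python
import math

def priestley_taylor(rn, g, t_air, alpha=1.26):
    """Priestley-Taylor latent heat flux, W m-2.
    rn: net radiation (W m-2), g: soil heat flux (W m-2), t_air: air temp (deg C).
    alpha = 1.26 is the classic equilibrium-evaporation coefficient."""
    # Saturation vapour pressure (kPa) and its slope with temperature (kPa/degC)
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
    delta = 4098.0 * es / (t_air + 237.3) ** 2
    gamma = 0.066  # psychrometric constant, kPa/degC, near sea level (assumed)
    return alpha * delta / (delta + gamma) * (rn - g)

# Illustrative midday summer values for a pasture surface
le = priestley_taylor(rn=450.0, g=50.0, t_air=18.0)
```

Because the model scales available energy (Rn - G) rather than responding to surface dryness, it naturally overestimates evaporation once soil moisture limits transpiration, which is the behaviour the thesis corrects with a water stress adjustment.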
|
186 |
Multi-scalarity of the forest fire phenomenon in the French Mediterranean regions from 1973 to 2006. Mangiavillano, Adrien, 20 November 2008
Fires in forest, heathland, garrigue, and maquis constitute an extremely serious problem for Euro-Mediterranean regions. Variations in this complex phenomenon arise from interactions between parameters of different natures, interactions that can be non-linear. Without being devoted solely to the study of this non-linearity, this thesis demonstrates, on the one hand, that it is possible to measure its effects on the statistical and physical properties of the phenomenon and, on the other hand, that self-organisation, chronological instability, and morphological irregularity intrinsically limit the predictability of such a phenomenon. Using the example of the French Mediterranean regions from 1973 to 2006 (Prométhée database), we analyse the different ways in which the phenomenon emerges depending on the geographical situation of the places studied and on the temporal and spatial scales considered. This essentially structural and statistical approach complements existing work on the subject and makes it possible to arrive at innovative indicators for a spatial differentiation centred on the question of scales. The stakes of this work are thus high, since it involves searching for recurrences, an order that transcends particular cases, while also echoing the way uncertainty is handled by field practitioners, engineers, physicists, climatologists, and geographers.
|
187 |
Statistical inference for stochastic optimization: applications in finance and production management. Guigues, Vincent, 30 June 2005
The purpose of this thesis is to model and analyse stochastic optimization problems and to propose solution methods for them. In the first part, we consider asset-allocation problems formulated as convex optimization problems. The cost function and the constraints depend on an unknown multidimensional parameter. We show, under a local temporal homogeneity assumption on the returns process, that approximations of the original problem can be constructed using an adaptive estimate of the unknown parameter. The accuracy of the approximate problem is provided. This method was applied to the VaR and Markowitz problems, and we present the results of numerical simulations on real and simulated data. We then propose a sensitivity analysis for a class of quadratic problems, from which a sensitivity analysis of the Markowitz problem is deduced. For this problem, we propose a stable calibration of the covariance matrix and robust counterparts. The second part deals with the analysis of production-management problems, in particular the electricity production management problem. We propose new models for this problem and the means to implement them. One of the models leads to a solution by price decomposition; in this case, we show how to compute the dual function by dynamic programming. Finally, we explain how, in each case, a management strategy is put in place. The different management methods are compared on real and simulated data.
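The sensitivity of the Markowitz problem to covariance estimation, and the value of a stable calibration, can be sketched briefly. The Python example below computes minimum-variance weights with a simple shrinkage of the sample covariance toward its diagonal, a basic stabilisation device offered only as an illustration; the thesis develops its own, more refined stable calibration.

```python
import numpy as np

def min_variance_weights(returns, shrink=0.2):
    """Minimum-variance Markowitz weights w = S^-1 1 / (1' S^-1 1), using a
    sample covariance shrunk toward its diagonal to stabilise the calibration.
    returns: array of shape (assets, periods)."""
    S = np.cov(returns)
    target = np.diag(np.diag(S))                   # diagonal shrinkage target
    S_shrunk = (1.0 - shrink) * S + shrink * target
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S_shrunk, ones)
    return w / w.sum()

# Hypothetical data: 4 assets, 250 daily returns
rng = np.random.default_rng(1)
returns = rng.normal(0.001, 0.02, size=(4, 250))
w = min_variance_weights(returns)
```

With shrink = 0 this reduces to the plain sample-covariance solution, whose weights can swing wildly with small perturbations of the data, which is precisely the instability a stable calibration addresses.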
|
188 |
The microdosimetric variance-covariance method used for beam quality characterization in radiation protection and radiation therapy. Lillhök, Jan Erik, January 2007
Radiation quality is described by the RBE (relative biological effectiveness) that varies with the ionizing ability of the radiation. Microdosimetric quantities describe distributions of energy imparted to small volumes and can be related to RBE. This has made microdosimetry a powerful tool for radiation quality determinations in both radiation protection and radiation therapy. The variance-covariance method determines the dose-average of the distributions and has traditionally been used with two detectors to correct for beam intensity variations. Methods to separate dose components in mixed radiation fields and to correct for beam variations using only one detector have been developed in this thesis. Quality factor relations have been optimized for different neutron energies, and a new algorithm that takes single energy deposition events from densely ionizing radiation into account has been formulated. The variance-covariance technique and the new methodology have been shown to work well in the cosmic radiation field onboard aircraft, in the mixed photon and neutron fields in the nuclear industry and in pulsed fields around accelerators.

The method has also been used for radiation quality characterization in therapy beams. The biological damage is related to track-structure and ionization clusters and requires descriptions of the energy depositions in nanometre sized volumes. It was shown that both measurements and Monte Carlo simulation (condensed history and track-structure) are needed for a reliable nanodosimetric beam characterization. The combined experimental and simulated results indicate that the dose-mean of the energy imparted to an object in the nanometre region is related to the clinical RBE in neutron, proton and photon beams. The results suggest that the variance-covariance technique and the dose-average of the microdosimetric quantities could be well suited for describing radiation quality also in therapy beams.
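The central estimator of the two-detector variance-covariance method is compact enough to state directly. The Python sketch below uses entirely synthetic data (real measurements involve calibrated detector signals, not unit-scale numbers) and the textbook form of the estimator: the dose-average is proportional to the relative variance of one detector's signal minus its relative covariance with a second detector, which removes beam-intensity fluctuations common to both.

```python
import numpy as np

def dose_mean_vc(dA, dB):
    """Two-detector variance-covariance estimate of the dose-average quantity
    for detector A: mean(dA) * (relative variance of dA - relative covariance
    of dA and dB). The covariance term removes the common beam fluctuation."""
    dA = np.asarray(dA, dtype=float)
    dB = np.asarray(dB, dtype=float)
    rel_var = np.var(dA) / np.mean(dA) ** 2
    rel_cov = np.cov(dA, dB, ddof=0)[0, 1] / (np.mean(dA) * np.mean(dB))
    return np.mean(dA) * (rel_var - rel_cov)

# Synthetic example: a common 5% beam-intensity fluctuation plus
# independent 10% stochastic energy-deposition noise in each detector
rng = np.random.default_rng(3)
n = 10000
beam = 1.0 + 0.05 * rng.standard_normal(n)
dA = beam * (1.0 + 0.10 * rng.standard_normal(n))
dB = beam * (1.0 + 0.10 * rng.standard_normal(n))
z_d = dose_mean_vc(dA, dB)
```

In this toy setup the raw relative variance of dA mixes the 5% beam fluctuation with the 10% stochastic component; subtracting the relative covariance leaves only the stochastic part, which is what the single-detector correction methods developed in the thesis achieve without the second detector.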
|
189 |
Identification of stochastic systems: Subspace methods and covariance extension. Dahlen, Anders, January 2001
No description available.
|