301 |
Stochastic Hydrologic Modeling in Real Time Using a Deterministic Model (Streamflow Synthesis and Reservoir Regulation Model), Time Series Model, and Kalman Filter. Tang, Philip Kwok Fan. 08 November 1991.
The basic concepts of hydrologic forecasting using the Streamflow Synthesis And Reservoir Regulation (SSARR) Model of the U.S. Army Corps of Engineers, autoregressive moving-average (ARMA) time series models (including Green's functions, inverse functions, autocovariance functions, and the model estimation algorithm), and the Kalman filter (including state-space modeling, system uncertainty, and the filter algorithm) were explored. A computational experiment was conducted in which the Kalman filter was used to update the Mehama local basin model, a typical SSARR basin model, with streamflow measurements as they became available in simulated real time (Mehama is a 227-square-mile watershed on the North Santiam River near Salem, Oregon). Among the candidate AR and ARMA models, an ARMA(1,1) time series model was selected as the best fit for the residual of the basin model and was used to augment the streamflow forecasts produced by the local basin model in simulated real time. Despite the limitations imposed by the quality of the moisture input forecast and by the design and calibration of the basin model, the experiment shows that the new stochastic methods significantly improve the flood forecast accuracy of the SSARR model.
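As an illustrative aside (not the thesis's code), the sketch below shows the core of the approach described above: an assumed ARMA(1,1) model of the basin-model residual, written in state-space form and updated by a Kalman filter as each streamflow observation arrives, with the filtered residual added back to the deterministic SSARR forecast. All parameter values and flows are invented.

```python
import numpy as np

# Hypothetical illustration: ARMA(1,1) residual model in Harvey state-space form,
# assimilated by a Kalman filter in simulated real time.
phi, theta, sigma2 = 0.8, 0.3, 1.0   # assumed ARMA(1,1) parameters

F = np.array([[phi, 1.0],
              [0.0, 0.0]])           # state transition
g = np.array([[1.0], [theta]])       # noise loading
Q = sigma2 * g @ g.T                 # process-noise covariance
H = np.array([[1.0, 0.0]])           # observation picks out the residual
R = 1e-6                             # near-exact observation of the residual

x = np.zeros((2, 1))                 # state estimate
P = np.eye(2)                        # state covariance

def kalman_step(x, P, y):
    """Assimilate one observed residual y = observed flow - SSARR forecast."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update
    v = y - (H @ x_pred).item()      # innovation
    S = (H @ P_pred @ H.T).item() + R
    K = P_pred @ H.T / S
    x_new = x_pred + K * v
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

def corrected_forecast(ssarr_forecast, x, steps=1):
    """Add the filtered ARMA(1,1) residual forecast to the deterministic forecast."""
    resid = (H @ np.linalg.matrix_power(F, steps) @ x).item()
    return ssarr_forecast + resid

# Simulated real-time loop over (invented) observed flows and SSARR forecasts
observed = [102.0, 110.0, 125.0]
forecasts = [95.0, 104.0, 118.0]
for obs, fc in zip(observed, forecasts):
    x, P = kalman_step(x, P, obs - fc)
    print(corrected_forecast(fc, x, steps=1))
```

In practice the ARMA parameters would be estimated from historical residuals and the observation noise R chosen to reflect gauge accuracy.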
|
302 |
Valeur monétaire de modifications permanentes au niveau de santé : un essai d'estimation basé sur les fonctions de bien-être individuelles [Monetary value of permanent changes in health status: an attempt at estimation based on individual welfare functions]. Bastien, Michel. January 1983.
No description available.
|
303 |
Structural equation modeling by extended redundancy analysis. Hwang, Heungsun, 1969-. January 2000.
No description available.
|
304 |
Comparison of the new "econophysics" approach to dealing with problems of financial to traditional econometric methods. Koh, Jason S. H., University of Western Sydney, College of Business, School of Economics and Finance. January 2008.
We begin by outlining the motivation for this research, as there are still many unanswered research questions about our complex financial and economic systems. The philosophical background and the advances of econometrics and econophysics are discussed to provide an overview of stochastic and non-stochastic modelling, and these disciplines are set as a central theme of the thesis. The thesis investigates the effectiveness of financial econometric models such as the Gaussian, ARCH(1), GARCH(1,1) and its extensions, compared with econophysics models such as the Power Law model, Boltzmann-Gibbs (BG) and Tsallis entropy, as statistical models of volatility in the US S&P 500, Dow Jones and NASDAQ stock indices using daily data. The data exhibit several distinct behavioural characteristics, particularly the increased volatility from 1998 to 2004. Power Laws appear to describe the large fluctuations and other characteristics of stock price changes. Surprisingly, these Power Law models also show significant correlations for different types and sizes of markets and for different periods and sub-periods. The results show the robustness of Power Law analysis, with the Power Law exponent (0.4 to 2.4) staying within the acceptable range of significance (83% to 97%), regardless of the percentage change in the index return. However, testing empirical data against a hypothesised power-law distribution using a simple rank-frequency plot and data binning can produce spurious results for the distribution. Stochastic models such as ARCH(1) and GARCH(1,1) are explicitly confined to the conditional behaviour of the data, and the unconditional behaviour has often been described via moments. In reality, it is the unconditional tail behaviour that matters, and hence we express the models as a two-dimensional stochastic difference equation using the processes of Starica (Mikosch 2000). The results show that the random walk prediction successfully describes stock movements for small price fluctuations but fails to handle large ones. The Power Law tests prove superior to the stochastic tests when stock price fluctuations are substantially divergent from the mean. One of the main points of the thesis is that these empirical phenomena are not present in the stochastic process but emerge in the non-parametric process. The main objective of the thesis is to study the relatively new field of Econophysics and put its work in perspective relative to the established, if not altogether successful, practice of econometric analysis of stock market volatility. One of the most exciting characteristics of Econophysics is that, as a developing field, no model yet perfectly represents the market, and there is still much fundamental research to be done. We therefore explore the application of statistical physics methods, particularly Tsallis entropy, to give new insights into problems traditionally associated with financial markets. The Tsallis entropy results surpass expectations, and it proves to be one of the most robust methods of analysis. However, it is now subject to some challenge from McCauley, Bassler et al., who found that the stochastic dynamic process (sliding-interval techniques) used in fat-tail distributions is time dependent. / Doctor of Philosophy (PhD)
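As a hedged illustration of the point about spurious power-law fits (not the thesis's code), the sketch below contrasts a naive log-log rank-frequency regression with a Clauset-style maximum-likelihood estimate of the tail exponent on synthetic Pareto-distributed returns; the cutoff x_min and the true exponent are assumptions.

```python
import numpy as np

# Two ways to estimate a power-law tail exponent for absolute index returns:
# a naive log-log rank-frequency fit (the kind the abstract warns about) and
# the maximum-likelihood (Hill-type) estimator for a continuous power law.
rng = np.random.default_rng(0)
alpha_true, x_min = 3.0, 1.0
returns = x_min * (1 - rng.random(5000)) ** (-1 / (alpha_true - 1))  # Pareto tail

tail = np.sort(returns[returns >= x_min])

# 1) Naive approach: ordinary least squares on the log-log rank-frequency plot.
ranks = np.arange(len(tail), 0, -1)                 # complementary rank
slope, _ = np.polyfit(np.log(tail), np.log(ranks), 1)
alpha_rank = 1 - slope                              # CCDF slope = -(alpha - 1)

# 2) Maximum-likelihood estimator for a continuous power law above x_min.
n = len(tail)
alpha_mle = 1 + n / np.sum(np.log(tail / x_min))

print(f"rank-frequency fit: {alpha_rank:.2f}, MLE: {alpha_mle:.2f}")
```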
|
305 |
Automating the aetiological classification of descriptive injury data. Shepherd, Gareth William, Safety Science, Faculty of Science, UNSW. January 2006.
Injury now surpasses disease as the leading global cause of premature death and disability, claiming over 5.8 million lives each year. However, unlike disease, which has been subjected to a rigorous epidemiologic approach, the field of injury prevention and control has been a relative newcomer to scientific investigation. With the distribution of injury now well described (i.e. "who", "what", "where" and "when"), the underlying hypothesis is that progress in understanding "how" and "why" lies in classifying injury occurrences aetiologically. The advancement of a means of classifying injury aetiology has so far been inhibited by two related limitations: (1) a structural limitation, the absence of a cohesive and validated aetiological taxonomy for injury; and (2) a methodological limitation, the need to manually classify large numbers of injury cases to determine aetiological patterns. This work is directed at overcoming these impediments to injury research. An aetiological taxonomy for injury was developed consistent with epidemiologic principles, along with clear conventions and a defined three-tier hierarchical structure. Validation testing revealed that the taxonomy could be applied with a high degree of accuracy (coder/gold standard agreement was 92.5-95.0%) and with high inter- and intra-coder reliability (93.0-96.3% and 93.5-96.3%). Practical application demonstrated the emergence of strong aetiological patterns which provided insight into causative sequences leading to injury, and led to the identification of effective control measures to reduce injury frequency and severity. However, limitations related to the inefficient and error-prone manual classification process (an average processing time of 4.75 minutes per case and a 5.0-7.5% error rate) revealed the need for an automated approach. To overcome these limitations, a knowledge acquisition (KA) software tool was developed, tested and applied, based on an expert-systems technique known as ripple down rules (RDR). It was found that the KA system was able to acquire tacit knowledge from a human expert and apply the learned rules to classify large numbers of injury cases efficiently and accurately. Ultimately, coding error rates dropped to 3.1%, which, along with an average 2.50-minute processing time, compared favourably with the results from manual classification. As such, the developed taxonomy and KA tool offer significant advantages to injury researchers who need to deduce useful patterns from injury data and test hypotheses regarding causation and prevention.
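For readers unfamiliar with ripple down rules, the following minimal sketch (not the thesis's tool) shows the single-classification RDR structure the abstract refers to: each rule carries an exception branch taken when it fires and an else branch taken when it does not, so an expert can refine the knowledge base by attaching exception rules to misclassified cases. The conditions, class labels and case narratives are invented.

```python
# Minimal single-classification ripple down rules (RDR) sketch.
class Rule:
    def __init__(self, condition, conclusion, except_rule=None, else_rule=None):
        self.condition = condition        # callable: case -> bool
        self.conclusion = conclusion      # aetiological class label
        self.except_rule = except_rule    # refine when this rule fires but is wrong
        self.else_rule = else_rule        # try next rule when this one does not fire

    def classify(self, case, last=None):
        if self.condition(case):
            last = self.conclusion
            return self.except_rule.classify(case, last) if self.except_rule else last
        return self.else_rule.classify(case, last) if self.else_rule else last

# The root rule always fires and gives a default class; the expert adds
# exceptions whenever the current knowledge base misclassifies a case.
root = Rule(lambda c: True, "unclassified")
root.except_rule = Rule(lambda c: "fall" in c, "fall from height")
root.except_rule.except_rule = Rule(lambda c: "ladder" in c, "fall from ladder")
root.except_rule.else_rule = Rule(lambda c: "forklift" in c, "struck by mobile plant")

print(root.classify("worker had a fall from a ladder while painting"))   # fall from ladder
print(root.classify("pedestrian struck by forklift in warehouse"))       # struck by mobile plant
```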
|
306 |
Statistical exploratory analysis of genetic algorithms. Czarn, Andrew Simon Timothy. January 2008.
[Truncated abstract] Genetic algorithms (GAs) have been extensively used and studied in computer science, yet there is no generally accepted methodology for exploring which parameters significantly affect performance, whether there is any interaction between parameters and how performance varies with respect to changes in parameters. This thesis presents a rigorous yet practical statistical methodology for the exploratory study of GAs. This methodology addresses the issues of experimental design, blocking, power and response curve analysis. It details how statistical analysis may assist the investigator along the exploratory pathway. The statistical methodology is demonstrated in this thesis using a number of case studies with a classical genetic algorithm with one-point crossover and bit-replacement mutation. In doing so we answer a number of questions about the relationship between the performance of the GA and the operators and encoding used. The methodology is, however, suitable for application to other adaptive optimization algorithms not treated in this thesis. In the first instance, as an initial demonstration of our methodology, we describe case studies using four standard test functions. It is found that the effect upon performance of crossover is predominantly linear while the effect of mutation is predominantly quadratic. Higher order effects are noted but contribute less to overall behaviour. In the case of crossover, both positive and negative gradients are found, which suggests using rates as high as possible for some problems while possibly excluding crossover for others. .... This is illustrated by showing how the use of Gray codes impedes the performance on a lower modality test function compared with a higher modality test function. Computer animation is then used to illustrate the actual mechanism by which this occurs. Fourthly, the traditional concept of a GA is that of selection, crossover and mutation. However, a limited amount of data from the literature has suggested that the niche for the beneficial effect of crossover upon GA performance may be smaller than has traditionally been held. Based upon previous results on problems that are not linearly separable, an exploration is made by comparing two test problem suites, one comprising non-rotated functions and the other comprising the same functions rotated by 45 degrees in the solution space, rendering them not linearly separable. It is shown that for the difficult rotated functions the crossover operator is detrimental to the performance of the GA. It is conjectured that what makes a problem difficult for the GA is complex and involves factors such as the degree of optimization at local minima due to crossover, the bias associated with the mutation operator and the Hamming distances present in the individual problems due to the encoding. Furthermore, the GA was tested on a real world landscape minimization problem to see if the results obtained would match those from the difficult rotated functions. It is demonstrated that they match and that the features which make certain of the test functions difficult are also present in the real world problem. Overall, the proposed methodology is found to be an effective tool for revealing relationships between a randomized optimization algorithm and its encoding and parameters that are difficult to establish from more ad-hoc experimental studies alone.
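The sketch below is a compact illustration of the kind of classical GA examined in these case studies, with one-point crossover and bit-replacement mutation on a binary encoding (here maximising a simple OneMax test function); the population size, rates and test function are illustrative assumptions, not the thesis's experimental settings.

```python
import numpy as np

# Classical GA: binary encoding, tournament selection, one-point crossover,
# bit-replacement mutation; OneMax used as a toy test function.
rng = np.random.default_rng(1)

def fitness(bits):
    return bits.sum()                       # OneMax: maximise the number of ones

def one_point_crossover(a, b):
    point = rng.integers(1, len(a))
    return (np.concatenate([a[:point], b[point:]]),
            np.concatenate([b[:point], a[point:]]))

def bit_replacement_mutation(bits, rate):
    mask = rng.random(len(bits)) < rate
    bits = bits.copy()
    bits[mask] = rng.integers(0, 2, mask.sum())   # replace with a random bit
    return bits

def run_ga(n_bits=32, pop_size=40, generations=100, p_cross=0.7, p_mut=0.01):
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # binary tournament selection
        parents = [pop[max(rng.integers(0, pop_size, 2), key=lambda i: scores[i])]
                   for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            if rng.random() < p_cross:
                a, b = one_point_crossover(a, b)
            children += [bit_replacement_mutation(a, p_mut),
                         bit_replacement_mutation(b, p_mut)]
        pop = np.array(children)
    return max(pop, key=fitness)

print(fitness(run_ga()))
```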
|
307 |
Prosodic features for a maximum entropy language model. Chan, Oscar. January 2008.
A statistical language model attempts to characterise the patterns present in a natural language as a probability distribution defined over word sequences. Typically, they are trained using word co-occurrence statistics from a large sample of text. In some language modelling applications, such as automatic speech recognition (ASR), the availability of acoustic data provides an additional source of knowledge. This contains, amongst other things, the melodic and rhythmic aspects of speech referred to as prosody. Although prosody has been found to be an important factor in human speech recognition, its use in ASR has been limited. The goal of this research is to investigate how prosodic information can be employed to improve the language modelling component of a continuous speech recognition system. Because prosodic features are largely suprasegmental, operating over units larger than the phonetic segment, the language model is an appropriate place to incorporate such information. The prosodic features and standard language model features are combined under the maximum entropy framework, which provides an elegant solution to modelling information obtained from multiple, differing knowledge sources. We derive features for the model based on perceptually transcribed Tones and Break Indices (ToBI) labels, and analyse their contribution to the word recognition task. While ToBI has a solid foundation in linguistic theory, the need for human transcribers conflicts with the statistical model's requirement for a large quantity of training data. We therefore also examine the applicability of features which can be automatically extracted from the speech signal. We develop representations of an utterance's prosodic context using fundamental frequency, energy and duration features, which can be directly incorporated into the model without the need for manual labelling. Dimensionality reduction techniques are also explored with the aim of reducing the computational costs associated with training a maximum entropy model. Experiments on a prosodically transcribed corpus show that small but statistically significant reductions to perplexity and word error rates can be obtained by using both manually transcribed and automatically extracted features.
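As an illustrative sketch only (not the thesis's system), the example below combines a lexical feature with automatically extracted prosodic features (F0, energy, duration) in a maximum entropy model, here realised as multinomial logistic regression over a toy vocabulary; all data and feature choices are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy maximum entropy model: predict the next word from the previous-word
# identity plus prosodic measurements, combined in one feature vector.
vocab = ["yes", "no", "maybe"]
prev_word_ids = np.array([0, 1, 2, 0, 1, 2, 0, 1])        # lexical history feature
prosody = np.array([[180.0, 0.9, 0.30],                    # [F0 (Hz), energy, duration (s)]
                    [110.0, 0.4, 0.20],
                    [150.0, 0.6, 0.45],
                    [185.0, 0.8, 0.28],
                    [105.0, 0.5, 0.22],
                    [145.0, 0.7, 0.50],
                    [190.0, 0.9, 0.31],
                    [100.0, 0.4, 0.19]])
next_word = np.array([1, 0, 2, 1, 0, 2, 1, 0])             # target word index

# One-hot encode the lexical feature and append normalised prosodic features,
# mirroring how heterogeneous knowledge sources are combined under maxent.
lexical = np.eye(len(vocab))[prev_word_ids]
prosodic = (prosody - prosody.mean(axis=0)) / prosody.std(axis=0)
X = np.hstack([lexical, prosodic])

model = LogisticRegression(max_iter=1000).fit(X, next_word)
probs = model.predict_proba(X[:1])[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)})
```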
|
308 |
The prediction and management of the variability of manufacturing operations. Steele, Clint. January 2005.
Aim: To investigate methods that can be used to predict and manage the effects of manufacturing variability on product quality during the design process.
Methodology: The preliminary investigation is a review and analysis of probabilistic methods and quality metrics. Based on this analysis, convenient robustification methods are developed. In addition, the nature of the flow of variability in a system is considered. This is then used to ascertain the information needed for an input variable when predicting the quality of a proposed design.
The second, and major, part of the investigation is a case-by-case analysis of a collection of manufacturing operations and material properties. Each is initially analysed from first principles. On completion, the fundamental causes of variability of the key characteristic(s) are identified. Where possible, the expected variability for each of those characteristics has been determined. Where this determination was not possible, qualitative conclusions about the variability are made instead. In each case, findings on the prediction and management of manufacturing variability are made.
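By way of illustration only (not drawn from the thesis), the sketch below shows the flavour of such probabilistic prediction: input-variable variability is propagated to a key characteristic by a first-order approximation and by Monte Carlo, and the result is scored with a process capability metric. The clearance function, tolerances and specification limits are invented.

```python
import numpy as np

# Propagate manufacturing variability to a key characteristic and score it.
rng = np.random.default_rng(2)

def key_characteristic(shaft_d, bore_d):
    """Key characteristic of the design: radial clearance between shaft and bore."""
    return (bore_d - shaft_d) / 2.0

# Input variables: nominal values and standard deviations from the processes (mm)
shaft_mu, shaft_sigma = 10.00, 0.015
bore_mu, bore_sigma = 10.10, 0.020

# 1) First-order (Taylor) propagation: clearance is linear in the inputs,
#    so Var = (1/2)^2 * (sigma_shaft^2 + sigma_bore^2).
sigma_linear = 0.5 * np.sqrt(shaft_sigma**2 + bore_sigma**2)

# 2) Monte Carlo propagation (also works for nonlinear characteristics).
shaft = rng.normal(shaft_mu, shaft_sigma, 100_000)
bore = rng.normal(bore_mu, bore_sigma, 100_000)
clearance = key_characteristic(shaft, bore)

# Quality metric: process capability against clearance specification limits (mm).
lsl, usl = 0.02, 0.08
cpk = min(usl - clearance.mean(), clearance.mean() - lsl) / (3 * clearance.std())

print(f"sigma (linear) = {sigma_linear:.4f}, sigma (MC) = {clearance.std():.4f}, Cpk = {cpk:.2f}")
```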
|
309 |
Hierarchical capture-recapture models. Schofield, Matthew R. January 2007.
A defining feature of capture-recapture is missing data due to imperfect detection of individuals. The standard approach used to deal with the missing data is to integrate (or sum) over all the possible unknown values. The missing data is completely removed and the resulting likelihood is in terms of the observed data. The problem with this approach is that often biologically unnatural parameters are chosen to make the integration (summation) tractable. A related consequence is that latent variables of interest, such as the population size and the number of births, are only available as derived quantities. As they are not explicitly in the model, they cannot be used as covariates to describe population dynamics. Therefore, models including density dependence cannot be examined using standard methods.
Instead of explicitly integrating out missing data, we choose to include it using data augmentation. Instead of being removed, the missing data is now present in the likelihood as if it were actually observed. This means that we are able to specify models in terms of the data we would like to have observed, instead of the data we actually did observe. Having the complete data allows us to separate the processes of demographic interest from the sampling process. The separation means that we can focus on specifying the model for the demographic processes without worrying about the sampling model. Therefore, we no longer need to choose parameters in order to simplify the removal of missing data, but we are free to naturally write the model in terms of parameters that are of demographic interest. A consequence of this is that we are able write complex models in terms of a series of simpler conditional likelihood components. We show an example of this where we fit a CJS model that has an individual-specific time-varying covariate as well as live re-sightings and dead recoveries.
Data augmentation is naturally hierarchical, with parameters that are specified as random effects treated as any other latent variable and included into the likelihood. These hierarchical random effects models make it possible to explore stochastic relationships both (i) between parameters in the model, and (ii) between parameters and any covariates that are available.
Including all of the missing data means that latent variables of interest, including the population size and the number of births, can now be included and used in the model. We present an example where we use the population size (i) to allow us to parameterize birth in terms of the per-capita birth rates, and (ii) as a covariate for both the per-capita birth rate and the survival probabilities in a density dependent relationship.
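A schematic sketch of the data-augmentation idea (not the thesis's code): once the latent alive states z are included as if observed, the complete-data log-likelihood of a simple CJS model factorises into a survival (demographic) term and a detection (sampling) term. The capture histories, latent states and parameter values below are invented.

```python
import numpy as np

# Complete-data CJS log-likelihood, conditioned on first capture at occasion 0.
y = np.array([[1, 1, 0, 1],        # capture histories (1 = detected)
              [1, 0, 0, 0],
              [1, 1, 1, 0]])
z = np.array([[1, 1, 1, 1],        # latent alive states (imputed/augmented)
              [1, 1, 0, 0],
              [1, 1, 1, 1]])
phi, p = 0.8, 0.6                  # survival probability, detection probability

def complete_data_loglik(y, z, phi, p):
    ll = 0.0
    n_ind, n_occ = y.shape
    for i in range(n_ind):
        for t in range(1, n_occ):
            if z[i, t - 1] == 1:                       # alive at t-1
                # demographic process: survive to t with probability phi
                ll += np.log(phi) if z[i, t] == 1 else np.log(1 - phi)
            if z[i, t] == 1:                           # sampling process, given alive
                ll += np.log(p) if y[i, t] == 1 else np.log(1 - p)
    return ll

print(complete_data_loglik(y, z, phi, p))
```

Because the two terms separate, the survival part can be elaborated (random effects, density dependence via the population size) without touching the detection part.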
|
310 |
New tools and approaches to uncertainty estimation in complex ecological models. Brugnach, Marcela. 19 December 2002.
This dissertation investigates the problem of uncertainty in complex ecological models. The term "complex" is used to convey both the common and scientific meanings. Increasingly, ecological models have become complex because they are more complicated; ecological models are generally multi-variate and multi-leveled in structure. Many ecological models are complex because they simulate the dynamics of complex systems. As a result, and as science moves from the modern/normal to the postmodern/post-normal paradigm view of the world, the definition of uncertainty and the problem of uncertainty estimation in models tread the lines between the technical and the philosophical. With this in mind, I have chosen to examine uncertainty from several perspectives and under the premise that the needs and goals of uncertainty estimation, like ecological models themselves, are evolving. Each chapter represents a specific treatment of uncertainty and introduces new methodologies to evaluate the nature, source, and significance of model uncertainty. In the second chapter, 'Determining the significance of threshold values uncertainty in rule-based classification models', I present a sensitivity analysis methodology to determine the significance of uncertainty in spatially explicit rule-based classification models. In the third chapter, 'Process level sensitivity analysis for complex ecological models', I present a sensitivity analysis methodology at the process level, to determine the sensitivity of a model to variations in the processes it describes. In the fourth chapter, 'A Component Based Approach for the Development of Ecological Simulations', I investigate how the process of developing an ecological simulation can be advanced by using component-based simulation frameworks. I conclude with reflections on the future of modeling and studies of uncertainty. / Graduation date: 2003
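As a hedged illustration of the second-chapter idea (not the dissertation's code), the sketch below perturbs the threshold values of a toy rule-based classification over synthetic raster cells and reports the fraction of cells that change class, a simple sensitivity measure; the rule, thresholds and data are invented.

```python
import numpy as np

# Threshold-value sensitivity analysis for a toy rule-based classification.
rng = np.random.default_rng(3)

def classify(elevation, moisture, elev_thresh=800.0, moist_thresh=0.4):
    """Toy rule-based habitat classification over raster cells."""
    highland = elevation > elev_thresh
    wet = moisture > moist_thresh
    return np.select([highland & wet, highland & ~wet, ~highland & wet],
                     [0, 1, 2], default=3)

elevation = rng.uniform(200, 1500, 10_000)      # synthetic raster cells
moisture = rng.uniform(0.0, 1.0, 10_000)
baseline = classify(elevation, moisture)

# Perturb each threshold by +/- 10% and report the proportion of cells reclassified.
for name, kwargs in [("elev_thresh", {"elev_thresh": 800.0 * 1.1}),
                     ("elev_thresh", {"elev_thresh": 800.0 * 0.9}),
                     ("moist_thresh", {"moist_thresh": 0.4 * 1.1}),
                     ("moist_thresh", {"moist_thresh": 0.4 * 0.9})]:
    changed = np.mean(classify(elevation, moisture, **kwargs) != baseline)
    print(f"{name} -> {kwargs}: {changed:.1%} of cells change class")
```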
|