141 |
Exact probabilities of the Kolmogorov-Smirnov statistic
Hefner, Oscar Vernon 08 1900 (has links)
No description available.
|
142 |
Bounds for the probability of a union
Grenon, Gilles January 1975 (has links)
No description available.
|
143 |
Testing for uniformity on the sphere
Berengut, David. January 1969 (has links)
No description available.
|
144 |
HIERARCHICAL BAYESIAN MODELLING FOR THE ANALYSIS OF THE LACTATION OF DAIRY ANIMALS
Lombaard, Carolina Susanna 03 November 2006 (has links)
This thesis was written with the aim of modelling the lactation process in dairy cows and
goats by applying a hierarchical Bayesian approach. Information on cofactors that could
possibly affect lactation is included in the model through a novel approach using covariates.
Posterior distributions of quantities of interest are obtained by means of Markov chain
Monte Carlo methods. Prediction of future lactation cycle(s) is also performed.
In chapter one lactation is defined, its characteristics considered, the factors that could
possibly influence lactation mentioned, and the reasons for modelling lactation explained.
Chapter two provides a historical perspective to lactation models, considers typical lactation
curve shapes and curves fitted to the lactation composition traits fat and protein of milk.
Attention is also paid to persistency of lactation.
Chapter three considers alternative methods of obtaining total yield and producing Standard
Lactation Curves (SLACs). Attention is paid to methods used in fitting lactation curves and
the assumptions about the errors.
In chapter four the generalised Bayesian model approach used to simultaneously model more
than one lactation trait, while also incorporating information on cofactors that could possibly
influence lactation, is developed. Special attention is paid not only to the model for complete
data, but also how modelling is adjusted to make provision for cases where not all lactation
cycles have been observed for all animals, also referred to as incomplete data. The use of the
Gibbs sampler and the Metropolis-Hastings algorithm in determining marginal posterior
distributions of model parameters and quantities that are functions of such parameters is also
discussed. Prediction of future lactation cycles using the model is also considered.
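As a rough illustration of the sampling machinery referred to above, the sketch below shows a generic Metropolis-within-Gibbs scheme: each parameter is updated in turn, with a random-walk Metropolis-Hastings step standing in for any full conditional that has no closed form. It is a minimal sketch and not the code used in the thesis; the function name, step size, toy target and log-conditional interface are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_within_gibbs(log_cond, theta0, n_iter=2000, step=0.5):
    """log_cond[j](value, theta) returns the log full conditional of component j given the rest."""
    theta = np.array(theta0, dtype=float)
    draws = np.empty((n_iter, theta.size))
    for it in range(n_iter):
        for j in range(theta.size):
            prop = theta[j] + step * rng.normal()              # random-walk proposal
            log_ratio = log_cond[j](prop, theta) - log_cond[j](theta[j], theta)
            if np.log(rng.uniform()) < log_ratio:              # Metropolis-Hastings accept/reject
                theta[j] = prop
        draws[it] = theta
    return draws                                               # marginal posteriors are read off the columns

# Toy target: two independent standard normal "parameters", for illustration only.
log_cond = [lambda v, th: -0.5 * v**2, lambda v, th: -0.5 * v**2]
samples = metropolis_within_gibbs(log_cond, theta0=[0.0, 0.0])
print(samples.mean(axis=0), samples.std(axis=0))
```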
In chapter five the Bayesian approach together with the Wood model, applied to 4564
lactation cycles of 1141 Jersey cows, is used to illustrate the approach to modelling and
prediction of milk yield, percentage of fat and percentage of protein in milk composition in
the case of complete data. The incorporation of cofactor information through the use of the
covariate matrix is also considered in greater detail. The results from the Gibbs sampler are
evaluated and convergence thereof investigated. Attention is also paid to the expected
lactation curve characteristics as defined by Wood, as well as obtaining the expected lactation
curve of one of the levels of a cofactor when the influence of the other cofactors on the
lactation curve has been eliminated.
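For reference, the Wood model referred to above is the curve y(t) = a t^b exp(-c t), and the expected lactation curve characteristics follow from it in closed form: time to peak b/c, peak yield a (b/c)^b exp(-b), Wood's persistency measure -(b + 1) ln c, and an approximate total yield a Gamma(b + 1) / c^(b + 1). The sketch below simply evaluates these quantities; the parameter values are hypothetical and are not estimates from the thesis.

```python
import numpy as np
from math import gamma, log, exp

def wood_curve(t, a, b, c):
    """Daily yield on day t of lactation under the Wood model y(t) = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

def wood_characteristics(a, b, c):
    peak_time = b / c                                  # days from calving to peak yield
    peak_yield = a * (b / c)**b * exp(-b)              # yield at the peak
    persistency = -(b + 1.0) * log(c)                  # Wood's persistency measure
    total_yield = a * gamma(b + 1.0) / c**(b + 1.0)    # yield integrated over an unbounded lactation
    return peak_time, peak_yield, persistency, total_yield

# Hypothetical parameter values, chosen for illustration only.
print(wood_characteristics(a=15.0, b=0.25, c=0.004))
```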
Chapter six considers the use of the Bayesian approach together with the general exponential
and 4-parameter Morant model, as well as an adaptation of a model suggested by Wilmink, in
modelling and predicting milk yield, fat content and protein content of milk for the Jersey
data.
In chapter seven a diagnostic comparison by means of Bayes factors of the results from the
four models in the preceding two chapters, when used together with the Bayesian approach, is
performed. As a result, the adapted form of the Wilmink model fared best of the models
considered.
Chapter eight illustrates the use of the Bayesian approach, together with the four lactation
models considered in this study, to predict the lactation traits for animals similar to, but not
contained in the data used to develop the respective models.
In chapter nine the Bayesian approach together with the Wood model, applied to 755 lactation
cycles of 493 Saanen does collected during either or both of two consecutive years, is used to
illustrate the approach to modelling and predicting milk yield, percentage of fat and
percentage of protein in milk in the case of incomplete data.
Chapter ten provides a summary of the results and a perspective of the contribution of this
research to lactation modelling.
|
145 |
ON THE USE OF EXTREME VALUE THEORY IN ENERGY MARKETS
Micali, V 16 November 2007 (has links)
The intent of the thesis is to provide a set of statistical methodologies in the field of Extreme Value
Theory (EVT) with a particular application to energy losses, in Gigawatt-hours (GWh), experienced
by electrical generating units (GUs). Due to the complexity of the energy market, the thesis focuses
on the volume loss only and does not expand into the price, cost or mixes thereof (although the strong
relationship between volume and price is acknowledged; some initial work on the energy price [SMP]
is provided in Appendix B). Hence, occurrences of excessive unexpected energy losses incurred by
these GUs define the problem.
Exploratory Data Analysis (EDA) structures the data and attempts to give an indication of how the
excessive losses should be categorised. The size of the GU failure is also investigated from an
aggregated perspective to relate it to the Generation System. Here the effect of concomitant variables
(such as the Load Factor imposed by the market) is emphasised. Cluster Analysis (2-Way Joining)
provides an initial categorising technique. EDA highlights the lack of a scientific approach for
determining when a large loss is sufficiently large to affect the System.
The use of EVT shows that the GWh Losses tend to behave as a variable in the Fréchet domain of
attraction. The Block Maxima (BM) and Peak-Over-Threshold (POT) methods are investigated, the
latter in both semi-parametric and fully parametric form. The POT methodologies are both applicable.
Of particular interest are the Q-Q plot results for the semi-parametric POT method, which fit the data
satisfactorily (pp 55-56). The Generalised Pareto Distribution (GPD) models the tail of the GWh
Losses above a threshold well under the fully parametric POT method. Different methodologies were
explored in determining the parameters of the GPD. The method of 3-LM (linear combinations of
Probability Weighted Moments) is used to arrive at initial estimates of the GPD parameters. A GPD is
finally parameterised for the GWh Losses above 766 GWh.
The Bayesian philosophy is also utilised in this thesis as it provides a predictive distribution of the
high quantiles of the large GWh Losses. This part of the thesis uses the ratio of the Mean Excess
Function (the expectation of a loss above a certain threshold) to the probability of exceeding the
threshold as an indicator, and establishes the minimum of this ratio. The technique was developed for
the GPD by using the Fisher Information Matrix (FIM) and the Delta-Method. Prediction of high
quantiles was done by using Markov Chain Monte Carlo (MCMC) and eliciting the GPD Maximal
Data Information (MDI) prior. The last EVT methodology investigated in the thesis is the one that
uses the Dirichlet process and the method of Negative Differential Entropy (NDE). The thesis also
opened new areas of pertinent research.
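A minimal sketch of the POT step described above, using simulated heavy-tailed losses rather than the thesis data: exceedances over a threshold are fitted with a Generalised Pareto Distribution and the empirical Mean Excess Function is evaluated at that threshold. The 766 GWh figure is taken from the abstract; the simulated data, the library calls and the quantile level are assumptions made for the illustration.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.pareto(2.0, size=2000) * 300.0            # hypothetical heavy-tailed GWh losses

def mean_excess(x, u):
    """Empirical Mean Excess Function E[X - u | X > u]."""
    exc = x[x > u] - u
    return exc.mean() if exc.size else np.nan

threshold = 766.0                                      # threshold quoted in the abstract
exceedances = losses[losses > threshold] - threshold
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)          # POT: GPD fitted to the exceedances
high_loss = threshold + genpareto.ppf(0.99, shape, loc=0.0, scale=scale)  # 99th percentile of losses, given exceedance
print(shape, scale, mean_excess(losses, threshold), high_loss)
```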
|
146 |
BAYESIAN INFERENCE FOR LINEAR AND NONLINEAR FUNCTIONS OF POISSON AND BINOMIAL RATES
Raubenheimer, Lizanne 16 August 2012 (has links)
This thesis focuses on objective Bayesian statistics, by evaluating a number of noninformative priors.
Choosing the prior distribution is the key to Bayesian inference. The probability matching prior for
the product of different powers of k binomial parameters is derived in Chapter 2. In the case of two
and three independently distributed binomial variables, the Jeffreys, uniform and probability matching
priors for the product of the parameters are compared. This research is an extension of the work by
Kim (2006), who derived the probability matching prior for the product of k independent Poisson
rates. In Chapter 3 we derive the probability matching prior for a linear combination of binomial
parameters. The construction of Bayesian credible intervals for the difference of two independent
binomial parameters is discussed.
The probability matching prior for the product of different powers of k Poisson rates is derived in
Chapter 4. This is achieved by using the differential equation procedure of Datta & Ghosh (1995). The
reference prior for the ratio of two Poisson rates is also obtained. Simulation studies are done to
compare different methods for constructing Bayesian credible intervals. It seems that if one is interested
in making Bayesian inference on the product of different powers of k Poisson rates, the probability
matching prior is the best. On the other hand, if we want to obtain point estimates, credibility intervals
or do hypothesis testing for the ratio of two Poisson rates, the uniform prior should be used.
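As a small worked illustration of the ratio-of-rates case mentioned above: under a uniform prior the posterior of a Poisson rate with count x and exposure t is Gamma(x + 1, t), so an equal-tailed credible interval for the ratio of two rates can be read off from simulated posterior draws. The counts and exposures below are assumptions for the sketch; this is not the simulation study from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
x1, t1 = 18, 12.0        # hypothetical count and exposure for the first rate
x2, t2 = 7, 10.0         # hypothetical count and exposure for the second rate

# Posterior draws under a uniform prior: lambda | x ~ Gamma(x + 1, rate = t), i.e. scale = 1 / t.
lam1 = rng.gamma(x1 + 1.0, 1.0 / t1, size=100_000)
lam2 = rng.gamma(x2 + 1.0, 1.0 / t2, size=100_000)
ratio = lam1 / lam2

lower, upper = np.quantile(ratio, [0.025, 0.975])      # 95% equal-tailed credible interval for lambda1 / lambda2
print(lower, upper)
```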
In Chapter 5 the probability matching prior for a linear contrast of Poisson parameters is derived;
this prior is extended in such a way that it is also the probability matching prior for the average of
Poisson parameters. This research is an extension of the work done by Stamey & Hamilton (2006). A
comparison is made between the confidence intervals obtained by Stamey & Hamilton (2006) and the
intervals derived by us when using the Jeffreys and probability matching priors. A weighted Monte
Carlo method is used for the computation of the Bayesian credible intervals, in the case of the
probability matching prior. In the last section of this chapter hypothesis testing for two means is considered.
The power and size of the test, using Bayesian methods, are compared to tests used by Krishnamoorthy
& Thomson (2004). For the Bayesian methods the Jeffreys prior, probability matching prior and two
other priors are used.
Bayesian estimation for binomial rates from pooled samples is considered in Chapter 6, where
the Jeffreys prior is used. Bayesian credibility intervals for a single proportion and the difference of
two binomial proportions estimated from pooled samples are considered. The results are compared
|
147 |
A study of fully developed turbulent flow between parallel plates by a statistical method
Srinavasan, Ramanujam 08 1900 (has links)
No description available.
|
148 |
Problems in decision theory
Cabrilio, Paul. January 1968 (has links)
No description available.
|
149 |
Multivariate normal estimation with missing data
Legault-Giguère, Monique Andrée January 1974 (has links)
No description available.
|
150 |
Bayesian edge-detection in image processing
Stephens, David A. January 1990 (has links)
Problems associated with the processing and statistical analysis of image data are the subject of much current interest, and many sophisticated techniques for extracting semantic content from degraded or corrupted images have been developed. However, such techniques often require considerable computational resources, and thus are, in certain applications, inappropriate. The detection of localised discontinuities, or edges, in the image can be regarded as a pre-processing operation in relation to these sophisticated techniques which, if implemented efficiently and successfully, can provide a means for an exploratory analysis that is useful in two ways. First, such an analysis can be used to obtain quantitative information relating to the underlying structures from which the various regions in the image are derived, about which we would generally be a priori ignorant. Secondly, in cases where the inference problem relates to discovery of the unknown location or dimensions of a particular region or object, or where we merely wish to infer the presence or absence of structures having a particular configuration, an accurate edge-detection analysis can circumvent the need for the subsequent sophisticated analysis.
Relatively little interest has been focussed on the edge-detection problem within a statistical setting. In this thesis, we formulate the edge-detection problem in a formal statistical framework, and develop a simple and easily implemented technique for the analysis of images derived from two-region single edge scenes. We extend this technique in three ways: first, to allow the analysis of more complicated scenes; secondly, by incorporating spatial considerations; and thirdly, by considering images of various qualitative nature. We also study edge reconstruction and representation given the results obtained from the exploratory analysis, and a cognitive problem relating to the detection of objects modelled by members of a class of simple convex objects. Finally, we study in detail aspects of one of the sophisticated image analysis techniques, and the important general statistical applications of the theory on which it is founded.
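As a minimal sketch of the two-region single edge formulation described above, assuming a one-dimensional slice through the scene, Gaussian noise with known variance and flat priors on the two region means (an illustration of the general idea, not the technique developed in the thesis), the posterior over the edge location is proportional to the product of the marginal likelihoods of the two segments:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0
signal = np.concatenate([np.full(40, 2.0), np.full(60, 5.0)])   # true edge after pixel 40
y = signal + rng.normal(0.0, sigma, size=signal.size)           # noisy one-dimensional "image"

def log_region_evidence(seg, sigma):
    """Log marginal likelihood of one region, with a flat prior on its unknown mean."""
    n = seg.size
    rss = np.sum((seg - seg.mean()) ** 2)
    return -0.5 * rss / sigma**2 - 0.5 * (n - 1) * np.log(2 * np.pi * sigma**2) - 0.5 * np.log(n)

# Posterior (up to a constant) over candidate edge locations k, with at least two pixels per region.
log_post = np.array([log_region_evidence(y[:k], sigma) + log_region_evidence(y[k:], sigma)
                     for k in range(2, y.size - 1)])
edge = np.argmax(log_post) + 2                                   # posterior mode of the edge location
print(edge)
```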
|