Statistical Inference for the Common Mean of Two Independent Log-Normal Distributions and Some Applications in Reliability
Li, Xue
January 2004
No description available.
Ahuja, Jagdish Chand
The problem of estimating the parameters of several growth curves has been considered for the case where repeated correlated observations are taken on the same individual or population. These curves are the logistic, the Gompertz, the modified exponential, the θ-generalized logistic, and their modified forms with lower asymptotes different from zero. Three methods of estimation have been suggested and the mathematical procedure of each has been discussed. The different methods of estimation yield the vector equations for the estimators whose solutions require the inverse of the variance-covariance matrix. A procedure is given for obtaining the inverse of the type of covariance matrix used in our model. The procedure holds for all matrices of this type, of any order, and does not require the use of computers. The methods of estimation suggested are all iterative and require starting values of the parameters. A method for obtaining the starting values of the parameters has been given for each curve. This method involves the estimation of the derivatives of the growth function w(t) or log w(t) with respect to t. The differentiation formulas for estimating these derivatives from the observed data, when the series of values may be given at equal or unequal intervals, have been obtained. The stochastic models for the logistic, the Gompertz, and the modified exponential laws of growth have been formulated as pure birth Markov processes. The solutions of the differential-difference equations describing the probability laws of the processes have been obtained by solving the partial differential equations for their generating functions. The properties of the processes have been studied by deriving the expressions for the means, variances and correlations. A method for obtaining the maximum likelihood estimators of the parameters involved has also been given in each case.
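The starting-value idea can be illustrated for the logistic curve w(t) = K/(1 + exp(-r(t - t0))): given the upper asymptote K, the transformation log(K/w - 1) = -r·t + r·t0 is linear in t, so ordinary least squares on the transformed data yields starting values for r and t0. A minimal sketch (the parameter values and noiseless data are invented for illustration; the thesis's own method uses estimated derivatives of w(t) or log w(t)):

```python
import math

# True parameters for the illustration (invented).
K, r, t0 = 100.0, 0.5, 10.0
ts = list(range(1, 20))
ws = [K / (1 + math.exp(-r * (t - t0))) for t in ts]   # noiseless logistic data

# With K known, log(K/w - 1) = -r*t + r*t0 is a straight line in t.
ys = [math.log(K / w - 1) for w in ws]
n = len(ts)
tbar, ybar = sum(ts) / n, sum(ys) / n
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
r_hat = -slope                           # starting value for the growth rate
t0_hat = (ybar - slope * tbar) / r_hat   # intercept of the line equals r * t0
```

On noiseless data the starting values recover r and t0 exactly; on real data they only seed the iterative procedures described above.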
The problem of distinguishing the different phases of growth has been attacked by deriving orthogonal expansions from the logistic, the Gompertz, and the exponential densities, in a manner similar to the way in which Gram (1879) and Charlier (1906) derived an orthogonal expansion from the normal density. The φ-generalized Gompertz, the (φ,θ)-generalized logistic, and the (φ,θ)-generalized modified exponential densities have been obtained as generalizations of the Gompertz, the θ-generalized logistic, and the θ-generalized modified exponential respectively. The limiting cases of these densities have been found as φ or θ or both are allowed to go to infinity or zero. Lastly, the recurrence relation for the orthogonal polynomials qn(x) (leading coefficient one) of degree n associated with the density function f(x) over the interval [a, b] has been derived explicitly in terms of the moments of f(x). Further, an alternative proof has been given of the theorem that if f(x) is symmetrical about x = 0, then the polynomials qn(x) are even or odd functions according as n is even or odd. / Science, Faculty of / Mathematics, Department of / Graduate
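The moment-based construction can be illustrated by building monic orthogonal polynomials directly from the moments of a density. A minimal sketch for the uniform density on [-1, 1] (a stand-in example; the thesis treats the logistic, Gompertz, and exponential densities), using Gram-Schmidt driven purely by moments:

```python
from fractions import Fraction

def moment(k):
    # m_k = integral of x^k f(x) dx for the uniform density f = 1/2 on [-1, 1]
    return Fraction(0) if k % 2 else Fraction(1, k + 1)

def inner(p, q):
    # <p, q> = integral of p(x) q(x) f(x) dx, expanded via the moments;
    # polynomials are coefficient lists, lowest degree first
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

def monic_orthogonal(n):
    # Gram-Schmidt on 1, x, x^2, ... keeping the leading coefficient one
    polys = []
    for k in range(n + 1):
        p = [Fraction(0)] * k + [Fraction(1)]          # the monomial x^k
        for prev in polys:
            c = inner(p, prev) / inner(prev, prev)
            pad = prev + [Fraction(0)] * (len(p) - len(prev))
            p = [a - c * b for a, b in zip(p, pad)]
        polys.append(p)
    return polys

q = monic_orthogonal(2)   # q[2] = x^2 - 1/3, the monic Legendre polynomial
```

Since the uniform density is symmetric about 0, q[1] and q[2] come out odd and even respectively, matching the theorem cited above.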
Delbrouck, Lucien Elie Nicolas
A basic result of Doob states that, under very weak measurability assumptions, Bayes' estimators are consistent for almost all parameter points. First it is shown that even when this exceptional set is finite, the effect of putting positive prior mass on each point of the set may result in creating a new exceptional set, larger than the original one, rather than in eliminating the lack of consistency. The posterior densities are then studied and it is shown that under fairly strong regularity conditions the corresponding posterior distributions tend, in the limit, to concentrate their mass on a particular point in the parameter set. If, in addition, distinct parameter points correspond to distinct probability measures, then it is shown that both the maximum likelihood and the Bayes' estimators are consistent for all parameter values.
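Posterior concentration of the kind described can be illustrated in the simplest conjugate setting. A minimal sketch, assuming a Bernoulli(p) model with a Beta(1, 1) prior (an invented example, not the thesis's framework): the Beta posterior's variance shrinks to zero and its mean approaches the empirical frequency as n grows.

```python
def beta_posterior(successes, n, a=1.0, b=1.0):
    # Conjugate update: Beta(a, b) prior -> Beta(a + s, b + n - s) posterior
    a_post, b_post = a + successes, b + n - successes
    mean = a_post / (a_post + b_post)
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return mean, var

# Empirical frequency fixed at the true p = 0.3 for both sample sizes.
small = beta_posterior(3, 10)       # n = 10
large = beta_posterior(3000, 10000) # n = 10000: mass piles onto the truth
```

This is the regular, identifiable case; the exceptional sets discussed above arise precisely where such regularity fails.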
Bibliography: pages 121-126. / In this thesis, the various methods of variable selection which have been proposed in the statistical, epidemiological and medical literature for prediction and estimation problems in logistic regression will be described. The procedures will be applied to medical data sets. On the basis of the literature review as well as the applications to examples, strengths and weaknesses of the approaches will be identified. The procedures will be compared on the basis of the results obtained, their appropriateness for the specific aim of the analysis, and the demands they place on the analyst and researcher, both intellectually and computationally. In particular, certain selection procedures using bootstrap samples, which have not been used before, will be investigated, and the partial Gauss discrepancy will be extended to the case of logistic regression. Recommendations will be made as to which approaches are the most suitable or most practical in different situations. Most statistical texts deal with issues regarding prediction, whereas the epidemiological literature focuses on estimation. It is therefore hoped that the thesis will be a useful reference for those, statistically or epidemiologically trained, who have to deal with issues regarding variable selection in logistic regression. When fitting models in general, and logistic regression models in particular, it is standard practice to determine the goodness of fit of models, and to ascertain whether outliers or influential observations are present in a data set. These aspects will not be discussed in this thesis, although they were considered when fitting the models.
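Bootstrap-based selection frequencies of the kind mentioned can be sketched with a toy rule: refit a selector on bootstrap resamples and record how often each candidate variable is included. Here a simple correlation threshold stands in for the stepwise logistic-regression selectors the thesis studies; the data and threshold are invented for illustration:

```python
import random

random.seed(1)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]   # truly predictive variable
x2 = [random.gauss(0, 1) for _ in range(n)]   # pure noise variable
y = [1 if xi + random.gauss(0, 1) > 0 else 0 for xi in x1]

def corr(u, v):
    # plain Pearson correlation
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

B = 200
counts = {"x1": 0, "x2": 0}
for _ in range(B):
    idx = [random.randrange(n) for _ in range(n)]      # bootstrap resample
    yb = [y[i] for i in idx]
    for name, x in (("x1", x1), ("x2", x2)):
        if abs(corr([x[i] for i in idx], yb)) > 0.2:   # toy selection rule
            counts[name] += 1
inclusion = {k: v / B for k, v in counts.items()}      # selection frequencies
```

The inclusion frequency separates the signal variable from the noise variable; the thesis applies the same resampling logic with genuine logistic-regression selection procedures.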
The impact of estimation frequency on Value at Risk (VaR) and Expected Shortfall (ES) forecasts: an empirical study on conditional extreme value models
Coyne, Alice Elizabeth
19 January 2021
This study investigates extreme market events which occur in the tails of a distribution. The extreme events occur with very low probability but with significant consequences, which is what makes them of interest. In this study 20 years of data from both the S&P 500 and the JSE All Share Index have been used. An extreme value approach has been taken to quantify the risks associated with extreme market events. To achieve this, a two-phase process is used to calculate the Value at Risk and Expected Shortfall. The first phase involves running the daily returns through a GARCH model and then extracting the residuals. The second phase involves using the Block Maxima method or the Peaks over Threshold method to fit the residuals to the Generalized Extreme Value distribution or the Generalized Pareto distribution, respectively. Finally, the impact of estimation frequency is considered for each of the models. In conclusion, an extreme value approach provides a statistically sound method for calculating risk even when the parameters of the model are updated less frequently, and is therefore preferable to simpler models whose parameter estimates must be updated daily.
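The second phase can be sketched for the Peaks over Threshold route. A minimal illustration, assuming simulated stand-in losses and a method-of-moments GPD fit (the study itself uses GARCH-filtered residuals, and maximum likelihood would normally be preferred): tail VaR and ES then follow from the fitted shape ξ and scale β via the standard POT formulas.

```python
import math
import random

random.seed(7)
# Stand-in standardized losses with a heavier-than-normal upper tail.
losses = [abs(random.gauss(0, 1)) ** 1.3 for _ in range(5000)]

u = sorted(losses)[int(0.95 * len(losses))]      # 95th-percentile threshold
excess = [x - u for x in losses if x > u]        # peaks over the threshold
m = sum(excess) / len(excess)
v = sum((e - m) ** 2 for e in excess) / (len(excess) - 1)
xi = 0.5 * (1 - m * m / v)                       # GPD moment estimator (shape)
beta = 0.5 * m * (m * m / v + 1)                 # GPD moment estimator (scale)

def var_es(q):
    # Standard POT tail formulas for VaR_q and ES_q (valid for xi < 1).
    zeta = len(excess) / len(losses)             # empirical P(X > u)
    var_q = u + (beta / xi) * (((1 - q) / zeta) ** (-xi) - 1)
    es_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)
    return var_q, es_q

var99, es99 = var_es(0.99)
```

Updating less frequently here would mean re-estimating (ξ, β) only every k days while still applying the tail formulas daily.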
02 August 2021
We tackle the question of whether Trade and Quote data from high-frequency finance are representative of discrete connected events, or whether these measurements can still be faithfully represented as random samples of some underlying Brownian diffusion, in the context of modelling correlation dynamics. In particular, we ask whether the implicit notion of instantaneous correlation dynamics that are independent of the time-scale is a reasonable assumption. To this end, we apply kernel-averaging non-uniform fast Fourier transforms in the context of the Malliavin-Mancino integrated and instantaneous volatility estimators to speed up the estimators. We demonstrate the implicit time-scale investigated by the estimator by comparing it to the theoretical Epps effect arising from asynchrony. We compare the Malliavin-Mancino and Cuchiero-Teichmann Fourier instantaneous estimators and demonstrate the relationship between the instantaneous Epps effect and the cutting frequencies in the Fourier estimators. We find that using previous-tick interpolation in the Cuchiero-Teichmann estimator results in unstable estimates when dealing with asynchrony, while the ability to bypass the time domain with the Malliavin-Mancino estimator allows it to produce stable estimates, making it better suited for ultra-high-frequency finance. We derive the Epps effect arising from asynchrony and provide a refined approach to correct for the effect. We compare methods to correct for the Epps effect arising from asynchrony when the underlying process is a Brownian diffusion, and when the underlying process is built from discrete connected events (proxied using a D-type Hawkes process). We design three experiments using the Epps effect to discriminate between the underlying processes. These experiments demonstrate that using a Hawkes representation recovers the empiricism reported in the literature under simulation conditions that cannot be achieved when using a Brownian representation.
The experiments are applied to Trade and Quote data from the Johannesburg Stock Exchange and the evidence suggests that the empirical measurements are from a system of discrete connected events where correlations are an emergent property of the time-scale rather than an instantaneous quantity that exists at all time-scales.
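The Epps effect from asynchrony can be reproduced in a few lines. A minimal sketch, assuming two correlated Brownian paths observed on independent random tick grids and a previous-tick realized correlation estimator (a deliberate simplification of the Fourier machinery used in the thesis): measured correlation is attenuated at fine sampling intervals and recovers at coarse ones.

```python
import bisect
import math
import random
from itertools import accumulate

random.seed(3)
T, rho = 20000, 0.8
z1 = [random.gauss(0, 1) for _ in range(T)]
z2 = [rho * a + math.sqrt(1 - rho * rho) * random.gauss(0, 1) for a in z1]
p1, p2 = list(accumulate(z1)), list(accumulate(z2))   # correlated "price" paths

# Independent asynchronous observation times (about one tick per 20 steps).
obs1 = sorted(random.sample(range(T), T // 20))
obs2 = sorted(random.sample(range(T), T // 20))

def prev_tick(path, obs, t):
    # last observed value at or before time t (0.0 before the first tick)
    i = bisect.bisect_right(obs, t) - 1
    return path[obs[i]] if i >= 0 else 0.0

def realized_corr(dt):
    # previous-tick sampling on a regular grid of spacing dt
    grid = range(0, T, dt)
    r1 = [prev_tick(p1, obs1, t) for t in grid]
    r2 = [prev_tick(p2, obs2, t) for t in grid]
    d1 = [b - a for a, b in zip(r1, r1[1:])]
    d2 = [b - a for a, b in zip(r2, r2[1:])]
    num = sum(a * b for a, b in zip(d1, d2))
    return num / math.sqrt(sum(a * a for a in d1) * sum(b * b for b in d2))

fine, coarse = realized_corr(5), realized_corr(500)   # Epps: fine << coarse
```

The decay of `fine` relative to `coarse` is the asynchrony-induced Epps effect; the experiments above ask whether observed decay matches this Brownian mechanism or a Hawkes one.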
Includes bibliographical references (leaves 140-149). / Identifying outliers and/or influential observations is a fundamental step in any statistical analysis, since their presence is likely to lead to erroneous results. Numerous measures have been proposed for detecting outliers and assessing the influence of observations on least squares regression results. Since outliers can arise in different ways, the above-mentioned measures are based on motivational arguments and are designed to measure the influence of observations on different aspects of various regression results. In what follows, we investigate how one can combine different test statistics based on residuals and diagnostic plots to identify outliers and influential observations (both in the single and multiple case) in general linear regression models.
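One of the classical residual-based measures can be sketched for a simple linear regression: leverages h_i and internally studentized residuals e_i / (s·sqrt(1 - h_i)), with large absolute values flagged. The data below are invented, with one gross outlier planted at x = 9:

```python
import math

xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1, 30.0, 20.2]  # outlier at x = 9

n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx  # slope
b0 = ybar - b1 * xbar                                            # intercept
resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
s2 = sum(e * e for e in resid) / (n - 2)                         # residual MSE
lev = [1 / n + (x - xbar) ** 2 / sxx for x in xs]                # leverages h_i
stud = [e / math.sqrt(s2 * (1 - h)) for e, h in zip(resid, lev)]
flagged = [i for i, r in enumerate(stud) if abs(r) > 2.0]        # suspects
```

Only the planted point exceeds the (conventional, rough) cutoff of 2; combining several such statistics, as proposed above, guards against masking when multiple outliers are present.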
Dunne, Timothy Terence
Column-space conditions are shown to be at the heart of a number of identities linking generalized inverses of rectangular matrices. These identities give some new insights into reparametrizations of the general linear model, and into the imposition of constraints, when the variance-covariance structure is σ²I. Hypothesis-test statistics for non-estimable functions are shown to give no further information than underlying estimable functions. For an arbitrary variance-covariance structure the "sweep-out" method is generalized. The John and Draper model for outliers is extended, and distributional results established. Some diagnostic statistics for outlying or influential observations are considered. A Bayesian formulation of outliers in the general linear model is attempted.
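The invariance of estimable functions under different generalized inverses can be illustrated without any matrix algebra. A toy sketch for the overparametrized one-way model y_ij = μ + α_i + e_ij (invented data): two different constraint choices give different solutions of the normal equations, yet the estimable cell expectation μ + α_i agrees.

```python
# Two groups with invented data; the design matrix is not of full rank.
groups = {"A": [3.0, 5.0, 4.0], "B": [8.0, 10.0]}
means = {g: sum(v) / len(v) for g, v in groups.items()}

# Solution 1 of the normal equations: impose the constraint mu = 0.
sol1 = {"mu": 0.0, **{f"alpha_{g}": m for g, m in means.items()}}
# Solution 2: impose the constraint alpha_A = 0 instead.
sol2 = {"mu": means["A"],
        **{f"alpha_{g}": m - means["A"] for g, m in means.items()}}

# The estimable function mu + alpha_A is invariant to the choice;
# mu alone (non-estimable) differs between the two solutions.
cell_A_1 = sol1["mu"] + sol1["alpha_A"]
cell_A_2 = sol2["mu"] + sol2["alpha_A"]
```

Each constraint corresponds to a different generalized inverse of X'X; the column-space identities above explain why every estimable function is unaffected by that choice.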
Borchers, D L
Bibliography: leaves 225-233. / After critically reviewing developments in line transect estimation theory to date, general likelihood functions are derived for the case in which detection probabilities are modelled as functions of any number of explanatory variables and detection of animals on the trackline (i.e. directly in the observer's path) is not certain. Existing models are shown to correspond to special cases of the general models. Maximum likelihood estimators are derived for some special cases of the general model and some existing line transect estimators are shown to correspond to maximum likelihood estimators for other special cases. The likelihoods are shown to be extensions of existing mark-recapture likelihoods as well as being generalizations of existing line transect likelihoods. Two new abundance estimators are developed. The first is a Horvitz-Thompson-like estimator which utilizes the fact that for point estimation of abundance the density of perpendicular distances in the population can be treated as known in appropriately designed line transect surveys. The second is based on modelling the probability density function of detection probabilities in the population. Existing line transect estimators are shown to correspond to special cases of the new Horvitz-Thompson-like estimator, so that this estimator, together with the general likelihoods, provides a unifying framework for estimating abundance from line transect surveys.
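The Horvitz-Thompson-like idea can be sketched directly: each detected animal contributes 1/p_i, where p_i is its detection probability. A minimal simulation, assuming a known half-normal detection function of perpendicular distance (the thesis estimates p_i from a fitted model with covariates; all numbers here are invented):

```python
import math
import random

random.seed(11)
sigma, W, N = 1.0, 3.0, 4000   # detection scale, strip half-width, true abundance

def g(x):
    # Half-normal detection probability at perpendicular distance x.
    return math.exp(-x * x / (2 * sigma * sigma))

dists = [random.uniform(0, W) for _ in range(N)]          # animals in the strip
detected = [x for x in dists if random.random() < g(x)]   # Bernoulli detection
N_hat = sum(1.0 / g(x) for x in detected)                 # Horvitz-Thompson sum
```

Animals that were hard to detect are up-weighted by 1/p, so the sum over detections alone estimates the abundance of the whole covered strip.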
Outliers, influential observations and robust estimation in non-linear regression analysis and discriminant analysis
Van Deventer, Petrus Jacobus Uys
January 1993