
Match tests for nonparametric analysis of variance problems

Worthington, P. L. B. January 1982
The thesis is presented in two parts, (a) "Nonparametric Analysis of Variance", and (b) "An Asymptotic Expansion of the Null Distributions of Kruskal and Wallis's and Friedman's Statistics". In the first part we present a number of new nonparametric tests designed for a variety of experimental situations. These tests are all based on a so-called "matching" principle. The range of situations covered by the tests is: (i) Two-way analysis of variance with a general alternative hypothesis (without interaction). (ii) Two-way analysis of variance with an ordered alternative hypothesis (without interaction). (iii) Interaction in two-way analysis of variance, both the univariate and multivariate cases. (iv) Latin square designs. (v) Second-order interaction in three-way analysis of variance. (vi) Third-order interaction in four-way analysis of variance. The validity of the tests is supported by a series of simulation studies which were performed with a number of different distributions. In the second part of the thesis we develop an asymptotic expansion for the construction of improved approximations to the null distributions of Kruskal and Wallis's (1952) and Friedman's (1937) statistics. The approximation is founded on the method of steepest descents, a procedure that is better known in numerical analysis than in statistics. In order to implement this approximation it was necessary to derive the third and fourth moments of the Kruskal-Wallis statistic and the fourth moment of Friedman's statistic. Tables of approximate critical values based on this approximation are presented for both statistics.
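
As a concrete reference point, the Kruskal-Wallis statistic whose null distribution the second part approximates can be computed directly from pooled ranks. The following is a minimal pure-Python sketch of that standard formula, not the thesis's steepest-descents machinery; the function names are ours.

```python
def ranks(values):
    """Mid-ranks (average ranks for ties) of a list of numbers."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def kruskal_wallis(groups):
    """H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), with ranks pooled over all groups."""
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    r = ranks(pooled)
    h, start = 0.0, 0
    for g in groups:
        rank_sum = sum(r[start:start + len(g)])
        h += rank_sum ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
```

The thesis's contribution is a better approximation to the null distribution of this H than the usual chi-squared limit, using its third and fourth moments.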

Mathematical modelling of eukaryotic stress-response gene networks

Haque, Mainul January 2012
Mathematical modelling of gene regulatory networks is a relatively new area which is playing an important role in theoretical and experimental investigations that seek to open the door to understanding the real mechanisms that take place in living systems. The current thesis concentrates on studying the animal stress-response gene regulatory network by seeking to predict the consequence of environmental hazards caused by chemical mixtures (typical of industrial pollution). Organisms exposed to pollutants display multiple defensive stress responses, which together constitute an interlinked gene network (the Stress-Response Network; SRN). Multiple SRN reporter-gene outputs have been monitored during single and combined chemical exposures in transgenic strains of two invertebrates, Caenorhabditis elegans and Drosophila melanogaster. Reporter expression data from both species have been integrated into mathematical models describing the dynamic behaviour of the SRN and incorporating its known regulatory gene circuits. We describe some mathematical models of several types of different stress response networks, incorporating various methods of activation and inhibition, including formation of complexes and gene regulation (through several known transcription factors). Although the full details of the protein interactions forming these types of circuits are not yet well-known, we seek to include the relevant proteins acting in different cellular compartments. We propose and analyse a number of different models that describe four different stress response gene networks and through a combination of analytical (including stability, bifurcation and asymptotic) and numerical methods, we study these models to gain insight on the effect of several stresses on gene networks. A detailed time-dependent asymptotic analysis is performed for relevant models in order to clarify the roles of the distinct biochemical reactions that make up several important proteins production processes. 
In two models we were able to verify the theoretical predictions against the corresponding laboratory experimental observations carried out by our coworkers in Britain and India.
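
The kind of circuit such models describe can be caricatured in a few lines. The sketch below is a hypothetical two-component loop (a stress input activates a transcription factor, which drives a reporter protein that feeds back to inhibit activation), integrated with a simple Euler scheme; all names and rate constants are illustrative and are not taken from the thesis.

```python
def simulate(stress, t_end=50.0, dt=0.01):
    """Euler integration of a toy stress-response loop; returns (TF, protein) at t_end."""
    tf, protein = 0.0, 0.0
    k_act, k_deg, k_prod, k_clear, k_inh = 1.0, 0.5, 1.0, 0.2, 1.0  # illustrative rates
    for _ in range(int(t_end / dt)):
        # TF activation is driven by stress and repressed by the reporter protein
        d_tf = k_act * stress / (1.0 + k_inh * protein) - k_deg * tf
        # reporter protein is produced from TF and cleared at a constant rate
        d_protein = k_prod * tf - k_clear * protein
        tf += dt * d_tf
        protein += dt * d_protein
    return tf, protein
```

Stability and bifurcation analysis of systems of this shape, with realistic circuitry, is what the thesis carries out analytically and numerically.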

Determining the location of an impact site from bloodstain spatter patterns : computer-based analysis of estimate uncertainty

March, Jack January 2005
The estimation of the location in which an impact event took place from its resultant impact spatter bloodstain pattern can be a significant investigative issue in the reconstruction of a crime scene. The bloodstain pattern analysis methods through which an estimate is constructed utilise the established bloodstain pattern analysis principles of spatter bloodstain directionality, impact angle calculation, and straight-line trajectory approximation. Uncertainty, however, can be shown to be present in the theoretical definition and practical approximation of an impact site; the theoretical justification for impact angle calculation; spatter bloodstain sample selection; the dimensional measurement of spatter bloodstain morphologies; the inability to fully incorporate droplet flight dynamics; and the limited numerical methods used to describe mathematical estimates. An experimental computer-based research design was developed to investigate this uncertainty. A series of experimental impact spatter patterns were created, and an exhaustive spatter bloodstain recording methodology developed and implemented. A computer application was developed providing a range of analytical approaches to the investigation of estimate uncertainty, including a three-dimensional computer graphic virtual investigative environment. The analytical computer application was used to generate a series of estimates using a broad spatter bloodstain sampling strategy, with six potentially probative estimates analysed in detail. Two additional pilot projects investigating the utility of a sampled photographic recording methodology and an automated image analysis approach to spatter bloodstain measurement were also conducted. 
The results of these analyses indicate that, with further development, the application of similar analytical approaches to the construction and investigation of an estimate could prove effective in minimising the effect that estimate uncertainty might have on informing the conclusions of this forensic reconstructive process, and thereby reaffirm the scientific expert evidential status of estimate techniques within legal contexts.
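
The impact-angle calculation mentioned among the established principles is the standard arcsin relation between a spatter stain's minor and major axes; a brief sketch (the straight-line back-projection to the impact site is a separate step, not shown):

```python
import math

def impact_angle_degrees(width, length):
    """Standard bloodstain-pattern relation alpha = arcsin(width / length),
    where width is the elliptical stain's minor axis and length its major axis."""
    if not 0 < width <= length:
        raise ValueError("require 0 < width <= length")
    return math.degrees(math.asin(width / length))
```

Measurement uncertainty in width and length propagates directly through this relation, which is one of the sources of estimate uncertainty the thesis investigates.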

Mathematical models for class-D amplifiers

Hall, Fenella T. H. January 2011
Here we analyse a number of class-D amplifier topologies. Class-D amplifiers operate by converting an audio input signal into a high-frequency square wave output, whose lower-frequency components can accurately reproduce the input. Their high power efficiency and potential for low distortion make them suitable for use in a wide variety of electronic devices. By calculating the outputs from a classical class-D design implementing different sampling schemes we demonstrate that a more recent method, called the Fourier transform/Poisson resummation method, has many advantages over the double Fourier series method, which is the traditional technique employed for this analysis. We thereby show that when natural sampling is used the input signal is reproduced exactly in the low-frequency part of the output, with no distortion. Although this is a known result, our calculations present the method and notation that we later develop. The classical class-D design is prone to noise, and therefore negative feedback is often included in the circuit. Subsequently we incorporate the Fourier transform/Poisson resummation method into a formalised and succinct analysis of a first-order negative feedback amplifier. Using perturbation expansions we derive the audio-frequency part of the output, demonstrating that negative feedback introduces undesirable distortion. Here we reveal the next-order terms in the output compared with previous work, giving further insight into the nonlinear distortion. We then further extend the analysis to examine two more complex negative feedback topologies, namely a second-order and a derivative negative feedback design. Modelling each of these amplifiers presents an increased challenge due to the differences in their respective circuit designs; in addition, for the derivative negative feedback amplifier we must consider scaling regimes based on the relative magnitudes of the frequencies involved.
For both designs we establish novel expressions for the output, including the most significant distortion terms.
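
The natural-sampling result, that the low-frequency part of the square-wave output reproduces the input, can be checked numerically: compare the audio signal against a fast triangle carrier and average the resulting ±1 output over each carrier period. This is an illustrative simulation only, not the Fourier transform/Poisson resummation calculation used in the thesis.

```python
import math

def triangle(t):
    """Triangle wave in [-1, 1] with period 1."""
    x = t - math.floor(t)
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

def pwm_demo(carrier_periods=200, steps_per_period=500):
    """Return the worst error between the per-period mean of the PWM output
    and the (slowly varying) audio input sampled at mid-period."""
    errs = []
    for p in range(carrier_periods):
        acc = 0.0
        for s in range(steps_per_period):
            t = p + s / steps_per_period  # time in carrier periods
            audio = 0.5 * math.sin(2 * math.pi * t / carrier_periods)
            acc += 1.0 if audio > triangle(t) else -1.0
        mean = acc / steps_per_period
        mid = 0.5 * math.sin(2 * math.pi * (p + 0.5) / carrier_periods)
        errs.append(abs(mean - mid))
    return max(errs)
```

Because the triangle wave spends equal time at each level, the per-period mean of the comparator output tracks the audio value, which is the essence of the no-distortion result for natural sampling.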

Goodness-of-fit tests for discrete and censored data, based on the empirical distribution function

Pettitt, Anthony January 1973
In this thesis two general problems concerning goodness-of-fit statistics based on the empirical distribution function are considered. The first concerns the problem of adapting Kolmogorov-Smirnov type statistics to test for discrete populations. The significance points of the statistics are given and various power comparisons made. The second problem concerns testing for goodness-of-fit with censored data using Cramér-von Mises type statistics. The small and large sample distributions are given and the tests are modified so that they can be used to test for the normal and the exponential distributions. The asymptotic theory is developed. Percentage points for the statistics are given and various small sample and large sample power studies are made for the various cases.
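
For the discrete case, the Kolmogorov-Smirnov statistic itself is simple to compute against a fully specified discrete null distribution: take the largest gap between the empirical and hypothesised CDFs over the support. The sketch below shows only that computation; dedicated significance points are needed because the continuous-case tables are conservative for discrete populations.

```python
def ks_discrete(data, support, cdf):
    """D = max over support points x of |F_n(x) - F(x)| for a discrete null CDF."""
    n = len(data)
    return max(abs(sum(1 for d in data if d <= x) / n - cdf(x)) for x in support)
```

For example, four observations tested against a discrete uniform distribution on {1, 2, 3, 4} with cdf(k) = k/4 gives the statistic directly.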

Topics in flow in fractured media

Milne, Andrew January 2011
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process where the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measured pressures are dependent on all the fracture transmissivities in the network. This thesis presents two new methods for conditioning fracture transmissivities in a discrete fracture network on measured pressure values. The first approach adopts a linear approximation when fracture transmissivities are mildly heterogeneous; this approach is then generalised to the minimisation of an objective function when fracture transmissivities are highly heterogeneous. This method is based on a generalisation of previous work on conditioning transmissivity values in a continuous porous medium. The second method developed is a Bayesian conditioning method. Bayes’ theorem is used to give an expression of proportionality for the posterior distribution of fracture log transmissivities in terms of the prior distribution and the data available through pressure measurements. The fracture transmissivities are assumed to be log normally distributed with a given mean and covariance, and the measured pressures are assumed to be normally distributed values each with a given error. 
From the expression of proportionality for the posterior distribution of fracture transmissivities the modes of the posterior distribution (the points of highest likelihood for the fracture transmissivities given the measured pressures) are numerically computed. Both algorithms are implemented in the existing finite element code NAPSAC developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network.
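
A toy version of the Bayesian conditioning step illustrates why the posterior modes are computed numerically: with a Gaussian prior on a single log-transmissivity and one pressure measurement assumed (for illustration only) to depend linearly on it with Gaussian error, gradient ascent on the log-posterior recovers the closed-form mode. In a real fracture network the pressure-transmissivity map is nonlinear and high-dimensional, and no closed form exists.

```python
def posterior_mode(mu0, s0, a, p_obs, s, iters=2000, step=0.01):
    """Gradient ascent on the log-posterior of x (log-transmissivity) with
    prior N(mu0, s0^2) and measurement p_obs ~ N(a*x, s^2)."""
    x = mu0
    for _ in range(iters):
        grad = -(x - mu0) / s0**2 + a * (p_obs - a * x) / s**2
        x += step * grad
    return x

def closed_form(mu0, s0, a, p_obs, s):
    """Exact mode of the Gaussian posterior, for checking the iteration."""
    prec = 1 / s0**2 + a**2 / s**2
    return (mu0 / s0**2 + a * p_obs / s**2) / prec
```

All names and the linear pressure model here are our assumptions for the sketch; the thesis works with the full finite element pressure solution in NAPSAC.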

Goodness of fit of prediction models and two-step prediction

Janacek, G. January 1973
Given a second order stationary time series it can be shown that there exists an optimum linear predictor of Xk, say X*k, which is constructed from {Xt, t = 0, -1, -2, …}, the mean square error of prediction being given by ek = E[|Xk − X*k|²]. In some cases, however, a series can be considered to have started at a point in the past, and an attempt is made to see how well the optimum linear form of the predictor behaves in this case. Using the fundamental result due to Kolmogorov relating the prediction error e1 to the power spectrum f(w), e1 = exp{(1/2π) ∫ from −π to π log 2πf(w) dw}, estimates of e1 are constructed using the estimated periodogram and power spectrum estimates. As is argued in some detail, the quantity e1 is a natural one to look at when considering prediction and estimation problems, and the estimates obtained are non-parametric. The characteristic functions of these estimates are obtained and it is shown that asymptotically they have distributions which are approximately normal. The rate of convergence to normality is also investigated. A previous author has used a similar estimate as the basis of a test of white noise; the published results are extended and, in the light of the simulation results obtained, some modifications are suggested. To increase the value of the estimates of e1, their small sample distribution is approximated and extensive tables of percentage points are provided. Using these approximations one can construct a more powerful and versatile test for white noise, and simulation results confirm that the theoretical results work well. The same approximation technique is used to derive the small sample distribution of some new estimates of the coefficients in the model generating {Xt}. These estimates are also based on the power spectrum. While it is shown that small sample theory is limited in this situation, the asymptotic results are very interesting and useful.
Several suggestions are made as to further fields of investigation in both the univariate and multivariate cases.
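
Kolmogorov's prediction-error formula e1 = exp{(1/2π) ∫ from −π to π log 2πf(w) dw} can be evaluated numerically for any given spectral density; the sketch below applies a simple trapezoidal rule. For white noise with variance σ², where f(w) = σ²/(2π), the formula returns σ² exactly.

```python
import math

def prediction_error(f, n=10000):
    """Trapezoidal evaluation of exp{(1/2pi) * integral_{-pi}^{pi} log(2*pi*f(w)) dw}
    for a spectral density f that is strictly positive on [-pi, pi]."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n + 1):
        w = -math.pi + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.log(2 * math.pi * f(w))
    return math.exp(total * h / (2 * math.pi))
```

The thesis's estimates replace the known f here with the periodogram or a power spectrum estimate, which is what makes the procedure non-parametric.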

A Bayesian model for the unlabelled size-and-shape analysis

Sajib, Anamul January 2018
This thesis considers the development of efficient MCMC sampling methods for Bayesian models used for the pairwise alignment of two unlabelled configurations. We introduce ideas from differential geometry along with other recent developments in unlabelled shape analysis as a means of creating novel and more efficient MCMC sampling methods for such models. For example, we have improved the performance of the sampler for the model of Green and Mardia (2006) by sampling the rotation, A ∈ SO(3), and the matching matrix using geodesic Monte Carlo (MCMC defined on a manifold) and the Forbes and Lauritzen (2014) matching sampler, developed for the fingerprint matching problem, respectively. We also propose a new Bayesian model, together with implementation methods, motivated by the desire for further improvement. The proposed model and its implementation methods exploit the continuous nature of the parameter space of our Bayesian model and thus move around easily in this continuous space, providing highly efficient convergence and exploration of the target posterior distribution. The proposed Bayesian model and its implementation methods generalize the two existing models, the Bayesian hierarchical and regression models introduced by Green and Mardia (2006) and Taylor, Mardia and Kent (2003) respectively, and resolve many shortcomings of existing implementation methods: slow convergence, trapping in local modes and dependence on initial starting values when sampling from high dimensional and multi-modal posterior distributions. We illustrate our model and its implementation methods on the alignment of two proteins and of two gels, and we find that the performance of the proposed implementation methods under the proposed model is better than that of current implementation techniques of existing models on both real and simulated data sets.

Bayesian nonparametric inference for stochastic epidemic models

Xu, Xiaoguang January 2015
Modelling of infectious diseases is a topic of great importance. Despite the enormous attention given to the development of methods for efficient parameter estimation, there has been relatively little activity in the area of nonparametric inference for epidemics. In this thesis, we develop new methodology which enables nonparametric estimation of the parameters which govern transmission within a Bayesian framework. Many standard modelling and data analysis methods use underlying assumptions (e.g. concerning the rate at which new cases of disease will occur) which are rarely challenged or tested in practice. We relax these assumptions and analyse data from disease outbreaks in a Bayesian nonparametric framework. We first apply our Bayesian nonparametric methods to small-scale epidemics. In a standard SIR model, the overall force of infection is assumed to have a parametric form. We relax this assumption and treat it as a function which depends only on time. We then place a Gaussian process prior on it and infer it using data-augmented Markov chain Monte Carlo (MCMC) algorithms. Our methods are illustrated by applications to simulated data as well as smallpox data. We also investigate the infection rate in the SIR model using our methods. More precisely, we assume the infection rate is time-varying and place a Gaussian process prior on it. Results are obtained using data augmentation methods and standard MCMC algorithms. We illustrate our methods using simulated data and respiratory disease data. We find our methods work fairly well for the stochastic SIR model. We also investigate large-scale epidemics in a Bayesian nonparametric framework. For large epidemics in large populations, we usually observe surveillance data, which typically provide the number of new infection cases occurring during observation periods. We infer the infection rate for each observation period by placing Gaussian process priors on them. Our methods are illustrated by real data, namely
a time series of incidence of measles in London (1948-1957).
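
A deterministic SIR skeleton with a time-varying infection rate β(t) shows the quantity being inferred; the particular β used below is just an illustrative input, not something estimated by the thesis's Gaussian process machinery.

```python
import math

def sir(beta, gamma=0.1, s0=0.99, i0=0.01, t_end=100.0, dt=0.01):
    """Euler integration of dS/dt = -beta(t)SI, dI/dt = beta(t)SI - gamma*I,
    dR/dt = gamma*I, with proportions summing to one."""
    s, i, r = s0, i0, 0.0
    t = 0.0
    while t < t_end:
        new_inf = beta(t) * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        t += dt
    return s, i, r
```

In the thesis, β(t) (or the force of infection) is given a Gaussian process prior and inferred from outbreak data via data-augmented MCMC rather than being specified.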

Bayesian edge-detection in image processing

Stephens, David A. January 1990
Problems associated with the processing and statistical analysis of image data are the subject of much current interest, and many sophisticated techniques for extracting semantic content from degraded or corrupted images have been developed. However, such techniques often require considerable computational resources, and thus are, in certain applications, inappropriate. The detection of localised discontinuities, or edges, in the image can be regarded as a pre-processing operation in relation to these sophisticated techniques which, if implemented efficiently and successfully, can provide a means for an exploratory analysis that is useful in two ways. First, such an analysis can be used to obtain quantitative information relating to the underlying structures from which the various regions in the image are derived, about which we would generally be a priori ignorant. Secondly, in cases where the inference problem relates to discovery of the unknown location or dimensions of a particular region or object, or where we merely wish to infer the presence or absence of structures having a particular configuration, an accurate edge-detection analysis can circumvent the need for the subsequent sophisticated analysis. Relatively little interest has been focussed on the edge-detection problem within a statistical setting. In this thesis, we formulate the edge-detection problem in a formal statistical framework, and develop a simple and easily implemented technique for the analysis of images derived from two-region single-edge scenes. We extend this technique in three ways: first, to allow the analysis of more complicated scenes; secondly, by incorporating spatial considerations; and thirdly, by considering images of various qualitative nature. We also study edge reconstruction and representation given the results obtained from the exploratory analysis, and a cognitive problem relating to the detection of objects modelled by members of a class of simple convex objects.
Finally, we study in detail aspects of one of the sophisticated image analysis techniques, and the important general statistical applications of the theory on which it is founded.
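
The two-region single-edge problem has a compact maximum-likelihood caricature in one dimension: under a Gaussian noise model, the most likely edge location is the split that minimises the two-segment residual sum of squares. This sketch illustrates the problem setting only, not the Bayesian technique developed in the thesis.

```python
def find_edge(y):
    """Return the split index k (edge between y[k-1] and y[k]) minimising the
    combined residual sum of squares of the two segments."""
    def rss(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_k, best = None, float("inf")
    for k in range(1, len(y)):
        total = rss(y[:k]) + rss(y[k:])
        if total < best:
            best, best_k = total, k
    return best_k
```

A Bayesian treatment replaces this point estimate with a posterior distribution over edge locations, which is what supports the reconstruction and representation questions studied later in the thesis.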
