191

The use of mixture models in capture-recapture

Yu, Chen January 2015 (has links)
Mixture models have been widely used to model heterogeneity. In this thesis, we focus on the use of mixture models in capture-recapture, for both closed populations and open populations. We provide both practical and theoretical investigations. A new model is proposed for closed populations and the practical difficulties of model fitting for mixture models are demonstrated for open populations. As the number of model parameters can increase with the number of mixture components, whether we can estimate all of the parameters using the method of maximum likelihood is an important issue. We explore this using formal methods and develop general rules to ensure that all parameters are estimable.
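As a hedged illustration of the finite-mixture heterogeneity this abstract describes, the sketch below fits a two-component binomial mixture to simulated capture counts with EM (all data and rates are invented, and the unobservable zero-capture class that real closed-population models must handle is ignored for brevity):

```python
import numpy as np
from scipy.stats import binom

def em_binom_mixture(counts, T, n_iter=200):
    """EM for a two-component binomial mixture of capture counts."""
    counts = np.asarray(counts)
    w, p1, p2 = 0.5, 0.2, 0.7                     # illustrative starting values
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each animal
        l1 = w * binom.pmf(counts, T, p1)
        l2 = (1 - w) * binom.pmf(counts, T, p2)
        r = l1 / (l1 + l2)
        # M-step: update the weight and the two capture probabilities
        w = r.mean()
        p1 = (r * counts).sum() / (r.sum() * T)
        p2 = ((1 - r) * counts).sum() / ((1 - r).sum() * T)
    return w, p1, p2

rng = np.random.default_rng(0)
shy = rng.random(500) < 0.4                       # 40% "trap-shy" animals
counts = np.where(shy, rng.binomial(8, 0.15, 500), rng.binomial(8, 0.65, 500))
w, p1, p2 = em_binom_mixture(counts, T=8)
```

The estimability issue the thesis studies shows up here directly: each extra component adds two parameters (a weight and a capture probability), and whether the likelihood can separate them is not automatic.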
192

Parameter redundancy with applications in statistical ecology

Hubbard, Ben Arthur January 2014 (has links)
This thesis is concerned with parameter redundancy in statistical ecology models. If it is not possible to estimate all the parameters, a model is termed parameter redundant. Parameter redundancy commonly occurs when parameters are confounded in the model, so that the model could be reparameterised in terms of a smaller number of parameters. In principle, it is possible to use symbolic algebra to determine whether or not all the parameters of a certain ecological model can be estimated using classical methods of statistical inference. We examine a variety of different ecological models: we begin by exploring models based on marking a number of animals and observing the same animals at future time points. These observations can either be when the animal is marked and then recovered dead, in mark-recovery modelling, or when the animal is marked and then recaptured alive, in capture-recapture modelling. We also explore capture-recapture-recovery models, where both dead recoveries and live recaptures can be observed in the same study. We go on to explore occupancy models, which are used to obtain estimates of the probability of presence, or absence, of living species by the use of repeated detection surveys; these models have the advantage that individuals are not required to be marked. A variety of different occupancy models are examined, including the addition of season-dependent, group-dependent and species-dependent parameters, along with other models. We investigate parameter redundancy by deriving general results for a variety of different models, where the models' parameter dependencies can be relaxed to suit different studies. We also analyse how the results change for specific data sets and how sparse data influence whether or not a model is parameter redundant, using procedures written in Maple. This theory on parameter redundancy is vital for the correct use of these ecological models so that valid statistical inference can be made.
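The symbolic-algebra check the abstract refers to can be sketched as follows (an illustrative SymPy version of the derivative-matrix rank condition; the thesis works in Maple, and the toy models below are invented): a model is parameter redundant when the Jacobian of its exhaustive summary has symbolic rank below the number of parameters.

```python
import sympy as sp

def is_parameter_redundant(kappa, params):
    """Redundant iff the Jacobian of the exhaustive summary has rank < #params."""
    D = sp.Matrix([[sp.diff(k, p) for p in params] for k in kappa])
    return D.rank() < len(params)

a, b = sp.symbols('a b', positive=True)
# a and b enter only through their product: confounded, hence redundant
redundant = is_parameter_redundant([a*b, (a*b)**2], [a, b])
# both a*b and a*b**2 are observed: parameters separable, not redundant
full_rank = not is_parameter_redundant([a*b, a*b**2], [a, b])
```

A rank deficiency of one, as in the first toy model, means the model can be reparameterised with one fewer parameter, exactly the confounding described above.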
193

Statistical methods for detecting genetic risk factors of a disease with applications to genome-wide association studies

Ali, Fadhaa January 2015 (has links)
This thesis aims to develop various statistical methods for analysing the data derived from genome-wide association studies (GWAS). A GWAS typically involves genotyping individual human genetic variation, using high-throughput genome-wide single nucleotide polymorphism (SNP) arrays, in thousands of individuals, and testing for association between those variants and a given disease under the common disease/common variant assumption. Although GWAS have identified many potential genetic factors in the genome that affect the risks of complex diseases, much of the genetic heritability remains unexplained. The power to detect new genetic risk variants can be improved by considering multiple genetic variants simultaneously with novel statistical methods. Improving the analysis of GWAS data has received much attention from statisticians and other scientific researchers over the past decade. Several challenges arise in analysing GWAS data. First, determining the risk SNPs can be difficult due to non-random correlation between SNPs, which can inflate type I and II errors in statistical inference. When a group of SNPs is considered together in the context of haplotypes/genotypes, the distribution of the haplotypes/genotypes is sparse, which makes it difficult to detect risk haplotypes/genotypes in terms of disease penetrance. In this work, we proposed four new methods to identify risk haplotypes/genotypes based on their frequency differences between cases and controls. To evaluate the performance of our methods, we simulated datasets under a wide range of scenarios according to both retrospective and prospective designs. In the first method, we first reconstruct haplotypes from unphased genotypes, then cluster and threshold the inferred haplotypes into risk and non-risk groups with a two-component binomial-mixture model.
In this method, the parameters were estimated using a modified Expectation-Maximisation algorithm, in which the maximisation step was replaced by posterior sampling of the component parameters. We also elucidated the relationships between risk and non-risk haplotypes under different modes of inheritance and genotypic relative risk. In the second method, we fitted a three-component mixture model to genotype data directly, followed by odds-ratio thresholding. In the third method, we combined the existing haplotype reconstruction software PHASE with a permutation method to infer risk haplotypes. In the fourth method, we proposed a new way to score the genotypes by clustering and combined it with a logistic regression approach to infer risk haplotypes. The simulation studies showed that the first three methods outperformed the multiple testing method of Zhu (2010) in terms of average specificity and sensitivity (AVSS) in all scenarios considered. The logistic regression method also outperformed the standard logistic regression method. We applied our methods to two GWAS datasets on coronary artery disease (CAD) and hypertension (HT), detecting several new risk haplotypes and recovering a number of disease-associated genetic variants already reported in the literature.
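As a minimal sketch of the odds-ratio thresholding step mentioned in the second method (the counts, continuity correction and threshold are all illustrative assumptions, not the thesis's values):

```python
import numpy as np

def risk_haplotypes(case_counts, control_counts, threshold=1.5):
    """Flag haplotypes whose case/control odds ratio exceeds a threshold."""
    case = np.asarray(case_counts, float)
    ctrl = np.asarray(control_counts, float)
    # Odds of carrying each haplotype, with a 0.5 continuity correction
    case_odds = (case + 0.5) / (case.sum() - case + 0.5)
    ctrl_odds = (ctrl + 0.5) / (ctrl.sum() - ctrl + 0.5)
    return (case_odds / ctrl_odds) > threshold

# Invented counts for three haplotypes in cases vs controls
flags = risk_haplotypes([60, 25, 15], [30, 40, 30])
```

Only the first haplotype is enriched in cases here, so only it is flagged; the sparsity problem described in the abstract arises when many haplotypes have counts too small for such ratios to be stable.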
194

Periodic behaviours emergent in discrete systems with random dynamics

Pickton, John-Nathan Edward January 2017 (has links)
Periodic behaviours in continuous media can be described with great power and economy using conceptual machinery such as the notion of a field. However, periodic effects can also be 'observed' in collections of discrete objects, be they individuals sending emails, fireflies signalling to attract mates, synapses firing in the brain or photons emerging from a cavity. The origin of periodic behaviours becomes more difficult to identify and interpret in these instances, particularly for systems whose individual components are fundamentally stochastic and memoryless. This thesis describes how periodic behaviour can emerge from intrinsic fluctuations in a fully discrete system that is completely isolated from any external coherent forcing. It identifies the essential elements required to produce naturally emerging periodic behaviours in a collection of interacting 'particles' which are constrained to a finite set of 'states', represented by the nodes of a network. The network can be identified with a type of spatial structure throughout which particles can move by spontaneously jumping between nodes. The particles interact by affecting the rate at which other particles jump. In such systems it is the collective ensemble of particles, rather than the individual particles themselves, that exhibits periodic behaviours. The existence or non-existence of such collective periodic behaviours is attributed to the structure of the network and the form of interaction between particles, which together describe the microscopic dynamics of the system. This thesis develops a methodology for deriving the macroscopic description of the ensemble of particles from the microscopic dynamics that govern the behaviour of individual particles, and uses this to find the key ingredients for collective periodic behaviour. In order for periodic behaviours to emerge and persist, it is necessary that the microscopic dynamics be irreversible and hence violate the principle of detailed balance.
However, such a condition is not sufficient: irreversibility must also manifest at the macroscopic level. Simple systems that admit collective periodic behaviours are presented, analysed and used to hypothesise on the essential elements needed for such behaviour. Important general results are then proven. It is necessary that the network have more than two nodes and directed edges such that particles jump between states at different rates in the two directions. Perhaps most significantly, it is demonstrated that collective periodic behaviours are possible without invoking 'action at a distance': there need not be a field providing a mechanism for the interactions between particles.
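The necessity of more than two nodes and direction-dependent rates can be illustrated with a small sketch (rates are invented, not the thesis's models): a three-state jump process whose clockwise and anticlockwise rates differ violates detailed balance, and its generator acquires complex eigenvalues, meaning the ensemble relaxes through oscillations, while equal rates give purely real eigenvalues.

```python
import numpy as np

def generator(k_cw, k_acw, n=3):
    """Generator matrix of a jump process on an n-cycle with biased rates."""
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, (i + 1) % n] = k_cw       # clockwise jump rate
        Q[i, (i - 1) % n] = k_acw      # anticlockwise jump rate
        Q[i, i] = -(k_cw + k_acw)      # rows of a generator sum to zero
    return Q

eig_irrev = np.linalg.eigvals(generator(2.0, 0.1))  # biased: no detailed balance
eig_rev = np.linalg.eigvals(generator(1.0, 1.0))    # symmetric: reversible
```

With two nodes the generator is always similar to a symmetric matrix, so its eigenvalues are real and no oscillation is possible, consistent with the more-than-two-nodes requirement above.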
195

Stochastic modelling of repeat-mediated phase variation in Campylobacter jejuni

Howitt, Ryan January 2018 (has links)
It is of interest to determine how populations of bacteria whose genes exhibit an ON/OFF switching property (phase variation) evolve over time from an initial population. By statistical analysis of two in vitro experimental Campylobacter jejuni datasets containing 28 genes assumed to be phase variable, we find evidence of small networks of genes which exhibit dependent evolutionary behaviour. This violates the assumption that the genes in these datasets do not interact with one another in the way they mutate during cell division, motivating the development of a model which attempts to explain the evolution of such genes with factors other than mutation alone. We show that discrete probability distributions at observation times can be estimated by utilising two stochastic models. One model provides an explanation in terms of mutation rates in genes, resembling a Markov chain under the assumption of a near-infinite population size. The second provides an explanation with both mutation and natural selection. However, the addition of selection parameters makes this model resemble a non-linear Markov process, which makes further analysis less straightforward. An algorithm is constructed to test whether the mutation-only model can sufficiently explain the evolution of single phase-variable genes, using distributions and mutation rates from data as examples. This algorithm shows that the mutation-only model is inadequate for the phase-variable genes believed to show dependent evolutionary behaviour. We use Approximate Bayesian Computation to estimate selection parameters for the mutation-with-selection model, whereby inference is derived from samples drawn from an approximation of the joint posterior distribution of the model parameters. We illustrate this method on an example of three genes which show evidence of dependent evolutionary behaviour in our two datasets.
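The mutation-only model for a single gene can be sketched as a two-state Markov chain on the ON/OFF fractions, as the abstract notes for a near-infinite population (the switching rates below are invented):

```python
import numpy as np

mu_on_off, mu_off_on = 0.01, 0.03        # invented per-generation switch rates
M = np.array([[1 - mu_on_off, mu_on_off],
              [mu_off_on, 1 - mu_off_on]])

p = np.array([1.0, 0.0])                 # whole population starts ON
for _ in range(2000):                    # evolve the distribution per generation
    p = p @ M
# Stationary ON fraction is mu_off_on / (mu_on_off + mu_off_on) = 0.75
```

Each gene evolving independently like this is exactly the assumption the thesis finds violated; adding selection makes the update depend on the current distribution itself, hence the non-linear Markov process mentioned above.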
196

Sparse regression methods with measurement-error for magnetoencephalography

Davies, Jonathan January 2017 (has links)
Magnetoencephalography (MEG) is a neuroimaging method for mapping brain activity based on magnetic field recordings. The inverse problem associated with MEG is severely ill-posed and is complicated by the presence of high collinearity in the forward (leadfield) matrix. This means that accurate source localisation can be challenging. The most commonly used methods for solving the MEG problem do not employ sparsity to help reduce the dimensions of the problem. In this thesis we review a number of the sparse regression methods that are widely used in statistics, as well as some more recent methods, and assess their performance in the context of MEG data. Due to the complexity of the forward model in MEG, the presence of measurement error in the leadfield matrix can create issues in the spatial resolution of the data. We therefore investigate the impact of measurement error on sparse regression methods, as well as how we can correct for it. We adapt the conditional score and simulation extrapolation (SIMEX) methods for use with sparse regression methods and build on an existing corrected lasso method to cover the elastic net penalty. These methods are demonstrated using a number of simulations for different types of measurement error and are also tested on real MEG data. The measurement-error methods perform well in simulations, including high-dimensional examples, where they are able to correct for attenuation bias in the true covariates. However, the extent of their correction is much more restricted in the more complex MEG data, where covariates are highly correlated and there is uncertainty over the distribution of the error.
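The SIMEX idea the abstract adapts can be sketched on a deliberately simple case, the attenuated slope of a one-covariate regression (the thesis applies it to lasso and elastic-net fits; the quadratic extrapolant, lambda grid and noise levels here are assumptions): refit after adding extra measurement noise at several multipliers lambda, then extrapolate the estimate back to lambda = -1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma_u = 5000, 0.8
x = rng.normal(size=n)                        # true covariate
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # true slope is 2
w = x + rng.normal(scale=sigma_u, size=n)     # error-prone measurement of x

def slope(a, b):
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

naive = slope(w, y)                           # attenuated towards zero

# SIMEX: add extra noise at multipliers lambda, then extrapolate to lambda = -1
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(w + rng.normal(scale=np.sqrt(l) * sigma_u, size=n), y)
                for _ in range(50)])
       for l in lams]
simex = np.polyval(np.polyfit(lams, est, 2), -1.0)
```

The quadratic extrapolant recovers much, though not all, of the attenuation, which mirrors the abstract's finding that the correction is only partial in harder settings.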
197

Computer simulation of the structure and properties of a particulate dispersion

Coverdale, Geoffrey Norman January 1994 (has links)
A dispersion of 0.25 μm elongated iron particles has been simulated by a 3-D force-bias Monte Carlo computation employing an ensemble of 1000 sphero-cylinders with aspect ratio 10:1. The particles exhibit a strong magnetostatic interaction, derived from a bulk magnetisation of 1700 emu/cc, which is modelled as a pole-pole interaction. They also have a surface coating of surfactant, which is modelled as a short-range surface-surface repulsive potential. The energetic behaviour of the particle ensemble is determined by the interactions derived from these two potentials. A primitive magnetisation reversal algorithm is employed to lessen the effect of any artificially high-energy interactions between particles; investigations are therefore limited to the cases of zero field and saturating field. An initial state of small particles is simulated by random placements within a cubic cell with 3-D periodic boundary conditions. A secondary computation scheme is employed to slowly expand the particle lengths, including a corresponding scaling of the magnetic moment. During the computation, small groups of particles may become mutually bound by the strong magnetostatic interactions and exhibit co-operative behaviour. The simulation therefore includes a cluster-allocating algorithm and a cluster-moving algorithm in order to take account of this behaviour. Analysis of the equilibrium configuration indicates that clusters, i.e. small groups of strongly bound particles, are an important characteristic of the dispersion microstructure. The interactions between clusters are predominantly negative in zero field and, although the clusters appear to be relatively weakly bound, they may give rise to extended networks of connected clusters. These considerations imply that the clustering of particles is a significant factor in determining the physical and magnetic properties of a dispersion.
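A toy sketch of the Metropolis-style acceptance rule underlying such Monte Carlo computations, reduced to a single pair separation with a pole-pole-like attraction and a short-range surfactant repulsion (all constants are invented; the thesis's force-bias scheme for 1000 sphero-cylinders is far richer):

```python
import numpy as np

rng = np.random.default_rng(2)

def pair_energy(r, m=1.0, r0=0.05):
    """Pole-pole-like attraction plus a steep short-range surfactant repulsion."""
    return -m * m / r + np.exp((r0 - r) / 0.002)

def metropolis_step(r, beta=1.0, step=0.02):
    r_new = abs(r + rng.normal(scale=step))     # propose a new separation
    dE = pair_energy(r_new) - pair_energy(r)
    if dE < 0 or rng.random() < np.exp(-beta * dE):
        return r_new                            # accept downhill / lucky uphill moves
    return r

r = 0.5                                         # start well separated
for _ in range(10000):
    r = metropolis_step(r)
# The pair should drift into the bound minimum near the surfactant shell
```

The deep pair minimum is the toy analogue of the mutual binding that motivates the cluster-allocating and cluster-moving algorithms described above.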
198

The bootstrap approach to autoregressive time series analysis

De Koster, Felicia Henriette 20 November 2006 (has links)
Please read the abstract in the section 00front of this document. Accompanied by 1 disc, available at the main counter with shelf number EM 519.5 DE KOSTER. Master copy at back of book. / Dissertation (MSc (Mathematical Statistics))--University of Pretoria, 2006. / Statistics / unrestricted
199

Least-squares estimates of genetic and enviromental [i.e. environmental] parameters in a beef cattle population

Hamann, Hans Kermit January 1961 (has links)
No description available.
200

Theoretical studies of stray field images of magnetic systems

Walmsley, Nicholas S. January 1995 (has links)
Research in magnetic recording is currently at a stage where an understanding of the magnetic microstructure of recording media is vital for the continual improvement of recording performance and the achievement of higher recording densities. Experimental techniques such as Lorentz imaging and magnetic force microscopy provide powerful tools to help achieve these objectives. This thesis provides a theoretical study of the relationship between magnetic systems and their microstructures by developing models to simulate the imaging of such systems. An important aspect of the recording process is the design and manufacture of head systems to improve the read/write process. Lorentz imaging of recording heads is used to characterise the stray field profile of such components; we have developed a theoretical model which predicts the major characteristics of the stray field profile of a thin film recording head. A major part of the thesis is devoted to the simulation of magnetic reversal and imaging of longitudinal thin films. This has been carried out by considering a system of interacting grains positioned on an irregular physical structure; previously, micromagnetic models have been based on hexagonal close-packed structures. This enables an investigation into the effect of physical structure on the macromagnetic properties of such systems. We have developed theoretical models to simulate the imaging of longitudinal thin films. Both the simulated Lorentz and magnetic force microscopy images highlight features caused by the underlying physical structure. This analysis contributes to an explanation of the relationship between magnetic microstructure, physical structure and images of magnetic systems; it provides a wider context for the discussion of experimental data.
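A minimal sketch of the kind of stray-field calculation such imaging models rest on: the out-of-plane field of a single in-plane point dipole sampled on a line above it (geometry and units are illustrative; physical constants are dropped):

```python
import numpy as np

def dipole_field(m, r_src, r_obs):
    """Point-dipole field, B ~ (3(m.rhat)rhat - m) / |r|^3, constants dropped."""
    r = r_obs - r_src
    d = np.linalg.norm(r, axis=-1, keepdims=True)
    rhat = r / d
    return (3 * np.sum(m * rhat, axis=-1, keepdims=True) * rhat - m) / d**3

m = np.array([1.0, 0.0, 0.0])                       # in-plane moment along x
xs = np.linspace(-2.0, 2.0, 81)                     # scan line at height z = 0.5
obs = np.stack([xs, np.zeros_like(xs), np.full_like(xs, 0.5)], axis=-1)
Bz = dipole_field(m, np.zeros(3), obs)[:, 2]        # out-of-plane component
# Bz is antisymmetric in x: the classic bipolar stray-field contrast
```

Summing such contributions over many grains, with positions taken from an irregular structure rather than a regular lattice, is the step that lets the simulated images reflect the physical microstructure discussed above.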
