91

Branching diffusions on the boundary and the interior of balls

Hesse, Marion January 2013 (has links)
The objects of study in this thesis are branching diffusions, which arise as stochastic models for evolving populations. Our focus lies on branching diffusions in which particles or, more generally, mass gets killed upon exiting a ball. In particular, we investigate the way in which populations can survive within a ball and how the mass evolves upon its exit from an increasing sequence of balls.
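As a rough illustration of this kind of object (and not of the thesis's own construction), the sketch below simulates a binary branching Brownian motion whose particles are killed on exiting a ball; the branching rate, radius and step size are arbitrary illustrative choices.

```python
# Illustrative sketch only: discrete-time simulation of a binary branching
# Brownian motion in R^3 whose particles are killed on exiting a ball of
# radius R. The parameters beta, R, dt and t_max are arbitrary choices for
# the example, not values taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)

def population_at(t_max=1.0, R=1.0, beta=2.0, dt=1e-3):
    """Number of particles still alive inside the ball at time t_max."""
    particles = [np.zeros(3)]                 # one ancestor at the origin
    t = 0.0
    while t < t_max and particles:
        next_gen = []
        for x in particles:
            x = x + np.sqrt(dt) * rng.standard_normal(3)   # Brownian step
            if np.linalg.norm(x) >= R:                     # killed at the boundary
                continue
            if rng.random() < beta * dt:                   # binary branching at rate beta
                next_gen.extend([x, x.copy()])
            else:
                next_gen.append(x)
        particles = next_gen
        t += dt
    return len(particles)

# Crude Monte Carlo estimate of the probability that the population is still
# alive inside the ball at time t_max; long-run survival requires the branching
# rate to beat the principal Dirichlet eigenvalue of the ball.
alive = sum(population_at() > 0 for _ in range(200))
print("estimated survival probability at t_max:", alive / 200)
```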
92

Transformation bias in mixed effects models of meta-analysis

Bakbergenuly, Ilyas January 2017 (has links)
When binary data exhibit greater variation than expected, statistical methods have to account for extra-binomial variation. Possible explanations for extra-binomial variation include intra-cluster dependence or variability of the binomial probabilities. Both of these lead to overdispersion of the binomial counts and to heterogeneity in their meta-analysis. Variance-stabilizing or normalizing transformations are often applied to binomial counts to enable the use of standard methods based on normality. In meta-analysis, this is routinely done for inference on the overall effect measure. However, these transformations may introduce biases in the presence of overdispersion. We study the biases arising from transformations of binary variables in random or mixed effects models. We demonstrate considerable biases arising from the standard log-odds and arcsine transformations, both for single studies and in meta-analysis. We also explore possibilities for bias correction. In meta-analysis, the heterogeneity of the log odds ratios across studies is usually incorporated through the standard (additive) random effects model (REM). An alternative, multiplicative random effects model is based on the concept of overdispersion. The multiplicative factor in this overdispersed random effects model can be interpreted as an intra-class correlation parameter. This model arises when one or both binomial distributions in the 2-by-2 tables are replaced by beta-binomial distributions. The Mantel-Haenszel and inverse-variance approaches are extended to this setting. Estimation of the random effect parameter is based on profiling the modified Breslow-Day test and on improving the approximation to the distribution of the Q statistic in the Mandel-Paule method. The biases and coverages of the new methods are compared with those of standard methods through simulation studies. Misspecification of the REM with respect to the mechanism of its generation is an important issue which is also discussed in this thesis.
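For orientation, the two transformations referred to above have the standard textbook forms below, stated together with their delta-method variances under binomial sampling; this is background, not a result from the thesis.

```latex
% Standard transformations of a binomial proportion \hat{p} = X/n and their
% approximate (delta-method) variances under binomial sampling.
\[
  \text{log-odds:}\qquad g(\hat{p}) = \log\frac{\hat{p}}{1-\hat{p}},
  \qquad \operatorname{Var}\{g(\hat{p})\} \approx \frac{1}{n\,p(1-p)},
\]
\[
  \text{arcsine:}\qquad h(\hat{p}) = \arcsin\sqrt{\hat{p}\,},
  \qquad \operatorname{Var}\{h(\hat{p})\} \approx \frac{1}{4n}.
\]
```

Overdispersion inflates both variances beyond these nominal values, which is the setting in which the biases studied in the thesis arise.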
93

Quasi-likelihood inference for modulated non-stationary time series

Guillaumin, Arthur P. January 2018 (has links)
In this thesis we propose a new class of non-stationary time series models and a quasi-likelihood inference method that is computationally efficient and consistent for that class of processes. A standard class of non-stationary processes is that of locally stationary processes, where a smooth time-varying spectral representation extends the spectral representation of stationary time series. This allows us to apply stationary estimation methods when analysing slowly-varying non-stationary processes. However, stationary inference methods may lead to large biases for more rapidly-varying non-stationary processes. We present a class of such processes based on the framework of modulated processes. A modulated process is formed by pointwise multiplying a stationary process, called the latent process, by a sequence, called the modulation sequence. Our interest lies in estimating a parametric model for the latent stationary process from observations of the modulated process together with the modulation sequence. Very often, exact likelihood is not computationally viable when analysing large time series datasets. The Whittle likelihood is a standard quasi-likelihood for stationary time series. Our inference method adapts this function by specifying the expected periodogram of the modulated process for a given parameter vector of the latent time series model, and then fitting this quantity to the sample periodogram. We prove that this approach preserves the computational efficiency and convergence rate of the Whittle likelihood under increasing sample size. Finally, our real-data application is concerned with the study of ocean surface currents. We analyse bivariate non-stationary velocities obtained from instruments following the ocean surface currents, and infer key physical quantities from this dataset. Our simulations show the benefit of our modelling and estimation method.
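As background for the quasi-likelihood used here, the sketch below implements the standard stationary Whittle likelihood for an AR(1) model; the thesis's contribution replaces the parametric spectral density by the expected periodogram of the modulated process, which is not reproduced in this sketch, and the parameter names are illustrative.

```python
# Minimal sketch of the stationary Whittle quasi-likelihood for an AR(1)
# process, fitted by matching the parametric spectral density to the sample
# periodogram over the Fourier frequencies.
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_negloglik(phi, x, sigma2=1.0):
    n = len(x)
    I = np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * n)       # periodogram
    omega = 2 * np.pi * np.arange(n) / n
    f = sigma2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * omega)) ** 2)  # AR(1) spectrum
    k = np.arange(1, n // 2)                                # positive Fourier frequencies
    return np.sum(np.log(f[k]) + I[k] / f[k])

# Simulate an AR(1) series and recover phi by minimising the Whittle objective.
rng = np.random.default_rng(1)
n, phi_true = 2048, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

fit = minimize_scalar(whittle_negloglik, bounds=(-0.99, 0.99),
                      args=(x,), method="bounded")
print("Whittle estimate of phi:", round(fit.x, 3))
```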
94

Wavelet methods and inverse problems

Aljohani, Hassan Musallam S. January 2017 (has links)
Archaeological investigations are designed to acquire information without damaging the archaeological site. Magnetometry is one of the important techniques for producing a surface grid of readings, which can be used to infer underground features. The inversion of these data, to give a fitted model, is an inverse problem. This type of problem can be ill-posed or ill-conditioned, making the estimation of model parameters unstable or even impossible. More precisely, the relationship between the archaeological data and the parameters is expressed through a likelihood, but the standard regression estimate obtained through the likelihood cannot be used, as no maximum likelihood estimate exists. Instead, various constraints can be added through a prior distribution, with an estimate produced using the posterior distribution. Current approaches incorporate prior information describing smoothness, which is not always appropriate. The biggest challenge is that reconstructing an archaeological site as a single layer requires various physical features, such as depth and extent, to be assumed; when a smoothing prior is applied in the analysis of stratigraphy data, however, these features are not easily estimated. Wavelet analysis has proved to be highly efficient at eliciting information from noisy data, and complicated signals can be explained by interpreting only a small number of wavelet coefficients. A modelling approach that describes the underlying function in terms of a multi-level wavelet representation may therefore be an improvement on standard techniques. Further, a new method is proposed that uses an elastic-net based distribution as the prior. Two methods are used to solve the problem: one based on one-stage estimation and the other on two-stage estimation. The one-stage approach considers two alternatives: a single prior for all wavelet resolution levels, and a level-dependent prior with separate priors at each resolution level. In a simulation study and a real data analysis, all of these techniques are compared to several existing methods. It is shown that the methodology using a single prior provides good reconstruction, comparable even to several established wavelet methods that use mixture priors.
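As a small illustration of the multi-level wavelet representation that such priors act on, the sketch below applies standard universal soft thresholding with PyWavelets to a noisy test signal; the elastic-net prior developed in the thesis is not reproduced here, and the signal and wavelet choices are arbitrary.

```python
# Illustrative sketch of multi-level wavelet shrinkage on noisy data using
# PyWavelets; a Bayesian prior on the coefficients would replace the simple
# thresholding rule used below.
import numpy as np
import pywt

rng = np.random.default_rng(2)
n = 256
t = np.linspace(0, 1, n)
signal = np.piecewise(t, [t < 0.4, t >= 0.4], [0.0, 1.0]) + 0.5 * np.sin(6 * np.pi * t)
noisy = signal + 0.2 * rng.standard_normal(n)

# Multi-level discrete wavelet transform: a small number of coefficients
# carries most of the structure of the signal.
coeffs = pywt.wavedec(noisy, "db4", level=4)

# Universal soft threshold applied level by level.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
thresh = sigma * np.sqrt(2 * np.log(n))
shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(shrunk, "db4")

print("RMSE before:", np.sqrt(np.mean((noisy - signal) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((denoised[:n] - signal) ** 2)))
```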
95

Dynamic modelling for image analysis

Alfaer, Nada Mansour January 2018 (has links)
Image segmentation is an important task in many image analysis applications, where it is an essential first stage before further analysis is possible. The level-set method is an implicit approach to image segmentation problems. Its main advantages are that it can handle an unknown number of regions and can deal with complicated topological changes in a simple and natural way. The research presented in this thesis is motivated by the need to develop statistical methodologies for modelling image data through level sets. The fundamental idea is to combine the level-set method with statistical modelling in a Bayesian framework to produce an attractive approach for tackling a wider range of segmentation problems in image analysis. A complete framework for a Bayesian level-set model is given to allow a wider interpretation of model components. The proposed model is based on a Gaussian likelihood and exponential prior distributions on object area and boundary length, and an investigation of uncertainty and a sensitivity analysis are carried out. The model is then generalized using a more robust noise model and more flexible prior distributions. A new Bayesian modelling approach to object identification is introduced. The proposed model is based on the level-set method, in which the object outlines are represented implicitly as the zero level set contour of a higher-dimensional function. A Markov chain Monte Carlo (MCMC) algorithm is used to estimate the model parameters by generating approximate samples from the posterior distribution. The proposed method is applied to simulated and real datasets. Finally, a new temporal model is proposed in a Bayesian framework for level-set based image sequence segmentation. MCMC methods are used to explore the model and to obtain information about solution behaviour, and the proposed method is applied to simulated image sequences.
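A minimal sketch of the implicit representation referred to above, assuming a toy circular object: the outline is the zero level set of a function phi over the image grid, and the area and boundary-length quantities that the priors penalise are read off from the sign of phi.

```python
# Illustrative sketch of the implicit level-set representation: an object
# outline is the zero level set of a function phi defined over the image grid,
# with phi < 0 inside the object and phi > 0 outside. The circle is a toy
# example, not a model from the thesis.
import numpy as np

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]

# Signed distance to a circle of radius 15 centred at (32, 32).
phi = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 15.0

segmentation = phi < 0                       # object region = {phi < 0}
boundary = (segmentation ^ np.roll(segmentation, 1, axis=0)) | \
           (segmentation ^ np.roll(segmentation, 1, axis=1))

# The kinds of quantities penalised by the exponential priors mentioned above.
print("object area (pixels):", int(segmentation.sum()))
print("boundary pixels (crude length proxy):", int(boundary.sum()))
```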
96

Statistical shape analysis of helices

Alfahad, Mai F. A. M. January 2018 (has links)
Consider a sequence of equally spaced points along a helix in three-dimensional space, observed subject to statistical noise. In this thesis, a maximum likelihood (ML) method is developed to estimate the parameters of the helix. Statistical properties of the estimator are studied and comparisons are made to other estimators found in the literature. Methods are established here for the fitting of both unkinked and kinked helices. For an unkinked helix, an initial estimate of the helix axis is obtained by a modified eigen-decomposition or by a method from the literature. The Mardia-Holmes model can be used to estimate the initial helix axis, but it is often not very successful since it requires initial parameters; a better method for initial axis estimation is the Rotfit method. Once the axis is known, we minimize the residual sum of squares (RSS) to estimate the remaining helix parameters and then optimize the axis estimate. For a kinked helix, we specify a test statistic by simulating the null distribution of unkinked helices. If the kink position is known, then the test statistic approximately follows an F-distribution. If the null hypothesis is rejected, i.e. the helix has a change point, the helix is cut into two sub-helices at the change point, taken to be where the test statistic is maximised. Test statistics are then studied to assess how these two sub-helices differ from each other, and a parametric bootstrap procedure is used to study these statistics. The shapes of protein alpha-helices are used to illustrate the procedure.
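To make the axis-known fitting step concrete, here is a generic least-squares sketch (not the thesis's full ML procedure) that estimates the radius, angular step, pitch and phase of a noisy helix whose axis is assumed to be the z-axis; all parameter values are invented for the example.

```python
# Illustrative sketch: given equally spaced, noisy points along a helix whose
# axis is taken to be the z-axis, estimate the radius, angular step, pitch and
# phase by minimising the residual sum of squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
n = 50
t = np.arange(n)
r_true, omega_true, pitch_true, phase_true = 2.0, 0.4, 0.3, 0.5

def helix(params, t):
    r, omega, pitch, phase = params
    return np.column_stack([r * np.cos(omega * t + phase),
                            r * np.sin(omega * t + phase),
                            pitch * t])

points = helix([r_true, omega_true, pitch_true, phase_true], t) \
         + 0.05 * rng.standard_normal((n, 3))

def residuals(params):
    # Stacked x/y/z residuals whose sum of squares (RSS) is minimised.
    return (helix(params, t) - points).ravel()

fit = least_squares(residuals, x0=[1.0, 0.5, 0.2, 0.0])
print("estimated (radius, angular step, pitch, phase):", np.round(fit.x, 3))
```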
97

Equilibrium strategies for mean-variance problem

He, Zeyu January 2018 (has links)
This research is devoted to the study of equilibrium strategies for the mean-variance problem in a game-theoretic framework. The thesis explores investment behaviour and the interlinks between different types of equilibrium strategies. In order to find the open-loop strategy in discrete time, we build on the ideas of Hu et al. (2012) and on the concept of open-loop strategies from engineering, where two types of strategies are distinguished: open-loop and closed-loop control strategies. We give interpretations of both strategies in a Nash equilibrium context from a financial perspective. This thesis extends the literature by establishing the existence and uniqueness of the solution of the open-loop equilibrium strategy in discrete time. Our findings point to the causes of the different equilibrium strategies in the existing literature. We show a common issue of equilibrium strategies, namely that the amount of money invested in risky assets decays to 0 as time moves away from the maturity. Furthermore, the closed-loop strategy tends to a negative limit depending on the assets' Sharpe ratio. We call this phenomenon the mean-variance puzzle. The reason is that the variance term penalises wealth changes quadratically, while the expectation increases only linearly. Drawing on concepts from behavioural economics, we resolve this puzzle by using present-biased preferences, whose advantage is that equilibrium investors have the flexibility to adjust their risk attitude based on their anticipated future. We simulate the three types of control strategies existing in the literature and compare their investment performance. Furthermore, we evaluate the performance with respect to different rebalancing periods.
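For reference, the time-inconsistent objective behind the equilibrium formulation is the standard conditional mean-variance criterion over terminal wealth, written here in its textbook form.

```latex
% Conditional mean--variance criterion over terminal wealth X_T^{\pi} under a
% strategy \pi, with risk-aversion parameter \gamma > 0.
\[
  J_t(\pi) \;=\; \mathbb{E}_t\!\left[X_T^{\pi}\right]
           \;-\; \frac{\gamma}{2}\,\operatorname{Var}_t\!\left(X_T^{\pi}\right).
\]
% The variance term does not obey the tower property, so dynamic programming
% fails and optimality is replaced by a Nash equilibrium between the
% investor's current and future selves.
```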
98

Hybrid methods for protein loop modelling

Marks, Claire January 2016 (has links)
Loops are often vital for protein function, and therefore accurate prediction of their structures is highly desirable. A particularly important example is the H3 loop of antibodies. Antibodies are proteins of the immune system that are able to bind to a huge variety of different substances in order to initiate their removal from the body. The binding characteristics of an antibody are mainly determined by the six loops, or complementarity determining regions, that make up its binding site. The most important of these is the H3 loop; however, since it is extremely variable in structure, the accuracy of H3 structure prediction is often poor. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations; and ab initio, where conformations are generated computationally. In this thesis, we test the ability of such methods to predict H3 structures using one of each: the previously published, knowledge-based algorithm FREAD, and our own new ab initio method MECHANO. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. We describe the development of a novel algorithm, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. Finally, we look at protein flexibility by identifying loops for which there are multiple structures deposited in the PDB. We examine the outcome of performing structure prediction on loops with varying amounts of flexibility, and investigate differences between those loops that show a high degree of structural variability and those that do not.
99

How much data are required to develop and validate a risk prediction model?

Taiyari, Khadijeh January 2017 (has links)
It has been suggested that when developing risk prediction models using regression, the number of events in the dataset should be at least 10 times the number of parameters being estimated by the model. This rule was originally proposed to ensure the unbiased estimation of regression coefficients with confidence intervals that have correct coverage. However, only limited research has been conducted to assess the adequacy of this rule with regards to predictive performance. Furthermore, there is only limited guidance regarding the number of events required to develop risk prediction models using hierarchical data, for example when one has observations from several hospitals. One of the aims of this dissertation is to determine the number of events required to obtain reliable predictions from standard or hierarchical models for binary outcomes. This will be achieved by conducting several simulation studies based on real clinical data. It has also been suggested that when validating risk prediction models, there should be at least 100 events in the validation dataset. However, few studies have examined the adequacy of this recommendation. Furthermore, there are no guidelines regarding the sample size requirements when validating a risk prediction model based on hierarchical data. The second main aim of this dissertation is to investigate the sample size requirements for model validation using both simulation and analytical methods. In particular we will derive the relationship between sample size and the precision of some common measures of model performance such as the C statistic, D statistic, and calibration slope. The results from this dissertation will enable researchers to better assess their sample size requirements when developing and validating prediction models using both standard (independent) and clustered data.
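As a back-of-the-envelope illustration of the rule of thumb being assessed above (the numbers are invented for the example):

```python
# Illustrative arithmetic for the "10 events per parameter" rule of thumb;
# the parameter count, rule and event rate are invented for this example.
n_parameters = 12          # candidate predictors (degrees of freedom) in the risk model
events_per_parameter = 10  # conventional rule of thumb under scrutiny
event_rate = 0.15          # anticipated outcome prevalence

required_events = n_parameters * events_per_parameter
required_sample_size = required_events / event_rate
print(f"events needed: {required_events}, total patients needed: {required_sample_size:.0f}")
```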
100

Reduced-bias estimation and inference for mixed-effects models

Kyriakou, S. January 2018 (has links)
A popular method for reducing the mean and median bias of the maximum likelihood estimator in regular parametric models is through the additive adjustment of the score equation (Firth, 1993; Kenne Pagui et al., 2017). The current work focuses on mean and median bias-reducing adjusted score equations in models with latent variables. First, we give estimating equations based on a mean bias-reducing adjustment of the score function for mean bias reduction in linear mixed models. Second, we propose an extension of the adjusted score equation approach (Firth, 1993) to obtain bias-reduced estimates for models with computationally infeasible adjusted score equations and/or intractable likelihoods. The proposed bias-reduced estimator is obtained by solving an approximate adjusted score equation, which uses an approximation of the log-likelihood to obtain tractable derivatives, and a Monte Carlo approximation of the bias function to get feasible expressions. Under certain general conditions, we prove that the feasible and tractable bias-reduced estimator is consistent and asymptotically normally distributed. An “iterated bootstrap with likelihood adjustment” algorithm is presented that computes the solution of the new bias-reducing adjusted score equation. The effectiveness of the proposed method is demonstrated via simulation studies and real data examples in the case of generalised linear models and generalised linear mixed models. Finally, we derive the median bias-reducing adjusted scores for linear mixed models and for random-effects meta-analysis and meta-regression models.
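For orientation, Firth's (1993) mean bias-reducing adjustment, which the work above extends, can be written in its general form as follows.

```latex
% Mean bias-reducing adjusted score equation of Firth (1993): U(\theta) is the
% score, i(\theta) the expected information, and b(\theta) the O(n^{-1}) term
% of the bias of the maximum likelihood estimator.
\[
  U^{*}(\theta) \;=\; U(\theta) \;-\; i(\theta)\, b(\theta) \;=\; 0 ,
\]
% so the estimator solving U^{*}(\theta) = 0 has mean bias of order O(n^{-2})
% rather than the O(n^{-1}) bias of the maximum likelihood estimator.
```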
