101

Numerical approximation of Stratonovich SDEs and SPDEs

Tzitzili, Efthalia January 2015 (has links)
We consider the numerical approximation of stochastic differential and partial differential equations (S(P)DEs) by means of time-differencing schemes based on exponential integrator techniques. We focus on two numerical schemes, both appropriate for the simulation of Stratonovich-interpreted S(P)DEs. The first is a basic strong order 1/2 scheme, called Stratonovich Exponential Integrators (SEI). Motivated by SEI, and aiming to benefit both from the higher order of the standard Milstein scheme and from the efficiency of exponential schemes when dealing with stiff problems, we develop a new Milstein-type scheme called Milstein Stratonovich Exponential Integrators (MSEI). We prove strong convergence of the SEI scheme for high-dimensional semilinear Stratonovich SDEs with multiplicative noise, and we use both SEI and MSEI to approximate solutions of the stochastic Landau-Lifschitz-Gilbert (LLG) equation in three dimensions. We examine the L2(Ω) approximation error of the SEI and MSEI schemes numerically and prove analytically that MSEI achieves a higher order of convergence than SEI. We generalise SEI so that it is suited not only to Stratonovich SDEs but also to Itô SDEs and to SDEs interpreted by the 'in-between' calculi. Moreover, we provide a general expression for the predictor contained in SEI and study the theoretical convergence of the generalised version of the scheme. We show that neither the order of the scheme used to obtain the predictor nor the stochastic integral interpretation affects the overall order of the scheme. We extend the convergence results for SEI to a space-time setting by considering a second-order semilinear Stratonovich SPDE with multiplicative noise. We discretise in space with the finite element method and use SEI for the time discretisation. We consider the case of trace-class noise and examine analytically the strong order of convergence for SEI. We implement SEI as a time discretisation scheme and present results from simulating SPDEs with stochastic travelling wave solutions. We then use an alternative approach, the 'freezing' method, to approximate wave solutions and estimate wave speeds for the stochastic Nagumo and FitzHugh-Nagumo models. The wave position, and hence the speed, is found by minimising the L2 distance between a reference function and the travelling wave. While the results obtained from the two approaches agree, we observe that the freezing method captures the behaviour of the wave solution in a smaller computational domain, making it more efficient for long-time simulations.
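The abstract does not reproduce the SEI scheme itself, so the sketch below is only a generic exponential predictor-corrector step for a semilinear Stratonovich SDE with diagonal noise, meant to illustrate the class of schemes being discussed; the function name, the noise assumption, and the exact placement of the matrix exponential are illustrative choices, not the thesis's formulation.

```python
import numpy as np
from scipy.linalg import expm

def exp_heun_stratonovich(A, f, g, x0, T, N, rng):
    """Generic exponential predictor-corrector step for the semilinear
    Stratonovich SDE  dX = (A X + f(X)) dt + g(X) o dW  with diagonal noise.
    Illustrative only; not the thesis's exact SEI/MSEI scheme."""
    dt = T / N
    E = expm(A * dt)                      # exact flow of the stiff linear part
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x_pred = E @ (x + f(x) * dt + g(x) * dW)                 # predictor
        x = E @ (x + f(x) * dt + 0.5 * (g(x) + g(x_pred)) * dW)  # Heun-type corrector
    return x
```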
102

Deep Gaussian processes and variational propagation of uncertainty

Damianou, Andreas January 2015 (has links)
Uncertainty propagation across components of complex probabilistic models is vital for improving regularisation. Unfortunately, for many interesting models based on non-linear Gaussian processes (GPs), straightforward propagation of uncertainty is computationally and mathematically intractable. This thesis is concerned with solving this problem through developing novel variational inference approaches. From a modelling perspective, a key contribution of the thesis is the development of deep Gaussian processes (deep GPs). Deep GPs generalise several interesting GP-based models and, hence, motivate the development of uncertainty propagation techniques. In a deep GP, each layer is modelled as the output of a multivariate GP, whose inputs are governed by another GP. The resulting model is no longer a GP but, instead, can learn much more complex interactions between data. In contrast to other deep models, all the uncertainty in parameters and latent variables is marginalised out, and both supervised and unsupervised learning are handled. Two important special cases of a deep GP can equivalently be seen as its building components and, historically, were developed as such. Firstly, the variational GP-LVM is concerned with propagating uncertainty in Gaussian process latent variable models. Any observed inputs (e.g. temporal) can also be used to correlate the latent space posteriors. Secondly, this thesis develops manifold relevance determination (MRD), which considers a common latent space for multiple views. An adapted variational framework allows for strong model regularisation, resulting in rich latent space representations being learned. The developed models are also equipped with algorithms that maximise the information communicated between their different stages using uncertainty propagation, to achieve improved learning when partially observed values are present. The developed methods are demonstrated in experiments with simulated and real data. The results show that the developed variational methodologies improve practical applicability by enabling automatic capacity control in the models, even when data are scarce.
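As a minimal illustration of the generative construction described above (each layer a GP whose inputs are the outputs of the previous one), the following sketch draws a sample from a two-layer deep GP prior; the kernel choice and function names are assumptions for illustration, and none of the thesis's variational inference machinery is reproduced.

```python
import numpy as np

def rbf(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of one-dimensional inputs."""
    d2 = (X[:, None] - Z[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sample_deep_gp_prior(X, n_layers=2, jitter=1e-8, rng=None):
    """Draw one noise-free sample from a deep GP prior by feeding each GP
    layer's output in as the next layer's input (generative view only)."""
    rng = np.random.default_rng() if rng is None else rng
    h = X
    for _ in range(n_layers):
        K = rbf(h, h) + jitter * np.eye(len(h))
        h = rng.multivariate_normal(np.zeros(len(h)), K)
    return h

# Example: one draw from a two-layer deep GP prior on a grid of inputs
y = sample_deep_gp_prior(np.linspace(-3, 3, 50), n_layers=2)
```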
103

Learning with structured covariance matrices in linear Gaussian models

Kalaitzis, Alfredo January 2013 (has links)
We study structured covariance matrices in a Gaussian setting for a variety of data analysis scenarios. Despite its simplistic nature, we argue for the broad applicability of the Gaussian family through its second order statistics. We focus on three types of common structures in the machine learning literature: covariance functions, low-rank and sparse inverse covariances. Our contributions boil down to combining these structures and designing algorithms for maximum-likelihood or MAP fitting: for instance, we use covariance functions in Gaussian processes to encode the temporal structure in a gene-expression time-series, with any residual structure generating iid noise. More generally, for a low-rank residual structure (correlated residuals) we introduce the residual component analysis framework: based on a generalised eigenvalue problem, it decomposes the residual low-rank term given a partial explanation of the covariance. In this example the explained covariance would be an RBF kernel, but it can be any positive-definite matrix. Another example is the low-rank plus sparse-inverse composition for structure learning of GMRFs in the presence of confounding latent variables. We also study RCA as a novel link between classical low-rank methods and modern probabilistic counterparts: the geometry of oblique projections shows how PCA, CCA and linear discriminant analysis reduce to RCA. Also inter-battery factor analysis, a precursor of multi-view learning, is reduced to an iterative application of RCA. Finally, we touch on structured precisions of matrix-normal models based on the Cartesian factorisation of graphs, with appealing properties for regression problems and interpretability. In all cases, experimental results and simulations demonstrate the performance of the different methods proposed.
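One way to read the residual component analysis step described above is as a generalised eigenvalue problem between the sample covariance and the already-explained covariance; the sketch below follows that reading, with the reconstruction formula and names being assumptions rather than the thesis's exact estimator.

```python
import numpy as np
from scipy.linalg import eigh

def rca_residual(S, Sigma, rank):
    """Estimate a low-rank residual covariance term given a sample covariance S
    and an explained positive-definite covariance Sigma (e.g. an RBF kernel),
    by solving the generalised eigenvalue problem S v = lam * Sigma v.
    Illustrative reading of RCA, not necessarily the thesis's formulation."""
    lam, V = eigh(S, Sigma)               # generalised symmetric eigenproblem
    order = np.argsort(lam)[::-1]         # largest generalised eigenvalues first
    lam, V = lam[order], V[:, order]
    # keep the leading directions whose eigenvalues exceed 1 (the residual part)
    W = Sigma @ V[:, :rank] * np.sqrt(np.maximum(lam[:rank] - 1.0, 0.0))
    return W @ W.T
```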
104

The supermarket model with system-size dependent parameters

Fairthorne, Marianne January 2010 (has links)
No description available.
105

Analysing and forecasting transitions in complex systems

Piovani, Duccio January 2015 (has links)
We analyse in detail a new approach to monitoring and forecasting the onset of transitions in high-dimensional complex systems, applying it to the Tangled Nature model of evolutionary ecology and to high-dimensional replicator systems with a stochastic element, the Stochastic Replicator model. A high-dimensional stability matrix is derived for the mean-field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently.
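The indicator described above can be sketched as follows, taking the stability matrix to be the Jacobian of the mean-field dynamics at the current quasi-stable configuration; the thresholding and normalisation choices here are assumptions for illustration.

```python
import numpy as np

def instability_indicator(J, state):
    """Overlap of the (normalised) instantaneous configuration vector with the
    span of the unstable eigen-directions of the mean-field stability matrix J.
    A value close to 1 signals that the system is aligning with a growing mode."""
    eigvals, eigvecs = np.linalg.eig(J)
    unstable = eigvecs[:, eigvals.real > 0.0]   # directions with positive growth rate
    if unstable.shape[1] == 0:
        return 0.0
    v = state / np.linalg.norm(state)
    Q, _ = np.linalg.qr(unstable)               # orthonormal basis of the unstable span
    return float(np.linalg.norm(Q.conj().T @ v))
```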
106

A robust Bayesian land use model for crop rotations

Paton, Lewis William January 2016 (has links)
Often, in dynamical systems such as farmers' crop choices, the dynamics are driven by external non-stationary factors, such as rainfall and agricultural input and output prices. Such dynamics can be modelled by a non-stationary stochastic process in which the transition probabilities are functions of these external factors. We propose using a multinomial logit model for the transition probabilities and investigate the problem of estimating the parameters of this model from data. We adapt the work of Chen and Ibrahim to propose a conjugate prior distribution for the parameters of the multinomial logit model. Inspired by the imprecise Dirichlet model, we perform a robust Bayesian analysis by proposing a fairly broad class of prior distributions, in order to accommodate scarcity of data and lack of strong prior expert opinion. We discuss the computation of bounds for the posterior transition probabilities, using a variety of calculation methods. These sets of posterior transition probabilities mean that our land use model consists of a non-stationary imprecise stochastic process. We discuss the computation of future events in this process. Finally, we use our novel land use model to investigate real-world data. We investigate the impact of external variables on the posterior transition probabilities and investigate a scenario for future crop growth. We also use our model to solve a hypothetical yet realistic policy problem.
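As a sketch of the multinomial logit transition model (the notation, shapes and covariates below are illustrative assumptions, not the thesis's parameterisation), the transition probabilities out of each land use can be written as a softmax over linear functions of the external covariates.

```python
import numpy as np

def transition_probs(beta, z):
    """Multinomial-logit transition probabilities: beta[i, j, :] holds the
    coefficients for moving from land use i to land use j, and z is the vector
    of external covariates (rainfall, prices, ...) including an intercept."""
    scores = beta @ z                               # shape (n_states, n_states)
    scores -= scores.max(axis=1, keepdims=True)     # guard against overflow
    exps = np.exp(scores)
    return exps / exps.sum(axis=1, keepdims=True)   # each row sums to one

# Example with 3 land uses and 2 covariates plus an intercept term
rng = np.random.default_rng(0)
beta = rng.normal(size=(3, 3, 3))
P = transition_probs(beta, np.array([1.0, 0.4, -0.2]))
```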
107

Gaussian process regression models for the analysis of survival data with competing risks, interval censoring and high dimensionality

Barrett, James Edward January 2015 (has links)
We develop novel statistical methods for analysing biomedical survival data based on Gaussian process (GP) regression. GP regression provides a powerful non-parametric probabilistic method of relating inputs to outputs. We apply this to survival data, which consist of time-to-event and covariate measurements. In the context of GP regression the covariates are regarded as 'inputs' and the event times are the 'outputs'. This allows for highly flexible inference of non-linear relationships between covariates and event times. Many existing methods for analysing survival data, such as the ubiquitous Cox proportional hazards model, focus primarily on the hazard rate, which is typically assumed to take some parametric or semi-parametric form. Our proposed model belongs to the class of accelerated failure time models and as such our focus is on directly characterising the relationship between the covariates and event times, without any explicit assumptions on the form of the hazard rates. This provides a more direct route to connecting the covariates to survival outcomes with minimal assumptions. An application of our model to experimental data illustrates its usefulness. We then apply multiple output GP regression, which can handle multiple potentially correlated outputs for each input, to competing risks survival data where multiple event types can occur. In this case the multiple outputs correspond to the time-to-event for each risk. By tuning one of the model parameters we can control the extent to which the multiple outputs are dependent, thus allowing the specification of correlated risks. However, the identifiability problem, which states that it is not possible to infer whether risks are truly independent or otherwise on the basis of observed data, still holds. In spite of this fundamental limitation, simulation studies suggest that in some cases assuming dependence can lead to more accurate predictions. The second part of this thesis is concerned with high dimensional survival data where there are a large number of covariates compared to relatively few individuals. This leads to the problem of overfitting, where spurious relationships are inferred from the data. One strategy to tackle this problem is dimensionality reduction. The Gaussian process latent variable model (GPLVM) is a powerful method of extracting a low dimensional representation of high dimensional data. We extend the GPLVM to incorporate survival outcomes by combining the model with a Weibull proportional hazards model (WPHM). By reducing the ratio of covariates to samples we hope to diminish the effects of overfitting. The combined GPLVM-WPHM model can also be used to combine several datasets by simultaneously expressing them in terms of the same low dimensional latent variables. We construct the Laplace approximation of the marginal likelihood and use this to determine the optimal number of latent variables, thereby allowing detection of intrinsic low dimensional structure. Results from both simulated and real data show a reduction in overfitting and an increase in predictive accuracy after dimensionality reduction.
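In the accelerated-failure-time view described above, covariates are the GP inputs and (log) event times the outputs. The minimal sketch below fits plain GP regression to uncensored log event times; it ignores censoring, competing risks and the GPLVM-WPHM extension, all of which the thesis treats properly, and the kernel choice and names are assumptions.

```python
import numpy as np

def gp_predict_log_times(X_train, log_t_train, X_test, lengthscale=1.0, noise=0.1):
    """Posterior mean of a GP regression from covariates to log event times
    (uncensored case only; purely illustrative of the AFT-style setup)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)
    K = k(X_train, X_train) + noise**2 * np.eye(len(X_train))
    alpha = np.linalg.solve(K, log_t_train)
    return k(X_test, X_train) @ alpha   # predicted log event times for new covariates
```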
108

Some aspects of diophantine approximation and ergodic theory of translation surfaces

Artigiani, Mauro January 2016 (has links)
This thesis deals with two different topics. In the first part we study Lagrange spectra of Veech translation surfaces, which are a generalisation of the classical Lagrange spectrum. We show that any such Lagrange spectrum contains a Hall ray. We start from the concrete example given by the surface obtained by gluing a regular octagon. We use the coding developed by Smillie and Ulcigrai for the surfaces obtained by gluing the regular 2n-gons to code geodesics in the Teichmüller disk of the octagon, and prove a formula which allows us to express large values in the Lagrange spectrum as sums of Cantor sets. In particular this yields an estimate on the beginning point of the Hall ray. Generalising the approach of the octagon, in joint work with Luca Marchese and Corinna Ulcigrai, we prove the existence of a Hall ray for the Lagrange spectrum of any Veech translation surface. In this case, we use the boundary expansion developed by Bowen and Series. In the second part, we construct exceptional examples of ergodic vertical flows in periodic configurations of Eaton lenses of fixed radius. We achieve this by studying a family of infinite translation surfaces that are Z²-covers of slit tori. We show that the Hausdorff dimension of lattices for which the vertical flow is ergodic is bigger than 3/2. Moreover, the lattices are explicitly constructed.
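For context, the classical object being generalised is recalled below from standard references rather than from the thesis itself.

```latex
% Classical Lagrange spectrum (standard definition, recalled for context):
\[
  L(\alpha) \;=\; \limsup_{q \to \infty} \frac{1}{q \,\lVert q\alpha \rVert},
  \qquad
  \mathcal{L} \;=\; \bigl\{\, L(\alpha) \;:\; \alpha \in \mathbb{R} \setminus \mathbb{Q} \,\bigr\},
\]
% where \lVert \cdot \rVert denotes the distance to the nearest integer.
% Hall's theorem states that \mathcal{L} contains a half-line [c, \infty),
% the "Hall ray"; the thesis establishes the analogous statement for the
% Lagrange spectra of Veech translation surfaces.
```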
109

Meta-stochastic simulation for systems and synthetic biology using classification

Sanassy, Daven January 2015 (has links)
To comprehend the immense complexity that drives biological systems, it is necessary to generate hypotheses of system behaviour. This is because one can observe the results of a biological process and have knowledge of the molecular/genetic components, but not directly witness biochemical interaction mechanisms. Hypotheses can be tested in silico, which is considerably cheaper and faster than “wet” lab trial-and-error experimentation. Bio-systems are traditionally modelled using ordinary differential equations (ODEs). ODEs are generally suitable for the approximation of a (test tube sized) in vitro system trajectory, but cannot account for inherent system noise or discrete event behaviour. Most in vivo biochemical interactions occur within small spatially compartmentalised units commonly known as cells, which are prone to stochastic noise due to relatively low intracellular molecular populations. Stochastic simulation algorithms (SSAs) provide an exact mechanistic account of the temporal evolution of a bio-system, and can account for noise and discrete cellular transcription and signalling behaviour. Whilst this reaction-by-reaction account of system trajectory elucidates biological mechanisms more comprehensively than ODE execution, it comes at increased computational expense. Scaling to the demands of modern biology requires ever larger and more detailed models to be executed. Scientists evaluating and engineering tissue-scale and bacterial-colony-sized bio-systems can be limited by the tractability of their computational hypothesis testing techniques. This thesis evaluates a hypothesised relationship between SSA computational performance and biochemical model characteristics. This relationship leads to the possibility of predicting the fastest SSA for an arbitrary model - a method that can provide computational headroom for more complex models to be executed. The research output of this thesis is realised as a software package for meta-stochastic simulation called ssapredict. Ssapredict uses statistical classification to predict SSA performance, and also provides high-performance stochastic simulation implementations to the wider community.
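For readers unfamiliar with SSAs, the textbook Gillespie direct method below is one of the algorithm variants a tool such as ssapredict would choose between; the sketch is generic and does not reproduce ssapredict's classifier or its optimised implementations.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensities, t_max, rng):
    """Gillespie direct-method SSA: `stoich` is a (n_reactions, n_species)
    state-change matrix and `propensities(x)` returns the reaction rates
    for the current molecular populations x."""
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while t < t_max:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:                      # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)     # time to the next reaction
        j = rng.choice(len(a), p=a / a0)   # which reaction fires
        x += stoich[j]
        history.append((t, x.copy()))
    return history
```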
110

The global error in weak approximations of stochastic differential equations

Ghazali, Saadia January 2007 (has links)
In this thesis, the convergence analysis of a class of weak approximations of solutions of stochastic differential equations is presented. This class includes recent approximations such as Kusuoka’s moment similar families method and the Lyons-Victoir cubature on Wiener space approach. It is shown that the rate of convergence depends intrinsically on the smoothness of the chosen test function. For smooth functions (the required degree of smoothness depends on the order of the approximation), an equidistant partition of the time interval on which the approximation is sought is optimal. For functions that are less smooth, for example Lipschitz functions, the rate of convergence decays and the optimal partition is no longer equidistant. An asymptotic rate of convergence is also established for the Lyons-Victoir method. The analysis rests upon Kusuoka-Stroock's results on the smoothness of the distribution of the solution of a stochastic differential equation. Finally the results are applied to the numerical solution of the filtering problem and the pricing of Asian options.
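To make "weak approximation error in a test function" concrete, the sketch below estimates E[phi(X_T)] with the plain Euler-Maruyama scheme and Monte Carlo averaging; the schemes analysed in the thesis (cubature on Wiener space, etc.) are far more refined, and the names here are illustrative.

```python
import numpy as np

def weak_euler_estimate(mu, sigma, phi, x0, T, n_steps, n_paths, rng):
    """Monte Carlo estimate of E[phi(X_T)] for dX = mu(X) dt + sigma(X) dW
    using the Euler-Maruyama scheme; the weak error is the gap between this
    estimate (in the limit of many paths) and the exact expectation."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + mu(x) * dt + sigma(x) * dW
    return phi(x).mean()
```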
