
Stochastic epidemic models for emerging diseases incorporating household structure and contact tracing

Knock, Edward Stuart, January 2011
In this thesis, three stochastic epidemic models with intervention for emerging diseases are considered. The models are variants of real-time, responsive intervention, based upon observing diagnosed cases and targeting intervention towards individuals they have infected or are likely to have infected, be they housemates or named contacts. These models are: (i) a local tracing model for a disease spreading amongst a community of households, wherein intervention (vaccination and/or isolation) is directed towards housemates of diagnosed individuals, (ii) a contact tracing model for a disease spreading amongst a homogeneously-mixing population, with isolation of traced contacts of a diagnosed individual, (iii) a local tracing and contact tracing model for a disease spreading amongst a community of households, with intervention directed towards housemates of both diagnosed and traced individuals. These models are quantified by deriving threshold parameters that determine whether the disease will infect a few individuals or a sizeable proportion of the population, as well as probabilities for such events occurring.
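As a rough illustration of the kind of household-structured epidemic described above (a minimal sketch, not the thesis's models — all parameter values and the Reed-Frost-style discrete-generation dynamics are assumptions for demonstration):

```python
import random

def simulate_household_epidemic(n_households=100, household_size=3,
                                p_global=0.001, p_local=0.3, seed=0):
    """Discrete-generation epidemic in a community of households: each
    infective independently infects each susceptible housemate with
    probability p_local and each other susceptible individual with
    probability p_global, then recovers.  Parameters are illustrative."""
    rng = random.Random(seed)
    N = n_households * household_size
    state = ['S'] * N          # 'S' susceptible, 'I' infective, 'R' removed
    state[0] = 'I'             # one initial infective
    house = lambda i: i // household_size
    while 'I' in state:
        infectives = [i for i, s in enumerate(state) if s == 'I']
        newly_infected = set()
        for i in infectives:
            for j in range(N):
                if state[j] != 'S':
                    continue
                p = p_local if house(j) == house(i) else p_global
                if rng.random() < p:
                    newly_infected.add(j)
            state[i] = 'R'     # infective recovers after one generation
        for j in newly_infected:
            state[j] = 'I'
    return state.count('R')    # final size of the epidemic
```

Running many replicates of such a simulation and inspecting the distribution of final sizes is one way to see the minor/major outbreak dichotomy that the thesis's threshold parameters formalise.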

A Martingale approach to optimal portfolios with jump-diffusions and benchmarks

Michelbrink, Daniel, January 2012
We consider various portfolio optimization problems when the stock prices follow jump-diffusion processes. In the first part the classical optimal consumption-investment problem is considered. The investor's goal is to maximize utility from consumption and terminal wealth over a finite investment horizon. We present results that modify and extend the duality approach that can be found in Kramkov and Schachermayer (1999). The central result is that the optimal trading strategy and optimal equivalent martingale measure can be determined as a solution to a system of non-linear equations. In another problem a benchmark process is introduced, which the investor tries to outperform. The benchmark can either be a generic jump-diffusion process or, as a special case, a wealth process of a trading strategy. Similar techniques as in the first part of the thesis can be applied to reach a solution. In the special case that the benchmark is a wealth process, the solution can be deduced from the first part's consumption-investment problem via a transform of the parameters. The benchmark problem presented here gives a different approach to benchmarks than, for instance, Browne (1999b) or Pra et al. (2004). It is also, as far as the author is aware, the first time that martingale methods are employed for this kind of problem. As a side effect of our analysis some interesting relationships to Platen's benchmark approach (cf. Platen (2006)) and change of numeraire techniques (cf. Geman et al. (1995)) can be observed. In the final part of the thesis the set of trading strategies in the previous two problems is restricted by constraints. These constraints are, for example, a prohibition of short-selling or a restriction on the number of assets. Conditions are provided under which a solution to the two problems can still be found. This extends the work of Cvitanic and Karatzas (1993) to jump-diffusions where the initial market set-up is incomplete.
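A jump-diffusion stock price of the kind underlying this work can be simulated directly; the sketch below uses the standard Merton-style specification with lognormal jumps (all parameter values are illustrative assumptions, not taken from the thesis):

```python
import math
import random

def jump_diffusion_path(s0=100.0, mu=0.05, sigma=0.2, lam=0.5,
                        jump_mu=-0.1, jump_sigma=0.15,
                        T=1.0, n=252, seed=1):
    """Simulate a stock price following a jump-diffusion: Brownian motion
    with drift in the log price, plus Poisson(lam) jumps per unit time,
    each jump multiplying the price by e^J with J ~ N(jump_mu, jump_sigma^2)."""
    rng = random.Random(seed)
    dt = T / n
    s, path = s0, [s0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        log_ret = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        # draw the number of jumps in this step by Poisson inversion
        k, p = 0, math.exp(-lam * dt)
        acc, u = p, rng.random()
        while u > acc:
            k += 1
            p *= lam * dt / k
            acc += p
        for _ in range(k):
            log_ret += rng.gauss(jump_mu, jump_sigma)
        s *= math.exp(log_ret)
        path.append(s)
    return path
```

Because the jumps make the market incomplete, there are many equivalent martingale measures for such a price process, which is exactly why the thesis must characterise the optimal one through a system of non-linear equations rather than pick it uniquely.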

Mathematical modelling of carbon dioxide dissolution and reaction processes

Mitchell, Mark J., January 2012
Carbon dioxide dissolution into water is a ubiquitous chemical process on Earth, and having a full understanding of this process is becoming ever more important as we seek to understand the consequences of 250 years of exponentially-increasing anthropogenic CO2 emissions to the atmosphere since the start of the Industrial Revolution. We examine the dissolution of CO2 into water in a number of contexts. First, we analyse what happens to a range of chemical species dissolved in water following an injection of additional CO2. We consider the well-mixed problem, and use the method of matched asymptotic expansions to obtain new expressions for the changes in the species' concentrations with time, the new final chemical equilibrium, and the time scales over which this equilibrium is reached, as functions of time, the parameters and the initial condition. These results can be used to help predict the changes in the pH and concentrations of dissolved carbonic species that will occur in the oceans as a result of anthropogenic CO2 emissions, and in saline aquifer formations after pumping CO2 deep underground. Second, we consider what happens deep underground in a saline aquifer when CO2 has been pumped in, spreads through the pore space, and dissolves into the resident water, when advection, diffusion, and chemical reaction have varying levels of relative importance. We examine the length scales over which the dissolved CO2 will spread out through an individual pore, ahead of a spreading drop of CO2, and the concentrations of the different chemical species within the pore, in the steady-state case. Finally, some experiments have been carried out to investigate the effect of an injection of gaseous CO2 on the chemical composition and pH of a saturated limestone aquifer formation. As the CO2 enters the soil, it dissolves into the water, and we model the changes in the chemical composition of the water/limestone mixture with time.
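The equilibrium chemistry behind the pH predictions can be illustrated with a toy calculation (a sketch only: it ignores the second dissociation and carbonate minerals, and uses order-of-magnitude textbook constants at 25 °C, not values from the thesis):

```python
import math

# Illustrative equilibrium constants at 25 C (textbook values):
K1 = 4.45e-7    # CO2(aq) + H2O <-> H+ + HCO3-  (first dissociation)
KW = 1.0e-14    # water autoionisation

def ph_after_co2_injection(co2_aq):
    """pH of pure water holding a fixed dissolved-CO2 concentration
    (mol/L).  Charge balance, neglecting CO3^2-:
        [H+] = [HCO3-] + [OH-] = K1*co2_aq/[H+] + KW/[H+]
    which gives [H+] = sqrt(K1*co2_aq + KW)."""
    h_plus = math.sqrt(K1 * co2_aq + KW)
    return -math.log10(h_plus)
```

For example, with no dissolved CO2 the formula recovers pH 7, and raising the dissolved-CO2 concentration drives the pH down, the qualitative effect of anthropogenic emissions on ocean water that the thesis's asymptotic analysis quantifies in full.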

Optical limits in Left-Handed Media

Ingrey, Philip Charles, January 2010
This thesis determines the response of Left-Handed Media (LHM) to surface effects. A LHM half-space with a roughened interface, modelled by a graded index boundary, is shown to give rise to an analytical solution for the propagation of electromagnetic radiation through this inhomogeneous layer. Significant field localization is generated within the layer, caused by the coherent superposition of evanescent waves. The localization is shown to greatly deteriorate transmission when losses are present. The addition of a second interface to the LHM, creating a perfect lens configuration, allows for the exploration of evanescent mode propagation through a perfect lens with roughened boundaries. The effects of the field localisations at the boundaries serve to diminish the resolving capability of the lens. Specifically, the layers produce an effect that is qualitatively similar to nonlinearly enhanced dissipation. Ray-optics is used to analyse negative refraction through a roughened interface, prescribed by Gaussian statistics. This shows that rays can focus at smaller distances from the interface due to the negative refractive effects. Moreover, a new reflection mechanism is shown to exist for LHM. Consequently an impedance-matched configuration involving LHM (such as the perfect lens) with a roughened interface can still display reflection. A physical-optics approach is used to determine the mean intensity and fluctuations of a wave passing into a half-space of LHM through a roughened interface in two ways. Firstly, through the perturbation analysis of Rice theory, which shows that the scattered field evolves from a real Gaussian process near the surface into a complex Gaussian process as distance into the second medium increases.
Secondly, through large-scale Monte Carlo simulations that show that illuminating a roughened interface between air and a LHM produces a regime for enhanced focussing of light close to the boundary, generating caustics that are brighter, fluctuate more, and cause Gaussian speckle at distances closer to the interface than in right-handed matter.
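The ray-optics picture of negative refraction can be sketched numerically for an ideal flat interface (an illustration of the textbook n = -1 geometry, not of the roughened-interface analysis in the thesis):

```python
import math

def refract_angle(theta_i, n1=1.0, n2=-1.0):
    """Refraction angle (radians) from Snell's law,
    n1*sin(theta_i) = n2*sin(theta_t).  A negative n2 (left-handed
    medium) puts the refracted ray on the same side of the normal as
    the incident ray, which is what lets a flat LHM slab focus light."""
    s = n1 * math.sin(theta_i) / n2
    if abs(s) > 1.0:
        return None                       # total internal reflection
    return math.asin(s)

def focal_distance(source_depth, theta_i):
    """Distance beyond a flat n = -1 interface at which a ray from a
    point source at `source_depth` re-crosses the optical axis."""
    theta_t = refract_angle(theta_i)      # negative for n2 = -1
    x = source_depth * math.tan(theta_i)  # where the ray hits the interface
    return x / -math.tan(theta_t)
```

For n = -1 every ray re-crosses the axis at the same distance regardless of angle, i.e. a flat interface images a point source perfectly; the thesis's contribution is showing how surface roughness degrades this ideal behaviour.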

Central limit theorems and statistical inference for some random graph models

Baaqeel, Hanan, January 2015
Random graphs and networks are of great importance in many fields, including mathematics, computer science, statistics, biology and sociology. This research aims to develop statistical theory and methods of statistical inference for random graphs in novel directions. A major strand of the research is the development of conditional goodness-of-fit tests for random graph models and for random block graph models. On the theoretical side, this entails proving a new conditional central limit theorem for certain graph statistics, which are closely related to the number of two-stars and the number of triangles, and where the conditioning is on the number of edges in the graph. A second strand of the research is to develop composite likelihood methods for estimation of the parameters in exponential random graph models. Composite likelihood methods based on edge data have previously been widely used. A novel contribution of the thesis is the development of composite likelihood methods based on more complicated data structures. The goals of this PhD thesis also include testing the numerical performance of the novel methods in extensive simulation studies and through applications to real graphical data sets.
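The graph statistics involved, edges, two-stars and triangles, are easy to compute directly; a minimal sketch (the adjacency-dict representation is an assumption for illustration):

```python
import itertools

def graph_statistics(adj):
    """Count edges, two-stars and triangles of a simple undirected graph
    given as a dict {node: set of neighbours}.  A two-star is an
    unordered pair of edges sharing a common vertex."""
    edges = sum(len(nb) for nb in adj.values()) // 2
    two_stars = sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())
    triangles = sum(1 for a, b, c in itertools.combinations(adj, 3)
                    if b in adj[a] and c in adj[a] and c in adj[b])
    return edges, two_stars, triangles
```

Conditioning such counts on the observed number of edges, as the thesis's central limit theorem does, is what makes the resulting goodness-of-fit tests usable without estimating the edge-probability parameter.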

Gaussian process emulators for uncertainty analysis in groundwater flow

Stone, Nicola, January 2011
In the field of underground radioactive waste disposal, complex computer models are used to describe the flow of groundwater through rocks. An important property in this context is transmissivity, the ability of the groundwater to pass through rocks, and the transmissivity field can be represented by a stochastic model. The stochastic model is included in complex computer models which determine the travel time for radionuclides released at one point to reach another. As well as the uncertainty due to the stochastic model, there may also be uncertainties in the inputs of these models. In order to quantify the uncertainties, Monte Carlo analyses are often used. However, for computationally expensive models, it is not always possible to obtain a large enough sample to provide accurate enough uncertainty analyses. In this thesis, we present the use of Bayesian emulation methodology as an alternative to Monte Carlo in the analysis of stochastic models. The idea behind Bayesian emulation methodology is that information can be obtained from a small number of runs of the model using a small sample from the input distribution. This information can then be used to make inferences about the output of the model given any other input. The current Bayesian emulation methodology is extended to emulate two statistics of a stochastic computer model: the mean and the distribution function of the output. The mean is a simple output statistic to emulate and provides some information about how the output changes due to changes in each input. The distribution function is more complex to emulate; however, it is an important statistic since it contains information about the entire distribution of the outputs. Distribution functions of radionuclide travel times have been used as part of risk analyses for underground radioactive waste disposal. The extended methodology is presented using a case study.
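The core emulation idea, train on a handful of model runs, then predict cheaply anywhere, can be sketched with a minimal zero-mean Gaussian-process interpolator (a toy with a fixed squared-exponential kernel and hand-rolled linear solve; the thesis's methodology for stochastic models is considerably richer):

```python
import math

def sq_exp(x, y, length=0.5):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-((x - y) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def gp_emulator(xs, ys, nugget=1e-9):
    """Fit a zero-mean GP to training runs (xs, ys) of an expensive
    model; return a cheap predictor of the model's mean output."""
    K = [[sq_exp(a, b) + (nugget if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return lambda x: sum(a * sq_exp(x, xi) for a, xi in zip(alpha, xs))
```

A handful of training runs then stands in for the thousands a plain Monte Carlo analysis would need, which is precisely the computational saving the thesis exploits.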

Stochastic models of ion channels

Ball, Sue, January 2001
This thesis is concerned with models and inference for single ion channels. Molecular modelling studies are used as the basis for biologically realistic, large state-space gating models of the nicotinic acetylcholine receptor which enable single-channel kinetic behaviour to be characterized in terms of a small number of free parameters. A model is formulated which incorporates known structural information concerning pentameric subunit composition, interactions between neighbouring subunits and knowledge of the behaviour of agonist binding sites within the receptor-channel proteins. Expressions are derived for various channel properties and results are illustrated using numerical examples. The model is adapted and extended to demonstrate how properties of the calcium ion-activated potassium ion channel may be modelled. A two-state stochastic model for ion channels which incorporates time interval omission is examined. Two new methods for overcoming a non-identifiability problem induced by time interval omission are introduced and simulation studies are presented in support of these methods. A framework is presented for analysing the asymptotic behaviour of the method-of-moments estimators of the mean lengths of open and closed sojourns. This framework is used to clarify the origin of the non-identifiability and to construct confidence sets for the mean sojourn lengths. A conjecture concerning the number of solutions of the moment estimating equations is proved.
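The two-state model with time interval omission can be sketched directly: simulate alternating exponential open/closed sojourns, then merge away any sojourn shorter than the recording dead time (rates and dead time below are illustrative assumptions, not the thesis's values):

```python
import random

def observed_sojourns(rate_open_to_closed=2.0, rate_closed_to_open=1.0,
                      tau=0.05, n_transitions=1000, seed=0):
    """Simulate a two-state (open/closed) Markov ion channel and apply
    time interval omission: sojourns shorter than the dead time `tau`
    go undetected, so the flanking sojourns in the other state appear
    as one long observed sojourn."""
    rng = random.Random(seed)
    rates = {'open': rate_open_to_closed, 'closed': rate_closed_to_open}
    state, dwells = 'open', []
    for _ in range(n_transitions):
        dwells.append((state, rng.expovariate(rates[state])))
        state = 'closed' if state == 'open' else 'open'
    # omission: absorb short dwells, then merge same-state neighbours
    merged = []
    for s, t in dwells:
        if t < tau and merged:
            merged[-1] = (merged[-1][0], merged[-1][1] + t)
        elif merged and merged[-1][0] == s:
            merged[-1] = (s, merged[-1][1] + t)
        else:
            merged.append((s, t))
    return merged
```

Comparing the mean observed sojourn lengths with the true means 1/rate illustrates the bias that omission introduces and hints at why the moment estimating equations can admit more than one solution.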

Some results on boundary hitting times for one-dimensional diffusion processes

Shortland, Christopher Francis, January 1993
Boundary hitting times for one-dimensional diffusion processes have applications in a variety of areas of mathematics. Unfortunately, for most choices of diffusions and boundaries, the exact exit distribution is unknown, and an approximation has to be made. The primary requirement of an approximation, from a practical viewpoint, is that it is both accurate and easily computable. The main, currently used approximations are discussed, and a new method is developed for two-sided boundaries, where current methodology provides very few techniques. In order to produce new approximations, we will make use of results about the ordering of stochastic processes, and conditioning processes not to have hit a boundary. These topics are introduced in full detail, and a number of results are proved. The ability to order conditioned processes is exploited to provide exact, analytic bounds on the exit distribution. This technique also produces a new approximation, which, for Brownian motion exiting concave or convex boundaries, is shown to be a superior approximation to the standard tangent approximation. To illustrate the uses of these approximations, and general boundary hitting time results, we investigate a class of optimal stopping problems, motivated by a sequential analysis problem. Properties of the optimal stopping boundary are found using analytic techniques for a wide class of cost functions, and both one- and two-sided boundaries. A number of results are proved concerning the expected stopping cost in cases of "near optimality". Numerical examples are used, throughout this thesis, to illustrate the principal results and exit distribution approximations.
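For the simplest case, Brownian motion and a constant one-sided boundary, the hitting probability is known exactly, which makes it a useful benchmark for approximation methods like those in the thesis. A sketch comparing a crude Monte Carlo estimate with the reflection-principle value (an illustration, not one of the thesis's approximations):

```python
import math
import random

def hit_probability_mc(b=1.0, T=1.0, n_steps=500, n_paths=2000, seed=0):
    """Monte Carlo estimate of P(standard Brownian motion hits level b
    before time T), by Euler simulation on a grid.  Discrete monitoring
    slightly under-counts crossings that occur between grid points."""
    rng = random.Random(seed)
    sdt = math.sqrt(T / n_steps)
    hits = 0
    for _ in range(n_paths):
        w = 0.0
        for _ in range(n_steps):
            w += sdt * rng.gauss(0.0, 1.0)
            if w >= b:
                hits += 1
                break
    return hits / n_paths

def hit_probability_exact(b=1.0, T=1.0):
    """Exact value from the reflection principle:
    P(sup_{t<=T} W_t >= b) = 2*Phi(-b/sqrt(T)) = erfc(b/sqrt(2T))."""
    return math.erfc(b / math.sqrt(2.0 * T))
```

The systematic gap between the two, visible even with many paths, is the discretisation bias, one concrete reason analytic bounds and approximations of the kind developed in the thesis are valuable.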

Zero-crossing intervals of Gaussian and symmetric stable processes

Cao, Yufei, January 2017
The zero-crossing problem is the determination of the probability density function of the intervals between the successive axis crossings of a stochastic process. This thesis studies the properties of the zero-crossings of stationary processes belonging to the symmetric-stable class of Gaussian and non-Gaussian type, corresponding to the stability index ν = 2 and 0 < ν < 2 respectively.

The impact of periodicity on the zero-crossings of random functions

Wilson, Lorna Rachel Maven, January 2015
Continuous random processes are used to model a huge variety of real world phenomena. In particular, the zero-crossings of such processes find application in modelling processes of diffusion, meteorology, genetics, finance and applied probability. Understanding the zero-crossing behaviour improves prediction of phenomena initiated by a threshold crossing, as well as extremal problems where the turning points of the process are of interest. To identify the Probability Density Function (PDF) for the times between successive zero-crossings of a stochastic process is a challenging problem with a rich history. This thesis considers the effect of an oscillatory auto-correlation function on the zero-crossings of a Gaussian process. Examining statistical properties of the number of zeros in a fixed time period, it is found that increasing the rate of oscillations in the auto-correlation function results in more ‘deterministic’ realisations of the process. The random interval times between successive zeros become more regular, and the variance is reduced. Accurate calculation of the variance is achieved through analysing the correlation between intervals, which numerical simulations show can be anti-correlated or correlated, depending on the rate of oscillations in the auto-correlation function. The persistence exponent describes the tail of the inter-event PDF, which is steeper where zero-crossings occur more regularly. It exhibits a complex phenomenology, strongly influenced by the oscillatory nature of the auto-correlation function. The interplay between random and deterministic components of a system governs its complexity. In an ever-more complex world, the potential applications for this scale of ‘regularity’ in a random process are far reaching and powerful.
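The effect of an oscillatory auto-correlation on the zero-crossing rate can be sketched with Rice's formula and a random-phase spectral simulation. The particular covariance ρ(t) = exp(-t²/2)·cos(ω₀t) below is an illustrative choice, not necessarily the one studied in the thesis:

```python
import math
import random

def rice_zero_rate(omega0):
    """Mean zero-crossing rate from Rice's formula for a stationary,
    unit-variance Gaussian process with rho(t) = exp(-t^2/2)*cos(omega0*t):
    rate = sqrt(-rho''(0))/pi = sqrt(1 + omega0^2)/pi."""
    return math.sqrt(1.0 + omega0 ** 2) / math.pi

def simulated_zero_rate(omega0, T=100.0, dt=0.02, n_terms=200, seed=0):
    """Random-phase spectral simulation of the same process: frequencies
    drawn from the spectral density (a Gaussian mixture centred at
    +/-omega0), zeros counted as sign changes on a fine time grid."""
    rng = random.Random(seed)
    freqs = [rng.gauss(omega0 if rng.random() < 0.5 else -omega0, 1.0)
             for _ in range(n_terms)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_terms)]
    amp = math.sqrt(2.0 / n_terms)          # gives unit variance
    def x(t):
        return amp * sum(math.cos(w * t + p)
                         for w, p in zip(freqs, phases))
    zeros, prev, t = 0, x(0.0), dt
    while t <= T:
        cur = x(t)
        if prev * cur < 0.0:
            zeros += 1
        prev, t = cur, t + dt
    return zeros / T
```

Raising ω₀ increases the crossing rate √(1 + ω₀²)/π, and the crossings arrive ever more regularly, the qualitative move towards 'deterministic' realisations that the thesis makes precise through the variance of the zero count and the interval correlations.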
