About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

The rational hybrid Monte Carlo algorithm

Clark, Michael A. January 2005 (has links)
This thesis is concerned with the problem of generating gauge configurations for use with Monte Carlo lattice QCD calculations that include the effect of dynamical fermions. Although such effects have been included in calculations for a long time, historically it has been difficult to include the effect of the strange quark because of the square root of the Dirac operator that appears in the action. The lattice formulation of QCD is discussed, and the various fermion formulations are highlighted. Current popular algorithms used to generate gauge configurations are described, and the advantages and disadvantages of each are discussed. The Rational Hybrid Monte Carlo algorithm (RHMC) is introduced; it uses rational functions to approximate the matrix square root and is an exact algorithm. RHMC is compared with the Polynomial Hybrid Monte Carlo algorithm and the inexact R algorithm for two-flavour staggered fermion calculations. The algorithm is found to reproduce published data and to be more efficient than the Polynomial Hybrid Monte Carlo algorithm. With the introduction of multiple time scales for the gauge and fermion parts of the action, the efficiency increases further. As a means to accelerate the Monte Carlo acceptance rate of lattice QCD calculations, the splitting of the fermion determinant into n<sup>th</sup>-root contributions is described. This is shown to improve the conservation of the Hamiltonian. As the quark mass is decreased, this is found to decrease the overall cost of the calculation by allowing an increase in the integration stepsize. An efficient formulation for applying RHMC to ASQTAD calculations is described, and it is found to be no more expensive than the conventional R algorithm formulation. Full 2+1 quark-flavour QCD calculations are undertaken using the domain wall fermion formulation.
Results are generated using both RHMC and the R algorithm, and comparisons are made on the basis of algorithm efficiency and hadronic observables. With the exception of the stepsize errors present in the R algorithm data, consistency is found between the two algorithms. RHMC is found to allow a much larger integration stepsize than the R algorithm.
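The rational approximation at the heart of RHMC can be illustrated with a small sketch. The fit below is not the optimal Remez-style approximation a production RHMC code would use; it is a crude least-squares fit with fixed log-spaced poles, given only to show the partial-fraction form r(x) = sum_i c_i / (x + d_i) which, applied to the fermion matrix, can be evaluated with multi-shift solvers. The interval bounds and term count are illustrative.

```python
import numpy as np

def fit_inverse_sqrt(n_terms=8, lo=1e-3, hi=1.0, n_samples=500):
    """Least-squares fit of r(x) = sum_i c_i / (x + d_i) to x**-0.5
    on [lo, hi], using fixed log-spaced poles d_i.  (A crude stand-in
    for the optimal rational approximations used in practice.)"""
    x = np.geomspace(lo, hi, n_samples)
    d = np.geomspace(lo, hi, n_terms)
    # weight rows by sqrt(x) so the fit minimises *relative* error
    w = np.sqrt(x)
    A = w[:, None] / (x[:, None] + d[None, :])
    c, *_ = np.linalg.lstsq(A, w * x ** -0.5, rcond=None)
    return c, d

def r(x, c, d):
    """Evaluate the partial fraction; for a matrix M this becomes
    sum_i c_i (M + d_i I)^{-1}, computable with a multi-shift solver."""
    return (c / (np.asarray(x)[..., None] + d)).sum(axis=-1)

c, d = fit_inverse_sqrt()
xs = np.geomspace(1e-3, 1.0, 2000)
max_rel_err = np.max(np.abs(r(xs, c, d) * np.sqrt(xs) - 1.0))
```

The key point for RHMC is that each term of the partial fraction is a shifted inverse, so a single multi-shift Krylov solve evaluates the whole approximation applied to a vector.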
32

Local energy transfer theory in forced and decaying isotropic turbulence

Quinn, Anthony Peter January 2001 (has links)
In mathematical analyses of the turbulence phenomenon, averaging the governing equation of fluid flow leads to an impasse at which the number of equations is outweighed by the number of unknowns. This difficulty is often described as the 'closure problem'. A 'closure hypothesis' is an additional ingredient, typically comprising a set of mathematical assumptions based on some physical insight, which is artificially introduced into the problem as an extra relation in order to match the number of equations and unknowns. Many such closure hypotheses have been proposed, ranging from simple empirical rules to complex mathematical treatments. The 'Local Energy Transfer' (LET) theory [W. D. McComb, M. J. Filipiak and V. Shanmugasundaram, J. Fluid Mech. 245, 279 (1992)] is a closure hypothesis based on renormalized perturbation theory (RPT). This theory has enjoyed much success in predicting the behaviour of freely decaying, isotropic, homogeneous turbulence. LET is the only time-dependent Eulerian RPT closure which is compatible with Kolmogorov's <i>k</i><sup>-5/3</sup> law. In this research, we begin by reviewing the mathematical background of turbulence theory. We then consider the derivation of LET, surveying the evolution of the theory and its relation to other RPT closure hypotheses. Computer software for numerically solving the LET equations is then developed and tested. This is used to generate quantitative forecasts for the behaviour of freely decaying turbulent flows. To investigate the accuracy of these predictions, comprehensive, detailed, purpose-run comparisons between LET output and Direct Numerical Simulation (DNS) data are performed for the first time. These demonstrate that LET theory can provide reasonably accurate numerical estimates for the time evolution of a range of spectral measures and integral parameters in freely decaying turbulence.
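The scaling law mentioned above can be stated explicitly. In standard notation (not taken from the thesis), with ε the mean dissipation rate, L the integral scale and η the Kolmogorov dissipation scale:

```latex
E(k) \;=\; C \, \varepsilon^{2/3} \, k^{-5/3},
\qquad L^{-1} \ll k \ll \eta^{-1},
```

where C is the approximately universal Kolmogorov constant. Compatibility with this inertial-range form is the benchmark a time-dependent Eulerian closure such as LET must meet.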
33

Statistical methods for the analysis of covariance and spatio-temporal models

Papasouliotis, Orestis January 2000 (has links)
No description available.
34

The problem of nonlinear filtering

Crisan, Dan Ovidiu January 1996 (has links)
Stochastic filtering theory studies the problem of estimating an unobservable 'signal' process <I>X</I> given the information obtained by observing an associated process <I>Y</I> (a 'noisy' observation) within a certain time window [0, <I>t</I>]. It is possible to explicitly describe the distribution of <I>X</I> given <I>Y</I> in the setting of linear/Gaussian systems. Outside the realm of the linear theory, it is known that only a few very exceptional examples have explicitly described posterior distributions. We present in detail a class of nonlinear filters (Beneš filters) which allow explicit formulae. Using the explicit expression of the Laplace transform of a functional of Brownian motion we give a direct computation of the unnormalised conditional density of the signal of the Beneš filter and obtain the formula for the normalised conditional density of <I>X</I> for two particular filters. In the case in which <I>X</I> is a diffusion process and <I>Y</I> is given by the equation <I>dY<SUB>t</SUB></I> = <I>h</I>(<I>t</I>, <I>X<SUB>t</SUB></I>)<I>dt</I> + <I>dW<SUB>t</SUB></I>, where <I>W</I> is a Brownian motion independent of <I>X</I>, <I>Y</I><SUB>0</SUB> = 0 and <I>h</I> satisfies certain conditions, the evolution of the conditional distribution of <I>X</I> is described by two stochastic partial differential equations: a linear equation, the <I>Zakai</I> equation, which describes the evolution of an unnormalised version of the conditional distribution of <I>X</I>, and a nonlinear equation, the <I>Kushner-Stratonovitch</I> equation, which describes the evolution of the conditional distribution of <I>X</I> itself. We construct several measure-valued processes, associated with the two equations, whose values give the conditional distribution of <I>X</I> (in the first case unnormalised). We do this by means of converging sequences of branching particle systems. The particles evolve independently, moving with the same law as <I>X</I>, and branch according to a mechanism that depends on their locations and the observation <I>Y</I>. The result is a cloud of paths, with those surviving to the current time providing an estimate for the conditional distribution of <I>X</I>.
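The particle construction can be caricatured in a few lines. The sketch below is a minimal bootstrap particle filter for an assumed concrete instance of the model (an Ornstein-Uhlenbeck signal with h(t, x) = x, Euler-discretised); multinomial resampling in proportion to the likelihood of each observation increment stands in for the branching mechanism. All dynamics and parameter values are illustrative, not the thesis's.

```python
import numpy as np

def bootstrap_filter(y, dt, n_particles=2000, seed=0):
    """Minimal bootstrap particle filter for the Euler-discretised model
        dX_t = -X_t dt + dB_t,   dY_t = X_t dt + dW_t   (illustrative).
    Resampling in proportion to the likelihood of each observation
    increment plays the role of the branching mechanism in the text."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)          # initial particle cloud
    means = []
    for k in range(1, len(y)):
        # propagate particles with the law of the signal
        x += -x * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
        # Girsanov-type weight for the observation increment dY
        dy = y[k] - y[k - 1]
        logw = x * dy - 0.5 * x ** 2 * dt
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)  # multinomial "branching"
        means.append(x.mean())                    # conditional-mean estimate
    return np.array(means)

# simulate a true signal path and its noisy observation path
rng = np.random.default_rng(42)
dt, n_steps = 0.01, 1000
xt = np.zeros(n_steps)
y = np.zeros(n_steps)
for k in range(1, n_steps):
    xt[k] = xt[k - 1] - xt[k - 1] * dt + np.sqrt(dt) * rng.standard_normal()
    y[k] = y[k - 1] + xt[k - 1] * dt + np.sqrt(dt) * rng.standard_normal()

est = bootstrap_filter(y, dt)
rmse = np.sqrt(np.mean((est - xt[1:]) ** 2))
```

For this linear/Gaussian instance the exact answer is given by the Kalman filter, which makes it a convenient sanity check; the particle construction itself, of course, applies to nonlinear <I>h</I>.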
35

Statistical methods for segmenting X-ray CT images of sheep

Robinson, Caroline D. January 2000 (has links)
X-ray computed tomography (CT) is a non-invasive imaging technique widely used in medical diagnosis to detect physiological abnormalities. Recently it has been adopted for estimating tissue proportions in live sheep. This thesis is concerned with the development of statistical methods for automating the estimation of tissue proportions from CT images. The first stage in the estimation process is to segment sectional images into the internal organs, the carcass and the area external to the sheep. This is currently achieved by manually extracting boundaries which encircle the internal organs of the sheep, and is undesirable because it is a very subjective and tedious process. We explore the use of deformable templates to automate this stage, by means of a parametrised stochastic template which describes the shape and variability of these boundaries. The manually segmented boundaries from 24 lumbar images are parametrised using Fourier coefficients, which are reduced in dimensionality using principal components in order to estimate a distribution on the parameters of the template. Templates are fitted to further images using a criterion which combines the local pixel gradient and closeness to the estimated template distribution. Having isolated the carcass region, we estimate the proportions of fat and muscle by modelling the probability density function of the pixel values in the segmented image, taking into account that many pixel values are generated from a mixture of two or more tissues. The spatial response of the CT machine is investigated by examining a sharp boundary in the image. Modelling this response as an isotropic bivariate normal density leads to a new probability density function for the values of the mixed pixels in the image, and hence to a combined distribution with the remaining pixels.
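The Fourier parametrisation step can be sketched as follows. The test shape and the number of retained coefficients are invented for illustration, and the thesis's subsequent principal-components reduction of the descriptors (over the 24 training boundaries) is not shown.

```python
import numpy as np

def fourier_descriptors(boundary, n_coef=8):
    """Keep only the lowest +/- n_coef complex Fourier coefficients of a
    closed boundary given as an (N, 2) array of (x, y) points."""
    z = boundary[:, 0] + 1j * boundary[:, 1]
    coefs = np.fft.fft(z) / len(z)
    keep = np.zeros_like(coefs)
    idx = np.r_[0:n_coef + 1, -n_coef:0]
    keep[idx] = coefs[idx]
    return keep

def reconstruct(coefs):
    """Invert the (truncated) descriptor back to boundary points."""
    return np.fft.ifft(coefs * len(coefs))

# smooth synthetic "boundary": an ellipse with a gentle third-harmonic lobe
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
boundary = np.c_[2 * np.cos(t) + 0.1 * np.cos(3 * t), np.sin(t)]
z_hat = reconstruct(fourier_descriptors(boundary))
err = np.max(np.abs(z_hat - (boundary[:, 0] + 1j * boundary[:, 1])))
```

Because the test shape contains only low harmonics, the truncated descriptor reconstructs it essentially exactly; for real organ boundaries the truncation acts as the smoothing that makes a low-dimensional template distribution feasible.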
36

The algebraic theory of Kreck surgery

Sixt, Jorg January 2004 (has links)
No description available.
37

Some topics on graphical models in statistics

Brewer, Mark John January 1994 (has links)
This thesis considers graphical models that are represented by families of probability distributions having sets of conditional independence constraints specified by an influence diagram. Chapter 1 introduces the notion of a directed acyclic graph, a particular type of independence graph, which is used to define the influence diagram. Examples of such structures are given, and of how they are used in building a graphical model. Models may contain discrete or continuous variables, or both. Local computational schemes using exact probabilistic methods on these models are then reviewed. Chapter 2 presents a review of the use of graphical models in legal reasoning literature. The use of likelihood ratios to propagate probabilities through an influence diagram is investigated in this chapter, and a method for calculating LRs in graphical models is presented. The notion of recovering the structure of a graphical model from observed data is studied in Chapter 3. An established method on discrete data is described, and extended to include continuous variables. Kernel methods are introduced and applied to the probability estimation needed in these methods. Chapters 4 and 5 describe the use of stochastic simulation on mixed graphical association models. Simulation methods, in particular the Gibbs sampler, can be used on a wider range of models than exact probabilistic methods. Also estimates of marginal density functions of continuous variables can be obtained by using kernel estimates on the simulated values; exact methods generally only provide the marginal means and variances of continuous variables.
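As a minimal illustration of the Gibbs sampler on a continuous model (a two-node toy example, not one of the mixed graphical models studied in the thesis), the sampler below alternates the exact full conditionals of a correlated bivariate normal:

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.8, n_iter=20000, burn=2000, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho,
    alternating the exact full conditionals
        X | Y=y ~ N(rho*y, 1 - rho^2),   Y | X=x ~ N(rho*x, 1 - rho^2)."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    out = []
    for i in range(n_iter):
        x = rho * y + sd * rng.standard_normal()
        y = rho * x + sd * rng.standard_normal()
        if i >= burn:
            out.append((x, y))
    return np.array(out)

s = gibbs_bivariate_normal()
corr = np.corrcoef(s.T)[0, 1]
```

A kernel estimate applied to the sampled values of either coordinate would then give its marginal density, mirroring the approach described above for continuous variables.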
38

Spatial population processes

Renshaw, Eric January 1976 (has links)
This thesis is a theoretical study of the effect of migration between colonies, each of which is developing according to a simple stochastic birth-death-immigration process. In Chapters 2 to 7 I investigate the probability structure of the two-colony process. The Kolmogorov forward differential equation for the population size probabilities is developed and from it expressions are derived for the first- and second-order moments. Exact solutions to this forward equation are obtained for three special cases and a recursive solution is developed in a fourth. Three approximate solutions are developed; (i) by modifying the birth mechanism, (ii) by fitting a bivariate negative binomial distribution, and (iii) by placing an upper bound on the total population size. Iterative solutions are then derived by the use of two different techniques. In the first a power series solution is obtained in terms of a common migration rate. In the second sequences of functions are generated which converge to the required solution. The investigation of the two-colony process concludes with a simulation study and an analysis of the probability of extinction. In Chapter 8 I introduce a 'stepping-stone' model in which the population is composed of an infinite number of colonies which may be considered to be situated at the integer points of a single co-ordinate axis. Migration is allowed between nearest-neighbours only. Although the Kolmogorov forward differential equation cannot be solved directly, approximate solutions are developed in an analogous manner to those derived for the two-colony process. First- and second-order moments are obtained and an exact stochastic solution is developed for one special case. If the population has a positive rate of growth and is initially concentrated into a relatively small geographic region, we may expect it to diffuse into the surrounding areas and eventually to take over the entire territory. 
This expanding population may be envisaged as generating a travelling wave, and in Chapter 9 I investigate the velocity of propagation and the form of the wave profile. In Chapter 10 I examine non-nearest-neighbour migration models and develop expressions for the mean size of each colony at time t for several appropriate migration distributions. To conclude the thesis I present a spatial model in two dimensions and relate it to data on the spatial distribution of flour beetles in a closed container.
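The first-moment behaviour described above can be illustrated by integrating the mean equations for a simple symmetric-migration variant of the two-colony process. The equations and the subcritical parameter values below are an assumed illustrative instance, not taken from the thesis:

```python
def mean_two_colony(lam=0.1, mu=0.3, nu=0.5, gamma=0.2,
                    m0=(0.0, 0.0), dt=1e-3, t_max=100.0):
    """Euler integration of the first-moment (mean) equations for two
    colonies with per-capita birth rate lam, death rate mu, immigration
    at rate nu into colony 1 only, and symmetric migration rate gamma:
        m1' = (lam - mu) m1 + nu + gamma (m2 - m1)
        m2' = (lam - mu) m2 + gamma (m1 - m2)
    Parameter values are illustrative (subcritical: mu > lam)."""
    m1, m2 = m0
    for _ in range(int(t_max / dt)):
        dm1 = (lam - mu) * m1 + nu + gamma * (m2 - m1)
        dm2 = (lam - mu) * m2 + gamma * (m1 - m2)
        m1 += dt * dm1
        m2 += dt * dm2
    return m1, m2

m1, m2 = mean_two_colony()
# in the subcritical case the total mean settles at nu / (mu - lam)
```

Summing the two equations shows migration cancels from the total, so the equilibrium total mean is nu/(mu - lam) regardless of gamma; gamma only controls how the total is shared between the colonies.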
39

Nonequilibrium phase transitions and dynamical scaling regimes

Blythe, Richard Alexander January 2001 (has links)
In recent years, the application of statistical mechanics to nonequilibrium systems, and specifically the probabilistic modelling of nonequilibrium microscopic dynamics, has become a major research topic. However, in contrast to the equilibrium case, there is currently no general framework within which nonequilibrium systems are understood. Hence the aim of this thesis is to improve our understanding of nonequilibrium systems through the study of a range of systems with probabilistic microscopic dynamics and of the collective phenomena that arise, notably phase transitions and the onset of scaling regimes. In this thesis I briefly review general aspects of mathematical models of probabilistic dynamics (stochastic processes), with a particular emphasis on steady-state properties and the origin of phase transitions. Then I consider separately four specific types of nonequilibrium dynamics. Firstly, I introduce and solve exactly a model of a particle reaction system. The solution, which employs commutation properties of the <i>q</i>-deformed harmonic oscillator algebra, reveals that phase transitions in the analytic form of the particle density as a function of time arise as a direct consequence of randomness in the reaction dynamics. I also use similar mathematical techniques to solve the partially asymmetric exclusion process, an important prototype of a physical system that is driven by its environment. This model is also found to exhibit phase transitions, although in this case their origin lies in the nonequilibrium interactions between the system and its surroundings. Then I examine the scaling behaviour associated with the nonequilibrium directed percolation continuous phase transition. This transition is related to the presence of an absorbing state, and I provide evidence for such a transition in a wetting model that does not possess an absorbing state. Finally, I generalise the wetting model to two dimensions and study its interfacial scaling behaviour.
This is found to belong to the Kardar-Parisi-Zhang universality class, although there are strong crossover effects - which I quantify - that obscure the scaling regime.
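The exclusion process mentioned above is easy to simulate. The sketch below treats the totally asymmetric special case on a ring (the thesis solves the partially asymmetric process exactly); with random sequential updates the stationary measure on the ring is uniform over configurations, so the measured current per bond should match the exact value M(N-M)/(N(N-1)), close to rho(1-rho) for large N.

```python
import numpy as np

def tasep_ring_current(n_sites=100, n_particles=50, n_sweeps=2000, seed=7):
    """Totally asymmetric exclusion process on a ring with random
    sequential updates: pick a site uniformly; an occupied site pushes
    its particle one step clockwise if the target site is empty.
    Returns the measured current (hops per bond per unit time, where one
    time unit = n_sites update attempts)."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(n_sites, dtype=bool)
    occ[rng.choice(n_sites, n_particles, replace=False)] = True
    hops = 0
    for _ in range(n_sweeps * n_sites):
        i = int(rng.integers(n_sites))
        j = (i + 1) % n_sites
        if occ[i] and not occ[j]:
            occ[i], occ[j] = False, True
            hops += 1
    return hops / (n_sweeps * n_sites)

current = tasep_ring_current()
```

On the ring there is no phase transition; the transitions discussed in the thesis arise from the open boundaries through which the system is driven by its environment.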
40

Finite difference approximations of second order quasi-linear elliptic and hyperbolic stochastic partial differential equations

Pefferly, Robert J. January 2001 (has links)
This thesis covers topics such as finite difference schemes, mean-square convergence, modelling, and numerical approximations of second order quasi-linear stochastic partial differential equations (SPDEs) driven by white noise in fewer than three space dimensions. The motivation for discussing and expanding these topics lies in their implications for such physical phenomena as signal and information flow, gravitational and electromagnetic fields, large scale weather systems, and macro-computer networks. Chapter 2 delves into the hyperbolic SPDE in one space and one time dimension. This is an important equation for such fields as signal processing, communications, and information theory, where singularities propagate throughout space as a function of time. Chapter 3 discusses some concepts and implications of elliptic SPDEs driven by additive noise. These systems are key for understanding steady state phenomena. Chapter 4 presents some numerical work regarding elliptic SPDEs driven by multiplicative and general noise. These SPDEs are open topics in the theoretical literature, hence numerical work provides significant insight into the nature of the process. Chapter 5 presents some numerical work regarding quasi-geostrophic geophysical fluid dynamics involving stochastic noise and demonstrates how these systems can be represented as a combination of elliptic and hyperbolic components.
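A minimal example of the kind of scheme discussed above is a finite-difference solve of a one-dimensional elliptic equation driven by additive white noise. The discretisation below is a standard textbook construction, not taken from the thesis; a Green's-function calculation gives Var u(1/2) = 1/48, which the Monte Carlo estimate should reproduce.

```python
import numpy as np

def elliptic_spde_samples(n=64, n_samples=20000, seed=11):
    """Monte Carlo samples of the finite-difference solution of
        -u''(x) = xi(x)  on (0, 1),   u(0) = u(1) = 0,
    with xi spatial white noise, discretised as independent N(0, 1/h)
    values at the interior grid points (h = 1/n)."""
    rng = np.random.default_rng(seed)
    h = 1.0 / n
    m = n - 1                                      # interior points
    # standard second-difference matrix for -u''
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h ** 2
    xi = rng.standard_normal((m, n_samples)) / np.sqrt(h)
    return np.linalg.solve(A, xi)                  # shape (m, n_samples)

u = elliptic_spde_samples()
mid_var = u[u.shape[0] // 2].var()                 # variance at x = 1/2
```

The exact check comes from Var u(x) = ∫ G(x, y)² dy with G(x, y) = min(x, y)(1 - max(x, y)), which evaluates to 1/48 at x = 1/2; the mean-square convergence of such schemes is exactly the mode of convergence studied in the thesis.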
