91.
Iterative techniques for the estimation of parameters in time series models. Revfeim, K. J. A. January 1969.
No description available.
92.
Objective Bayes and conditional frequentist inference. Kuffner, Todd Alan. January 2011.
Objective Bayesian methods have garnered considerable interest and support among statisticians, particularly over the past two decades. It has often been overlooked, however, that in some cases the appropriate frequentist inference to match is a conditional one. We present several methods for extending probability matching prior (PMP) methodology to conditional settings. A method based on saddlepoint approximations is found to be the most tractable, and we demonstrate its use in the most common exact ancillary statistic models. As part of this analysis, we give a proof of an exactness property of a particular PMP in location-scale models. We use the proposed matching methods to investigate the relationships between conditional and unconditional PMPs. A key component of our analysis is a numerical study of the performance of probability matching priors, from both a conditional and an unconditional perspective, in exact ancillary models. In concluding remarks we propose several routes for future research.
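As background for the abstract above, the unconditional first-order probability matching requirement that such conditional extensions build on can be stated in its standard form (a textbook statement, not taken from the thesis itself):

```latex
% A prior \pi(\theta) is first-order probability matching if the posterior
% (1-\alpha)-quantile \theta^{1-\alpha}(\pi, X) has frequentist coverage
% 1-\alpha to order o(n^{-1/2}) for every \alpha:
P_\theta\left\{ \theta \le \theta^{1-\alpha}(\pi, X) \right\}
   = 1 - \alpha + o\!\left(n^{-1/2}\right).
% For a scalar parameter, Welch and Peers showed the unique solution is the
% Jeffreys prior \pi(\theta) \propto i(\theta)^{1/2}, with i(\theta) the Fisher
% information; conditional matching instead asks for coverage under the
% conditional (given-ancillary) frequentist distribution.
```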
93.
Uncertainty quantification for problems in radionuclide transport. Hagues, Andrew W. January 2011.
The field of radionuclide transport has long recognised the stochastic nature of the problems encountered. Many parameters that are used in computational models are very difficult, if not impossible, to measure with any great degree of confidence. For example, bedrock properties can only be measured at a few discrete points; the properties between these points may be inferred or estimated using experiments, but it is difficult to achieve a high level of confidence. This is a major problem when many countries around the world are considering deep geologic repositories as a disposal option for long-lived nuclear waste, but require a high degree of confidence that any release of radioactive material will not pose a risk to future populations. In this thesis we apply Polynomial Chaos methods to a model of the biosphere that is similar to those used by many countries worldwide to assess exposure pathways for humans and associated dose rates. We also apply the Spectral-Stochastic Finite Element Method to the problem of contaminated fluid flow in a porous medium. For this problem we use the Multi-Element generalized Polynomial Chaos method to discretise the random dimensions in a manner similar to the well-known Finite Element Method. The stochastic discretisation is then refined adaptively to mitigate the build-up of errors over the solution time. It was found that these methods have the potential to provide much improved estimates for radionuclide transport problems. However, further development is needed in order to obtain the efficiency that would be required to solve industrial problems.
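To illustrate the non-intrusive flavour of polynomial chaos referred to above, the following sketch expands a scalar output of a hypothetical model in Hermite polynomials of a standard normal input and fits the coefficients by regression. The model, truncation order and sample size are illustrative assumptions, not taken from the thesis.

```python
# Minimal non-intrusive polynomial chaos sketch (illustrative assumptions only).
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def model(xi):
    # Hypothetical stand-in for a transport/biosphere model whose uncertain
    # input is a log-normal coefficient k = exp(0.5 * xi), xi ~ N(0, 1).
    return np.exp(-np.exp(0.5 * xi))

order = 6                              # truncation order of the expansion
xi = rng.standard_normal(2000)         # samples of the Gaussian germ
y = model(xi)

# Regression against probabilists' Hermite polynomials He_0, ..., He_order.
Psi = hermevander(xi, order)           # design matrix, shape (N, order + 1)
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Output statistics follow from orthogonality: E[He_n(xi)^2] = n!.
mean = coeffs[0]
var = sum(c**2 * math.factorial(n) for n, c in enumerate(coeffs[1:], start=1))
print(f"PCE mean ~ {mean:.4f}, PCE variance ~ {var:.4f}")
```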
94.
Compatible finite element methods for atmospheric dynamical cores. McRae, Andrew Timothy Tang. January 2015.
A key part of numerical weather prediction is the simulation of the partial differential equations governing atmospheric flow over the Earth's surface. This is typically performed on supercomputers at national and international centres around the world. In the last decade, there has been a relative plateau in single-core computing performance, and running ever-finer forecasting models has necessitated the use of ever-larger numbers of CPU cores. Several current forecasting models, including those favoured by the Met Office, use an underlying latitude-longitude grid. This facilitates the development of finite difference discretisations with favourable numerical properties. However, such models are inherently unable to make efficient use of large numbers of processors, as a result of the excessive concentration of gridpoints in the vicinity of the poles. A certain class of mixed finite element methods has recently been proposed in order to obtain favourable numerical properties on an arbitrary, in particular quasi-uniform, mesh. This thesis supports the proposition that such finite element methods, which we label "compatible" or "mimetic", are suitable for discretising the equations used in an atmospheric dynamical core. We first show promising results applying these methods to the nonlinear rotating shallow-water equations. We then develop sophisticated tensor product finite elements for use in 3D. Finally, we give a discretisation for the fully compressible 3D equations.
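For reference, the nonlinear rotating shallow-water equations mentioned above can be written in a standard form (this is the usual formulation, not quoted from the thesis); in the compatible setting the velocity is typically sought in an H(div) finite element space and the depth in a discontinuous space.

```latex
% Rotating shallow-water equations for velocity u and fluid depth D over
% topography b, with Coriolis parameter f and gravitational acceleration g:
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\mathbf{u}^{\perp}
  + g\,\nabla (D + b) = 0,
\qquad
\frac{\partial D}{\partial t} + \nabla\cdot(D\,\mathbf{u}) = 0.
```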
95.
Discontinuous Galerkin methods for hyperbolic conservation laws. Hadi, Justin. January 2012.
New numerical methods are developed for single-phase compressible gas flow and two-phase gas/liquid flow in the framework of the discontinuous Galerkin finite element method (DGFEM) and applied to Riemann problems. A residual-based diffusion scheme, inspired by the streamline upwind Petrov-Galerkin (SUPG) method of Brooks and Hughes [15], is applied to the Euler equations of gas dynamics and to the single-pressure incompressible liquid/compressible gas flow system of Toumi and Kumbaro [137]. An approximate Riemann solver based on the Roe scheme [119] is applied. To minimise unstable overshoots, diffusivity is added in the direction of the gradient of the solution, as opposed to the direction of the streamlines in SUPG for the continuous finite element method (CFEM). The methods are tested on Cartesian meshes with scalar advection problems, the computationally challenging Sod shock tube and Lax Riemann problems, explosion problems in gas dynamics, and the water faucet test and explosion problems in two-phase flow. An extension to two dimensions and comparisons with existing methods are made. A framework for the well-posedness of two-phase flow equations is posited, and virtual mass terms are added to the two-phase flow equations of Toumi and Kumbaro to ensure hyperbolicity. A viscous-path-based Roe solver for DGFEM is applied, mirroring the method of Toumi and Kumbaro, in a framework for discontinuous solutions.
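As a point of reference for the test cases mentioned above, the sketch below sets up the Sod shock-tube Riemann problem for the 1D Euler equations and advances it with a first-order finite-volume scheme using a local Lax-Friedrichs (Rusanov) flux. This is deliberately not the DG/Roe scheme of the thesis, only a minimal illustration of the problem the methods are tested on; all numerical parameters are assumptions.

```python
# Minimal Sod shock-tube sketch: first-order finite volumes with a Rusanov
# flux.  Not the DGFEM/Roe scheme of the thesis; parameters are illustrative.
import numpy as np

gamma = 1.4

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u]), u, p

def rusanov(UL, UR):
    FL, uL, pL = flux(UL)
    FR, uR, pR = flux(UR)
    s = max(abs(uL) + np.sqrt(gamma * pL / UL[0]),
            abs(uR) + np.sqrt(gamma * pR / UR[0]))   # local max wave speed
    return 0.5 * (FL + FR) - 0.5 * s * (UR - UL)

N, cfl, t_end = 400, 0.4, 0.2
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
# Sod initial data: (rho, u, p) = (1, 0, 1) for x < 0.5, (0.125, 0, 0.1) otherwise.
rho0 = np.where(x < 0.5, 1.0, 0.125)
p0 = np.where(x < 0.5, 1.0, 0.1)
U = np.stack([rho0, np.zeros(N), p0 / (gamma - 1.0)])   # conserved variables

t = 0.0
while t < t_end:
    rho, u = U[0], U[1] / U[0]
    p = (gamma - 1.0) * (U[2] - 0.5 * rho * u**2)
    dt = min(cfl * dx / np.max(np.abs(u) + np.sqrt(gamma * p / rho)), t_end - t)
    F = np.array([rusanov(U[:, i], U[:, i + 1]) for i in range(N - 1)]).T
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])   # boundary cells held fixed
    t += dt

print("density range at t = 0.2:", U[0].min(), U[0].max())
```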
96.
Development of algorithms for the direct multi-configuration self-consistent field (MCSCF) method. Li, Shaopeng. January 2011.
In order to improve the performance of the current parallelized direct multi-configuration self-consistent field (MCSCF) implementations in the program package Gaussian [42], consisting of the complete active space (CAS) SCF method [43] and the restricted active space (RAS) SCF method [44], this thesis introduces a matrix multiplication scheme as part of the CI eigenvalue evaluation of these methods. Highly optimized linear algebra routines, which access data in a sequential and predictable way, can thus be used in our method, resulting in much better overall performance than the current methods. A side effect of this matrix multiplication scheme is that it requires extra memory to store the additional intermediate matrices. Several chemical systems are used to demonstrate that the new CAS and RAS methods are faster than the current CAS and RAS methods, respectively. This thesis is structured into four chapters. Chapter One is the general introduction, which describes the background of the CASSCF/RASSCF methods; the efficiency of the current CASSCF/RASSCF code is then discussed, which serves as the motivation for this thesis, followed by a brief introduction to our method. Chapter Two describes applying the matrix multiplication scheme to accelerate the current direct CASSCF method, by reorganizing the summation order in the equation that generates non-zero Hamiltonian matrix elements. It is demonstrated that the new method can perform much faster than the current CASSCF method by carrying out single-point energy calculations on the pyracylene and pyrene molecules, and geometry optimization calculations on the anthracene+ and phenanthrene+ molecules. However, in the RASSCF method, because an arbitrary number of doubly-occupied or unoccupied orbitals are introduced into the CASSCF reference space, many new orbital integral cases arise; some are suitable for the matrix multiplication scheme, while others are not. Chapter Three applies the scheme to those suitable integral cases that are also the most time-consuming for the RASSCF calculation. The coronene molecule, with different sizes of orbital active space, has been used to demonstrate that the new RASSCF method can perform significantly faster than the current Gaussian method. Chapter Four describes an attempt to modify the other integral cases, based on a review of the method developed by Saunders and Van Lenthe [95]. Calculations on the coronene molecule are used again to test whether this implementation can further improve the performance of the RASSCF method developed in Chapter Three.
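The performance argument above rests on recasting an explicit summation as a matrix multiplication that can be handed to optimised BLAS routines. The toy comparison below illustrates that general point with random stand-in arrays; it does not reproduce the actual CI coupling-coefficient contraction used in Gaussian.

```python
# Toy illustration of why recasting a nested summation as a matrix
# multiplication pays off: the BLAS-backed product accesses memory
# sequentially and predictably.  Arrays are random stand-ins.
import time
import numpy as np

rng = np.random.default_rng(1)
n = 150
A = rng.standard_normal((n, n))      # e.g. a block of coupling coefficients
B = rng.standard_normal((n, n))      # e.g. a block of integrals / CI vector

# Naive summation: sigma[i, j] = sum_k A[i, k] * B[k, j]
t0 = time.perf_counter()
sigma_loop = np.zeros((n, n))
for i in range(n):
    for k in range(n):
        a_ik = A[i, k]
        for j in range(n):
            sigma_loop[i, j] += a_ik * B[k, j]
t_loop = time.perf_counter() - t0

# The same contraction expressed as a single matrix multiplication.
t0 = time.perf_counter()
sigma_blas = A @ B
t_blas = time.perf_counter() - t0

print(f"loop: {t_loop:.2f} s   BLAS: {t_blas:.5f} s   "
      f"max difference: {np.abs(sigma_loop - sigma_blas).max():.2e}")
```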
97.
Sharp gradient bounds for the diffusion semigroup. Nee, Colm. January 2011.
Precise regularity estimates on diffusion semigroups are more than a mere theoretical curiosity: they play a fundamental role in deducing sharp error bounds for higher-order particle methods. This thesis considers error bounds which are of consequence in iterated applications of Wiener space cubature (Lyons and Victoir [29]) and a related higher-order method of Kusuoka [21]. Regularity properties are deduced for a wide range of diffusion semigroups, in particular semigroups corresponding to solutions of stochastic differential equations (SDEs) with non-smooth and degenerate coefficients. Precise derivative bounds for these semigroups are derived as functions of time, under a condition, known as the UFG condition, which is much weaker than Hörmander's criterion for hypoellipticity. Moreover, very relaxed differentiability assumptions are imposed on the coefficients. Proofs of exact error bounds for the associated higher-order particle methods are given where no such results previously existed. In later chapters, a local version of the UFG condition, the LFG condition, is introduced and used to obtain local gradient bounds and local smoothness properties of the semigroup; the condition's generality is demonstrated. It is also shown that the V0 condition proposed by Crisan and Ghazali [8] may be completely relaxed, and Sobolev-type gradient bounds are established for the semigroup under very general differentiability assumptions on the vector fields. Regularity properties for a semigroup perturbed by a potential and by a Lagrangian term are also considered. These prove important in the final chapter, in which we discuss existence and uniqueness of solutions to the Cauchy problem.
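Schematically, the objects discussed above are the diffusion semigroup of an SDE and time-dependent derivative bounds of the following indicative shape, standard in the Kusuoka-Stroock literature; the precise statements, constants and conditions are those of the thesis, not reproduced here.

```latex
% Semigroup of the SDE dX_t = V_0(X_t)\,dt + \sum_i V_i(X_t)\circ dW_t^i:
P_t f(x) = \mathbb{E}\bigl[f(X_t^x)\bigr].
% Indicative form of the sharp gradient bounds studied (under the UFG
% condition, with V_{[\alpha]} an iterated Lie bracket of the driving vector
% fields and \|\alpha\| the weighted length of the multi-index \alpha):
\bigl\| V_{[\alpha_1]} \cdots V_{[\alpha_k]} P_t f \bigr\|_\infty
  \le \frac{C}{t^{(\|\alpha_1\| + \cdots + \|\alpha_k\|)/2}}\,\|f\|_\infty,
\qquad 0 < t \le T.
```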
98.
Krylov subspace techniques for model reduction and the solution of linear matrix equations. Ahmad, Mian Ilyas. January 2011.
This thesis focuses on the model reduction of linear systems and the solution of large-scale linear matrix equations using computationally efficient Krylov subspace techniques. Most approaches for model reduction involve the computation and factorization of large matrices. Krylov subspace techniques, however, have the advantage that they involve only matrix-vector multiplications in the large dimension, which makes them a better choice for model reduction of large-scale systems. The standard Arnoldi/Lanczos algorithms are widely used Krylov techniques that compute orthogonal bases for Krylov subspaces and, via a projection onto the Krylov subspace, produce a reduced-order model that interpolates the actual system and its derivatives at infinity. An extension is the rational Arnoldi/Lanczos algorithm, which computes orthogonal bases for a union of Krylov subspaces and results in a reduced-order model that interpolates the actual system and its derivatives at a predefined set of interpolation points. This thesis concentrates on the rational Krylov method for model reduction. In the rational Krylov method an important issue is the selection of interpolation points, for which various techniques with different selection criteria are available in the literature. One of these techniques selects the interpolation points such that the approximation satisfies the necessary conditions for H2 optimal approximation. However, more than one approximation may satisfy the necessary optimality conditions. In this thesis, conditions on the interpolation points are derived that enable us to compute all approximations satisfying the necessary optimality conditions, and hence to identify the global minimizer of the H2 optimal model reduction problem. It is shown that, for an H2 optimal approximation that interpolates at m interpolation points, the interpolation points are the simultaneous solution of m multivariate polynomial equations in m unknowns. For a first-order approximation this condition reduces to computing the zeros of a linear system; for a second-order approximation it amounts to solving two bivariate polynomial equations simultaneously. These two cases are analyzed in detail and it is shown that a global minimizer of the H2 optimal model reduction problem can be identified. Furthermore, a computationally efficient iterative algorithm is proposed for the H2 optimal model reduction problem that converges to a local minimizer. Beyond the effect of interpolation points on the accuracy of the rational interpolating approximation, an arbitrary choice of interpolation points may result in a reduced-order model that loses useful properties of the actual system, such as stability, passivity, minimum-phase and bounded-real character, as well as its structure. It has recently been shown in the literature that rational interpolating approximations can be parameterized in terms of a free low-dimensional parameter in order to preserve the stability of the actual system in the reduced-order approximation. This idea is extended in this thesis to preserve other properties and combinations of them. The concept of parameterization is also applied to the minimal residual method, the two-sided rational Arnoldi method and H2 optimal approximation in order to improve the accuracy of the interpolating approximation.
The rational Krylov method has also been used in the literature to compute low-rank approximate solutions of the Sylvester and Lyapunov equations, which are useful for model reduction. The approach involves the computation of two sets of basis vectors, in which each vector is orthogonalized against all previous vectors. This orthogonalization becomes computationally expensive and requires large storage capacity as the number of basis vectors increases. In this thesis, a restart scheme is proposed which restarts without requiring that the new vectors be orthogonal to the previous vectors; instead, a set of two new orthogonal basis vectors is computed. This reduces both the computational burden of orthogonalization and the storage requirement. It is shown that, in the case of Lyapunov equations, the approximate solution obtained through the restart scheme converges monotonically to the actual solution.
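The following sketch illustrates the basic rational Krylov projection idea described above: an orthonormal basis is built from shifted linear solves and the system is reduced by Galerkin projection, so that the reduced transfer function matches the full one at the chosen interpolation points. The test system and the interpolation points are arbitrary illustrative choices; the sketch does not implement the H2-optimal, parameterised or restarted schemes of the thesis.

```python
# Rational Krylov (moment-matching) reduction sketch with arbitrary shifts.
import numpy as np

rng = np.random.default_rng(2)
n = 200
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal((n, 1))
c = rng.standard_normal((n, 1))

shifts = [1.0, 10.0, 100.0]                 # illustrative interpolation points
V = np.hstack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
V, _ = np.linalg.qr(V)                      # orthonormal basis of the rational Krylov space

# Galerkin projection onto range(V).
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

def transfer(s, A, b, c):
    return (c.T @ np.linalg.solve(s * np.eye(A.shape[0]) - A, b)).item()

# Full and reduced transfer functions agree at the shifts and differ elsewhere.
for s in shifts + [3.0]:
    print(f"s = {s:6.1f}   full: {transfer(s, A, b, c):+.6e}"
          f"   reduced: {transfer(s, Ar, br, cr):+.6e}")
```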
99.
Hamiltonian sequential Monte Carlo and normalizing constants. Kostov, Svetoslav. January 2016.
The present thesis deals with the problems of simulation from a given target distribution and the estimation of ratios of normalizing constants, i.e. marginal likelihoods (MLs). Both problems can be considerably difficult even for the simplest real-world statistical setups. We investigate how the combination of Hamiltonian Monte Carlo (HMC) and Sequential Monte Carlo (SMC) can be used to sample effectively from a multi-modal target distribution and, at the same time, to estimate ratios of normalizing constants. We call this novel combination the Hamiltonian SMC (HSMC) algorithm and show that it achieves improvements over existing Monte Carlo sampling algorithms, especially when the target distribution is multi-modal and/or has a complicated covariance structure. An important convergence result is proved for the HSMC, as well as an upper bound on the bias of the estimate of the ratio of MLs. Our investigation of the continuous-time limit of the HSMC process reveals an interesting connection between Monte Carlo simulation and physics. We also consider the problem of estimating the uncertainty of the ML estimate for a hidden Markov model (HMM). We propose a new algorithm (the Pairs algorithm) to estimate the non-asymptotic second moment of the ML estimate for general HMMs. We then show that there exists a linear-in-time bound on the relative variance of the estimate of the second moment of the ML obtained using the Pairs algorithm. In practice, this theoretical property of the relative variance translates into more reliable estimates of the second moment of the ML estimate, compared with the standard approach of running independent copies of the particle filter. We support our investigations with numerical examples including Bayesian inference for a heteroscedastic regression and inference for a Lotka-Volterra-based HMM.
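To make the HMC ingredient above concrete, the sketch below shows a minimal leapfrog HMC move of the kind that can serve as a mutation kernel inside an SMC sampler. The bimodal target, step size and trajectory length are illustrative assumptions; this is not the HSMC algorithm of the thesis.

```python
# Minimal leapfrog HMC move on a hypothetical bimodal target (illustrative).
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    # Unnormalized log density of a mixture of N(-2, 1) and N(+2, 1).
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

def grad_log_target(x, h=1e-5):
    return (log_target(x + h) - log_target(x - h)) / (2 * h)  # finite difference

def hmc_move(x, eps=0.2, n_leapfrog=20):
    p = rng.standard_normal()
    x_new, p_new = x, p + 0.5 * eps * grad_log_target(x)      # initial half step
    for _ in range(n_leapfrog):
        x_new = x_new + eps * p_new
        p_new = p_new + eps * grad_log_target(x_new)
    p_new = p_new - 0.5 * eps * grad_log_target(x_new)        # undo the extra half step
    log_accept = (log_target(x_new) - 0.5 * p_new**2) - (log_target(x) - 0.5 * p**2)
    return x_new if np.log(rng.uniform()) < log_accept else x

samples, x = [], 0.0
for _ in range(5000):
    x = hmc_move(x)
    samples.append(x)
print("sample mean ~", np.mean(samples), "(the target mean is 0 by symmetry)")
```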
100.
The Application of Iterative Techniques to Adaptive Detection Processes. Clements, A. January 1976.
No description available.