91

Well-posed formulations and stable finite differencing schemes for numerical relativity

Hinder, Ian January 2005 (has links)
This work concerns the evolution equations of general relativity: their mathematical properties at the continuum level, and the properties of the finite difference schemes used to approximate their solutions in numerical simulations. Stability results for finite difference approximations of partial differential equations which are first order in time and first order in space are well-known. However, systems which are first order in time and second order in space have been more successful in the field of numerical relativity than fully first order systems. For example, binary black hole simulations are accurate for much longer times. Hence, a greater understanding of the stability properties of these systems is desirable. An example of such a system is the NOR (Nagy, Ortiz and Reula) [47] formulation of general relativity. We present a proof of the stability of a finite difference approximation of the linearized NOR evolution system. The new tools used to prove stability for second order in space systems are described, along with the simple example of the wave equation. In order to implement and compare different formulations of the Einstein equations in numerical simulations, the equations must be expanded from abstract tensor relations into components, discretized, and entered into a computer. This process is aided enormously by the use of automated code generation. I present the Kranc software package which we have written to perform these tasks. It is expected, by analogy with the wave equation, that numerical simulations of systems which are first order in time and second order in space will be more accurate than those of fully first order systems. We present a quantitative comparison of the accuracy of formulations of the fully nonlinear Einstein equations and determine that for linearized gravitational waves this prediction is verified. However, the same cannot be said for other test cases, and it is concluded that certain problems with the second order in space formulations make them behave worse than fully first order formulations in these cases.
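As a concrete illustration of the kind of system discussed above - a minimal sketch, not material from the thesis - the 1D wave equation can be written first order in time and second order in space, u_t = v, v_t = u_xx, and discretised with a centred second difference in space; the grid size, time step and time integrator below are assumed illustrative choices.

```python
import numpy as np

# Sketch: 1D wave equation as a first-order-in-time, second-order-in-space system,
#     u_t = v,   v_t = u_xx,
# with a centred second difference in space and a staggered leapfrog in time.
# All sizes are assumed values; dt <= dx is the usual CFL stability restriction here.
nx, length, tmax = 200, 1.0, 0.5
dx = length / nx
dt = 0.5 * dx
x = np.arange(nx) * dx

u = np.exp(-100.0 * (x - 0.5) ** 2)   # initial displacement (a Gaussian pulse)
v = np.zeros(nx)                      # initial velocity

def d2(f, dx):
    """Centred second difference with periodic boundaries."""
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx ** 2

v += 0.5 * dt * d2(u, dx)             # stagger v to the half time step
t = 0.0
while t < tmax - 1e-12:
    u += dt * v                       # advance u using v at the half step
    v += dt * d2(u, dx)               # advance v using the updated u
    t += dt

print("max |u| after evolution:", float(np.abs(u).max()))
```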
92

Non-polynomial scalar field potentials in the local potential approximation

Bridle, Ismail Hamzaan January 2017 (has links)
We present the renormalisation group analysis of O(N) invariant scalar field theory in the local potential approximation. Linearising around the Gaussian fixed point, we find that the same eigenoperator solutions exist for both the Wilsonian and the Legendre effective actions, given by solutions to Kummer's equation. We find that the usual polynomial eigenoperators, and the Hilbert space they define, are a natural subset of these solutions, given by a specific set of quantised eigenvalues. Allowing for continuous eigenvalues, we find non-polynomial eigenoperator solutions, the so-called Halpern-Huang directions, that exist outside of the polynomial Hilbert space due to their exponential field dependence. Carefully analysing the large field behaviour shows that the exponential dependence implies the Legendre effective action does not have a well defined continuum limit. In comparison, flowing towards the infrared we find that the non-polynomial eigenoperators flow into the polynomial Hilbert space. These conclusions are based on RG flow initiated at an arbitrary scale, implying that non-polynomial eigenoperators depend upon a scale other than k. Therefore, the asymptotic field behaviour forbids self-similar scaling. These results hold when generalised from the Halpern-Huang directions around the Gaussian fixed point to a general non-polynomial eigenoperator around a general fixed point. Legendre transforming to the results of the Polchinski equation, we find that the flow of the Wilsonian effective action is much better regulated and always falls into the polynomial Hilbert space. For large Wilsonian effective actions, we find that the non-linear terms of the Polchinski equation forbid any non-polynomial field scaling, regardless of the fixed point. These observations lead to the conclusion that only polynomial eigenoperators show the correct, self-similar, scaling behaviour needed to construct a non-perturbatively renormalisable scalar QFT.
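For background (standard facts about the confluent hypergeometric equation, stated here for reference rather than taken from the thesis; the identification of its variable and parameters with LPA quantities is only indicated schematically), Kummer's equation, its regular solution and the polynomial quantisation condition are:

```latex
% Kummer's (confluent hypergeometric) equation and its regular solution:
\[
  z\,\frac{d^{2}y}{dz^{2}} + (b - z)\,\frac{dy}{dz} - a\,y = 0,
  \qquad
  M(a,b,z) = \sum_{n=0}^{\infty}\frac{(a)_{n}}{(b)_{n}}\,\frac{z^{n}}{n!}.
\]
% The series truncates to a polynomial precisely when a = -n for a
% non-negative integer n (the quantised eigenvalues mentioned above);
% for generic a the solution grows like
\[
  M(a,b,z) \sim \frac{\Gamma(b)}{\Gamma(a)}\, e^{z} z^{a-b}
  \quad (z \to +\infty),
\]
% which is the exponential large-field dependence characteristic of the
% non-polynomial (Halpern-Huang type) eigenoperators.
```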
93

Fast approximate Bayesian computation for inference in non-linear differential equations

Ghosh, Sanmitra January 2016 (has links)
Complex biological systems are often modelled using non-linear differential equations, which provide a rich framework for describing the dynamic behaviour of many interacting physical variables representing quantities of biological importance. Approximate Bayesian computation (ABC) using a sequential Monte Carlo (SMC) algorithm is a Bayesian inference methodology that provides a comprehensive platform for parameter estimation, model selection and sensitivity analysis in such non-linear differential equations. However, this method incurs a significant computational cost as it requires explicit numerical integration of the differential equations to carry out inference. In this thesis we propose a novel method for circumventing the requirement of explicit integration within the ABC-SMC algorithm by using derivatives of Gaussian processes to smooth the observations from which parameters are estimated. We evaluate our methods using synthetic data generated from model biological systems described by ordinary and delay differential equations. Upon comparing the performance of our method to existing ABC techniques, we demonstrate that it produces comparably reliable parameter estimates at a significantly reduced execution time. To emphasise the practical applicability of our fast ABC-SMC algorithm, we have used it extensively in the task of inverse modelling of a phenomenon pertaining to plant electrophysiology. In particular, we model the electrical responses in higher plants subjected to periods of ozone exposure. We investigate the generation of calcium responses at local sites following a stimulation and model electrical signals as a plant-wide manifestation of such responses. We propose a novel mathematical model that describes the experimentally observed responses to ozone. Furthermore, we pose the modelling task as an inverse problem where much of our insight is gained from the data itself. We highlight throughout the inverse modelling process the usefulness of the proposed fast ABC-SMC method in fitting, discriminating between and analysing models described as non-linear ordinary differential equations. We carry out all these tasks using noisy experimental datasets, which provide limited information, to derive novel insights about the underlying biological processes.
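A minimal sketch of the gradient-matching idea described above, using a hand-rolled Gaussian process with an RBF kernel and plain ABC rejection rather than the full ABC-SMC scheme of the thesis; the logistic-growth ODE, prior ranges, GP hyperparameters and tolerance rule are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a hypothetical logistic-growth ODE, dx/dt = r*x*(1 - x/K),
# with x(0) = 1 (the analytic solution is used only to generate observations).
r_true, K_true = 1.5, 10.0
t = np.linspace(0.0, 5.0, 25)
x_true = K_true / (1.0 + (K_true - 1.0) * np.exp(-r_true * t))
y = x_true + rng.normal(0.0, 0.2, size=t.size)        # noisy observations

# GP smoothing with an RBF kernel (hyperparameters are assumed, not fitted).
sig_f, ell, sig_n = 3.0, 1.0, 0.2
def rbf(a, b):
    return sig_f ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

alpha = np.linalg.solve(rbf(t, t) + sig_n ** 2 * np.eye(t.size), y)
x_smooth = rbf(t, t) @ alpha                          # GP posterior mean at t
# Derivative of the posterior mean: d/dt* k(t*, t_i) = -(t* - t_i)/ell^2 * k(t*, t_i).
dKdt = -(t[:, None] - t[None, :]) / ell ** 2 * rbf(t, t)
dx_smooth = dKdt @ alpha                              # GP estimate of dx/dt

# ABC rejection with a gradient-matching distance: no ODE integration anywhere.
def distance(theta):
    r, K = theta
    rhs = r * x_smooth * (1.0 - x_smooth / K)         # ODE right-hand side on smoothed states
    return np.sqrt(np.mean((rhs - dx_smooth) ** 2))

thetas = rng.uniform([0.1, 1.0], [5.0, 20.0], size=(20000, 2))   # uniform prior (assumed)
dists = np.array([distance(th) for th in thetas])
eps = np.quantile(dists, 0.01)                        # keep the closest 1% (assumed rule)
accepted = thetas[dists <= eps]
print("accepted samples:", len(accepted), " posterior mean:", accepted.mean(axis=0))
```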
94

Smoothness-increasing accuracy-conserving (SIAC) line filtering: effective rotation for multidimensional fields

Docampo-Sanchez, Julia January 2016 (has links)
Over the past few decades there has been a strong effort towards the development of Smoothness-Increasing Accuracy-Conserving (SIAC) filters for Discontinuous Galerkin (DG) methods, designed to increase the smoothness and improve the convergence rate of the DG solution through this post-processor. The application of these filters in multiple dimensions has traditionally employed a tensor product kernel, allowing a natural extension of the theory developed for one-dimensional problems. In addition, the tensor product has always been taken along the Cartesian axes, resulting in a filter whose support has fixed shape and orientation. This thesis has challenged these assumptions, leading to the investigation of rotated filters: tensor product filters with variable orientation. Combining this approach with previous experiments on lower-dimensional filtering, a new and computationally efficient subfamily for post-processing multidimensional data has been developed: SIAC Line filters. These filters transform the integral of the convolution into a line integral. Hence, the computational advantages are immediate: the simulation times become significantly shorter and the complexity of the algorithm design reduces to a one-dimensional problem. In the thesis, a solid theoretical background for SIAC Line filters has been established. Theoretical error estimates have been developed, showing how Line filtering preserves the properties of traditional tensor product filtering, including smoothness recovery and improvement in the convergence rate. Furthermore, different numerical experiments were performed, showing how these filters achieve the same accuracy at significantly lower computational cost. This affords great advantages for the application of these filters in flow visualisation; one important limiting factor of a tensor product structure is that the filter support grows as the field dimension increases, becoming computationally expensive. SIAC Line filters have proven computationally efficient, thus overcoming the limitations presented by the tensor product filter. The experiments carried out on streamline visualisation suggest that these filters are a promising tool in scientific visualisation.
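A rough sketch of the line-convolution structure described above; the kernel here is a single cubic B-spline used as a stand-in (genuine SIAC kernels are specific linear combinations of B-splines whose coefficients enforce moment conditions), and the field, evaluation point and filtering direction are hypothetical.

```python
import numpy as np

def cubic_bspline(t):
    """Central cubic B-spline, supported on [-2, 2] and integrating to one."""
    a = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(a)
    inner = a <= 1.0
    outer = (a > 1.0) & (a < 2.0)
    out[inner] = 2.0 / 3.0 - a[inner] ** 2 + 0.5 * a[inner] ** 3
    out[outer] = (2.0 - a[outer]) ** 3 / 6.0
    return out

def line_filter(field, x0, theta, h, nq=201):
    """Filter `field` at the point x0 by a one-dimensional convolution along
    the unit direction `theta`, with kernel support scaled by h.  The kernel is
    a single cubic B-spline, a placeholder for a proper SIAC kernel, so only
    the structure of the line integral is illustrated, not the accuracy gain."""
    t = np.linspace(-2.0 * h, 2.0 * h, nq)            # kernel support after scaling
    w = cubic_bspline(t / h) / h                      # scaled kernel values
    pts = x0[None, :] + t[:, None] * theta[None, :]   # sample points along the line
    return np.trapz(w * field(pts[:, 0], pts[:, 1]), t)

# Hypothetical smooth field, evaluation point and a rotated filtering direction.
field = lambda x, y: np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
x0 = np.array([0.3, 0.4])
theta = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])   # 30-degree rotation
print("filtered value:", line_filter(field, x0, theta, h=0.05))
print("pointwise value:", field(x0[0], x0[1]))
```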
95

Virtual Element Methods

Sutton, Oliver James January 2017 (has links)
In this thesis we study the Virtual Element Method, a recent generalisation of the standard conforming Finite Element Method offering high order approximation spaces on computational meshes consisting of general polygonal or polyhedral elements. Our particular focus is on developing the tools required to use the method as the foundation of mesh adaptive algorithms which are able to take advantage of the flexibility offered by such general meshes. We design virtual element discretisations of certain classes of linear second order elliptic and parabolic partial differential equations, and present a detailed exposition of their implementation aspects. An outcome of this is a concise and usable 50-line MATLAB implementation of a virtual element method for solving a model elliptic problem on general polygonal meshes, the code for which is included as an appendix. Optimal order convergence rates in the H1 and L2 norms are proven for the discretisation of elliptic problems. Alongside these, we derive fully computable residual-type a posteriori estimates of the error measured in the H1 and L2 norms for the methods we develop for elliptic problems, and in the L2(0,T;H1) and L∞(0,T;L2) norms for parabolic problems. In deriving the L∞(0,T;L2) error estimate, we introduce a new technique (which translates naturally back into the setting of conventional finite element methods) to produce estimates with effectivities which become constant for long time simulations. Mesh adaptive algorithms, designed around these methods and computable error estimates, are proposed and numerically assessed in a variety of challenging stationary and time-dependent scenarios. We further propose a virtual element discretisation and computable coarsening/refinement indicator for a system of semilinear parabolic partial differential equations which we apply to a Lotka-Volterra type model of interacting species. These components form the basis of an adaptive method which we use to reveal a variety of new pattern-forming mechanisms in the cyclic competition model.
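A hedged sketch of the lowest-order virtual element local stiffness matrix on a single polygon, following the usual projector-plus-stabilisation construction; the thesis's own 50-line MATLAB code is the reference implementation, and this independent Python illustration uses the simplest identity-scaled stabilisation.

```python
import numpy as np

def vem_local_stiffness(verts):
    """Lowest-order (k = 1) virtual element stiffness matrix for -laplace(u) = f
    on a single polygon with vertices `verts` (N x 2, counter-clockwise).
    Follows the usual D/B/G projector construction with the simplest
    identity-scaled stabilisation; an illustrative sketch only."""
    verts = np.asarray(verts, dtype=float)
    N = len(verts)
    x, y = verts[:, 0], verts[:, 1]
    xc, yc = verts.mean(axis=0)                                    # centring point
    hE = max(np.linalg.norm(p - q) for p in verts for q in verts)  # element diameter

    # D: scaled monomials {1, (x-xc)/hE, (y-yc)/hE} evaluated at the vertices.
    D = np.column_stack([np.ones(N), (x - xc) / hE, (y - yc) / hE])

    # B: row 1 encodes the vertex-average projection onto constants; rows 2-3
    # the boundary integrals of (grad m_a . n) against the vertex basis functions.
    B = np.zeros((3, N))
    B[0, :] = 1.0 / N
    for j in range(N):
        prv, nxt = verts[j - 1], verts[(j + 1) % N]
        B[1, j] = (nxt[1] - prv[1]) / (2.0 * hE)   # (edge length x outward normal)/2, x-part
        B[2, j] = (prv[0] - nxt[0]) / (2.0 * hE)   # y-part, for CCW ordering

    G = B @ D                                      # 3 x 3 projector system
    PiStar = np.linalg.solve(G, B)                 # projection coefficients onto monomials
    Pi = D @ PiStar                                # projection expressed in the VEM dofs
    Gt = G.copy(); Gt[0, :] = 0.0                  # drop the constant mode from the energy

    # consistency part + stabilisation of the non-polynomial remainder
    return PiStar.T @ Gt @ PiStar + (np.eye(N) - Pi).T @ (np.eye(N) - Pi)

# Example: local stiffness matrix on the unit square treated as a polygon.
K = vem_local_stiffness([(0, 0), (1, 0), (1, 1), (0, 1)])
print(np.round(K, 3))
print("row sums (should vanish on constants):", np.round(K.sum(axis=1), 12))
```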
96

Online graph-based learning for classification

Wainer, L. J. January 2008 (has links)
The aim of this thesis is to develop online kernel-based algorithms for learning classification functions over a graph. An important question in machine learning is: how do we learn functions in a high dimension? One of the benefits of using a graphical representation of data is that it can provide a dimensionality reduction of the data to the number of nodes plus edges in the graph. Graphs are useful discrete representations of data that have already been used successfully to incorporate structural information in data to aid in semi-supervised learning techniques. In this thesis, an online learning framework is used to provide guarantees on the performance of the algorithms developed. The first step in developing these algorithms required motivating the idea of a "natural" kernel defined on a graph. This natural kernel turns out to be the Laplacian operator associated with the graph. The next step was to look at a well known online algorithm - the perceptron algorithm - with the associated bound, and formulate it for online learning with this kernel. This was a matter of using the Laplacian kernel with the kernel perceptron algorithm. For a binary classification problem, the bound on the performance of this algorithm can be interpreted in terms of natural properties of the graph, such as the graph diameter. Further algorithms were developed, motivated by the idea of a series of alternate projections, which also share this bound interpretation. The minimum norm interpolation algorithm was developed in batch mode and then transformed into an online algorithm. These algorithms were tested and compared with other proposed algorithms on toy and real data sets. The main comparison algorithm used was k-nearest neighbour along the graph. Once the kernel has been calculated, the new algorithms perform well and offer some advantages over other approaches in terms of computational complexity.
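A small sketch of the approach described above: the graph Laplacian induces a kernel (here its pseudoinverse, so that the Laplacian itself plays the role of the norm), which is then used in the standard kernel perceptron. The graph, labels and presentation order below are illustrative assumptions, and none of the thesis's bounds are reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small illustrative graph: two internally connected clusters joined by one edge.
n = 20
A = np.zeros((n, n))
for group in (list(range(0, 10)), list(range(10, 20))):
    for a, b in zip(group, group[1:]):             # a path keeps each cluster connected
        A[a, b] = A[b, a] = 1.0
    for i in group:
        for j in group:
            if i < j and rng.random() < 0.3:       # extra random intra-cluster edges
                A[i, j] = A[j, i] = 1.0
A[9, 10] = A[10, 9] = 1.0                          # bridge between the clusters
labels = np.array([1] * 10 + [-1] * 10)            # cluster membership as the target

# Graph Laplacian and the induced kernel: the pseudoinverse of L, so that the
# Laplacian defines the norm of the associated function space.
Lap = np.diag(A.sum(axis=1)) - A
K = np.linalg.pinv(Lap)

# Online kernel perceptron over a random presentation order of the nodes.
alpha = np.zeros(n)
mistakes = 0
for i in rng.permutation(n):
    score = alpha @ K[:, i]
    pred = 1 if score >= 0 else -1
    if pred != labels[i]:
        alpha[i] += labels[i]                      # standard kernel-perceptron update
        mistakes += 1
print("online mistakes:", mistakes)

# Classify every node from the learned coefficients.
final = np.where(alpha @ K >= 0, 1, -1)
print("training errors after one pass:", int(np.sum(final != labels)))
```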
97

Mathematical continua & the intuitive idea of continuity

Hudry, Jean-Louis January 2006 (has links)
How does philosophy understand the concept of continuity? The intuitive idea of continuity is about perceptual smoothness; but what looks smooth may be discontinuous, meaning that phenomenal continuity does not constitute a reliable definition. Metaphysics speaks of continuants with respect to temporal parts, but does not provide a definition of continuity. When properly defined, it is then associated with a minimal change divided into infinitesimal parts, which is an implicit reference to Leibniz's law of continuity such that a continuous change pertains to a geometric graph differentiable at arbitrary points. Yet, does it make sense to define continuity by means of discontinuous points? We must view Leibniz's definition as a transitory stage between two contradictory concepts, i.e. geometric and arithmetical continua. While Aristotle shows that a continuous line is infinitely divisible into lines, Dedekind defines an arithmetical continuum (or real line) as a complete domain of real numbers. This distinction opposes the intuitive idea of a smooth extension to a discontinuous and extensionless sequence of numbers, meaning that algebraic formalisms do not solve Zeno's geometric paradoxes but make them irrelevant. The consequences for physical continuity are such that an Aristotelian time is a smooth temporal interval devoid of indivisible parts; namely, instants of time are abstract limits and not physical durations. Arithmetical continuity defines a continuous time as isomorphic to a set of real numbers, but the measure of this extensionless structure is physically meaningless, and there is no physical argument to claim that a continuous time is a better model than a discrete time. Arithmetical continuity is omnipresent in modern mathematics; yet, it is fraught with difficulties in relation to the infinite. Cantor distinguishes a countably infinite set of natural (or rational) numbers from an uncountably infinite continuum. These infinite cardinalities imply the 'axiom' of choice, according to which it is always possible to choose a unique element from each set in an infinite collection of disjoint, non-empty sets. Brouwer rejects this postulate because it is based on the unjustified idea that the infinite has the same ordering as the finite. He then claims that only infinitely incomplete sequences can be generated, since the nature of the infinite is to be merely potential. Others directly challenge arithmetic. C.S. Peirce suggests a topological geometry devoid of discrete numbers; however, it is clear that modern topology rests on an arithmetical ordering of real numbers and cannot be defined as pure geometry. More recently, J.L. Bell rejects the intuitive discontinuity of algebraic structures by defending an axiomatic system of smooth infinitesimals; yet, the identification of axiomatic smoothness with intuition neglects the necessity for any axiomatic property to belong to the axioms alone. Still, the construction of an axiomatic system can help us defend arithmetical continuity. Hilbert shows that a Euclidean model of geometry is isomorphic to an algebraic model, such that the axiom of continuity is satisfiable in either model. As for the absolute consistency of the axiomatic system, it requires a metamathematics, which aims to demonstrate the arithmetical infinite on finite logical grounds.
First-order logic fails to define a continuum as a concrete object, since the uncountable set of all countable subsets is independent of any logic whose models have only countable domains (Löwenheim-Skolem theorem). By contrast, second-order logic makes sense of a continuum as an abstract set, which means that arithmetical continuity is nothing more than an ideal, hypothetical abstraction.
98

Floquet theory for doubly periodic differential equations

Wright, G. P. January 1970 (has links)
Our objective is to extend the well-known Floquet theory of ordinary differential equations with singly periodic coefficients to equations with doubly periodic coefficients. We study mainly an equation of fairly general type, analogous to Hill's equation, but doubly periodic. Some particular attention is devoted, however, to the special case of Lamé's equation. A general theory, analogous to that for Hill's equation, is first developed, with some consideration of an algebraic form of the equation, having three regular singularities and one irregular. Next we introduce a parameter v (one of the characteristic exponents at a singularity). In the case v = 0 the general solution is uniform and Hermite showed that there then exists at least one doubly-multiplicative solution. The central work of this thesis is to consider certain rational values of v, introducing some special cuts in the complex plane and showing that in certain circumstances the general solution is uniform in the cut plane. When this is so, doubly-multiplicative solutions again exist. Extension to general rational values of v depends on an interesting and apparently unproved conjecture related to the zeros of Chebyshev polynomials.
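For orientation, the singly periodic situation being generalised can be summarised by Floquet's theorem for Hill's equation (standard background, not material from the thesis):

```latex
% Floquet's theorem for Hill's equation (singly periodic case):
\[
  y''(x) + Q(x)\,y(x) = 0, \qquad Q(x + \pi) = Q(x),
\]
% generically admits two independent multiplicative solutions
\[
  y_{1,2}(x + \pi) = \sigma_{1,2}\, y_{1,2}(x), \qquad
  \sigma_{1,2} = e^{\pm i \mu \pi},
\]
% where mu is the characteristic exponent.  The thesis studies the analogue in
% which Q is doubly periodic, where Hermite's result (quoted above for v = 0)
% guarantees at least one doubly-multiplicative solution.
```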
99

Topics in transcendental dynamics

Sixsmith, Dave January 2013 (has links)
We study the iteration of a transcendental entire function, f; in particular, the fast escaping set, A(f). This set consists of points that iterate to infinity as fast as possible, and plays a significant role in transcendental dynamics. First we investigate functions for which A(f) has a structure called a spider's web. We construct several new classes of function with this property. We show that some of these classes have a degree of stability under changes in the function, and that new examples of functions with this property can be constructed by composition, by differentiation, and by integration of existing examples. We use a property of spiders' webs to give new results concerning functions with no unbounded Fatou components. When A(f) is a spider's web, it contains a sequence of fundamental loops. We next explore the structure of these fundamental loops for functions with a multiply connected Fatou component, and show that there exist functions for which some fundamental loops are analytic curves and approximately circles, while others are geometrically highly distorted. We do this by introducing a real-valued function which measures the rate of escape of points in A(f), and show that this function has a number of interesting properties. Next we study functions with a simply connected Fatou component in A(f). We give an example of a function with this property, which - in contrast to the only other known functions of this type - has no multiply connected Fatou components. To do this we also prove a new criterion for points to be in A(f). Finally, we investigate the much studied Eremenko-Lyubich class of transcendental entire functions with a bounded set of singular values. We give a new characterisation of this class, and a new result regarding direct singularities which are not logarithmic.
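For reference, the standard definition of the fast escaping set (a known definition in the literature, stated here rather than quoted from the thesis) is:

```latex
% The fast escaping set of a transcendental entire function f:
\[
  A(f) = \bigl\{\, z : \exists\, \ell \in \mathbb{N} \text{ such that }
  |f^{\,n+\ell}(z)| \ge M^{n}(R, f) \text{ for all } n \in \mathbb{N} \,\bigr\},
\]
% where M(r, f) = max_{|z| = r} |f(z)|, M^n denotes n-fold iteration of
% M(., f) in its first argument, and R > 0 is any radius large enough that
% M(r, f) > r for all r >= R.
```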
100

Stochastic approximation method for identification of linear dynamical systems

Panuska, Vaclav January 1969 (has links)
No description available.
