101

Sparse space-time boundary element methods for the heat equation

Reinarz, Anne January 2015 (has links)
The goal of this work is the efficient solution of the heat equation with Dirichlet or Neumann boundary conditions using the Boundary Element Method (BEM). Efficiently solving the heat equation is useful, as it is a simple model problem for other types of parabolic problems. In complicated spatial domains, as often found in engineering, BEM can be beneficial since only the boundary of the domain has to be discretised. This makes BEM simpler to apply than domain methods such as finite elements and finite differences, which are conventionally combined with time-stepping schemes to solve this problem. The contribution of this work is to further decrease the complexity of solving the heat equation, leading both to gains in speed (CPU time) and to reduced memory requirements for the same problem. To do this we combine the complexity gains of boundary reduction by integral equation formulations with a discretisation using wavelet bases. This reduces the total work to O(h_x^{-(d-1)}), provided the solution of the linear system is performed with linear complexity. We show that the discretisation with a wavelet basis leads to a numerically sparse matrix. Further, we show that this matrix can be compressed without losing accuracy in the underlying Galerkin scheme. This matrix compression reduces the number of non-zero matrix entries from O(N²) to O(N). Thus, we can indeed solve the linear system in linear time. It has been shown theoretically that using sparse grid methods leads to considerably higher convergence rates in the energy norm of the problem. In this work we show that the convergence can be further improved for some choices of polynomial degrees by using more general sparse grid spaces. We also give numerical results to verify the theoretical bounds from [Chernov, Schwab, 2013].
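To make the compression step concrete, below is a minimal numerical sketch in Python. It is not the thesis's space-time heat-kernel discretisation: it assumes a 1-D model kernel (log-distance on a uniform grid), an orthonormal Haar wavelet basis, and an ad-hoc threshold, purely to illustrate how a wavelet transform turns a dense matrix from a smooth kernel into a numerically sparse one.

```python
import numpy as np

def haar_1d(v):
    # Full-depth orthonormal Haar transform of a vector whose length is a power of 2.
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        half = n // 2
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)  # scaling (average) coefficients
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)  # wavelet (detail) coefficients
        v[:half], v[half:n] = a, d
        n = half
    return v

def haar_2d(A):
    # Separable 2-D transform: all rows, then all columns.
    B = np.apply_along_axis(haar_1d, 1, A)
    return np.apply_along_axis(haar_1d, 0, B)

N = 256
x = (np.arange(N) + 0.5) / N
# Dense model kernel matrix, smooth away from the diagonal (a stand-in for a BEM matrix).
K = np.log(np.abs(x[:, None] - x[None, :]) + 1.0 / N)

C = haar_2d(K)
tol = 1e-6 * np.abs(C).max()
nnz = int(np.count_nonzero(np.abs(C) > tol))
print(f"entries kept after thresholding: {nnz} of {N * N} ({100.0 * nnz / N**2:.2f}%)")
```

For a kernel that is smooth away from the diagonal, the fraction of retained entries keeps falling as N grows, which is the mechanism behind the O(N²) to O(N) reduction cited above.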
102

Adaptive discontinuous Galerkin methods for nonlinear parabolic problems

Metcalfe, Stephen Arthur January 2015 (has links)
This work is devoted to the study of a posteriori error estimation and adaptivity in parabolic problems, with a particular focus on spatial discontinuous Galerkin (dG) discretisations. We begin by deriving an a posteriori error estimator for a linear non-stationary convection-diffusion problem that is discretised with a backward Euler dG method. An adaptive algorithm is then proposed to utilise the error estimator. The effectiveness of both the error estimator and the proposed algorithm is shown through a series of numerical experiments. Moving on to nonlinear problems, we investigate the numerical approximation of blow-up. To begin this study, we first look at the numerical approximation of blow-up in nonlinear ODEs through standard time stepping schemes. We then derive an a posteriori error estimator for an implicit-explicit (IMEX) dG discretisation of a semilinear parabolic PDE with quadratic nonlinearity. An adaptive algorithm is proposed that uses the error estimator to approach the blow-up time. The adaptive algorithm is then applied in a series of test cases to gauge the effectiveness of the error estimator. Finally, we consider the adaptive numerical approximation of a nonlinear interface problem that is used to model the mass transfer of solutes through semi-permeable membranes. An a posteriori error estimator is proposed for the IMEX dG discretisation of the model and its effectiveness is tested through a series of numerical experiments.
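As a toy version of the ODE blow-up study, here is a minimal Python sketch, assuming the model problem u' = u² with exact blow-up time T* = 1 and a crude step-size rule standing in for a genuine a posteriori estimator:

```python
from math import sqrt

# Model blow-up ODE: u' = u^2, u(0) = 1, exact blow-up time T* = 1.
u, t, dt = 1.0, 0.0, 1e-3
while u < 1e8:                      # stop once the solution is "numerically infinite"
    # Backward Euler: u_new = u + dt * u_new^2; take the root with u_new -> u as dt -> 0.
    disc = 1.0 - 4.0 * dt * u
    if disc <= 0.0:                 # step too large for the implicit solve to have a root
        dt *= 0.5
        continue
    u, t = (1.0 - sqrt(disc)) / (2.0 * dt), t + dt
    dt = min(dt, 0.1 / u)           # shrink the step as u grows (estimator surrogate)
print(f"approach to blow-up time: t = {t:.6f} (exact 1.0), u = {u:.3e}")
```

The step-size rule keeps 4·dt·u bounded away from 1, so the implicit solve always has a real root while t accumulates towards the blow-up time.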
103

Derivative pricing in Lévy driven models

Kushpel, Alexander January 2015 (has links)
We consider an important class of derivative contracts written on multiple assets which are traded on a wide range of financial markets. More specifically, we are interested in developing novel methods for pricing financial derivatives using approximation-theoretic methods which are not well known to the financial engineering community. The problem of pricing such contracts splits into two parts. First, we need to approximate the respective density function, which depends on the adapted jump-diffusion model. Second, we need to construct a sequence of approximation formulas for the price. These two parts are connected with the problem of optimal approximation of infinitely differentiable, analytic or entire functions on noncompact domains. We develop new methods for the recovery of density functions using sk-splines (in particular, radial basis functions), Wiener spaces and complex exponents with frequencies from special domains. The respective lower bounds obtained show that the methods developed have an almost optimal rate of convergence in the sense of n-widths. On the basis of the results obtained we develop a new theory of pricing of basket options under Lévy processes. In particular, we introduce and study a class of stochastic systems to model multidimensional return processes, construct a sequence of approximation formulas for the price and establish the respective rates of convergence.
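A rough illustration of the density-recovery-then-price pipeline, in Python, is given below. Everything in it is a placeholder: a standard normal stands in for the Lévy return density, plain Gaussian radial basis functions stand in for sk-splines, and the spot, strike, and grid parameters are invented for the example.

```python
import numpy as np

# Recover a return density with Gaussian RBFs (standing in for sk-splines), then
# price a European call by quadrature against the recovered density. All inputs
# are placeholders: the "Levy" density here is just a standard normal.
S0, K = 100.0, 100.0                               # spot and strike (hypothetical)
q = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

centres = np.linspace(-6.0, 6.0, 41)               # RBF centres on a uniform grid
eps = 2.0                                          # shape parameter (illustrative)
phi = lambda x, c: np.exp(-(eps * (x[:, None] - c[None, :])) ** 2)

w = np.linalg.solve(phi(centres, centres), q(centres))  # interpolation weights

xq = np.linspace(-6.0, 6.0, 4001)                  # quadrature nodes
q_rbf = phi(xq, centres) @ w                       # recovered density on the nodes
payoff = np.maximum(S0 * np.exp(xq) - K, 0.0)
price = float(np.sum(payoff * q_rbf) * (xq[1] - xq[0]))  # simple rectangle rule
print(f"approximate call price: {price:.4f}")
```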
104

Model reductions in biochemical reaction networks

Khoshnaw, Sarbaz Hamza Abdullah January 2015 (has links)
Many complex kinetic models in the field of biochemical reactions contain a large number of species and reactions, and analysing them often requires a large array of computational tools. Techniques of model reduction, which arise in various theoretical and practical applications in systems biology, represent the key critical elements (variables and parameters) and substructures of the original system. This thesis aims to study methods of model reduction for biochemical reaction networks, and has three goals. The first goal is to provide approximate analytical solutions of such models. To obtain this set of solutions, we propose an algorithm based on the Duhamel iterates. The algorithm yields an explicit formula that can be studied in detail over wide regions of concentrations for optimization and parameter identification purposes. The second goal is to simplify high-dimensional models to smaller sizes while keeping the dynamics of the reduced models close to those of the original models. To this end, we have developed techniques of model reduction such as the geometric singular perturbation method for slow and fast subsystems, and entropy production analysis for identifying unimportant reactions. The suggested techniques can be applied to models in systems biology including enzymatic reactions, the EF-Tu and EF-Ts elongation factor signalling pathways, and nuclear receptor signalling. Calculating the deviation at each reduction stage helps to check that the approximation of concentrations remains within allowable limits. The final goal is to identify critical model parameters and variables for the reduced models. We study methods of local sensitivity analysis in order to find the critical model elements. The results are obtained in numerical simulations based on the Systems Biology Toolbox (SBToolbox) and the SimBiology Toolbox for Matlab. The simplified models should be accurate, robust, and easy for biologists to apply for purposes such as reproducing biological data and functions of the full models.
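As a concrete instance of the slow/fast reduction, below is a Python sketch (using SciPy rather than the Matlab toolboxes named above) of the classical quasi-steady-state reduction of an enzymatic reaction: full mass-action kinetics for E + S ⇌ C → E + P versus the reduced Michaelis-Menten law. The rate constants and initial conditions are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Full mass-action enzyme kinetics E + S <-> C -> E + P versus its
# quasi-steady-state (slow/fast) reduction, the Michaelis-Menten law.
k1, km1, k2 = 1.0, 1.0, 0.3          # illustrative rate constants
E0, S0 = 1.0, 10.0                   # total enzyme, initial substrate

def full(t, y):
    s, c = y                          # substrate and complex; free enzyme e = E0 - c
    e = E0 - c
    return [-k1 * e * s + km1 * c, k1 * e * s - (km1 + k2) * c]

Km, Vmax = (km1 + k2) / k1, k2 * E0
def reduced(t, y):
    s = y[0]
    return [-Vmax * s / (Km + s)]     # Michaelis-Menten rate law

T = np.linspace(0.0, 40.0, 200)
sol_full = solve_ivp(full, (0.0, 40.0), [S0, 0.0], t_eval=T, rtol=1e-8)
sol_red = solve_ivp(reduced, (0.0, 40.0), [S0], t_eval=T, rtol=1e-8)
dev = np.max(np.abs(sol_full.y[0] - sol_red.y[0]))
print(f"max deviation in substrate: {dev:.4f}")   # small when E0 << S0 + Km
```

The printed deviation plays the role of the per-stage deviation check described above: the reduction is trusted only while this deviation stays within the allowable limits.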
105

Using partially specified models to detect and quantify structural sensitivity in biological systems

Adamson, Matthew William January 2015 (has links)
Mathematical models in ecology and evolution are highly simplified representations of a complex underlying reality. For this reason, there is always a high degree of uncertainty with regard to the model specification—not just in terms of parameters, but also in the form taken by the model equations themselves. This uncertainty becomes critical for models in which the use of two different functions fitting the same dataset can yield substantially different model predictions—a property known as structural sensitivity. In this case, even if the model is purely deterministic, the uncertainty in the model functions carries through into uncertainty in the model predictions, and new frameworks are required to tackle this fundamental problem. Here, we construct a framework that uses partially specified models: ODE models in which unknown functions are represented not by a specific functional form, but by an entire data range and constraints of biological realism. Partially specified models can be used to rigorously detect when models are structurally sensitive in their predictions concerning the character of an equilibrium point, by projecting the data range into a generalised bifurcation space formed of equilibrium values and derivatives of any unspecified functions. The key question of how to carry out this projection is a serious mathematical challenge and an obstacle to the use of partially specified models. We address this challenge by developing several powerful techniques to perform such a projection.
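The flavour of structural sensitivity is easy to demonstrate numerically. In the hedged Python sketch below, two standard functional responses (Holling type II and Ivlev) are fitted to the same invented dataset; their values nearly coincide on the data, yet their derivatives at a hypothetical equilibrium point, the quantity that decides the stability of that equilibrium, can differ noticeably. This is an illustration only, not the thesis's projection framework.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two functional responses fitted to the same (invented) feeding data.
x_data = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y_data = np.array([0.30, 0.50, 0.72, 0.86, 0.95])

holling = lambda x, a, b: a * x / (1.0 + b * x)        # Holling type II
ivlev   = lambda x, a, b: a * (1.0 - np.exp(-b * x))   # Ivlev

(pa, pb), _ = curve_fit(holling, x_data, y_data, p0=[1.0, 1.0])
(qa, qb), _ = curve_fit(ivlev, x_data, y_data, p0=[1.0, 1.0])

x_star, h = 1.5, 1e-6                                  # hypothetical equilibrium point
d_holling = (holling(x_star + h, pa, pb) - holling(x_star - h, pa, pb)) / (2 * h)
d_ivlev   = (ivlev(x_star + h, qa, qb) - ivlev(x_star - h, qa, qb)) / (2 * h)
print(f"fitted values at x*: {holling(x_star, pa, pb):.3f} vs {ivlev(x_star, qa, qb):.3f}")
print(f"derivatives at x*:   {d_holling:.3f} vs {d_ivlev:.3f}")
```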
106

Sparse grid approximation with Gaussians

Usta, Fuat January 2015 (has links)
Motivated by the recent multilevel sparse kernel-based interpolation (MuSIK) algorithm proposed in [Georgoulis, Levesley and Subhan, SIAM J. Sci. Comput., 35(2), pp. A815-A831, 2013], we introduce the new quasi-multilevel sparse interpolation with kernels (Q-MuSIK) via the combination technique. The Q-MuSIK scheme achieves better convergence and run time in comparison with classical quasi-interpolation; in particular, the Q-MuSIK algorithm is generally superior to the MuSIK methods in terms of run time for high-dimensional interpolation problems, since there is no need to solve large algebraic systems. We subsequently propose a fast, low-complexity, high-dimensional quadrature formula based on Q-MuSIK interpolation of the integrand. We present the results of numerical experiments for both interpolation and quadrature in R^d, for d = 2, 3 and 4. In this work we also consider the convergence rates of multilevel quasi-interpolation of periodic functions using Gaussians on a grid. We first derive the single-level quasi-interpolation error using the shifting properties of the Gaussian kernel, and then estimate the multilevel error using the multilevel algorithm for the unit function.
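For the single-level building block, here is a minimal 1-D Python sketch of Gaussian quasi-interpolation on a uniform grid, s(x) = (πD)^{-1/2} Σ_j f(jh) exp(-(x - jh)²/(D h²)); the shape parameter D and the test function are illustrative choices, and the multilevel combination step is omitted.

```python
import numpy as np

# Single-level Gaussian quasi-interpolation on a uniform grid:
#     s(x) = (pi * D)^(-1/2) * sum_j f(j*h) * exp(-(x - j*h)^2 / (D * h^2)).
# A 1-D sketch only; D and the test function are illustrative choices.
D = 2.0
f = lambda x: np.sin(2.0 * np.pi * x)

def quasi_interp(x, h):
    centres = np.arange(-1.0, 2.0 + h, h)          # grid with a generous buffer round [0, 1]
    w = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (D * h**2))
    return (w @ f(centres)) / np.sqrt(np.pi * D)

x = np.linspace(0.0, 1.0, 1001)
for h in (0.1, 0.05, 0.025):
    err = np.max(np.abs(quasi_interp(x, h) - f(x)))
    print(f"h = {h:<6} max error = {err:.3e}")
```

The error decays at roughly second order in h until it reaches the saturation level set by D, the characteristic behaviour of such Gaussian "approximate approximations".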
107

Multiscale principal component analysis

Akinduko, Ayodeji Akinwumi January 2016 (has links)
The problem of approximating multidimensional data with objects of lower dimension is a classical problem in complexity reduction. It is important that data approximation captures the structures and dynamics of the data; however, the distortion many methods introduce during approximation means that some geometric structures of the data may not be preserved. For methods that model the manifold of the data, the quality of approximation depends crucially on the initialization of the method. The first part of this thesis investigates the effect of initialization on manifold modelling methods. Using Self Organising Maps (SOM) as a case study, we compare the quality of learning of manifold methods for two popular initialization methods: random initialization and principal component initialization. To further understand the dynamics of manifold learning, the datasets are further classified into linear, quasilinear and nonlinear. The second part of this thesis focuses on revealing geometric structures in high-dimensional data using an extension of Principal Component Analysis (PCA). Feature extraction using PCA favours directions with large variance, which can obscure other interesting geometric structures present in the data. To reveal these intrinsic structures, we analyse the local PCA structures of the dataset. An equivalent definition of PCA is that it seeks subspaces that maximize the sum of pairwise squared distances of the data projections; extending this definition, we define localization in terms of scale as maximizing the sum of weighted squared pairwise distances between data projections for various distributions of weights (scales). Since for complex data various regions of the data space can have different PCA structures, we also define localization with regard to the data space. The resulting local PCA structures are represented by the projection matrices corresponding to the subspaces and analysed to reveal structures in the data at various localizations.
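A minimal Python sketch of the scale idea follows, assuming a Gaussian weighting (the "scale" σ) and an invented 2-D spiral dataset: at small σ the weighted PCA sees the locally one-dimensional structure of the curve, while at large σ it recovers ordinary global PCA.

```python
import numpy as np

# Local PCA at a chosen point: weight the covariance with a Gaussian kernel of
# bandwidth sigma (the "scale") and read off the leading directions.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 3.0 * np.pi, 500)
X = np.column_stack([t * np.cos(t), t * np.sin(t)]) + 0.1 * rng.standard_normal((500, 2))

def local_pca(X, x0, sigma):
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2.0 * sigma**2))
    w /= w.sum()
    mu = w @ X                                  # weighted mean
    Xc = X - mu
    C = (Xc * w[:, None]).T @ Xc                # weighted covariance
    vals, vecs = np.linalg.eigh(C)
    return vals[::-1], vecs[:, ::-1]            # sorted: largest first

x0 = X[0]
for sigma in (0.5, 2.0, 10.0):
    vals, vecs = local_pca(X, x0, sigma)
    print(f"sigma={sigma:<5} leading direction={np.round(vecs[:, 0], 3)} "
          f"variance ratio={vals[0] / vals.sum():.2f}")
```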
108

Homology in finite index subgroups

Wall, Liam January 2009 (has links)
No description available.
109

Topics in high-dimensional and large-scale data analysis

Shah, Rajen Dinesh January 2014 (has links)
No description available.
110

Some applications of functional analysis in summability theory and bases in (F)- and (LF)-spaces

Bennett, G. January 1970 (has links)
No description available.
