Tensor product methods in numerical simulation of high-dimensional dynamical problems

Dolgov, Sergey, 20 August 2014
Quantification of stochastic or quantum systems by a joint probability density or wave function is a notoriously difficult computational problem, since the solution depends on all possible states (or realizations) of the system. Due to this combinatorial flavor, even a system containing as few as ten particles may yield as many as $10^{10}$ discretized states. Not even modern supercomputers can cope with this curse of dimensionality directly once the number of quantum particles, for example, grows to the interesting order of hundreds. The traditional approach has long been to avoid models formulated in terms of probability functions and to simulate particular realizations of the system in a randomized process. At different times and in different communities, data-sparse methods came into play. Generally, they aim to define all data points indirectly, by a map from a small number of representative parameters, and to recast all operations (e.g. the solution of linear systems) from the initial data to these effective parameters. The most advanced techniques can be applied (or at least tried) on any given array and do not rely explicitly on its origin.

The present work contributes further progress in this area in a particular direction: tensor product methods for the separation of variables. The separation of variables has a long history and is based on the following elementary concept: a function of many variables may be expanded as a product of univariate functions. On the discrete level, a function is encoded by the array of its values, i.e. a tensor. Instead of the huge initial array, the separation of variables therefore allows one to work with the univariate factors at much lower cost. The dissertation contains a short overview of existing tensor representations: the canonical PARAFAC, Hierarchical Tucker and Tensor Train (TT) formats, as well as artificial tensorization, which results in the Quantized Tensor Train (QTT) approximation method.

The contribution of the dissertation consists of both theoretical constructions and practical numerical algorithms for high-dimensional models, illustrated on the examples of the Fokker-Planck and chemical master equations. Both arise from stochastic dynamical processes in multiconfigurational systems and govern the evolution of the probability function in time. A special focus is put on time propagation schemes and their properties related to tensor product methods. We show that these applications yield large-scale systems of linear equations, and we prove analytical separable representations of the functions and operators involved. We propose a new combined tensor format (QTT-Tucker), which descends from the TT format (so that TT algorithms can be generalized smoothly) but reduces the complexity by an order of magnitude. We develop a robust iterative solution algorithm that combines the most advantageous properties of classical iterative methods from numerical analysis and alternating density matrix renormalization group (DMRG) techniques from quantum physics. Numerical experiments confirm that the new method is preferable to DMRG algorithms: it is as fast as the simplest alternating schemes, yet as reliable and accurate as Krylov methods in linear algebra.
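
To make the separation-of-variables idea concrete, the following minimal NumPy sketch builds a Tensor Train decomposition of a small array by the standard TT-SVD procedure. This is a generic textbook illustration, not code from the dissertation; the function name tt_svd and the truncation threshold eps are chosen here for the example.

import numpy as np

def tt_svd(tensor, eps=1e-12):
    # Decompose a d-dimensional array into Tensor Train (TT) cores by
    # sequential truncated SVDs of unfolding matrices. Each core G_k has
    # shape (r_{k-1}, n_k, r_k); the TT ranks r_k are set by discarding
    # singular values below eps relative to the largest one.
    dims = tensor.shape
    cores, rank = [], 1
    unfolding = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
        new_rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :new_rank].reshape(rank, dims[k], new_rank))
        # carry the remainder diag(s) @ Vt on to the next unfolding
        unfolding = (s[:new_rank, None] * Vt[:new_rank]).reshape(
            new_rank * dims[k + 1], -1)
        rank = new_rank
    cores.append(unfolding.reshape(rank, dims[-1], 1))
    return cores

# Toy check: a 10-dimensional array with 2^10 = 1024 entries, rebuilt
# from its TT cores by contracting neighbouring rank indices.
x = np.random.rand(*([2] * 10))
cores = tt_svd(x)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=(-1, 0))
print(np.linalg.norm(approx.reshape(x.shape) - x) / np.linalg.norm(x))

For a random array the TT ranks stay maximal, but for smooth or structured data they remain small, so the storage of the cores grows only linearly with the number of dimensions rather than exponentially.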
