1
Algebraic Multigrid for Markov Chains and Tensor Decomposition
Miller, Killian (January 2012)
The majority of this thesis is concerned with the development of efficient and robust numerical methods based on adaptive algebraic multigrid to compute the stationary distribution of Markov chains. It is shown that classical algebraic multigrid techniques can be applied in an exact interpolation scheme framework to compute the stationary distribution of irreducible, homogeneous Markov chains. A quantitative analysis shows that algebraically smooth multiplicative error is locally constant along strong connections in a scaled system operator, which suggests that classical algebraic multigrid coarsening and interpolation can be applied to the class of nonsymmetric irreducible singular M-matrices with zero column sums. Acceleration schemes based on fine-level iterant recombination and over-correction of the coarse-grid correction are developed to improve the rate of convergence and scalability of simple adaptive aggregation multigrid methods for Markov chains. Numerical tests over a wide range of challenging nonsymmetric test problems demonstrate the effectiveness of the proposed multilevel method and the acceleration schemes.
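As an illustration of the multiplicative-error viewpoint above, the following minimal sketch (not the thesis implementation) performs one two-level adaptive aggregation correction for a small Markov chain; the prescribed aggregation, the power-method relaxation, and all function names are assumptions made for the example.

```python
import numpy as np

def two_level_aggregation_step(B, x, agg):
    """One two-level adaptive aggregation correction for B x = x, x >= 0, sum(x) = 1.

    B   : column-stochastic transition matrix (n x n)
    x   : current positive approximation of the stationary vector
    agg : length-n integer array assigning each fine state to an aggregate

    The exact solution is written as diag(x) e; the multiplicative error e is
    assumed constant within aggregates, so e is approximated by Q e_c for a
    binary aggregation matrix Q. The coarse operator below inherits the
    singular M-matrix structure with zero column sums. Illustrative only.
    """
    n = B.shape[0]
    A = np.eye(n) - B                       # singular M-matrix, zero column sums
    nc = agg.max() + 1
    Q = np.zeros((n, nc))
    Q[np.arange(n), agg] = 1.0              # binary aggregation matrix

    Ac = Q.T @ A @ np.diag(x) @ Q           # coarse operator for the multiplicative error

    # Coarse solve: null vector of Ac, pinned down by a normalization row
    # (assumes the coarse chain remains irreducible, so this system is nonsingular).
    M = Ac.copy()
    M[-1, :] = 1.0
    rhs = np.zeros(nc)
    rhs[-1] = 1.0
    ec = np.linalg.solve(M, rhs)            # coarse multiplicative error

    x_new = np.diag(x) @ Q @ ec             # coarse-grid correction: x <- diag(x) Q e_c
    return x_new / x_new.sum()

# Usage: a random 6-state chain, a few power-method (relaxation) sweeps,
# then one aggregation correction with aggregates {0, 1, 2} and {3, 4, 5}.
rng = np.random.default_rng(0)
B = rng.random((6, 6))
B /= B.sum(axis=0)
x = np.full(6, 1.0 / 6)
for _ in range(5):
    x = B @ x
x = two_level_aggregation_step(B, x, np.array([0, 0, 0, 1, 1, 1]))
```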
This thesis also investigates the application of adaptive algebraic multigrid techniques for computing the canonical decomposition of higher-order tensors. The canonical decomposition is formulated as a least squares optimization problem, for which local minimizers are computed by solving the first-order optimality equations. The proposed multilevel method consists of two phases: an adaptive setup phase that uses a multiplicative correction scheme in conjunction with bootstrap algebraic multigrid interpolation to build the necessary operators on each level, and a solve phase that uses additive correction cycles based on the full approximation scheme to efficiently obtain an accurate solution. The alternating least squares method, which is a standard one-level iterative method for computing the canonical decomposition, is used as the relaxation scheme. Numerical tests show that for certain test problems arising from the discretization of high-dimensional partial differential equations on regular lattices, the proposed multilevel method significantly outperforms the standard alternating least squares method when a high level of accuracy is required.
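The one-level alternating least squares baseline referred to above can be sketched for a dense third-order tensor as follows; the unfolding conventions, helper names, and synthetic usage data are illustrative assumptions, and the multilevel machinery (bootstrap AMG interpolation, FAS cycles) is deliberately not reproduced.

```python
import numpy as np

def als_cp(T, r, iters=100):
    """Rank-r canonical (CP/Parafac) decomposition of a third-order tensor by
    alternating least squares: T is approximated by sum_k a_k o b_k o c_k.

    This is the standard one-level ALS baseline used as the relaxation scheme
    in the abstract, not the multilevel method itself. Dense numpy only.
    """
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((I, r))
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))

    T1 = T.reshape(I, J * K)                         # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)      # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)      # mode-3 unfolding

    def khatri_rao(X, Y):                            # column-wise Kronecker product
        return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)  # least squares update of A
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)  # then B with A fixed
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)  # then C with A, B fixed
    return A, B, C

# Usage on a synthetic rank-3 tensor.
rng = np.random.default_rng(1)
Af, Bf, Cf = rng.random((10, 3)), rng.random((11, 3)), rng.random((12, 3))
T = np.einsum('ir,jr,kr->ijk', Af, Bf, Cf)
A, B, C = als_cp(T, 3)
```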
2
Towards large-scale quantum computation
Fowler, Austin Greig (date unknown)
This thesis deals with a series of quantum computer implementation issues, from the Kane ³¹P-in-²⁸Si architecture to Shor's integer factoring algorithm and beyond. The discussion begins with simulations of the adiabatic Kane CNOT and readout gates, followed by linear nearest neighbor implementations of 5-qubit quantum error correction with and without fast measurement. A linear nearest neighbor circuit implementing Shor's algorithm is presented, then modified to remove the need for exponentially small rotation gates. Finally, a method of constructing optimal approximations of arbitrary single-qubit fault-tolerant gates is described and applied to the specific case of the remaining rotation gates required by Shor's algorithm.
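For context, the exponentially small rotations mentioned above arise from the controlled phase gates of the quantum Fourier transform inside Shor's algorithm; the sketch below enumerates those angles and shows the standard approximate-QFT truncation, a well-known workaround that is not necessarily the construction developed in this thesis (function and gate names are illustrative).

```python
import numpy as np

def qft_gates(n_qubits, max_depth=None):
    """List the gates of the textbook QFT circuit on n_qubits qubits.

    Each target qubit receives a Hadamard followed by controlled phase
    rotations of angle pi / 2**(k - 1) from qubits k - 1 positions away; for
    large circuits these angles become exponentially small. Passing max_depth
    truncates them, giving the well-known approximate QFT (a standard
    workaround, not necessarily the construction developed in this thesis).
    """
    gates = []
    for target in range(n_qubits):
        gates.append(("H", target))
        for k, control in enumerate(range(target + 1, n_qubits), start=2):
            if max_depth is not None and k > max_depth:
                break                                # drop exponentially small rotations
            gates.append(("CPHASE", control, target, np.pi / 2 ** (k - 1)))
    return gates

full = qft_gates(8)                  # exact QFT: angles down to pi / 128
approx = qft_gates(8, max_depth=4)   # approximate QFT: nothing smaller than pi / 8
```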
3
Blind identification of mixtures and canonical tensor decomposition: application to water analysis (original title: Identification aveugle de mélanges et décomposition canonique de tenseurs : application à l'analyse de l'eau)
Royer, Jean-Philip (4 October 2013)
In this thesis we focus on the minimal polyadic decomposition of third-order tensors, a problem generally referred to as the Canonical Polyadic (CP) decomposition, CanDecomp, or Parafac. This decomposition is useful in a very wide range of applications; here, however, we concentrate on fluorescence spectroscopy applied to environmental data in the form of water samples that may have been collected at different locations or times. These samples contain a mixture of several organic molecules, and the aim of the numerical processing is to separate and re-estimate the compounds present in them. Moreover, in several applications such as hyperspectral imaging and chemometrics, it is useful to constrain the loading matrices to be real and nonnegative, because they represent nonnegative physical quantities (spectra, abundance fractions, concentrations, and so on). All of the algorithms developed in this thesis therefore enforce this constraint, whose main advantage is to make the considered approximation problem well posed. Some of the algorithms rely on methods close to barrier functions; others parameterize the loading matrices directly as squares. Several optimization schemes were considered: gradient approaches, nonlinear conjugate gradient (well suited to large problems), quasi-Newton methods (BFGS and DFP), and finally Levenberg-Marquardt. Two versions of these algorithms were studied: an Enhanced Line Search (ELS) version, which helps escape local minima, and a backtracking version alternating with ELS.
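The squared parameterization mentioned above can be illustrated on a dense third-order tensor as follows; plain gradient descent stands in for the gradient, conjugate gradient, quasi-Newton, Levenberg-Marquardt, and ELS schemes actually studied, and the step size, iteration count, and names are illustrative assumptions.

```python
import numpy as np

def nonneg_cp_squared(T, r, iters=2000, lr=1e-2):
    """Nonnegative CP fit using the 'squared parameterization' trick: each
    loading matrix is written as an element-wise square, so nonnegativity holds
    by construction and the constrained problem becomes an unconstrained one.

    Plain gradient descent is used purely for illustration; the thesis studies
    conjugate gradient, quasi-Newton (BFGS, DFP), Levenberg-Marquardt and ELS.
    """
    rng = np.random.default_rng(0)
    U, V, W = (0.1 * rng.standard_normal((d, r)) for d in T.shape)
    for _ in range(iters):
        A, B, C = U**2, V**2, W**2                      # nonnegative factors
        R = T - np.einsum('ir,jr,kr->ijk', A, B, C)     # residual tensor
        GA = -2 * np.einsum('ijk,jr,kr->ir', R, B, C)   # dL/dA for L = ||R||_F^2
        GB = -2 * np.einsum('ijk,ir,kr->jr', R, A, C)
        GC = -2 * np.einsum('ijk,ir,jr->kr', R, A, B)
        U -= lr * 2 * U * GA                            # chain rule through U**2
        V -= lr * 2 * V * GB
        W -= lr * 2 * W * GC
    return U**2, V**2, W**2
```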
4
Nonnegative matrix and tensor factorizations, least squares problems, and applications
Kim, Jingu (14 November 2011)
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite this success, NMF and NTF have been actively developed only in the past decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed.
First, efficient algorithms for NMF and NTF are investigated based on a connection between the NMF and NTF problems and nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computations and to design a fast algorithm that exploits this structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method.
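A minimal alternating nonnegative least squares framing of NMF is sketched below; SciPy's classical active-set NNLS routine stands in for the accelerated block principal pivoting solver proposed here, and the loop structure makes visible the shared-matrix structure of the NLS subproblems that a specialized solver can exploit (function name and defaults are illustrative).

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(X, r, iters=50):
    """Alternating nonnegative least squares (ANLS) framing of NMF.

    Each outer iteration solves two blocks of NLS problems, one per factor;
    note that all subproblems in a block share the same matrix and differ only
    in the right-hand side, which is the structure a specialized solver can
    exploit. SciPy's classical active-set NNLS solver stands in here for the
    accelerated block principal pivoting method proposed in the thesis.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = np.abs(rng.standard_normal((m, r)))
    H = np.abs(rng.standard_normal((r, n)))
    for _ in range(iters):
        # Fix W, solve min_{H >= 0} ||X - W H||_F column by column.
        H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n)])
        # Fix H, solve min_{W >= 0} ||X - W H||_F row by row (via X^T ~ H^T W^T).
        W = np.column_stack([nnls(H.T, X[i, :])[0] for i in range(m)]).T
    return W, H
```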
In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve rank-deficient NLS problems without ever running into rank-deficient least squares problems during its iterations.
Going beyond the NLS problems, it is shown that a block principal pivoting strategy can also be applied to l1-regularized linear regression, also known as the Lasso, which has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem, with successful experimental results.
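To make the optimization problem concrete, the sketch below solves the Lasso by plain coordinate descent with soft thresholding; this is a standard baseline rather than the block principal pivoting method proposed here, and the function name and defaults are illustrative.

```python
import numpy as np

def lasso_cd(A, b, lam, iters=200):
    """Coordinate descent with soft thresholding for the Lasso,
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Shown only to make the optimization problem concrete; this is a standard
    baseline, not the block principal pivoting method proposed in the thesis,
    which works with the KKT conditions of this problem instead.
    """
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)             # ||a_j||^2 for each column
    for _ in range(iters):
        for j in range(n):
            r = b - A @ x + A[:, j] * x[j]    # residual with coordinate j removed
            rho = A[:, j] @ r
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return x
```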
Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items belonging to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented, along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
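One standard convex optimization tool for group-sparsity regularizers is the group-wise proximal (block soft-thresholding) operator sketched below; the exact mixed-norm formulation and solvers used in the thesis may differ, and the grouping and names are illustrative assumptions.

```python
import numpy as np

def block_soft_threshold(V, groups, lam):
    """Proximal operator of the group penalty lam * sum_g ||V[g, :]||_F applied
    to rows of a latent factor matrix V, with 'groups' assigning rows to groups.

    Block soft thresholding is one of the standard convex optimization tools
    for mixed-norm (group-sparsity) regularizers of this kind; the exact
    mixed-norm formulation and solvers used in the thesis may differ.
    """
    V = V.copy()
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        norm = np.linalg.norm(V[idx, :])               # joint norm of the group block
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        V[idx, :] *= scale                             # shrink the whole group together
    return V

# Usage: rows 0-2 form one group, rows 3-5 another.
V = np.random.default_rng(0).standard_normal((6, 4))
V_sparse = block_soft_threshold(V, np.array([0, 0, 0, 1, 1, 1]), lam=2.0)
```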