191

Strategies For Recycling Krylov Subspace Methods and Bilinear Form Estimation

Swirydowicz, Katarzyna 10 August 2017 (has links)
The main theme of this work is the effectiveness and efficiency of Krylov subspace methods and Krylov subspace recycling. While solving long, slowly changing sequences of large linear systems, such as the ones that arise in engineering, there are many issues we need to consider if we want to make the process reliable (converging to a correct solution) and as fast as possible. This thesis is built on three main components. First, we target bilinear and quadratic form estimation. The bilinear form $c^TA^{-1}b$ is often associated with long sequences of linear systems, especially in optimization problems. Thus, we devise algorithms that adapt cheap bilinear and quadratic form estimates for Krylov subspace recycling. In the second part, we develop a hybrid recycling method that is inspired by a complex CFD application. We aim to make the method robust and cheap at the same time. In the third part of the thesis, we optimize the implementation of Krylov subspace methods on Graphics Processing Units (GPUs). Since preconditioners based on incomplete matrix factorization (ILU, Cholesky) are very slow on GPUs, we develop a preconditioner that is effective and well suited for GPU implementation. / Ph. D. / In many applications we encounter the repeated solution of a large number of slowly changing large linear systems. The cost of solving these systems typically dominates the computation. This is often the case in medical imaging, or more generally inverse problems, and in the optimization of designs. Because of the size of the matrices, Gaussian elimination is infeasible. Instead, we find a sufficiently accurate solution using iterative methods, so-called Krylov subspace methods, which improve the solution with every iteration, computing a sequence of approximations that span a Krylov subspace. However, these methods often take many iterations to construct a good solution, and these iterations can be expensive. Hence, we consider methods to reduce the number of iterations while keeping the iterations cheap. One such approach is Krylov subspace recycling, in which we recycle judiciously selected subspaces from previous linear solves to improve the rate of convergence and obtain a good initial guess. In this thesis, we focus on improving the efficiency (runtimes) and effectiveness (number of iterations) of Krylov subspace methods. The thesis has three parts. In the first part, we focus on efficiently estimating sequences of bilinear forms, $c^TA^{-1}b$. We approximate the bilinear forms using the properties of Krylov subspaces and Krylov subspace solvers. We devise an algorithm that allows us to use Krylov subspace recycling methods to efficiently estimate bilinear forms, and we test our approach on three applications: topology optimization for the optimal design of structures, diffuse optical tomography, and error estimation and grid adaptation in computational fluid dynamics. In the second part, we focus on finding the best strategy for Krylov subspace recycling for two large computational fluid dynamics problems. We also present a new approach, which lets us reduce the computational cost of Krylov subspace recycling. In the third part, we investigate Krylov subspace methods on Graphics Processing Units. We use a lid-driven cavity problem from computational fluid dynamics to perform a thorough analysis of how the choice of Krylov subspace solver and preconditioner influences runtimes. We propose a new preconditioner, which is designed to work well on Graphics Processing Units.
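The baseline identity behind such estimates is that $c^TA^{-1}b = c^Tx$ once $Ax = b$ is solved, so every Krylov iterate yields a running estimate essentially for free. The sketch below illustrates only this baseline with SciPy's CG on a toy SPD matrix; the matrix, the vectors, and the simple "solve then dot" estimator are illustrative assumptions, not the recycling-based estimators the thesis develops.

```python
# Minimal sketch: estimating the bilinear form c^T A^{-1} b from a Krylov
# solve of A x = b. Test matrix and estimator are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 1000
# A sparse SPD test matrix (1D Laplacian), standing in for an application matrix.
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.random.default_rng(0).standard_normal(n)
c = np.random.default_rng(1).standard_normal(n)

estimates = []
def record(xk):
    # Each CG iterate x_k yields the running estimate c^T x_k of c^T A^{-1} b.
    estimates.append(c @ xk)

x, info = cg(A, b, callback=record)
print("final estimate:", c @ x, "CG iterations:", len(estimates))
```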
192

Linear Eigenvalue Problems in Quantum Chemistry

van de Linde, Storm January 2023 (has links)
In this thesis, a method to calculate eigenpairs is implemented for the Multipsi library. While the standard implementations use the Davidson method with Rayleigh-Ritz extraction to calculate the eigenpairs with the lowest eigenvalues, the new method uses the harmonic Davidson method with harmonic Rayleigh-Ritz extraction to calculate eigenpairs with eigenvalues near a chosen target. This is done for Configuration Interaction calculations and for multiconfigurational methods. Test calculations suggest the new addition to the Multipsi library is worth investigating further, as convergence for difficult systems with many near-degeneracies was improved.
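For readers unfamiliar with the harmonic variant: where standard Rayleigh-Ritz extracts exterior eigenpairs, harmonic Rayleigh-Ritz targets eigenvalues near a chosen shift sigma by solving a small generalized eigenproblem over the search subspace. Below is a minimal dense sketch of the extraction step alone, assuming an orthonormal basis V; it is not Multipsi's Davidson loop or its CI integration.

```python
import numpy as np
from scipy.linalg import eig, qr

def harmonic_ritz(A, V, sigma):
    """Harmonic Rayleigh-Ritz extraction over span(V) for eigenvalues near sigma."""
    W = A @ V - sigma * V                      # W = (A - sigma*I) V
    # Petrov-Galerkin condition gives (W^H W) y = (theta - sigma) (W^H V) y.
    mu, Y = eig(W.conj().T @ W, W.conj().T @ V)
    order = np.argsort(np.abs(mu))             # harmonic Ritz values closest to target first
    theta = sigma + mu[order]
    U = V @ Y[:, order]                        # harmonic Ritz vectors
    return theta, U

# Toy usage on a small symmetric matrix with a random orthonormal basis.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2
V, _ = qr(rng.standard_normal((50, 8)), mode="economic")
theta, U = harmonic_ritz(A, V, sigma=0.5)
print("harmonic Ritz values near 0.5:", np.real_if_close(theta[:3]))
```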
193

Semantic content analysis for effective video segmentation, summarisation and retrieval

Ren, Jinchang January 2009 (has links)
This thesis focuses on four main research themes, namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights-based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance-based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is proven to be insensitive to zero-mean noise, and its gradient-based extension is robust even to non-zero-mean noise and can be used to deal with non-overlapped regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity-level measurement, shots and sub-shots are detected for content-adaptive video summarisation. Fourthly, highlights-based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. A high-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is high efficiency, which will be useful for many online applications.
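As context for the frame-alignment theme: classical phase correlation estimates a translation between frames from the peak of the inverse FFT of the normalized cross-power spectrum. The sketch below is this plain full-image baseline; the thesis's subspace phase correlation (SPC) and sub-pixel refinements go beyond it and are not reproduced here.

```python
import numpy as np

def phase_correlation(f, g):
    """Estimate the integer translational shift between two frames.

    Plain full-image phase correlation; the subspace (SPC) variant is not shown.
    """
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = F * np.conj(G)
    R /= np.maximum(np.abs(R), 1e-12)          # normalized cross-power spectrum
    r = np.fft.ifft2(R).real                   # correlation surface, peak at the shift
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Map peak indices to signed shifts (FFT wrap-around).
    if dy > f.shape[0] // 2: dy -= f.shape[0]
    if dx > f.shape[1] // 2: dx -= f.shape[1]
    return dy, dx

frame = np.random.default_rng(0).standard_normal((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))
print(phase_correlation(shifted, frame))       # expected: (5, -3)
```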
194

Fast variational Bayesian algorithms and their application to large dimensional inverse problems

Zheng, Yuling 04 December 2014 (has links)
In this thesis, our main objective is to develop efficient unsupervised approaches for large dimensional problems. To do this, we consider Bayesian approaches, which allow us to jointly estimate regularization parameters and the object of interest. In this context, the main difficulty is that the posterior distribution is generally complex. To tackle this problem, we consider variational Bayesian (VB) approximation, which provides a separable approximation of the posterior distribution. Nevertheless, classical VB methods suffer from slow convergence. The first contribution of this thesis is to transpose subspace optimization methods to the functional space involved in the VB framework, which allows us to propose a new VB approximation method. We have shown the efficiency of the proposed method by comparisons with state-of-the-art approaches. Then we consider the application of our new methodology to large dimensional problems in image processing. Moreover, we are interested in piecewise smooth images. As a result, we have considered a Total Variation (TV) prior and a Gaussian location mixture-like hidden variable model. With these two priors, using our VB approximation method, we have developed two fast unsupervised approaches well adapted to piecewise smooth images. In fact, the priors introduced above are correlated, which makes the estimation of regularization parameters very complicated: we often have a non-explicit partition function. To sidestep this problem, we have considered working in the wavelet domain. As the wavelet coefficients of natural images are generally sparse, we considered prior distributions of the Gaussian scale mixture (GSM) family to enforce sparsity. Another contribution is therefore the development of an unsupervised approach for a prior distribution of the GSM family whose density is explicitly known, using the proposed VB approximation method.
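To make the VB idea concrete: mean-field VB approximates an intractable posterior by a separable product of factors and updates each factor in turn, a fixed-point iteration whose slow convergence is precisely what the subspace acceleration above targets. The sketch below runs the classic coordinate-ascent updates for a toy conjugate Gaussian model (unknown mean and precision), not the thesis's image-restoration priors.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=200)   # synthetic data
N, xbar = x.size, x.mean()
mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3      # vague conjugate priors (assumed)

E_tau = 1.0                                    # initial guess for E[tau]
for _ in range(50):                            # coordinate ascent on q(mu) q(tau)
    # q(mu) = N(mu_N, 1/lam_N)
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # q(tau) = Gamma(a_N, b_N), using E[(x_i - mu)^2] = (x_i - mu_N)^2 + 1/lam_N
    a_N = a0 + 0.5 * (N + 1)
    b_N = b0 + 0.5 * (np.sum((x - mu_N) ** 2) + N / lam_N
                      + lam0 * ((mu_N - mu0) ** 2 + 1.0 / lam_N))
    E_tau = a_N / b_N

print("posterior mean of mu:", mu_N, "estimated noise std:", (1 / E_tau) ** 0.5)
```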
195

Parametric Identification of Mechanical Systems Using Subspace Algorithms

GERMAIN CARLOS VENERO LOZANO 22 December 2003 (has links)
Parametric identification of mechanical systems is one of the main applications of system identification techniques in Mechanical Engineering, specifically the identification of modal parameters of flexible structures. One of the main problems in identification is the presence of noise in the measurements. This work presents an analysis, in the presence of noise, of some of the time-domain methods applicable to the identification of modal parameters of linear time-invariant mechanical systems with multiple inputs and multiple outputs (MIMO), using as a basis the state-space model typically used in Dynamics and Vibrations. The subspace algorithms involved in this study are distinguished by their use of the singular value decomposition (SVD) of the data, which yields orthogonal subspaces for the modes associated with the system and the modes associated with the noise. Other complicating factors in the identification process explored in this work are flexibility and low damping. The techniques are compared using the state-space model of the Mini-mast structure developed by NASA Langley Research Center, and simulations are performed varying the noise level in the data, the damping, and the flexibility of the structure. The problem of identifying structural parameters (mass, stiffness, and damping matrices) is also studied, and the characteristics and limitations of the algorithms used are analyzed. As an example of practical application, an experiment was performed to identify the modal and structural parameters of a flexible rotor, and the results are discussed.
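For a concrete picture of SVD-based subspace identification: an Eigensystem-Realization-style algorithm builds a Hankel matrix from the measured impulse response, uses the SVD truncation to separate system modes from noise modes, and reads modal frequencies and damping off the realized state matrix. The sketch below is this textbook single-output variant on assumed toy data, not the specific MIMO algorithms compared in the thesis.

```python
import numpy as np

def era_modal(h, n_modes, dt):
    """ERA-style modal identification from impulse-response samples h (1D array)."""
    m = (len(h) - 1) // 2
    H0 = np.array([h[i + 1:i + 1 + m] for i in range(m)])   # Hankel matrix
    H1 = np.array([h[i + 2:i + 2 + m] for i in range(m)])   # shifted Hankel matrix
    U, s, Vt = np.linalg.svd(H0)
    # Truncating the SVD separates system modes from noise modes.
    U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]
    S_isqrt = np.diag(1.0 / np.sqrt(s))
    A = S_isqrt @ U.T @ H1 @ Vt.T @ S_isqrt                 # realized state matrix
    lam = np.log(np.linalg.eigvals(A)) / dt                 # continuous-time poles
    freq = np.abs(lam) / (2 * np.pi)                        # natural frequencies (Hz)
    zeta = -lam.real / np.abs(lam)                          # damping ratios
    return freq, zeta

# Toy usage: a single damped mode at 5 Hz with 2% damping.
dt = 0.01
t = np.arange(400) * dt
h = np.exp(-0.02 * 2 * np.pi * 5 * t) * np.sin(2 * np.pi * 5 * t)
freq, zeta = era_modal(h, n_modes=2, dt=dt)
print(np.sort(freq), np.sort(zeta))            # expected: ~5 Hz, ~0.02
```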
196

FIR System Identification Using Higher Order Cumulants - A Generalized Approach

Srinivas, L 07 1900 (has links)
The thesis presents algorithms, based on a linear algebraic solution, for identifying the parameters of an FIR system using only higher order statistics, when only the output of the system, corrupted by additive Gaussian noise, is observed. Traditional parametric methods of estimating the parameters of the system have been based on the 2nd order statistics of the output. These methods suffer from the deficiency that they do not preserve the phase response of the system and hence cannot identify non-minimum phase systems. To circumvent this problem, higher order statistics, which preserve the phase characteristics of a process and hence are able to identify a non-minimum phase system, and which are also insensitive to additive Gaussian noise, have been used in recent years. Existing algorithms for the identification of the FIR parameters based on higher order cumulants use the autocorrelation sequence as well, and give erroneous results in the presence of additive colored Gaussian noise. This problem can be overcome by obtaining algorithms which do not utilize the 2nd order statistics. An existing relationship between the 2nd order and any l-th order cumulants is generalized to a relationship between cumulants of any two arbitrary orders k and l. This new relationship is used to obtain new algorithms for FIR system identification which use only cumulants of order greater than 2, with no restriction other than the Gaussian nature of the additive noise sequence. Simulation studies are presented to demonstrate the failure of the existing algorithms when the imposed constraints on the 2nd order statistics of the additive noise are violated, while the proposed algorithms perform very well and give consistent results. Recently, a new algebraic approach to parameter estimation, termed the Linear Combination of Slices (LCS) method, was proposed, based on expressing the FIR parameters as a linear combination of cumulant slices. The rank-deficient cumulant matrix S formed in the LCS method can be expressed as a product of matrices which have a certain structure. The orthogonality property between the subspace orthogonal to S and the range space of S has been exploited to obtain a new class of algorithms for the estimation of the parameters of an FIR system. Numerical simulation studies have been carried out to demonstrate the good behaviour of the proposed algorithms. Analytical expressions for the covariance of the estimates of the FIR parameters for the different algorithms presented in the thesis have been obtained, and numerical comparison has been done for specific cases. Numerical examples demonstrate the application of the proposed algorithms to channel equalization in data communication and as an initial solution to cumulant-matching nonlinear optimization methods.
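A concrete instance of cumulant-based FIR identification is the classical closed form $h(k) = c_3(q,k)/c_3(q,0)$, which recovers the taps of an order-$q$ FIR system from a single slice of the output's third-order cumulants and is asymptotically blind to additive Gaussian noise. The sketch below illustrates that principle on synthetic data; it is the textbook formula, not the generalized $(k,l)$-order algorithms proposed in the thesis.

```python
import numpy as np

def c3(y, t1, t2):
    """Sample third-order cumulant c3(t1,t2) = E[y(n) y(n+t1) y(n+t2)] of zero-mean y."""
    n = len(y) - max(t1, t2)
    return np.mean(y[:n] * y[t1:t1 + n] * y[t2:t2 + n])

rng = np.random.default_rng(0)
h = np.array([1.0, -0.85, 0.5])                # true FIR taps, h[0] normalized to 1
q = len(h) - 1                                 # FIR order
# Non-Gaussian i.i.d. input (centered exponential): nonzero skewness is essential.
u = rng.exponential(1.0, 200_000) - 1.0
y = np.convolve(u, h, mode="full")[:len(u)]
y += 0.1 * rng.standard_normal(len(y))         # additive Gaussian noise: c3 ignores it

# Closed form: h(k) = c3(q, k) / c3(q, 0) for k = 0..q.
h_est = np.array([c3(y, q, k) for k in range(q + 1)]) / c3(y, q, 0)
print("true:", h, "estimated:", h_est.round(3))
```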
197

Automatic target recognition using passive bistatic radar signals.

Pisane, Jonathan 04 April 2013 (has links) (PDF)
We present the design, development, and test of three novel, distinct automatic target recognition (ATR) systems for the recognition of airplanes and, more specifically, non-cooperative airplanes, i.e. airplanes that do not provide information when interrogated, in the framework of passive bistatic radar systems. Passive bistatic radar systems use one or more illuminators of opportunity (already present in the field), with frequencies up to 1 GHz for the transmitter part of the systems considered here, and one or more receivers, deployed by the persons managing the system and not co-located with the transmitters. The sole sources of information are the signal scattered off the airplane and the direct-path signal that are collected by the receiver, some basic knowledge about the transmitter, and the geometrical bistatic radar configuration. The three distinct ATR systems that we built respectively use the radar images, the bistatic complex radar cross-section (BS-CRCS), and the bistatic radar cross-section (BS-RCS) of the targets. We use data acquired either on scale models of airplanes placed in an anechoic electromagnetic chamber or on real-size airplanes using a bistatic testbed consisting of a VOR transmitter and a software-defined radio (SDR) receiver, located near Orly airport, France. We describe the radar phenomenology pertinent to the problem at hand, as well as the mathematical underpinnings of the derivation of the bistatic RCS values and of the construction of the radar images. For the classification of the observed targets into pre-defined classes, we use either extremely randomized trees or subspace methods. A key feature of our approach is that we break the recognition problem into a set of sub-problems by decomposing the parameter space, which consists of the frequency, the polarization, the aspect angle, and the bistatic angle, into regions. We build one recognizer for each region. We first validate the extra-trees method on the radar images of the MSTAR dataset, featuring ground vehicles. We then test the method on the images of the airplanes constructed from data acquired in the anechoic chamber, achieving a probability of correct recognition up to 0.99. We test the subspace methods on the BS-CRCS and on the BS-RCS of the airplanes extracted from the data acquired in the anechoic chamber, achieving a probability of correct recognition up to 0.98, with variations according to the frequency band, the polarization, the sector of aspect angle, the sector of bistatic angle, and the number of (Tx,Rx) pairs used. The ATR system deployed in the field gives a probability of correct recognition of 0.82, with variations according to the sector of aspect angle and the sector of bistatic angle.
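To illustrate the "one recognizer per region" design: the parameter space (frequency, polarization, aspect angle, bistatic angle) is partitioned, and a separate extremely-randomized-trees classifier is trained per region. The sketch below shows that structure with scikit-learn on random stand-in features and labels (so accuracy sits near chance); the feature extraction and region boundaries are assumptions, not those of the deployed system.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins: RCS-derived feature vectors tagged with an angle-sector label.
X = rng.standard_normal((2000, 32))            # hypothetical BS-RCS features
sector = rng.integers(0, 4, size=2000)         # aspect/bistatic-angle region label
y = rng.integers(0, 3, size=2000)              # airplane class (toy labels)

# One extra-trees recognizer per region of the decomposed parameter space.
models = {}
for s in range(4):
    m = sector == s
    Xtr, Xte, ytr, yte = train_test_split(X[m], y[m], random_state=0)
    clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    models[s] = clf
    # On random toy labels this hovers near chance (~0.33); real features separate.
    print(f"sector {s}: accuracy {clf.score(Xte, yte):.2f}")
```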
198

A decompositional investigation of 3D face recognition

Cook, James Allen January 2007 (has links)
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks: Face Verification is the process of verifying a subject's claimed identity, while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to cover the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented, along with extensive testing against established methods. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to achieve performance superior to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
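For the frequency-domain decomposition underlying Log-Gabor Templates: a log-Gabor filter is a Gaussian on a log-frequency axis (zero response at DC) combined with an angular Gaussian selecting orientation. Below is a minimal Fourier-domain construction; the parameter values are illustrative assumptions, not the settings used in the dissertation.

```python
import numpy as np

def log_gabor_response(img, f0=0.1, sigma_ratio=0.65, theta0=0.0, sigma_theta=0.6):
    """Filter an image with one 2D log-Gabor filter in the frequency domain.

    f0: center frequency (cycles/pixel); sigma_ratio: radial bandwidth parameter;
    theta0: orientation; sigma_theta: angular spread. All values are assumed.
    """
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                          # avoid log(0); DC is zeroed below
    theta = np.arctan2(fy, fx)
    # Radial log-Gaussian transfer function (zero at DC by construction).
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0
    # Angular Gaussian around the filter orientation, with angle wrapping.
    dtheta = np.angle(np.exp(1j * (theta - theta0)))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    return np.fft.ifft2(np.fft.fft2(img) * radial * angular)

face = np.random.default_rng(0).standard_normal((64, 64))   # stand-in for a face image
resp = log_gabor_response(face)
print(resp.shape, np.abs(resp).mean())
```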
199

Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion

Simpson, Daniel Peter January 2008 (has links)
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of $A^{-\alpha/2}b$, where $A \in \mathbb{R}^{n \times n}$ is a large, sparse symmetric positive definite matrix and $b \in \mathbb{R}^n$ is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition $A = LL^T$ is, essentially, a `square root' of the precision matrix $A$. Therefore, we can replace the usual sampling method, which forms $x = L^{-T}z$, with $x = A^{-1/2}z$, where $z$ is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form $u = A^{-\alpha/2}b$, where $A$ is the finite difference approximation to the Laplacian. Hence both applications require the approximation of $f(A)b$, where $f(t) = t^{-\alpha/2}$ and $A$ is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions by which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form $A^{-\alpha/2}b$ and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to $f(A)b$. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
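The workhorse behind these approximations is the Lanczos relation $f(A)b \approx \|b\|\, V_m f(T_m) e_1$, where $V_m$ spans the Krylov subspace and $T_m$ is the small tridiagonal projection of $A$. A minimal sketch for $f(t) = t^{-1/2}$ follows, without the reorthogonalization, restarting, or stopping criteria the thesis develops.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import eigh_tridiagonal, sqrtm, solve

def lanczos_matfunc(A, b, m, f):
    """m-step Lanczos approximation f(A) b ~= ||b|| V_m f(T_m) e_1 (symmetric A).

    Plain Lanczos without reorthogonalization: fine for a short sketch, not for
    the large m that restarted variants are designed to avoid.
    """
    n = len(b)
    V = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    evals, evecs = eigh_tridiagonal(alpha, beta)   # T_m = Q diag(evals) Q^T
    fT_e1 = evecs @ (f(evals) * evecs[0])          # f(T_m) e_1
    return np.linalg.norm(b) * V @ fT_e1

# Toy usage: A^{-1/2} b for a sparse SPD matrix, as in GMRF sampling.
n = 500
A = sp.diags([-1, 2.5, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.random.default_rng(0).standard_normal(n)
x = lanczos_matfunc(A, b, m=40, f=lambda t: t ** -0.5)
exact = solve(sqrtm(A.toarray()), b)               # dense check of A^{-1/2} b
print("relative error:", np.linalg.norm(x - exact) / np.linalg.norm(exact))
```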
200

Contributions to fast matrix and tensor decompositions

Nguyen, Viet-Dung 16 November 2016 (has links)
Large volumes of data are being generated at any given time, especially from transactional databases, multimedia content, social media, and applications of sensor networks. When the size of datasets is beyond the ability of typical database software tools to capture, store, manage, and analyze, we face the phenomenon of big data, for which new and smarter data analytic tools are required. Big data provides opportunities for new forms of data analytics, resulting in substantial productivity gains. In this thesis, we explore fast matrix and tensor decompositions as computational tools for processing and analyzing multidimensional massive data. We first study fast subspace estimation, a specific technique used in matrix decomposition. Traditional subspace estimation yields high performance but suffers when processing large-scale data. We thus propose distributed/parallel subspace estimation following a divide-and-conquer approach in both batch and adaptive settings. Based on this technique, we further consider its important variants such as principal component analysis, minor and principal subspace tracking, and principal eigenvector tracking. We demonstrate the potential of our proposed algorithms by solving the challenging radio frequency interference (RFI) mitigation problem in radio astronomy. In the second part, we concentrate on fast tensor decomposition, a natural extension of the matrix case. We generalize the results for the matrix case to make PARAFAC tensor decomposition parallelizable in the batch setting. Then we adapt an all-at-once optimization approach to handle sparse non-negative PARAFAC and Tucker decomposition with unknown tensor rank. Finally, we propose two PARAFAC decomposition algorithms for a class of third-order tensors that have one dimension growing linearly with time. The proposed algorithms have linear complexity, good convergence rate, and good estimation accuracy. The results in a standard setting show that the performance of our proposed algorithms is comparable or even superior to that of state-of-the-art algorithms. We also introduce an adaptive non-negative PARAFAC problem and refine the adaptive PARAFAC solution to tackle it. The main contributions of this thesis thus provide a set of efficient algorithmic and algebraic tools for handling large-scale multidimensional data, bringing real-time applications a step closer.
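As a reference point for the PARAFAC algorithms discussed above: the standard batch solver is alternating least squares (ALS), which cycles through the three factor matrices, solving a linear least-squares problem for each via Khatri-Rao products of the other two. The sketch below is this textbook ALS on a toy tensor, not the parallel divide-and-conquer or adaptive variants contributed by the thesis.

```python
import numpy as np

def cp_als(T, r, n_iter=200, seed=0):
    """Rank-r PARAFAC/CP decomposition of a third-order tensor by batch ALS."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, r)); B = rng.standard_normal((J, r)); C = rng.standard_normal((K, r))
    T1 = T.reshape(I, J * K)                            # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)         # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)         # mode-3 unfolding
    # Column-wise Kronecker (Khatri-Rao) product, matching the unfoldings above.
    khatri_rao = lambda X, Y: (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C)).T     # LS update of each factor
        B = T2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = T3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Toy usage: recover a random rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (10, 11, 12))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, r=3)
err = np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T)
print("relative fit error:", err)
```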
