1 |
Valuation and analysis of equity-linked bonds on multi-underlying by copula method
Shen, Wei-Cheng 08 September 2006 (has links)
none
|
2 |
Estimation of time series models with incomplete data
Penzer, Jeremy January 1996 (has links)
No description available.
|
3 |
Recovering Cholesky Factor in Smoothing and Mapping
Touchette, Sébastien 30 July 2018 (has links)
Autonomous vehicles, from self-driving cars to small unmanned aircraft, form a
hotly contested market experiencing significant growth. As a result, fundamental concepts of autonomous vehicle navigation, such as simultaneous localisation and mapping (SLAM), are very active fields of research garnering significant interest in the drive to improve effectiveness.
Traditionally, SLAM has been performed by filtering methods, but several improvements have brought smoothing and mapping (SAM) based methods to the forefront of SLAM research. Although recent works have made such methods incremental, they retain some batch functionalities from their bundle-adjustment origins. More specifically, re-linearisation and column reordering still require the full re-computation of the solution.
In this thesis, the problem of re-computation after column reordering is addressed. A
novel method to reflect changes in ordering directly on the Cholesky factor, called Factor Recovery, is proposed. Under the assumption that changes to the ordering are small and localised, the proposed method can be executed faster than the re-computation of the Cholesky factor. To define each method’s optimal region of operation, a function estimating the computational cost of Factor Recovery is derived and compared with the known cost of Cholesky factorisation obtained using experimental data. Combining Factor Recovery and traditional Cholesky decomposition, the Hybrid Cholesky decomposition algorithm is proposed. This novel algorithm attempts to select the most efficient algorithm to compute the Cholesky factor based on an estimation of the work required.
To obtain experimental results, the Hybrid Cholesky decomposition algorithm was
integrated into the SLAM++ software and executed on popular datasets from the literature. The proposed method yields an average reduction of 1.9 % in total execution time, with reductions of up to 31 % obtained in certain situations. When considering only the time spent performing reordering and factorisation for batch steps, reductions of 18 % on average and of up to 78 % in certain situations are observed.
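As a small numerical illustration of why a change of ordering is costly (a NumPy sketch, not the SLAM++ or Factor Recovery code discussed above): the Cholesky factor of a symmetrically permuted matrix is not simply the permuted factor, which is why reordering normally forces a refactorisation.

```python
import numpy as np

# Build a small symmetric positive definite matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)
L = np.linalg.cholesky(A)

# Reorder the variables with a permutation matrix P.  The factor of
# P A P^T is generally NOT the permuted factor P L P^T (the latter is
# not even triangular), so a change of ordering normally triggers a
# full re-computation of the factor.
perm = np.array([2, 0, 4, 1, 3])
P = np.eye(5)[perm]
L_perm = np.linalg.cholesky(P @ A @ P.T)

assert not np.allclose(L_perm, P @ L @ P.T)
# Both factors still reproduce their respective matrices exactly.
assert np.allclose(L @ L.T, A)
assert np.allclose(L_perm @ L_perm.T, P @ A @ P.T)
```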
|
4 |
Optimering av blockbaserad Choleskyfaktorisering för moderna datorsystem : En studie om vektorisering och dess effekt på en befintlig blockalgoritm / Optimization of Block-based Cholesky Factorization for Modern Computer Systems
Rosell, Jonathan, Vestergren, Andreas January 2016 (has links)
Linear systems, and the solving of them, are an important tool in many areas of science. Solving a linear system is an operation of high complexity, and there are applications where systems of thousands of variables are used. It is therefore important to use methods and algorithms that take full advantage of the performance of modern computers. Factorizing the matrix that represents a linear system makes solving it faster. If the matrix is symmetric and positive definite, Cholesky factorization can be used. J. Chen et al. (2013) studied a block-based algorithm that gives better performance by using the cache memory more efficiently as the matrix size increases. Since then, the conditions have changed: the cache memories of modern processors are subject to constant change, and modern processors' capability to improve performance through vectorization has been vastly improved. This report examines how this block-based Cholesky factorization can be optimized for modern Intel processors. Using AVX2 instructions, the parts of the algorithm where most of the arithmetic operations are performed are vectorized. The report also studies how the optimal block size, as well as the breaking point between the naive algorithm and the block-based algorithm, changes as the hardware develops. Using a fairly simple implementation with vectorization, the time required to factorize matrices of all sizes is cut in half. The breaking point between the naive and the block-based algorithm now lies at matrices as small as 100 × 100. This is an interesting result, as prior research showed a trend where the breaking point seemed to move towards bigger matrices as the hardware developed.
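A simplified sketch of the kind of block-based Cholesky factorization studied above, in plain NumPy and without the AVX2 vectorization (the block size of 4 is an arbitrary illustrative choice, not the thesis's tuned value):

```python
import numpy as np

def block_cholesky(A, b=4):
    """Right-looking blocked Cholesky factorization with block size b.
    Each step factors a diagonal block, solves for the panel below it,
    and applies a rank-b update to the trailing submatrix."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    S = A.astype(float).copy()          # trailing submatrix, updated in place
    for k in range(0, n, b):
        e = min(k + b, n)
        L[k:e, k:e] = np.linalg.cholesky(S[k:e, k:e])     # diagonal block
        if e < n:
            # Panel solve: L21 such that L21 @ L11.T == S21
            L[e:, k:e] = np.linalg.solve(L[k:e, k:e], S[e:, k:e].T).T
            # Rank-b update of the trailing submatrix
            S[e:, e:] -= L[e:, k:e] @ L[e:, k:e].T
    return L

rng = np.random.default_rng(1)
M = rng.standard_normal((10, 10))
A = M @ M.T + 10.0 * np.eye(10)
# The blocked factor matches the reference factorization (it is unique for SPD A).
assert np.allclose(block_cholesky(A, b=4), np.linalg.cholesky(A))
```

The cache benefit comes from the rank-b update being a dense matrix product, which reuses each loaded block many times; the same structure is also what makes the inner loops amenable to vectorization.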
|
5 |
ARMA modeling
Kayahan, Gurhan 12 1900 (has links)
Approved for public release; distribution is unlimited / This thesis estimates the frequency response of a network where the only data is the
output obtained from an autoregressive moving-average (ARMA) model driven by a
random input.
Models of random processes and existing methods for solving ARMA models are
examined. The estimation is performed iteratively by using the Yule-Walker Equations
in three different methods for the AR part and the Cholesky factorization for the MA
part. The AR parameters are estimated initially, then MA parameters are estimated
assuming that the AR parameters have been compensated for. After the estimation of
each parameter set, the original time series is filtered via the inverse of the last estimate
of the transfer function of an AR model or MA model, allowing better and better estimation
of each model's coefficients. The iteration refers to the procedure of removing
the MA or AR part from the random process in an alternating fashion allowing the
creation of an almost pure AR or MA process, respectively. As the iteration continues,
the estimates improve. When the iteration reaches the point where the coefficients
converge, the last MA and AR model coefficients are retained as final estimates. / http://archive.org/details/armamodeling00kaya / Lieutenant Junior Grade, Turkish Navy
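The AR step can be sketched as follows: a generic Yule-Walker estimator (a hypothetical helper, not the thesis code), where a Cholesky solve is used for the symmetric positive definite autocovariance system, echoing the factorization used for the MA part.

```python
import numpy as np

def yule_walker_ar(x, p):
    """Estimate AR(p) coefficients from sample autocovariances by solving
    the Yule-Walker equations.  Sign convention: x[t] = sum_k a[k] x[t-k] + e[t].
    The p x p autocovariance matrix is symmetric positive definite, so a
    Cholesky solve applies."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])   # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    C = np.linalg.cholesky(R)
    return np.linalg.solve(C.T, np.linalg.solve(C, r[1:]))

# Recover the coefficient of a simulated AR(1) process x[t] = 0.7 x[t-1] + e[t].
rng = np.random.default_rng(2)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = 0.7 * x[t - 1] + e[t]
a = yule_walker_ar(x, p=1)
assert abs(a[0] - 0.7) < 0.05
```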
|
6 |
A Note on Generation, Estimation and Prediction of Stationary Processes
Hauser, Michael A., Hörmann, Wolfgang, Kunst, Robert M., Lenneis, Jörg January 1994 (has links) (PDF)
Some recently discussed stationary processes, such as fractionally integrated processes, cannot be described by low-order autoregressive or moving average (ARMA) models, rendering the common algorithms for generation, estimation and prediction partly very misleading. We offer a unified approach based on the Cholesky decomposition of the covariance matrix which makes these problems exactly solvable in an efficient way. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
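The generation side of this Cholesky-based approach can be sketched as follows. The AR(1) autocovariance sequence is purely an illustrative assumption; the point of the approach is that the same machinery works for any stationary covariance, including processes with no low-order ARMA representation.

```python
import numpy as np

# Exact simulation of a stationary Gaussian process from its autocovariance
# sequence: build the Toeplitz covariance matrix, take its Cholesky factor L,
# and colour white noise as x = L z.  Then Cov(x) = L L^T = Sigma exactly.
n, phi = 300, 0.9
gamma = phi ** np.arange(n) / (1 - phi ** 2)      # AR(1) autocovariances (example)
Sigma = np.array([[gamma[abs(i - j)] for j in range(n)] for i in range(n)])

L = np.linalg.cholesky(Sigma)
rng = np.random.default_rng(3)
x = L @ rng.standard_normal(n)                    # one exact sample path

assert np.allclose(L @ L.T, Sigma)
assert x.shape == (n,)
```

The same factor also yields exact prediction and the exact Gaussian likelihood via triangular solves, which is what makes the covariance-matrix formulation "exactly solvable in an efficient way".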
|
7 |
DSP implementation of the Cholesky factorisation / DSP implementation av Choleskyfaktoriseringen
Winqvist, Arvid January 2014 (links)
The Cholesky factorisation is an efficient tool that, when used correctly, can significantly reduce the computational complexity in many applications. This thesis contains an in-depth study of the factorisation, some of its applications, and an implementation on the Coresonic SIMT DSP architecture.
|
8 |
Contributions to Large Covariance and Inverse Covariance Matrices Estimation
Kang, Xiaoning 25 August 2016 (has links)
Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimating large covariance and inverse covariance matrices with different applications.
In Chapter 2, I consider an estimation of time-varying covariance matrices in the analysis of multivariate financial data. An order-invariant Cholesky-log-GARCH model is developed for estimating the time-varying covariance matrices based on the modified Cholesky decomposition. This decomposition provides a statistically interpretable parametrization of the covariance matrix. The key idea of the proposed model is to consider an ensemble estimation of covariance matrix based on the multiple permutations of variables.
Chapter 3 investigates the sparse estimation of inverse covariance matrix for the high-dimensional data. This problem has attracted wide attention, since zero entries in the inverse covariance matrix imply the conditional independence among variables. I propose an order-invariant sparse estimator based on the modified Cholesky decomposition. The proposed estimator is obtained by assembling a set of estimates from the multiple permutations of variables. Hard thresholding is imposed on the ensemble Cholesky factor to encourage the sparsity in the estimated inverse covariance matrix. The proposed method is able to catch the correct sparse structure of the inverse covariance matrix.
Chapter 4 focuses on the sparse estimation of a large covariance matrix. Traditional estimation approaches are known to perform poorly in high dimensions. I propose a positive-definite estimator for the covariance matrix using the modified Cholesky decomposition. Such a decomposition provides the flexibility to obtain a set of covariance matrix estimates. The proposed method considers an ensemble estimator as the "center" of these available estimates with respect to the Frobenius norm. The proposed estimator is not only guaranteed to be positive definite, but is also able to catch the underlying sparse structure of the true matrix. / Ph. D.
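A minimal sketch of the modified Cholesky decomposition that underlies Chapters 2-4, obtained here from the ordinary Cholesky factor via a standard identity (this is not the author's ensemble estimator):

```python
import numpy as np

def modified_cholesky(Sigma):
    """Modified Cholesky decomposition T Sigma T^T = D, with T unit lower
    triangular and D diagonal.  The sub-diagonal entries of T have a
    regression interpretation, which is what makes the parametrization
    statistically interpretable.  Derivation: if Sigma = L L^T and
    d = diag(L), then Sigma = (L/d) diag(d^2) (L/d)^T, so T = inv(L/d)."""
    L = np.linalg.cholesky(Sigma)
    d = np.diag(L)
    T = np.linalg.inv(L / d)        # L / d scales each column j by 1/d[j]
    D = np.diag(d ** 2)
    return T, D

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
Sigma = M @ M.T + 6.0 * np.eye(6)
T, D = modified_cholesky(Sigma)
assert np.allclose(T @ Sigma @ T.T, D)
assert np.allclose(np.diag(T), 1.0)
```

Because T is unit lower triangular and D has positive diagonal, any such pair maps back to a positive-definite matrix, which is how the decomposition sidesteps the positive-definiteness constraint during estimation.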
|
9 |
Ρευστομηχανική και grid [Fluid mechanics and grid]
Κωνσταντινίδης, Νικόλαος 30 April 2014 (has links)
The need to solve large problems, together with the development of internet technology, has resulted in a constant search for more and more computing resources. This need led to the creation of structures of collaborating computer systems, with the aim of solving problems that require large computing power or the storage of large amounts of data.
The existence of such structures, as well as of central processing units with more than one processor, gave rise to protocols for developing applications that run on more than one processor and solve a problem there, in order to reduce execution time. An example of such a protocol is message passing (MPI).
The purpose of this diploma thesis is to modify an existing application that requires significant computing power so that it can exploit systems such as those described above. Through this process, the advantages and disadvantages of parallel programming are analysed.
|
10 |
Modificações na fatoração controlada de Cholesky para acelerar o precondicionamento de sistemas lineares no contexto de pontos interiores / Modifications on controlled Cholesky factorization to improve the preconditioning in interior point method
Silva, Lino Marcos da, 1978- 09 February 2014 (has links)
Orientador: Aurelio Ribeiro Leite de Oliveira / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: The interior point method solves large linear programming problems in few iterations. However, each iteration requires computing the solution of one or more linear systems, which share the same coefficient matrix. This constitutes the most expensive step of the method, greatly increasing the processing time and the need for data storage. Accordingly, reducing the time to solve the linear systems is a way of improving the method's performance. In general, large linear programming problems have sparse matrices. Since the linear systems to be solved are symmetric positive definite, iterative methods such as the preconditioned conjugate gradient method can be used to solve them, and incomplete Cholesky factors can be used as preconditioners. On the other hand, breakdown may occur during an incomplete factorization. When such a failure occurs, a correction is made by adding a positive number to the diagonal elements of the linear system matrix and the factorization of the new matrix is restarted, thus increasing the preconditioning time, either due to recomputing the preconditioner or due to loss of its quality. The controlled Cholesky factorization preconditioner performs well in the early iterations of interior point methods and has been important in implementations of hybrid preconditioning approaches. However, being an incomplete factorization, it is not free from faulty pivots. In this study we propose two modifications to the controlled Cholesky factorization in order to avoid, or decrease the number of, restarted factorizations of diagonally modified matrices. Computational results show that the proposed techniques can significantly reduce the time for solving linear programming problems by the interior point method / Doutorado / Matematica Aplicada / Doutor em Matemática Aplicada
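The restart strategy described in the abstract can be sketched with a complete (rather than incomplete) factorization; the doubling shift schedule and the starting value below are illustrative assumptions, not the thesis's actual update rule:

```python
import numpy as np

def shifted_cholesky(A, alpha0=1e-4, max_tries=20):
    """Factorise A + alpha*I by Cholesky, doubling alpha each time the
    factorisation breaks down.  A toy stand-in for the restart strategy
    above: a real preconditioner restarts an *incomplete* factorisation
    whenever a faulty (non-positive) pivot appears."""
    try:
        return np.linalg.cholesky(A), 0.0         # no shift needed
    except np.linalg.LinAlgError:
        pass
    alpha = alpha0
    for _ in range(max_tries):
        try:
            return np.linalg.cholesky(A + alpha * np.eye(A.shape[0])), alpha
        except np.linalg.LinAlgError:
            alpha *= 2.0                           # each failure forces a restart
    raise np.linalg.LinAlgError("no suitable shift found")

# An indefinite matrix: the plain factorisation fails, the shifted one succeeds.
A = np.array([[1.0, 2.0], [2.0, 1.0]])             # eigenvalues 3 and -1
L, alpha = shifted_cholesky(A, alpha0=0.5)
assert alpha >= 1.0                                # needs a shift of at least |lambda_min| = 1
assert np.allclose(L @ L.T, A + alpha * np.eye(2))
```

Each restart refactorizes from scratch, which is exactly the cost the proposed modifications aim to avoid or reduce.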
|