11

Extrapolation vectorielle et applications aux équations aux dérivées partielles / Vector extrapolation and applications to partial differential equations

Duminil, Sébastien 06 July 2012 (has links)
Nous nous intéressons, dans cette thèse, à l'étude des méthodes d'extrapolation polynomiales et à l'application de ces méthodes à l'accélération de méthodes de points fixes pour des problèmes donnés. L'avantage de ces méthodes d'extrapolation est qu'elles utilisent uniquement une suite de vecteurs qui n'est pas forcément convergente, ou qui converge très lentement, pour créer une nouvelle suite pouvant admettre une convergence quadratique. Le développement de méthodes cycliques permet, de plus, de limiter le coût de calcul et de stockage. Nous appliquons ces méthodes à la résolution des équations de Navier-Stokes stationnaires et incompressibles, à la résolution de la formulation Kohn-Sham de l'équation de Schrödinger et à la résolution d'équations elliptiques utilisant des méthodes multigrilles. Dans tous les cas, l'efficacité des méthodes d'extrapolation a été montrée. Nous montrons que, lorsqu'elles sont appliquées à la résolution de systèmes linéaires, les méthodes d'extrapolation sont comparables aux méthodes de sous-espaces de Krylov. En particulier, nous montrons l'équivalence entre les méthodes MMPE et CMRH. Nous nous intéressons enfin à la parallélisation de la méthode CMRH sur des processeurs à mémoire distribuée et à la recherche de préconditionneurs efficaces pour cette même méthode. / In this thesis, we study polynomial extrapolation methods. We discuss the design and implementation of these methods for computing solutions of fixed-point methods. Extrapolation methods transform the original sequence into another sequence that converges to the same limit faster than the original one, without explicit knowledge of the sequence generator. Restarted methods keep the storage requirements and the average computational cost low.
We apply these methods for computing steady-state solutions of incompressible flow problems modelled by the Navier-Stokes equations, for solving the Schrödinger equation using the Kohn-Sham formulation, and for solving elliptic equations using multigrid methods. In all cases, vector extrapolation methods have a useful role to play. We show that, when applied to linearly generated vector sequences, extrapolation methods are related to Krylov subspace methods. For example, we show that the MMPE approach is mathematically equivalent to the CMRH method. We present an implementation of the CMRH iterative method suitable for parallel architectures with distributed memory. Finally, we present a preconditioned CMRH method.
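The fixed-point acceleration described in this abstract can be illustrated with minimal polynomial extrapolation (MPE), one of the polynomial extrapolation methods of the thesis. The sketch below is a generic Python illustration, not the author's implementation: the test iteration, the matrix sizes, and the `mpe` helper are invented for the example.

```python
import numpy as np

def mpe(X):
    """Minimal Polynomial Extrapolation (MPE) from a sequence of iterates.

    X: array of shape (k+2, n) holding iterates x_0, ..., x_{k+1}.
    Returns the extrapolated limit estimate.
    """
    U = np.diff(X, axis=0).T                 # n x (k+1) differences u_j = x_{j+1} - x_j
    # Solve [u_0 ... u_{k-1}] c ≈ -u_k in the least-squares sense
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()                     # normalize the weights to sum to 1
    return gamma @ X[:-1]                    # weighted combination of x_0 ... x_k

# Accelerate the slowly converging linear fixed-point iteration x <- Ax + b
rng = np.random.default_rng(0)
A = 0.5 * np.eye(5) + 0.1 * rng.standard_normal((5, 5))
b = rng.standard_normal(5)
x_star = np.linalg.solve(np.eye(5) - A, b)   # exact fixed point, for comparison

x = np.zeros(5)
iterates = [x]
for _ in range(8):
    x = A @ x + b
    iterates.append(x)

s = mpe(np.array(iterates))
# The extrapolated point is far closer to x* than the last plain iterate
print(np.linalg.norm(iterates[-1] - x_star), np.linalg.norm(s - x_star))
```

For a linearly generated sequence, MPE recovers the limit exactly once enough iterates are available, which is why the abstract relates these methods to Krylov subspace methods.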
12

Méthodes itératives de reconstruction tomographique pour la réduction des artefacts métalliques et de la dose en imagerie dentaire / Iterative reconstruction methods for the reduction of metal artifact and dose in dental CT

Chen, Long 05 February 2015 (has links)
Cette thèse est constituée de deux principaux axes de recherche portant sur l'imagerie dentaire par tomographie à rayons X : le développement de nouvelles méthodes itératives de reconstruction tomographique afin de réduire les artefacts métalliques, et la réduction de la dose délivrée au patient. Afin de réduire les artefacts métalliques, nous prenons en compte le durcissement du spectre des faisceaux de rayons X et le rayonnement diffusé. La réduction de la dose est abordée en diminuant le nombre de projections traitées. La tomographie par rayons X a pour objectif de reconstruire, de façon non destructive, la cartographie des coefficients d'atténuation d'un objet inconnu. Les bases mathématiques de la tomographie reposent sur la transformée de Radon et son inversion. Néanmoins, des artefacts métalliques apparaissent dans les images reconstruites par inversion de la transformée de Radon (méthode de rétroprojection filtrée), car un certain nombre d'hypothèses faites dans cette approche ne sont pas vérifiées. En effet, la présence de métaux exacerbe le durcissement de spectre, et le rayonnement diffusé n'est pas pris en compte. Nous nous intéressons dans cette thèse aux méthodes itératives issues d'une méthodologie bayésienne. Afin d'obtenir des résultats compatibles avec une application clinique, nous avons choisi un modèle direct relativement simple et classique (linéaire), associé à des approches de correction de données. De plus, nous avons pris en compte l'incertitude liée à la correction des données en minimisant un critère de moindres carrés pondérés. Nous proposons donc une nouvelle méthode de correction du durcissement dû au métal, sans connaissance du spectre de la source ni des coefficients d'atténuation des matériaux. Nous proposons également une nouvelle méthode de correction du diffusé basée sur les mesures, sous certaines conditions, notamment à faible dose.
En imagerie médicale par tomographie à rayons X, toute surexposition ou exposition non nécessaire augmente le risque de cancer radio-induit lors d'un examen. Notre deuxième axe de recherche porte donc sur la réduction de la dose en diminuant le nombre de projections. Nous avons introduit un nouveau mode d'acquisition à échantillonnage angulaire adaptatif, défini à partir de notre connaissance a priori de l'objet. Ce mode d'acquisition, associé à un algorithme de reconstruction dédié, permet de réduire le nombre de projections tout en obtenant une qualité de reconstruction comparable à celle du mode d'acquisition classique. Enfin, dans certains modes d'acquisition des scanners dentaires, le détecteur ne couvre pas l'ensemble de l'objet. Pour s'affranchir des problèmes de tomographie locale qui se posent alors, nous utilisons des acquisitions multiples suivant des trajectoires circulaires. Nous avons adapté les résultats de l'approche « super short scan » [Noo et al 2003] à cette trajectoire très particulière et au fait que le détecteur ne mesure que des projections tronquées. Nous avons évalué sur des données réelles nos méthodes de réduction des artefacts métalliques et de réduction de la dose par diminution du nombre de projections. Grâce à nos méthodes de réduction des artefacts métalliques, l'amélioration de la qualité des images est indéniable, sans introduction de nouveaux artefacts, en comparaison avec la méthode de l'état de l'art NMAR [Meyer et al 2010]. Par ailleurs, nous avons réussi à réduire le nombre de projections avec notre nouveau mode d'acquisition fondé sur un « super short scan » appliqué à des trajectoires multiples. La qualité obtenue est comparable à celle des reconstructions issues des modes d'acquisition classiques ou short scan, avec une réduction d'au moins 20 % de la dose délivrée.
/ This thesis contains two main themes: the development of new iterative approaches for metal artifact reduction (MAR), and dose reduction in dental CT (computed tomography). Metal artifacts are mainly due to beam hardening, scatter, and photon starvation in the presence of metal, such as metallic dental implants. The first theme concerns data correction to account for these effects; the second involves reducing the radiation dose delivered to the patient by decreasing the number of projections. The polychromatic spectrum of the X-ray beam and the scatter can, in principle, be handled by a non-linear forward model within statistical reconstruction methods for metal artifact reduction. However, reconstruction by such statistical methods is too time-consuming. Consequently, we propose an iterative algorithm with a linear forward model based on data correction (beam hardening and scatter). We introduce a new beam-hardening correction that requires no knowledge of the X-ray source spectrum or of the linear attenuation coefficients of the materials, as well as a new scatter estimation method based on the measurements. We then study iterative approaches to dose reduction, since over-exposure or unnecessary exposure during a CT scan increases the patient's risk of radio-induced cancer. In practice, it may be useful to reconstruct an object larger than the field of view of the scanner. For this case we propose an iterative algorithm based on super-short scans over multiple circular trajectories, which use a minimal set of projections for an optimal dose. Furthermore, we introduce a new scanning mode with adaptive angular sampling to reduce the number of projections in a single scan. It is adapted to the properties and predefined regions of interest of the scanned object, and needs fewer projections than the standard uniform angular sampling to reconstruct the object.
All of our approaches for MAR and dose reduction have been evaluated on real data. Thanks to our MAR methods, the quality of the reconstructed images improves noticeably, without introducing new artifacts, compared with the state-of-the-art NMAR method [Meyer et al 2010]. With the proposed scanning mode and the super-short-scan scheme over multiple trajectories, we markedly reduce the number of projections.
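The weighted least-squares criterion mentioned in this abstract can be illustrated on a toy linear system. The sketch below is a generic Python illustration, not the thesis's reconstruction code: the forward matrix `A`, the weights `w`, and the problem sizes are invented for the example, and a real CT system would solve the weighted normal equations iteratively rather than directly.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 10
A = rng.standard_normal((m, n))        # toy linear forward model (projection matrix)
x_true = rng.standard_normal(n)
y = A @ x_true                          # noiseless toy measurements

# Per-measurement weights: down-weight measurements assumed less reliable
# (e.g. rays through metal after data correction)
w = np.ones(m)
w[:5] = 0.1

# Weighted least squares: minimize || W^(1/2) (A x - y) ||^2,
# solved here via the weighted normal equations A^T W A x = A^T W y
AtWA = A.T @ (w[:, None] * A)
AtWy = A.T @ (w * y)
x_hat = np.linalg.solve(AtWA, AtWy)
print(np.linalg.norm(x_hat - x_true))
```

With noiseless data the weighted solution recovers `x_true` exactly; the weighting only matters once the corrected measurements carry unequal uncertainty, which is the situation the thesis addresses.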
13

Eigenvalue Algorithms for Symmetric Hierarchical Matrices / Eigenwert-Algorithmen für Symmetrische Hierarchische Matrizen

Mach, Thomas 05 April 2012 (has links) (PDF)
This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition, which has some properties that are superior to the existing ones, as shown by experiments. Using the HQR decomposition to build a QR (eigenvalue) algorithm for H-matrices, however, does not lead to a more efficient algorithm than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show a strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices in linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With the knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log^4 n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so that there is no statement on the complexity of the algorithm besides the numerical results in Table 5.7.
The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is necessary. The squared and shifted matrix (M − μI)², however, is positive definite, so inner eigenvalues can be computed by combining the folded spectrum method with PINVIT. Numerical experiments show that the approximate inversion of (M − μI)² is more expensive than the approximate inversion of M, so that the computation of inner eigenvalues is more expensive. We compare the different eigenvalue algorithms. The preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive; here the LDLT slicing algorithm is competitive with H-PINVIT. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm. If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers, like the LAPACK function dsyev, are superior. H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time to compute all eigenvalues.
For large matrices, both algorithms are faster than the dense LAPACK function dsyev.
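The inertia-based bisection ("slicing the spectrum") described in this abstract can be sketched for a small dense symmetric matrix. This illustrates only the principle: SciPy's dense `ldl` factorization stands in for the thesis's H-matrix LDLT factorization, and the helper names and test matrix are invented for the example.

```python
import numpy as np
from scipy.linalg import ldl

def count_eigs_below(M, mu):
    """nu(M - mu*I): number of eigenvalues of M below mu, read off the
    inertia of an LDL^T factorization (Sylvester's law of inertia)."""
    _, D, _ = ldl(M - mu * np.eye(M.shape[0]))
    # D is block diagonal with 1x1 and 2x2 blocks; for this small sketch we
    # simply read the inertia off D's eigenvalues
    return int(np.sum(np.linalg.eigvalsh(D) < 0))

def kth_eigenvalue(M, k, a, b, tol=1e-10):
    """Bisection on [a, b] for the k-th smallest eigenvalue of symmetric M."""
    while b - a > tol:
        mid = 0.5 * (a + b)
        if count_eigs_below(M, mid) >= k:
            b = mid          # at least k eigenvalues below mid
        else:
            a = mid
    return 0.5 * (a + b)

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
M = B + B.T                               # symmetric test matrix
hi = np.linalg.norm(M, 1)                 # all eigenvalues lie in [-||M||_1, ||M||_1]
lam3 = kth_eigenvalue(M, 3, -hi, hi)
print(lam3, np.linalg.eigvalsh(M)[2])
```

Each bisection step costs one LDLT factorization, which is why a cheap (H-matrix) factorization makes the method attractive.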
14

Eigenvalue Algorithms for Symmetric Hierarchical Matrices

Mach, Thomas 20 February 2012 (has links)
This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition, which has some properties that are superior to the existing ones, as shown by experiments. Using the HQR decomposition to build a QR (eigenvalue) algorithm for H-matrices, however, does not lead to a more efficient algorithm than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show a strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices in linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With the knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log^4 n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so that there is no statement on the complexity of the algorithm besides the numerical results in Table 5.7.
The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is necessary. The squared and shifted matrix (M − μI)², however, is positive definite, so inner eigenvalues can be computed by combining the folded spectrum method with PINVIT. Numerical experiments show that the approximate inversion of (M − μI)² is more expensive than the approximate inversion of M, so that the computation of inner eigenvalues is more expensive. We compare the different eigenvalue algorithms. The preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive; here the LDLT slicing algorithm is competitive with H-PINVIT. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm. If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers, like the LAPACK function dsyev, are superior. H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time to compute all eigenvalues.
For large matrices, both algorithms are faster than the dense LAPACK function dsyev.
Table of contents: 1 Introduction · 2 Basics · 3 QR Decomposition of Hierarchical Matrices · 4 QR-like Algorithms for Hierarchical Matrices · 5 Slicing the Spectrum of Hierarchical Matrices · 6 Computing Eigenvalues by Vector Iterations · 7 Comparison of the Algorithms and Numerical Results · 8 Conclusions
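The preconditioned inverse iteration (PINVIT) update described in this abstract can be sketched in a few lines. This is a generic dense illustration, not the thesis's H-matrix implementation: the test matrix and the simple Jacobi preconditioner (standing in for an approximate H-matrix inverse or approximate Cholesky factorization) are invented for the example.

```python
import numpy as np

def pinvit(M, Binv, x, iters=200):
    """Preconditioned inverse iteration for the smallest eigenpair of a
    symmetric positive definite matrix M, with preconditioner Binv ≈ M^-1."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        lam = x @ M @ x                    # Rayleigh quotient (since ||x|| = 1)
        x = x - Binv @ (M @ x - lam * x)   # preconditioned residual correction
        x = x / np.linalg.norm(x)
    return x @ M @ x, x

rng = np.random.default_rng(3)
n = 20
P = 0.05 * rng.standard_normal((n, n))
M = np.diag(np.arange(1.0, n + 1)) + P + P.T   # SPD with a clear spectral gap
# Crude Jacobi preconditioner; an H-matrix code would use an approximate
# inverse or an approximate Cholesky factorization of M instead
Binv = np.diag(1.0 / np.diag(M))
lam, _ = pinvit(M, Binv, rng.standard_normal(n))
print(lam, np.linalg.eigvalsh(M)[0])
```

The iteration count depends on the preconditioner quality and the spectral gap, not on the matrix dimension, which is the property the abstract highlights.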
