About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion

Martin, James Robert, Ph. D. 18 September 2015 (has links)
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, this task is computationally intractable using contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further to return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data, and provides a complete description of parameter uncertainty. In this dissertation, I describe a computational framework for such problems that wraps around existing forward solvers for a given physical problem, provided they are appropriately equipped. A collection of tools, insights, and numerical methods can then be applied to solve the problem and to interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
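As a small illustrative aside (not taken from the dissertation, whose setting is infinite-dimensional and PDE-constrained): in the finite-dimensional linear-Gaussian case the posterior described above has a closed form. The Python sketch below assumes toy dimensions and a random matrix standing in for a discretized forward operator, and simply reads off the posterior mean and pointwise uncertainties.

    import numpy as np

    # Minimal linear-Gaussian Bayesian inversion: for a linear forward map G,
    # Gaussian prior N(m0, C0) and Gaussian noise N(0, Gamma), the posterior is
    # Gaussian with closed-form mean and covariance. All sizes are toy values.
    rng = np.random.default_rng(0)
    n, m = 50, 20                      # parameter and observation dimensions
    G = rng.standard_normal((m, n))    # stand-in for a discretized forward operator
    m0 = np.zeros(n)                   # prior mean
    C0 = np.eye(n)                     # prior covariance
    Gamma = 0.01 * np.eye(m)           # observation-noise covariance

    x_true = rng.standard_normal(n)
    d = G @ x_true + rng.multivariate_normal(np.zeros(m), Gamma)  # synthetic data

    # Posterior covariance and mean (the MAP point for a Gaussian posterior):
    #   C_post = (G^T Gamma^{-1} G + C0^{-1})^{-1}
    #   m_post = C_post (G^T Gamma^{-1} d + C0^{-1} m0)
    Gi = np.linalg.inv(Gamma)
    C_post = np.linalg.inv(G.T @ Gi @ G + np.linalg.inv(C0))
    m_post = C_post @ (G.T @ Gi @ d + np.linalg.inv(C0) @ m0)

    # Pointwise posterior standard deviations quantify the remaining uncertainty.
    post_std = np.sqrt(np.diag(C_post))
    print(m_post[:5], post_std[:5])

In the dissertation's PDE-constrained setting such dense factorizations are no longer feasible, which motivates the scalable algorithms the abstract refers to.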
12

Randomized Diagonal Estimation / Randomiserad Diagonalestimering

Popp, Niclas Joshua January 2023 (has links)
Implicit diagonal estimation is a long-standing problem that is concerned with approximating the diagonal of a matrix that can only be accessed through matrix-vector products. It is of interest in various fields of application, such as network science, material science and machine learning. This thesis provides a comprehensive review of randomized algorithms for implicit diagonal estimation and introduces various enhancements as well as extensions to matrix functions. Three novel diagonal estimators are presented. The first method employs low-rank Nyström approximations. The second approach is based on shifts, forming a generalization of current deflation-based techniques. Additionally, we introduce a method for adaptively determining the number of test vectors, thereby removing the need for prior knowledge about the matrix. Moreover, the median-of-means principle is incorporated into diagonal estimation. Apart from that, we combine diagonal estimation methods with approaches for approximating the action of matrix functions using polynomial approximations and Krylov subspaces. This enables us to present implicit methods for estimating the diagonal of matrix functions. We provide first-of-their-kind theoretical results for the convergence of these estimators. Subsequently, we present a deflation-based diagonal estimator for monotone functions of normal matrices with improved convergence properties. To validate the effectiveness and practical applicability of our methods, we conduct numerical experiments in real-world scenarios. These include estimating the subgraph centralities in a protein interaction network, approximating uncertainty in ordinary least squares, as well as randomized Jacobi preconditioning.
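For readers unfamiliar with implicit diagonal estimation, the following minimal Python sketch implements the classical randomized estimator with Rademacher probe vectors, i.e. the baseline that the thesis's Nyström-, shift- and median-of-means-based methods improve upon. The matrix, probe count and function name are illustrative assumptions, not the thesis's code.

    import numpy as np

    def estimate_diag(matvec, n, num_probes=100, rng=None):
        """Estimate diag(A) using only matrix-vector products with A.

        Uses Rademacher probe vectors v and the classical estimator
        diag(A) ~ (sum_k v_k * (A v_k)) / (sum_k v_k * v_k),
        which reduces to an average since v_k * v_k = 1 entrywise here.
        """
        rng = np.random.default_rng() if rng is None else rng
        num = np.zeros(n)
        for _ in range(num_probes):
            v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
            num += v * matvec(v)                  # entrywise product v ⊙ (A v)
        return num / num_probes

    # Example: the matrix is only available through a matvec closure.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 200))
    d_est = estimate_diag(lambda x: A @ x, 200, num_probes=500, rng=rng)
    print(np.linalg.norm(d_est - np.diag(A)) / np.linalg.norm(np.diag(A)))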
13

Nonnegative matrix and tensor factorizations, least squares problems, and applications

Kim, Jingu 14 November 2011 (has links)
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite the success, NMF and NTF have been actively developed only in the last decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection between the NMF and NTF problems and nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and to design a fast algorithm that exploits this structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable for rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by an observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
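As a rough illustration of the alternating nonnegativity-constrained least squares framework that the abstract refers to, the Python sketch below computes a rank-k NMF. SciPy's generic NNLS solver stands in for the accelerated block principal pivoting solver proposed in the thesis; all names and sizes are illustrative assumptions.

    import numpy as np
    from scipy.optimize import nnls

    def nmf_anls(A, k, iters=50, seed=0):
        """Rank-k NMF A ~ W H via alternating nonnegativity-constrained least squares.
        Each subproblem is solved column by column with SciPy's NNLS; the thesis
        replaces this inner solver with an accelerated block principal pivoting method.
        """
        rng = np.random.default_rng(seed)
        m, n = A.shape
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            # Fix W, solve min_{H >= 0} ||A - W H||_F column by column.
            H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
            # Fix H, solve min_{W >= 0} ||A - W H||_F row by row (via A^T ~ H^T W^T).
            W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T
        return W, H

    A = np.abs(np.random.default_rng(2).standard_normal((60, 40)))
    W, H = nmf_anls(A, k=5)
    print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))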
14

奇異值分解在影像處理上之運用 / Singular Value Decomposition: Application to Image Processing

顏佑君, Yen, Yu Chun Unknown Date (has links)
Singular value decomposition (SVD) is a robust and reliable matrix decomposition method. It has many attractive properties, such as low-rank approximation. In the era of big data, numerous data are generated rapidly. Offering attractive visual effects and rich information, images have become a common and useful type of data. Recently, SVD has been utilized in several image processing and analysis problems. This research focuses on the problems of image compression and image denoising for restoration. We propose to apply the SVD method to capture the main signal image subspace for efficient image compression, and to screen out the noise image subspace for image restoration. The cumulative ratio of singular values is used as the criterion for selecting how many singular values to retain. Simulations are conducted to investigate the proposed method. We find that the SVD method has satisfactory results for image compression. However, in image denoising, the performance of the SVD method varies depending on the original image, the noise added and the threshold used.
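A minimal sketch of the compression scheme described in the abstract: truncate the SVD of a grayscale image, choosing the rank by the cumulative singular-value ratio. The function name, threshold and random stand-in image are assumptions for illustration only.

    import numpy as np

    def svd_compress(img, energy=0.95):
        """Rank-r approximation of a grayscale image, with r chosen so that the
        retained singular values account for a given cumulative share of the total.
        """
        U, s, Vt = np.linalg.svd(img, full_matrices=False)
        cum = np.cumsum(s) / np.sum(s)          # cumulative singular-value ratio
        r = int(np.searchsorted(cum, energy)) + 1
        approx = (U[:, :r] * s[:r]) @ Vt[:r, :]
        stored = r * (img.shape[0] + img.shape[1] + 1)   # floats kept vs. m*n
        return approx, r, stored

    img = np.random.default_rng(3).random((256, 256))    # stand-in for a real image
    approx, r, stored = svd_compress(img, energy=0.95)
    print(r, stored / img.size, np.linalg.norm(img - approx) / np.linalg.norm(img))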
15

Využití řídké reprezentace signálu při snímání a rekonstrukci v nukleární magnetické rezonanci / Exploiting sparse signal representations in capturing and recovery of nuclear magnetic resonance data

Hrbáček, Radek January 2013 (has links)
This thesis deals with the field of nuclear magnetic resonance, especially spectroscopy and spectroscopic imaging, and with sparse signal representation and low-rank approximation approaches. Spectroscopic imaging methods are becoming popular in clinical practice; however, long measurement times and low resolution have prevented their wider adoption. The goal of this thesis is to improve on state-of-the-art methods by using sparse signal representation and low-rank approximation approaches. The compressed sensing technique is demonstrated on examples of speeding up magnetic resonance imaging and reducing the volume of hyperspectral imaging data. Then, a new spectroscopic imaging scheme based on compressed sensing is proposed. The thesis also addresses the in vivo spectrum quantitation problem by designing the MRSMP algorithm specifically for this purpose.
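As a generic illustration of compressed-sensing recovery (not the MRSMP algorithm proposed in the thesis), the sketch below reconstructs a sparse signal from a few random linear measurements by l1-regularized least squares solved with iterative soft thresholding. All sizes and parameters are illustrative assumptions.

    import numpy as np

    def ista(A, y, lam=0.1, iters=500):
        """Recover a sparse x from y = A x by minimizing
        0.5*||A x - y||^2 + lam*||x||_1 with iterative soft thresholding (ISTA)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - (A.T @ (A @ x - y)) / L    # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(4)
    n, m, k = 400, 100, 10                     # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = A @ x_true
    x_rec = ista(A, y, lam=0.01, iters=2000)
    print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))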
16

High-Performance Scientific Applications Using Mixed Precision and Low-Rank Approximation Powered by Task-based Runtime Systems

Alomairy, Rabab M. 20 July 2022 (has links)
To leverage the extreme parallelism of emerging architectures, so that scientific applications can fulfill their high-fidelity and multi-physics potential while sustaining high efficiency relative to the limiting resource, numerical algorithms must be redesigned. Algorithmic redesign is capable of shifting the limiting resource, for example from memory or communication to arithmetic capacity. The benefit of algorithmic redesign expands greatly when introducing a tunable tradeoff between accuracy and resources. Scientific applications from diverse sources rely on dense matrix operations. These operations arise in Schur complements, integral equations, covariances in spatial statistics, ridge regression, radial basis functions from unstructured meshes, and kernel matrices from machine learning, among others. This thesis demonstrates how to extend the problem sizes that may be treated and how to reduce their execution time. Two “universes” of algorithmic innovations have emerged to improve computations by orders of magnitude in capacity and runtime. Each introduces a hierarchy, of rank or of precision. Tile Low-Rank (TLR) approximation replaces blocks of a dense operator with low-rank factorizations. Mixed-precision approximation, increasingly well supported by contemporary hardware, replaces high-precision blocks with low-precision ones. Herein, we design new high-performance direct solvers based on the synergism of TLR and mixed precision. Since adapting to data sparsity leads to heterogeneous workloads, we rely on task-based runtime systems to orchestrate the scheduling of fine-grained kernels onto computational resources. We first demonstrate how TLR accelerates acoustic scattering and mesh deformation simulations; our solvers outperform state-of-the-art libraries by up to an order of magnitude. Then, we demonstrate the impact of enabling mixed precision in a bioinformatics context, where it delivers up to a three-fold speedup. To facilitate the adoption of task-based runtime systems, we introduce the AL4SAN library, which provides a common API for the expression and queueing of tasks across multiple dynamic runtime systems. This library handles a variety of workloads at low overhead while increasing user productivity. AL4SAN enables interoperability by switching runtimes at runtime, which yields a twofold speedup on a task-based generalized symmetric eigenvalue solver.
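A toy sketch of the tile low-rank idea: split a dense kernel matrix into tiles and replace each tile by a truncated SVD factorization at a prescribed accuracy. This is only a serial illustration under assumed sizes, kernel and tolerance, not the thesis's task-based, runtime-scheduled implementation.

    import numpy as np

    def tlr_compress(A, tile=64, tol=1e-6):
        """Toy tile low-rank (TLR) compression: split A into square tiles and
        replace each tile by a truncated SVD factorization whose dropped
        singular values fall below tol * sigma_max of that tile."""
        n = A.shape[0]
        tiles, stored = {}, 0
        for i in range(0, n, tile):
            for j in range(0, n, tile):
                B = A[i:i+tile, j:j+tile]
                U, s, Vt = np.linalg.svd(B, full_matrices=False)
                k = max(1, int(np.sum(s > tol * s[0])))
                tiles[(i, j)] = (U[:, :k] * s[:k], Vt[:k, :])   # rank-k factors
                stored += k * (B.shape[0] + B.shape[1])
        return tiles, stored / A.size          # storage relative to the dense matrix

    # A kernel matrix from a smooth interaction typically has low-rank off-diagonal tiles.
    pts = np.sort(np.random.default_rng(5).random(512))
    A = 1.0 / (1.0 + np.abs(pts[:, None] - pts[None, :]))
    tiles, ratio = tlr_compress(A, tile=64, tol=1e-6)
    print("storage relative to dense:", ratio)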
17

Approximations de rang faible et modèles d'ordre réduit appliqués à quelques problèmes de la mécanique des fluides / Low rank approximation techniques and reduced order modeling applied to some fluid dynamics problems

Lestandi, Lucas 16 October 2018 (has links)
Numerical simulation has experienced tremendous improvements in the last decades, driven by massive growth of computing power. Exascale computing has been achieved this year and will allow solving ever more complex problems. But such large systems produce colossal amounts of data, which leads to difficulties of its own. Moreover, many engineering problems, such as multiphysics or optimisation and control, require far more power than any computer architecture could achieve within the current scientific computing paradigm.

In this thesis, we propose to shift the paradigm in order to break the curse of dimensionality by introducing decomposition and building reduced order models (ROM) for complex fluid flows. This manuscript is organized into two parts. The first one proposes an extended review of data reduction techniques and intends to bridge the applied mathematics community and the computational mechanics one. Thus, the founding bivariate separation is studied, including discussions on the equivalence of proper orthogonal decomposition (POD, continuous framework) and singular value decomposition (SVD, discrete matrices). Then a wide review of tensor formats and their approximation is proposed. Such work has already been provided in the literature, but either in separate papers or in a purely applied mathematics framework. Here, we offer the data-enthusiast scientist a comparison of Canonical, Tucker, Hierarchical and Tensor Train formats, including their approximation algorithms. Their relative benefits are studied both theoretically and numerically thanks to the python library pydecomp that was developed during this thesis. A careful analysis of the link between continuous and discrete methods is performed. Finally, we conclude that for most applications ST-HOSVD is best when the number of dimensions d is lower than four, and TT-SVD (or its POD equivalent) when d grows larger. The second part is centered on a complex fluid dynamics flow, in particular the singular lid-driven cavity at high Reynolds number. This flow exhibits a series of Hopf bifurcations which are known to be hard to capture accurately, which is why a detailed analysis was performed both with classical tools and POD. Once this flow has been characterized, time-scaling, a new “physics based” interpolation ROM, is presented on internal and external flows. This method gives encouraging results while excluding recent advanced developments in the area such as EIM or Grassmann manifold interpolation.
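A minimal illustration of the POD/SVD equivalence discussed in the first part of the manuscript: the POD modes of a snapshot matrix are its left singular vectors, and truncation gives the optimal low-rank reconstruction in the Frobenius norm. The synthetic snapshots below are an assumption for demonstration and are unrelated to the cavity-flow data studied in the thesis.

    import numpy as np

    # POD by SVD: snapshots of a field are stored as the columns of X.
    nx, nt = 200, 80
    x = np.linspace(0.0, 1.0, nx)
    t = np.linspace(0.0, 1.0, nt)
    # Synthetic snapshot matrix: two separable structures plus small noise.
    X = (np.sin(2*np.pi*x)[:, None] * np.cos(4*np.pi*t)[None, :]
         + 0.3 * np.sin(6*np.pi*x)[:, None] * np.sin(2*np.pi*t)[None, :]
         + 1e-3 * np.random.default_rng(6).standard_normal((nx, nt)))

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = 2                                        # number of retained POD modes
    X_r = (U[:, :r] * s[:r]) @ Vt[:r, :]         # rank-r reconstruction
    energy = np.sum(s[:r]**2) / np.sum(s**2)     # captured "energy" fraction
    print(energy, np.linalg.norm(X - X_r) / np.linalg.norm(X))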
18

Méthodes itératives pour la résolution d'équations matricielles / Iterative methods for solving matrix equations

Sadek, El Mostafa 23 May 2015 (has links)
In this thesis, we focus on the study of iterative methods for solving large matrix equations such as the Lyapunov, Sylvester, Riccati and nonsymmetric algebraic Riccati equations. We look for the most efficient and fastest iterative methods for solving large matrix equations. We propose iterative methods based on projection onto block Krylov subspaces K_m(A, V) = Range{V, AV, ..., A^{m-1}V}, or block extended Krylov subspaces K^e_m(A, V) = Range{V, A^{-1}V, AV, A^{-2}V, A^2V, ..., A^{m-1}V, A^{-m+1}V}. These methods are generally more efficient and faster for large problems. We first treat the numerical solution of linear matrix equations: the Lyapunov, Sylvester and Stein matrix equations. We propose a new iterative method based on minimal residual (MR) and projection onto block extended Krylov subspaces K^e_m(A, V). The extended block Arnoldi algorithm yields a projected minimization problem of small size, which is then solved by direct or iterative methods. We also introduce a minimal residual method based on the global approach instead of the block approach, projecting onto the global extended Krylov subspace K^e_m(A, V) = Span{V, A^{-1}V, AV, A^{-2}V, A^2V, ..., A^{m-1}V, A^{-m+1}V}. Secondly, we focus on nonlinear matrix equations, especially the matrix Riccati equation in the continuous case and the nonsymmetric case applied to transportation problems. We use Newton's method and the MINRES algorithm to solve the projected minimization problem. Finally, we propose two new iterative methods for solving large nonsymmetric Riccati equations: the first is based on the extended block Arnoldi algorithm and the Galerkin orthogonality condition; the second is of Newton-Krylov type, based on Newton's method and the solution of a large Sylvester matrix equation by a block Krylov method. For all these methods, approximations are given in low-rank factored form, which allows us to save memory. We give numerical examples that show the effectiveness of the proposed methods for large problem sizes.
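As a hedged sketch of the projection framework described above, the code below approximates the solution of a large Lyapunov equation A X + X A^T + B B^T = 0 by Galerkin projection onto a block Krylov subspace, solving the small projected equation with SciPy. The basis construction, test matrix and subspace size are illustrative assumptions; the thesis's methods additionally use extended Krylov subspaces and minimal-residual conditions rather than this plain Galerkin variant.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def lyap_galerkin_krylov(A, B, m=25):
        """Approximate the solution of A X + X A^T + B B^T = 0 by Galerkin
        projection onto the block Krylov subspace Range{B, AB, ..., A^{m-1}B}.
        Returns a low-rank factor Z with X ~ Z Z^T."""
        blocks, W = [B], B
        for _ in range(m - 1):
            W = A @ W
            W = W / np.linalg.norm(W)            # rescaling leaves the subspace unchanged
            blocks.append(W)
        V, _ = np.linalg.qr(np.hstack(blocks))   # orthonormal basis of the subspace
        H = V.T @ A @ V                          # projected operator
        G = V.T @ B
        Y = solve_continuous_lyapunov(H, -G @ G.T)   # small projected Lyapunov equation
        Y = 0.5 * (Y + Y.T)                      # symmetrize against rounding
        w, Q = np.linalg.eigh(Y)
        Z = V @ (Q * np.sqrt(np.clip(w, 0.0, None)))  # low-rank factor of X
        return Z

    rng = np.random.default_rng(7)
    n, p = 400, 2
    # A stable symmetric tridiagonal test matrix (eigenvalues in (-3, -1)).
    A = -2.0 * np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
    B = rng.standard_normal((n, p))
    Z = lyap_galerkin_krylov(A, B, m=25)
    X = Z @ Z.T
    res = np.linalg.norm(A @ X + X @ A.T + B @ B.T) / np.linalg.norm(B @ B.T)
    print("relative residual:", res)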
19

New Algorithms for Local and Global Fiber Tractography in Diffusion-Weighted Magnetic Resonance Imaging

Schomburg, Helen 29 September 2017 (has links)
No description available.
