81

Využití řídké reprezentace signálu při snímání a rekonstrukci v nukleární magnetické rezonanci / Exploiting sparse signal representations in capturing and recovery of nuclear magnetic resonance data

Hrbáček, Radek January 2013
This thesis deals with the field of nuclear magnetic resonance, especially spectroscopy and spectroscopic imaging, and with sparse signal representation and low-rank approximation approaches. Spectroscopic imaging methods are becoming very popular in clinical practice; however, long measurement times and low resolution hinder their wider adoption. The goal of this thesis is to improve state-of-the-art methods using sparse signal representation and low-rank approximation. The compressed sensing technique is demonstrated on examples of speeding up magnetic resonance imaging and reducing the data volume of hyperspectral imaging. A new spectroscopic imaging scheme based on compressed sensing is then proposed. The thesis also addresses the in vivo spectrum quantitation problem by designing the MRSMP algorithm specifically for this purpose.
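As a concrete illustration of the compressed sensing principle invoked above, here is a minimal Python sketch (toy data and arbitrary parameters invented for the example; this is not the thesis's MRSMP algorithm): a sparse "spectrum" is recovered from far fewer random measurements than unknowns by iterative soft thresholding (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse spectrum: a few isolated peaks, a crude stand-in for an
# idealized NMR spectrum.
n, k, m = 256, 5, 80                    # ambient dimension, sparsity, measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

# Random (Gaussian) measurement operator with far fewer rows than unknowns.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = x - A.T @ (A @ x - y) / L       # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print("max reconstruction error:", np.max(np.abs(x - x_true)))
```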
82

Algoritmy doplňování chybějících dat v audiosignálech / Audio inpainting algorithms

Kolbábková, Anežka January 2014
This thesis deals with the restoration of missing data in audio signals, using algorithms based on sparse representations of the audio signal. It focuses on several algorithms that perform audio inpainting by means of sparse signal representations. The thesis also proposes an algorithm that exploits both a sparse representation of the signal and the low rank of the signal's spectrogram. Finally, it presents an implementation of this algorithm in Matlab and its evaluation.
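To make the sparse-representation approach to audio inpainting concrete, here is a minimal Python sketch (a toy on-grid signal and an invented gap; it is not the thesis's algorithm, which additionally exploits the low rank of the spectrogram): missing samples are filled in by alternating a Fourier-sparsity constraint with consistency on the known samples.

```python
import numpy as np

# Toy "audio": two on-grid sinusoids, so the signal is sparse in the DFT basis.
n = 1024
t = np.arange(n)
x_true = np.sin(2 * np.pi * 13 * t / n) + 0.5 * np.sin(2 * np.pi * 48 * t / n)

# A contiguous gap of dropped samples, the typical inpainting scenario.
mask = np.ones(n, dtype=bool)
mask[400:440] = False

# Iterative hard thresholding: enforce DFT sparsity, then data consistency.
K = 8                                   # DFT coefficients kept per iteration
x = np.where(mask, x_true, 0.0)
for _ in range(200):
    X = np.fft.fft(x)
    small = np.argsort(np.abs(X))[:-K]  # indices of all but the K largest
    X[small] = 0.0
    x = np.fft.ifft(X).real
    x[mask] = x_true[mask]              # re-impose the observed samples

print("max error inside the gap:", np.max(np.abs(x[~mask] - x_true[~mask])))
```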
83

Komprimované snímání v perfuzním zobrazování pomocí magnetické rezonance / Compressed sensing in magnetic resonance perfusion imaging.

Mangová, Marie January 2014
Magnetic resonance perfusion imaging is today a very promising method for medical diagnosis. This thesis deals with sparse representation of signals, low-rank matrix recovery and compressed sensing, which make it possible to overcome current physical limitations of magnetic resonance perfusion imaging. Several models for the reconstruction of measured perfusion data are introduced, along with the numerical methods behind their software implementation, which forms an important part of the thesis. The proposed models are verified on simulated and real perfusion data from magnetic resonance.
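The low-rank plus sparse decomposition underlying such perfusion models can be sketched in a few lines of Python (a synthetic space-by-time matrix and hand-picked thresholds; not the thesis's reconstruction models, which also involve the undersampled acquisition operator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Space-by-time (Casorati) matrix: a low-rank background plus a sparse
# dynamic component, a crude stand-in for contrast enhancement.
nx, nt, r = 64, 32, 2
L_true = rng.normal(size=(nx, r)) @ rng.normal(size=(r, nt))
S_true = np.zeros((nx, nt))
S_true[rng.choice(nx, 6, replace=False), nt // 2:] = 3.0
M = L_true + S_true

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Alternating proximal steps for min 0.5||M-L-S||_F^2 + tau||L||_* + lam||S||_1.
L, S = np.zeros_like(M), np.zeros_like(M)
tau, lam = 1.0, 0.3
for _ in range(100):
    L = svt(M - S, tau)                                  # low-rank update
    S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)  # sparse update

print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-6),
      " relative fit error =", np.linalg.norm(L + S - M) / np.linalg.norm(M))
```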
84

Algorithmes d’estimation et de détection en contexte hétérogène rang faible / Estimation and Detection Algorithms for Low Rank Heterogeneous Context

Breloy, Arnaud 23 November 2015
One purpose of array processing is the detection and location of targets in a noisy environment. In most cases (such as RADAR or active SONAR), the statistical properties of the noise, especially its covariance matrix, have to be estimated using i.i.d. secondary samples. Within this context, several hypotheses are usually made: Gaussian distribution, training data containing only noise, and perfect hardware. Nevertheless, it is well known that a Gaussian distribution does not provide a good empirical fit to RADAR clutter data. That is why noise is now modeled by elliptical processes, mainly Spherically Invariant Random Vectors (SIRV). In this new context, the use of the Sample Covariance Matrix (SCM), the classical estimate of the covariance matrix, leads to a loss of performance in detectors and estimators. More appropriate estimators have been developed, such as the Fixed Point Estimator and M-estimators. If the noise is modeled as low-rank clutter plus white Gaussian noise, the total covariance matrix is structured as low rank plus identity. This information can be used in the estimation process to reduce the number of samples required to reach acceptable performance. Moreover, it is possible to estimate the basis vectors of the clutter subspace rather than the total covariance matrix, which requires less data and is more robust to outliers. The orthogonal projector onto the clutter subspace is usually computed from an estimate of the covariance matrix. Nevertheless, the state of the art does not provide estimators that are both robust to heterogeneous distributions and aware of the low-rank structure of the data. This thesis therefore develops new covariance and subspace estimators directly fitted to the considered context. The contributions follow three axes. First, we present a precise statistical model, low-rank heterogeneous sources embedded in white Gaussian noise, which is strongly justified for radar applications but has received little attention for covariance matrix estimation; we derive the maximum likelihood estimator of the covariance matrix for this context and, since it has no closed form, develop several algorithms to reach it efficiently. Second, we develop direct estimators of the projector onto the clutter subspace that do not require an intermediate covariance matrix estimate. Third, we study the performance of the proposed estimators and of the state of the art on a Space Time Adaptive Processing (STAP) application for airborne radar, through simulations and real data.
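To illustrate the "low rank plus identity" covariance structure at the heart of this work, here is a minimal Python sketch (synthetic secondary data with invented sizes; not the robust estimators developed in the thesis): the clutter subspace and a structured covariance estimate are read off the leading eigenpairs of the sample covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Secondary data drawn from a "low rank plus identity" covariance:
# Sigma = C + sigma2 * I, with rank(C) = r (clutter) plus white noise.
m, r, K, sigma2 = 16, 3, 40, 1.0
A = (rng.normal(size=(m, r)) + 1j * rng.normal(size=(m, r))) / np.sqrt(2)
C = 10.0 * (A @ A.conj().T)
Sigma = C + sigma2 * np.eye(m)
Lc = np.linalg.cholesky(Sigma)
Z = Lc @ (rng.normal(size=(m, K)) + 1j * rng.normal(size=(m, K))) / np.sqrt(2)

# Sample covariance matrix and the clutter subspace it implies.
SCM = Z @ Z.conj().T / K
w, V = np.linalg.eigh(SCM)                 # eigenvalues in ascending order
Uc = V[:, -r:]                             # estimated clutter subspace basis
P_perp = np.eye(m) - Uc @ Uc.conj().T      # projector orthogonal to the clutter

# Structured estimate: keep the r leading eigenpairs, average the rest
# into a single white-noise floor.
noise_floor = w[:-r].mean()
Sigma_hat = (Uc * w[-r:]) @ Uc.conj().T + noise_floor * P_perp

true_Uc = np.linalg.eigh(Sigma)[1][:, -r:]
overlap = np.linalg.svd(true_Uc.conj().T @ Uc, compute_uv=False)
print("smallest principal cosine with the true subspace:", overlap[-1])
```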
85

Advances on Dimension Reduction for Multivariate Linear Regression

Guo, Wenxing January 2020
Multivariate linear regression methods are widely used statistical tools in data analysis, developed for settings in which several response variables are studied simultaneously and the aim is to study the relationship between the predictor variables and the response variables through the regression coefficient matrix. The rapid improvement of information technology has brought a wealth of large-scale data, but also great challenges in data processing. When dealing with high-dimensional data, classical least squares estimation is not applicable in multivariate linear regression analysis. In recent years, several approaches have been developed to deal with high-dimensional data problems, among which dimension reduction is one of the main ones. In the literature, random projection methods have been used to reduce dimension in large datasets. In Chapter 2, a new random projection method, with low-rank matrix approximation, is proposed to reduce the dimension of the parameter space in the high-dimensional multivariate linear regression model. Some statistical properties of the proposed method are studied, and explicit expressions are derived for the accuracy loss of the method with Gaussian random projection and orthogonal random projection. These expressions are precise rather than bounds up to constants. In multivariate regression analysis, reduced rank regression is another dimension reduction method, which has become an important tool due to its simplicity, computational efficiency and good predictive performance. In practical situations, however, the performance of the reduced rank estimator is not satisfactory when the predictor variables are highly correlated or the signal-to-noise ratio is small. To overcome this problem, in Chapter 3 we incorporate matrix projections into the reduced rank regression method, and develop reduced rank regression estimators based on random projection and orthogonal projection in high-dimensional multivariate linear regression models. We also propose a consistent estimator of the rank of the coefficient matrix and derive prediction performance bounds for the proposed estimators based on mean squared errors. The envelope technique has also become popular in recent years for reducing estimative and predictive variation in multivariate regression; it comprises a class of methods that improve efficiency without changing the traditional objectives. Variable selection is the process of selecting a subset of relevant feature variables for use in model construction; its purpose is to avoid the curse of dimensionality, simplify models to make them easier to interpret, shorten training time and reduce overfitting. In Chapter 4, we combine envelope models and a group variable selection method to propose an envelope-based sparse reduced rank regression estimator in high-dimensional multivariate linear regression models, and establish its consistency, asymptotic normality and oracle property. Tensor data are in frequent use today in a variety of fields in science and engineering, and processing tensor data is a practical but challenging problem; their prevalence has recently given rise to several envelope tensor versions. In Chapter 5, we incorporate the envelope technique into tensor regression analysis and propose a partial tensor envelope model, which leads to a parsimonious version of tensor response regression when some predictors are of special interest; consistency and asymptotic normality of the coefficient estimators are then proved. The proposed method achieves significant gains in efficiency compared to the standard tensor response regression model in terms of the estimation of the coefficients for the selected predictors. Finally, in Chapter 6, we summarize the work carried out in the thesis and suggest some problems of further research interest. / Dissertation / Doctor of Philosophy (PhD)
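A minimal Python sketch of the random projection idea of Chapter 2, combined with a rank truncation in the spirit of reduced rank regression (toy dimensions and a Gaussian projection invented for the example; not the estimators analyzed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(4)

# High-dimensional multivariate regression Y = X B + E with a low-rank B.
n, p, q, r = 100, 500, 10, 3
X = rng.normal(size=(n, p))
B = rng.normal(size=(p, r)) @ rng.normal(size=(r, q)) / np.sqrt(p)
Y = X @ B + 0.1 * rng.normal(size=(n, q))

# Gaussian random projection: restrict B = R @ Theta with a random p-by-k R,
# so only a k-dimensional least squares problem has to be solved.
k = 50
R = rng.normal(size=(p, k)) / np.sqrt(k)
Theta, *_ = np.linalg.lstsq(X @ R, Y, rcond=None)
B_rp = R @ Theta

# Reduced-rank flavour: truncate the fitted values to rank r via the SVD.
U, s, Vt = np.linalg.svd(X @ B_rp, full_matrices=False)
fit_rrr = (U[:, :r] * s[:r]) @ Vt[:r]

err = np.linalg.norm(X @ B_rp - X @ B) / np.linalg.norm(X @ B)
print("relative in-sample prediction error:", err)
```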
86

High-Performance Scientific Applications Using Mixed Precision and Low-Rank Approximation Powered by Task-based Runtime Systems

Alomairy, Rabab M. 20 July 2022
To leverage the extreme parallelism of emerging architectures, so that scientific applications can fulfill their high-fidelity and multi-physics potential while sustaining high efficiency relative to the limiting resource, numerical algorithms must be redesigned. Algorithmic redesign can shift the limiting resource, for example from memory or communication to arithmetic capacity, and its benefit expands greatly when a tunable tradeoff between accuracy and resources is introduced. Scientific applications from diverse sources rely on dense matrix operations, which arise in Schur complements, integral equations, covariances in spatial statistics, ridge regression, radial basis functions from unstructured meshes, and kernel matrices from machine learning, among others. This thesis demonstrates how to extend the problem sizes that may be treated and how to reduce their execution time. Two “universes” of algorithmic innovations have emerged to improve computations by orders of magnitude in capacity and runtime; each introduces a hierarchy, of rank or of precision. Tile Low-Rank (TLR) approximation replaces blocks of the dense operator with low-rank ones. Mixed precision approximation, increasingly well supported by contemporary hardware, replaces high-precision blocks with low-precision ones. Herein, we design new high-performance direct solvers based on the synergism of TLR and mixed precision. Since adapting to data sparsity leads to heterogeneous workloads, we rely on task-based runtime systems to orchestrate the scheduling of fine-grained kernels onto computational resources. We first demonstrate how TLR accelerates acoustic scattering and mesh deformation simulations; our solvers outperform state-of-the-art libraries by up to an order of magnitude. We then demonstrate the impact of enabling mixed precision in a bioinformatics context, where it delivers up to a threefold speedup. To facilitate the adoption of task-based runtime systems, we introduce the AL4SAN library, which provides a common API for the expression and queueing of tasks across multiple dynamic runtime systems. This library handles a variety of workloads at low overhead while increasing user productivity. AL4SAN enables interoperability by switching runtimes at runtime, which yields a twofold speedup on a task-based generalized symmetric eigenvalue solver.
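A minimal Python sketch of the TLR-plus-mixed-precision idea (an invented smooth kernel, tile size, and tolerance; real solvers apply this inside a task-based factorization rather than for storage alone):

```python
import numpy as np

def tlr_compress(A, tile=64, tol=1e-4):
    """Tile low-rank compression with factors downcast to single precision.

    Each tile gets a truncated SVD; a tile is stored as low-rank factors
    only when that actually saves memory, otherwise it is kept dense.
    """
    n = A.shape[0]
    tiles = {}
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            T = A[i:i + tile, j:j + tile]
            U, s, Vt = np.linalg.svd(T, full_matrices=False)
            k = int(np.sum(s > tol * s[0]))          # numerical rank at tol
            if k * (T.shape[0] + T.shape[1]) < T.size:
                tiles[i, j] = ((U[:, :k] * s[:k]).astype(np.float32),
                               Vt[:k].astype(np.float32))
            else:
                tiles[i, j] = T                      # keep dense
    return tiles

# A smooth kernel matrix, the typical TLR-friendly dense operator.
n = 256
x = np.linspace(0.0, 1.0, n)
A = 1.0 / (1.0 + 50.0 * np.abs(x[:, None] - x[None, :]))

tiles = tlr_compress(A)
mem = sum(t.nbytes if isinstance(t, np.ndarray) else t[0].nbytes + t[1].nbytes
          for t in tiles.values())
print(f"storage: {mem / A.nbytes:.1%} of the dense float64 matrix")
```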
87

ONLINE STATISTICAL INFERENCE FOR LOW-RANK REINFORCEMENT LEARNING

Qiyu Han (18284758) 01 April 2024 (has links)
We propose a fully online procedure to conduct statistical inference with adaptively collected data. The low-rank structure of the model parameter and the adaptive nature of the data collection process make this task challenging: standard low-rank estimators are biased and cannot be obtained in a sequential manner, while existing inference approaches in sequential decision-making algorithms fail to account for the low-rankness and are also biased. To tackle these challenges, we first develop an online low-rank estimation process employing Stochastic Gradient Descent with noisy observations. Subsequently, to facilitate statistical inference using the online low-rank estimator, we introduce a novel online debiasing technique designed to address both sources of bias simultaneously; this method yields an unbiased estimator suitable for parameter inference. Finally, we develop an inferential framework capable of establishing an online estimator for performing inference on the optimal policy value. In theory, we establish the asymptotic normality of the proposed online debiased estimators and prove the validity of the constructed confidence intervals for both inference tasks. Our inference results are built upon a newly developed low-rank stochastic gradient descent estimator and its non-asymptotic convergence result, which is also of independent interest.
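The online low-rank estimation stage can be sketched as follows in Python (entrywise observations of a synthetic matrix and a hand-tuned step size; the debiasing and inference machinery of the thesis is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)

# Ground truth: a low-rank parameter matrix, observed one noisy entry at a
# time, mimicking adaptively collected data in sequential decision-making.
d1, d2, r = 30, 30, 2
M = rng.normal(size=(d1, r)) @ rng.normal(size=(r, d2))

# Online estimation only: factor M ~ U V^T and take one SGD step per
# incoming observation.
U = 0.1 * rng.normal(size=(d1, r))
V = 0.1 * rng.normal(size=(d2, r))
eta = 0.05
for _ in range(100_000):
    i, j = rng.integers(d1), rng.integers(d2)
    y = M[i, j] + 0.1 * rng.normal()        # the incoming noisy observation
    err = U[i] @ V[j] - y
    # simultaneous update of both factor rows (RHS uses the old values)
    U[i], V[j] = U[i] - eta * err * V[j], V[j] - eta * err * U[i]

print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```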
88

Approximations de rang faible et modèles d'ordre réduit appliqués à quelques problèmes de la mécanique des fluides / Low rank approximation techniques and reduced order modeling applied to some fluid dynamics problems

Lestandi, Lucas 16 October 2018
Numerical simulation has experienced tremendous improvements in the last decades, driven by the massive growth of computing power. Exascale computing has been achieved this year and will allow solving ever more complex problems. But such large systems produce colossal amounts of data, which leads to difficulties of its own. Moreover, many engineering problems, such as multiphysics or optimisation and control, require far more power than any computer architecture could achieve within the current scientific computing paradigm. In this thesis, we propose to shift the paradigm in order to break the curse of dimensionality, by introducing decompositions and building reduced order models (ROM) for complex fluid flows. This manuscript is organized into two parts. The first one proposes an extended review of data reduction techniques and intends to bridge the applied mathematics community and the computational mechanics one. The foundational case of bivariate separation is studied first, including discussions on the equivalence of proper orthogonal decomposition (POD, continuous framework) and singular value decomposition (SVD, discrete matrices). Then a wide review of tensor formats and their approximation is proposed. Such work already exists in the literature, but either spread over separate papers or within a purely applied mathematics framework. Here, we offer the data enthusiast scientist a comparison of Canonical, Tucker, Hierarchical and Tensor train formats, including their approximation algorithms. Their relative benefits are studied both theoretically and numerically, thanks to the python library pydecomp that was developed during this thesis. A careful analysis of the link between continuous and discrete methods is performed. Finally, we conclude that for most applications ST-HOSVD is best when the number of dimensions $d$ is lower than four, and TT-SVD (or its POD equivalent) when $d$ grows larger. The second part is centered on a complex fluid dynamics flow, in particular the singular lid-driven cavity at high Reynolds number. This flow exhibits a series of Hopf bifurcations which are known to be hard to capture accurately, which is why a detailed analysis was performed both with classical tools and POD. Once this flow has been characterized, time-scaling, a new "physics based" interpolation ROM, is presented on internal and external flows. This method gives encouraging results, while excluding recent advanced developments in the area such as EIM or Grassmann manifold interpolation.
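As a minimal illustration of the POD/SVD equivalence used throughout this work, the following Python sketch extracts POD modes from a synthetic snapshot matrix (an invented two-mode "flow" plus noise; this is not the pydecomp library itself):

```python
import numpy as np

rng = np.random.default_rng(7)

# Snapshot matrix: each column is the discrete state of a synthetic "flow"
# at one time instant; POD modes are the left singular vectors of the
# mean-subtracted snapshots.
nx, nt = 2000, 200
x = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.linspace(0.0, 10.0, nt)
snapshots = (np.outer(np.sin(x), np.cos(2.0 * t))
             + 0.3 * np.outer(np.sin(3.0 * x), np.sin(5.0 * t))
             + 1e-3 * rng.normal(size=(nx, nt)))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

# The singular value decay dictates how many modes the ROM needs.
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1
recon = mean + (U[:, :k] * s[:k]) @ Vt[:k]    # rank-k reconstruction

print(f"{k} POD modes retain 99.9% of the energy;",
      "reconstruction error:",
      np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots))
```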
89

Widening the applicability of permutation inference

Winkler, Anderson M. January 2016
This thesis is divided into three main parts. In the first, we discuss that, although permutation tests can provide exact control of false positives under the reasonable assumption of exchangeability, there are common examples in which global exchangeability does not hold, such as experiments with repeated measurements or tests in which subjects are related to each other. To allow permutation inference in such cases, we propose an extension of the well-known concept of exchangeability blocks, allowing these to be nested in a hierarchical, multi-level definition. This definition allows permutations that leave the original joint distribution unaltered, thus preserving exchangeability. The null hypothesis is tested using only a subset of all otherwise possible permutations. We do not need to explicitly model the degree of dependence between observations; rather, the use of such a permutation scheme leaves any dependence intact. The strategy is compatible with heteroscedasticity and can be used with permutations, sign flippings, or both combined. In the second part, we exploit properties of test statistics to obtain accelerations irrespective of generic software or hardware improvements. We compare six different approaches using synthetic and real data, assessing the methods in terms of their error rates, power, agreement with a reference result, and the risk of taking a different decision regarding the rejection of the null hypotheses (known as the resampling risk). In the third part, we investigate and compare different methods for the assessment of cortical volume and area from magnetic resonance images using surface-based methods. Using data from young adults born with very low birth weight and coetaneous controls, we show that, instead of volume, the permutation-based non-parametric combination (NPC) of thickness and area is a more sensitive option for studying joint effects on these two quantities, giving equal weight to variation in both and allowing a better characterisation of biological processes that can affect brain morphology.
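A minimal Python sketch of permutation inference with exchangeability blocks (synthetic paired differences with block-wise sign flipping; block sizes and effect size are invented for the example; the thesis's hierarchical, multi-level scheme is far more general):

```python
import numpy as np

rng = np.random.default_rng(8)

# Paired differences from related subjects grouped into exchangeability
# blocks (e.g. families). Sign flipping whole blocks keeps the joint
# distribution within each block intact.
n_blocks, per_block = 20, 3
block = np.repeat(np.arange(n_blocks), per_block)
data = 0.4 + rng.normal(size=n_blocks * per_block)      # true effect = 0.4

def t_stat(d):
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

t_obs = t_stat(data)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    flips = rng.choice([-1.0, 1.0], size=n_blocks)      # one sign per block
    count += t_stat(data * flips[block]) >= t_obs
p_value = (count + 1) / (n_perm + 1)                    # add-one correction
print("one-sided p-value:", p_value)
```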
90

Solveurs multifrontaux exploitant des blocs de rang faible : complexité, performance et parallélisme / Block low-rank multifrontal solvers : complexity, performance, and scalability

Mary, Théo 24 November 2017
We investigate the use of low-rank approximations to reduce the cost of sparse direct multifrontal solvers. Among the different matrix representations that have been proposed to exploit the low-rank property within multifrontal solvers, we focus on the Block Low-Rank (BLR) format, whose simplicity and flexibility make it easy to use in a general purpose, algebraic multifrontal solver. We present different variants of the BLR factorization, depending on how the low-rank updates are performed and on the constraints to handle numerical pivoting. We first investigate the theoretical complexity of the BLR format which, unlike other formats such as hierarchical ones, was previously unknown. We prove that the theoretical complexity of the BLR multifrontal factorization is asymptotically lower than that of the full-rank solver. We then show how the BLR variants can further reduce that complexity. We provide an experimental study with numerical results to support our complexity bounds. After proving that BLR multifrontal solvers can achieve a low complexity, we turn to the problem of translating that low complexity into actual performance gains on modern architectures. We first present a multithreaded BLR factorization, and analyze its performance in shared-memory multicore environments on a large set of real-life problems. We put forward several algorithmic properties of the BLR variants necessary to efficiently exploit multicore systems by improving the arithmetic intensity and the scalability of the BLR factorization. We then move on to the distributed-memory BLR factorization, for which additional challenges are identified and addressed. The algorithms presented throughout this thesis have been implemented within the MUMPS solver. We illustrate the use of our approach in three industrial applications coming from geosciences and structural mechanics. We also compare our solver with the STRUMPACK package, based on Hierarchically Semi-Separable approximations. We conclude this thesis by reporting results on a very large problem (130 million unknowns) which illustrates the future challenges posed by BLR multifrontal solvers at scale.
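To illustrate why the BLR format pays off, here is a minimal Python sketch (an invented smooth SPD kernel; a real BLR multifrontal solver compresses and factorizes tile by tile rather than calling a dense solve): the off-diagonal block is replaced by a low-rank truncation with little effect on the solution.

```python
import numpy as np

rng = np.random.default_rng(9)

# A dense SPD matrix with smooth off-diagonal coupling, viewed as a 2x2
# block matrix: the off-diagonal block is numerically low-rank, which is
# the property a BLR factorization exploits inside each front.
n = 200
x = np.linspace(0.0, 2.0, 2 * n)
A = 5.0 * np.eye(2 * n) + 1.0 / (1.0 + 10.0 * np.abs(x[:, None] - x[None, :]))

# Replace the off-diagonal blocks by their rank-k truncation.
A12 = A[:n, n:]
U, s, Vt = np.linalg.svd(A12)
k = int(np.sum(s > 1e-8 * s[0]))          # numerical rank of the coupling
A12_lr = (U[:, :k] * s[:k]) @ Vt[:k]
A_blr = A.copy()
A_blr[:n, n:] = A12_lr
A_blr[n:, :n] = A12_lr.T

b = rng.normal(size=2 * n)
x_full = np.linalg.solve(A, b)
x_blr = np.linalg.solve(A_blr, b)
print(f"off-diagonal rank {k} out of {n};",
      "relative solution error:",
      np.linalg.norm(x_blr - x_full) / np.linalg.norm(x_full))
```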
