41

Teoria, métodos e aplicações de otimização multiobjetivo / Theory, methods and applications of multiobjective optimization

Sampaio, Phillipe Rodrigues 24 March 2011 (has links)
Problems with multiple objectives are very frequent in areas such as Optimization, Economics, Finance, Transportation, Engineering and many others. Since the objectives are usually conflicting, appropriate techniques are needed to obtain good solutions. The area that deals with problems of this type is called Multiobjective Optimization. The aim of this work is to study the problems of this area and some of the methods available to solve them. Firstly, some basic concepts related to the solution set are defined, for instance efficiency, in order to understand what the best solution for this kind of problem would be. Secondly, we present some first-order optimality conditions, including the Fritz John ones for Multiobjective Optimization. We also discuss regularity and total regularity conditions, which play the same role in Nonlinear Multiobjective Optimization as constraint qualifications do in Nonlinear Programming, ensuring the strict positivity of the Lagrange multipliers associated with the objective functions. Afterwards, some of the existing methods for solving Multiobjective Optimization problems are described and compared with each other. Finally, the theory and methods of Multiobjective Optimization are applied to the fields of Compressed Sensing and Portfolio Optimization. We then show computational tests performed with some of the discussed methods on Portfolio Optimization problems and present an analysis of the results.
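
As a concrete illustration of the efficiency concept this abstract refers to, the sketch below checks Pareto dominance between objective vectors and filters a finite candidate set down to its efficient (non-dominated) points. It is a minimal sketch of the standard textbook definition under a minimization convention, not code from the thesis; the function names and the toy data are illustrative assumptions.

import numpy as np

def dominates(fa, fb):
    # True if objective vector fa Pareto-dominates fb (minimization):
    # fa is no worse in every objective and strictly better in at least one.
    fa, fb = np.asarray(fa), np.asarray(fb)
    return np.all(fa <= fb) and np.any(fa < fb)

def efficient_points(F):
    # Return the non-dominated (Pareto-efficient) rows of F, where each
    # row is the objective vector of one candidate solution.
    F = np.asarray(F)
    keep = [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]
    return F[keep]

# Toy bi-objective example, e.g. (risk, -return) pairs of candidate portfolios:
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(efficient_points(F))   # [3., 4.] is dropped: dominated by [2., 3.]
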
42

Física estatística de compressed sensing online / Statistical Physics of Online Compressed Sensing

Rossi, Paulo Victor Camargo 02 March 2018 (has links)
In this work, Compressed Sensing is introduced from a Statistical Physics point of view. Following a succinct introduction in which the basic concepts of the framework are presented, including necessary measurement conditions and basic signal reconstruction methods, the typical performance of the Bayesian reconstruction scheme is analyzed through a replica calculation presented in pedagogical detail. Thereafter, the main original contribution of this work is introduced: the Bayesian Online Compressed Sensing algorithm uses a mean-field approximation to simplify calculations and reduce memory and computation requirements, while maintaining the asymptotic reconstruction accuracy of the offline scheme in the presence of additive noise. The last part of this work presents two extensions of the online algorithm that allow for optimized signal reconstruction in the more realistic scenario where perfect knowledge of the generating distribution is unavailable.
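
To make the online (streaming) setting concrete, here is a minimal sketch of a generic sequential Bayesian update for a linear measurement model, processing one measurement at a time with a Gaussian posterior. This is a plain Kalman-style update shown only to illustrate the idea of online reconstruction; it is not the thesis's mean-field algorithm, which additionally exploits a sparse signal prior. All parameter values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, T, sigma2 = 50, 200, 0.01           # dimension, number of measurements, noise variance

x_true = rng.standard_normal(n)        # unknown signal (dense here, for simplicity)
mean = np.zeros(n)                     # posterior mean, starting from prior N(0, I)
cov = np.eye(n)                        # posterior covariance

for _ in range(T):
    a = rng.standard_normal(n) / np.sqrt(n)       # one random measurement vector
    y = a @ x_true + np.sqrt(sigma2) * rng.standard_normal()
    # Rank-one Gaussian posterior update (Kalman gain for a scalar observation)
    s = cov @ a
    k = s / (a @ s + sigma2)
    mean = mean + k * (y - a @ mean)
    cov = cov - np.outer(k, s)

print("relative error:", np.linalg.norm(mean - x_true) / np.linalg.norm(x_true))
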
43

One-Bit Compressive Sensing with Partial Support Information

North, Phillip 01 January 2015 (has links)
This work develops novel algorithms for incorporating prior support information into the field of One-Bit Compressed Sensing. Traditionally, Compressed Sensing is used for acquiring high-dimensional signals from few linear measurements. In applications, it is often the case that we have some knowledge of the structure of our signals beforehand, and thus we would like to leverage it to attain more accurate and efficient recovery. Additionally, the Compressive Sensing framework remains relevant even when the available measurements are subject to extreme quantization. Indeed, the field of One-Bit Compressive Sensing aims to recover a signal from measurements reduced to only their sign bit. This work explores avenues for incorporating partial support information into existing One-Bit Compressive Sensing algorithms. We provide a rich background on the field of compressed sensing, and the one-bit framework in particular, while also developing and testing new algorithms for this setting. Experimental results demonstrate that the newly proposed methods yield improved signal recovery even for varying levels of accuracy in the prior information. This work is thus the first to provide recovery mechanisms that efficiently use prior signal information in the one-bit reconstruction setting.
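
To illustrate the one-bit measurement model, the sketch below generates sign-bit measurements y = sign(Ax) and estimates the signal direction with a simple baseline: correlate the measurement matrix with the signs, then hard-threshold to the assumed sparsity level. This is a generic textbook estimator with assumed dimensions, not one of the algorithms proposed in this work.

import numpy as np

rng = np.random.default_rng(1)
n, m, s = 200, 1000, 5                 # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                 # sign-bit measurements lose the scale,
                                       # so only the direction is recoverable
A = rng.standard_normal((m, n))
y = np.sign(A @ x)                     # keep only the sign of each measurement

z = A.T @ y / m                        # linear (correlation) estimate
support = np.argsort(np.abs(z))[-s:]   # keep the s largest entries
x_hat = np.zeros(n)
x_hat[support] = z[support]
x_hat /= np.linalg.norm(x_hat)

print("correlation with truth:", x_hat @ x)
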
46

L'échantillonnage compressif en IRM : conception optimisée de trajectoires d’échantillonnage pour accélérer l’IRM / Compressed Sensing in MRI : optimization-based design of k-space filling curves for accelerated MRI

Lazarus, Carole 27 September 2018 (has links)
Magnetic resonance imaging (MRI) is one of the most powerful and safest imaging modalities for examining the human body. High-resolution MRI is expected to aid in the understanding and diagnosis of many neurodegenerative pathologies involving submillimetric lesions or morphological alterations, such as Alzheimer's disease and multiple sclerosis. Although high-magnetic-field systems can deliver a signal-to-noise ratio (SNR) sufficient to increase spatial resolution, long scan times and motion sensitivity continue to hinder the use of high-resolution MRI. Despite the development of corrections for bulk and physiological motion, lengthy acquisition times remain a major obstacle to high-resolution acquisition, especially in clinical applications. In the last decade, the newly developed theory of compressed sensing (CS) has offered a promising solution for reducing MRI scan time. After explaining the theory of compressed sensing, this PhD project proposes an empirical and quantitative analysis of the maximum undersampling factor achievable with CS for T₂*-weighted imaging. Furthermore, the application of CS to MRI commonly relies on simple sampling patterns such as straight lines, spirals, or slight variations of these elementary shapes, which do not take full advantage of the degrees of freedom offered by the hardware and cannot easily be adapted to an arbitrary sampling distribution. In this PhD thesis, I introduce a method called SPARKLING (Spreading Projection Algorithm for Rapid K-space sampLING), which overcomes these limitations by taking a radically new approach to the design of k-space sampling. It is a versatile method, inspired by stippling techniques, that automatically generates, through an optimization algorithm, non-Cartesian sampling patterns compatible with MR hardware constraints on maximum gradient amplitude and slew rate. These sampling curves are designed to comply with key criteria for optimal sampling: a controlled distribution of samples and locally uniform k-space coverage. Before engaging in experiments, we verified that our gradient system was capable of executing these complex gradient waveforms: we implemented a local phase measurement method and observed very good agreement between the prescribed and measured k-space trajectories. Combining sampling efficiency with compressed sensing and parallel imaging, the SPARKLING patterns allowed up to 20-fold reductions in scan time for T₂*-weighted imaging compared to fully sampled Cartesian acquisitions, without deterioration of image quality, as demonstrated by our experimental results at 7 Tesla on in vivo human brains. In comparison to standard non-Cartesian sampling strategies (spiral and radial), the proposed technique also yielded superior image quality. Finally, the approach was extended to 3D imaging and applied at 3 Tesla, for which preliminary results on ex vivo phantoms at 0.8 mm isotropic resolution suggest the possibility of reaching very high acceleration factors, up to 60, for T₂*-weighted and susceptibility-weighted imaging.
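
The hardware constraints mentioned in the abstract are straightforward to state numerically: a k-space trajectory k(t), in units of 1/m, is feasible only if the gradient waveform G(t) = (1/γ) dk/dt stays below the maximum gradient amplitude and its time derivative stays below the maximum slew rate. Below is a minimal sketch of such a feasibility check; the hardware limits and the spiral example are illustrative assumptions, not the values used in the thesis.

import numpy as np

GAMMA = 42.576e6          # gyromagnetic ratio of 1H, Hz/T
G_MAX = 40e-3             # assumed maximum gradient amplitude, T/m
S_MAX = 200.0             # assumed maximum slew rate, T/m/s

def trajectory_is_feasible(k, dt):
    # Check a k-space trajectory k (shape [T, dims], in 1/m, sampled every
    # dt seconds) against gradient amplitude and slew-rate limits.
    g = np.diff(k, axis=0) / (GAMMA * dt)        # gradient waveform, T/m
    slew = np.diff(g, axis=0) / dt               # slew rate, T/m/s
    g_ok = np.all(np.linalg.norm(g, axis=1) <= G_MAX)
    s_ok = np.all(np.linalg.norm(slew, axis=1) <= S_MAX)
    return g_ok and s_ok

# Example: one 2D spiral arm sampled every 10 microseconds
dt = 10e-6
t = np.arange(2048) * dt
r = 400.0 * t / t[-1]                            # radius grows to 400 1/m
theta = 2 * np.pi * 10 * t / t[-1]               # 10 turns
k = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
print(trajectory_is_feasible(k, dt))             # True for these limits
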
47

Compressed Sensing: Algorithms and Applications

Sundman, Dennis January 2012 (has links)
The theoretical problem of finding the solution to an underdetermined set of linear equations has for several years attracted considerable attention in the literature. This problem has many practical applications. One example of such an application is compressed sensing (CS), which has the potential to revolutionize how we acquire and process signals. In a general CS setup, few measurement coefficients are available and the task is to reconstruct a larger, sparse signal. In this thesis we focus on algorithm design and selected applications for CS. The contributions of the thesis appear in the following order: (1) We study an application where CS can be used to relax the necessity of fast sampling for power spectral density estimation problems. In this application we show by experimental evaluation that we can gain an order of magnitude in reduced sampling frequency. (2) In order to improve CS recovery performance, we extend simple well-known recovery algorithms by introducing a look-ahead concept. From simulations it is observed that the additional complexity results in significant improvements in recovery performance. (3) For sensor networks, we extend the current framework of CS by introducing a new general network model which is suitable for modeling several CS sensor nodes with correlated measurements. Using this signal model, we then develop several centralized and distributed CS recovery algorithms. We find that both the centralized and distributed algorithms achieve a significant gain in recovery performance compared to the standard, disconnected algorithms. For the distributed case, we also see that as the network connectivity increases, the performance rapidly converges to that of the centralized solution.
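
The "simple well-known recovery algorithms" extended in contribution (2) are greedy pursuit methods. As background, here is a minimal sketch of one such baseline, orthogonal matching pursuit (OMP); the dimensions and stopping rule are illustrative assumptions, and the look-ahead extension itself is not shown.

import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column of A most
    # correlated with the residual, then re-fit y on the chosen columns.
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(omp(A, A @ x, k) - x))          # small reconstruction error
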
48

Algorithmes itératifs à faible complexité pour le codage de canal et le compressed sensing / Low Complexity Iterative Algorithms for Channel Coding and Compressed Sensing

Danjean, Ludovic 29 November 2012 (has links)
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are used for decoding low-density parity-check (LDPC) codes, a popular class of error-correcting codes that are now widely used for their exceptional error-rate performance. In the more recent field known as compressed sensing, iterative algorithms are used as a reconstruction method to recover a sparse signal from a set of linear measurements. This thesis primarily deals with the development of low-complexity iterative algorithms for the two aforementioned fields, namely the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm, called the Interval-Passing Algorithm (IPA), for compressed sensing.

In the first part of this thesis, we address decoding algorithms for LDPC codes. It is well known that, in spite of their exceptional performance, LDPC codes suffer from the error-floor phenomenon, where traditional iterative decoders based on belief propagation (BP) fail for certain low-noise configurations. Recently, a novel class of decoders called finite alphabet iterative decoders (FAIDs) was proposed that is capable of surpassing BP in the error floor at much lower complexity. In this work, we focus on the problem of selecting particularly good FAIDs for column-weight-three codes over the binary symmetric channel (BSC). Traditional methods for decoder selection use asymptotic techniques such as density evolution, which do not guarantee good performance on finite-length codes, especially in the error-floor region. Instead, we propose a selection methodology that relies on the knowledge of potentially harmful topologies that could be present in a code, using the concept of noisy trapping sets. Numerical results show that FAIDs selected with our methodology outperform BP in the error floor on several codes.

In the second part of this thesis, we address iterative reconstruction algorithms for compressed sensing. Iterative algorithms have been proposed for compressed sensing in order to tackle the complexity of the linear programming (LP) reconstruction method. In this work, we modify and analyze a low-complexity reconstruction algorithm called the IPA, which uses sparse matrices as measurement matrices. Similarly to what has been done for decoding algorithms in coding theory, we analyze the failures of the IPA and link them to the stopping sets of the binary representation of the sparse measurement matrices used. The performance of the IPA makes it a good trade-off between the complex L1-minimization reconstruction and the very simple verification decoding.
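
The stopping sets mentioned above have a simple combinatorial definition: a set S of columns of a binary matrix H is a stopping set if no row of H has exactly one nonzero entry within the columns of S. A minimal sketch of this check (illustrative code, not from the thesis) follows.

import numpy as np

def is_stopping_set(H, S):
    # S (a list of column indices of the binary matrix H) is a stopping set
    # if every row touching S touches it at least twice, i.e. no row has
    # exactly one 1 within the columns of S.
    sub = np.asarray(H)[:, list(S)]
    row_weights = sub.sum(axis=1)
    return not np.any(row_weights == 1)

# Toy binary measurement matrix:
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1]])
print(is_stopping_set(H, [0, 1, 2]))  # True: every row hits {0, 1, 2} twice
print(is_stopping_set(H, [0, 3]))     # False: row 0 has exactly one 1 in {0, 3}
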
49

Message Passing Approaches to Compressive Inference Under Structured Signal Priors

Ziniel, Justin A. January 2014 (has links)
No description available.
50

Imagerie par résonance magnétique in vivo de la vascularisation cérébrale chez la souris : optimisation et accélération par acquisition compressée / In vivo magnetic resonance imaging of the mouse neurovasculature : optimization and acceleration by compressed sensing

Fouquet, Jérémie January 2016 (has links)
Imaging the neurovasculature with the highest exactitude, precision and speed is of critical importance for several research fields. Besides providing insight into normal brain activity, it can help characterize numerous pathologies or develop novel treatments. The first part of this thesis presents the optimization of an in vivo cerebral angiography technique in a frequently used animal model, the mouse. The technique uses both a 3D magnetic resonance imaging (MRI) susceptibility-weighted sequence and a strongly paramagnetic contrast agent, Resovist. MRI acquisition parameters were optimized using images acquired before contrast agent injection. These parameters allow whole-brain vascular imaging of the mouse in 41 minutes at a 78 × 78 × 104 μm³ resolution. Susceptibility weighting offers excellent detection sensitivity for small vessels (diameter ≃ 40 μm). Image processing and analysis allow the extraction of vascular morphological information such as vessel size and vessel density. In the second part of this thesis, an attempt to accelerate the acquisition of the angiographic images using the compressed sensing (CS) method is presented. CS aims to reduce the amount of acquired data by exploiting compressibility hypotheses on the images. At present, CS is mainly developed for real images (in the sense of complex numbers). However, the angiographic images obtained here contain important phase variations due to the susceptibility weighting. First, these variations reduce the strength of the compressibility hypotheses normally used in CS. Second, these same variations make the information distribution in k-space less appropriate for the undersampling required by CS. For these reasons, standard CS does not allow significant acceleration of the acquisition process for the presented angiographic technique. Studying these reasons, however, suggests new ways to increase CS efficiency when applied to images with important phase variations.
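
The effect described in this abstract, phase variations degrading transform-domain compressibility, can be demonstrated in a few lines: take a piecewise-constant real signal, multiply it by a smoothly varying phase, and compare how much of the finite-difference energy is concentrated in the few largest coefficients. This is a generic illustration with assumed signal and phase models, not an experiment from the thesis.

import numpy as np

rng = np.random.default_rng(3)
n = 1024
x = np.repeat(rng.standard_normal(16), n // 16)   # piecewise-constant real signal

def energy_top5(sig):
    # Compressibility proxy: fraction of energy carried by the largest 5% of
    # finite-difference coefficients (a crude total-variation-style transform).
    c = np.sort(np.abs(np.diff(sig)))[::-1]
    k = max(1, len(c) // 20)
    return (c[:k] ** 2).sum() / (c ** 2).sum()

phase = np.exp(1j * 64 * np.pi * np.linspace(0, 1, n) ** 2)  # smooth quadratic phase
print("real image:          ", energy_top5(x))               # close to 1.0
print("with phase variation:", energy_top5(x * phase))       # noticeably lower
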
