11 |
Iterative Reconstruction Algorithms for Polyenergetic X-ray Computerized Tomography. Rezvani, Nargol. 19 December 2012
A reconstruction algorithm in computerized tomography is a procedure for reconstructing the attenuation coefficient, a real-valued function associated with the object of interest, from the measured projection data. Generally speaking, reconstruction algorithms in CT fall into two categories: direct, e.g., filtered back-projection (FBP), or iterative. In this thesis, we discuss a new fast matrix-free iterative reconstruction method based on a polyenergetic model.
While most modern x-ray CT scanners rely on the well-known filtered back-projection algorithm, the corresponding reconstructions can be corrupted by beam hardening artifacts. These artifacts arise from the unrealistic physical assumption of monoenergetic x-ray beams. In this thesis, to compensate, we use an alternative model that accounts for differential absorption of polyenergetic x-ray photons and discretize it directly. We do not assume any prior knowledge about the physical properties of the scanned object. We study and implement different solvers and nonlinear unconstrained optimization methods, such as a Newton-like method and an extension of the Levenberg-Marquardt-Fletcher algorithm. We explain how we can use the structure of the Radon matrix and the properties of FBP to make our method matrix-free and fast. Finally, we discuss how we regularize our problem by applying different regularization methods, such as Tikhonov and regularization in the 1-norm. We present numerical reconstructions based on the associated nonlinear discrete formulation incorporating various iterative optimization methods.
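To make the beam-hardening issue concrete, the sketch below contrasts a polyenergetic forward projection (a spectrum-weighted sum of exponentials) with a monoenergetic one. The spectrum, energy grid, and tiny system matrix are illustrative assumptions, not the discretization or solver used in the thesis.

```python
# Hedged sketch of a polyenergetic forward model: the detected intensity is a
# spectrum-weighted sum of exponentials rather than a single exponential.
# All numbers below are toy assumptions for illustration only.
import numpy as np

def polyenergetic_projection(A, mu_by_energy, spectrum):
    """
    A            : (n_rays, n_pixels) discretized Radon/system matrix
    mu_by_energy : (n_energies, n_pixels) attenuation map at each energy bin
    spectrum     : (n_energies,) normalized source spectrum (sums to 1)
    Returns the negative log of the detected intensity for each ray.
    """
    line_integrals = mu_by_energy @ A.T          # (n_energies, n_rays)
    intensity = spectrum @ np.exp(-line_integrals)
    return -np.log(intensity)

# Toy example: two rays, two pixels, three energy bins (softer attenuation at high energy).
A = np.array([[1.0, 0.5], [0.2, 1.0]])
mu = np.array([[0.4, 0.4], [0.3, 0.3], [0.2, 0.2]])
s = np.array([0.2, 0.5, 0.3])
p_poly = polyenergetic_projection(A, mu, s)
p_mono = A @ mu[1]      # monoenergetic projection at the middle energy, for comparison
```

Because the effective attenuation decreases as the beam hardens, the polyenergetic log-projection is no longer linear in the line integrals, which is what the nonlinear discrete formulation and solvers described above address.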
|
12 |
Computational Optical Imaging Systems: Sensing Strategies, Optimization Methods, and Performance Bounds. Harmany, Zachary Taylor. January 2012
The emerging theory of compressed sensing has been nothing short of a revolution in signal processing, challenging some of the longest-held ideas in signal processing and leading to the development of exciting new ways to capture and reconstruct signals and images. Although the theoretical promises of compressed sensing are manifold, its implementation in many practical applications has lagged behind the associated theoretical development. Our goal is to elevate compressed sensing from an interesting theoretical discussion to a feasible alternative to conventional imaging, a significant challenge and an exciting topic for research in signal processing. When applied to imaging, compressed sensing can be thought of as a particular case of computational imaging, which unites the design of both the sensing and reconstruction of images under one design paradigm. Computational imaging tightly fuses modeling of scene content, imaging hardware design, and the subsequent reconstruction algorithms used to recover the images.

This thesis makes important contributions to each of these three areas through two primary research directions. The first direction primarily attacks the challenges associated with designing practical imaging systems that implement incoherent measurements. Our proposed snapshot imaging architecture using compressive coded aperture imaging devices can be practically implemented, and comes equipped with theoretical recovery guarantees. It is also straightforward to extend these ideas to a video setting where careful modeling of the scene can allow for joint spatio-temporal compressive sensing. The second direction develops a host of new computational tools for photon-limited inverse problems. These situations arise with increasing frequency in modern imaging applications as we seek to drive down image acquisition times, limit excitation powers, or deliver less radiation to a patient. By an accurate statistical characterization of the measurement process in optical systems, including the inherent Poisson noise associated with photon detection, our class of algorithms is able to deliver high-fidelity images with a fraction of the required scan time, as well as enable novel methods for tissue quantification from intraoperative microendoscopy data. In short, the contributions of this dissertation are diverse, further the state-of-the-art in computational imaging, elevate compressed sensing from an interesting theory to a practical imaging methodology, and allow for effective image recovery in light-starved applications.
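As a concrete illustration of the photon-limited setting mentioned above, the sketch below evaluates the Poisson negative log-likelihood for measurements y ~ Poisson(Ax + b) and takes one projected-gradient step. The background term b, the step size, and the toy data are assumptions; this is not the dissertation's algorithm.

```python
# Hedged sketch: Poisson negative log-likelihood (up to constants) and its gradient
# for the photon-limited observation model y ~ Poisson(Ax + b).
import numpy as np

def poisson_nll_and_grad(x, A, y, b=1e-6):
    """Return the Poisson NLL and its gradient at x (b is a small assumed background)."""
    Ax = A @ x + b
    nll = np.sum(Ax - y * np.log(Ax))
    grad = A.T @ (1.0 - y / Ax)
    return nll, grad

# One projected-gradient step with a nonnegativity constraint (illustrative only).
rng = np.random.default_rng(0)
A = rng.uniform(size=(50, 20))
x_true = rng.uniform(size=20)
y = rng.poisson(A @ x_true)
x = np.full(20, 0.5)
_, g = poisson_nll_and_grad(x, A, y)
x = np.maximum(x - 0.01 * g, 0.0)
```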
|
13 |
Fusion of Sparse Reconstruction Algorithms in Compressed Sensing. Ambat, Sooraj K. January 2015
Compressed Sensing (CS) is a new paradigm in signal processing which exploits the sparse or compressible nature of the signal to significantly reduce the number of measurements, without compromising on the signal reconstruction quality. Recently, many algorithms have been reported in the literature for efficient sparse signal reconstruction. Nevertheless, it is well known that the performance of any sparse reconstruction algorithm depends on many parameters such as the number of measurements, the dimension of the sparse signal, the level of sparsity, the measurement noise power, and the underlying statistical distribution of the non-zero elements of the signal. It has been observed that satisfactory performance of a sparse reconstruction algorithm mandates certain requirements on these parameters, which differ from algorithm to algorithm. Many applications are unlikely to fulfil these requirements. For example, imaging speed is crucial in many Magnetic Resonance Imaging (MRI) applications. This restricts the number of measurements, which in turn affects the medical diagnosis using MRI. Hence, any strategy to improve signal reconstruction in such adverse scenarios is of substantial interest in CS.
Interestingly, it can be observed that the performance degradation of the sparse recovery algorithms in the aforementioned cases does not always imply a complete failure. That is, even in such adverse situations, a sparse reconstruction algorithm may provide partially correct information about the signal. In this thesis, we study this scenario and propose a novel fusion framework and an iterative framework which exploit the partial information available in the sparse signal estimate(s) to improve sparse signal reconstruction.
The proposed fusion framework employs multiple sparse reconstruction algorithms, independently, for signal reconstruction. We first propose a fusion algorithm, FACS, which fuses the estimates of multiple participating algorithms in order to improve the sparse signal reconstruction. To alleviate the inherent drawbacks of FACS and further improve the sparse signal reconstruction, we propose another fusion algorithm called CoMACS and variants of CoMACS. For low-latency applications, we propose a latency-friendly fusion algorithm called pFACS. We also extend the fusion framework to the multiple measurement vector (MMV) problem and propose an extension of FACS called MMV-FACS. We theoretically analyse the proposed fusion algorithms and derive guarantees for performance improvement. We also show that the proposed fusion algorithms are robust against both signal and measurement perturbations. Further, we demonstrate the efficacy of the proposed algorithms via numerical experiments: (i) using sparse signals with different statistical distributions in noise-free and noisy scenarios, and (ii) using real-world ECG signals. The extensive numerical experiments show that, for a judicious choice of the participating algorithms, the proposed fusion algorithms result in a sparse signal estimate which is often better than that of the best participating algorithm.
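A minimal sketch of the general support-fusion idea behind such algorithms is given below: take the union of the supports of the participating estimates, solve a least-squares problem restricted to that union, and keep the K largest entries. The exact FACS/CoMACS update rules differ; the function and variable names here are illustrative assumptions.

```python
# Hedged sketch of support fusion for sparse recovery: union of estimated supports,
# restricted least squares, then pruning to sparsity K. Illustration of the general
# principle only, not the exact FACS/CoMACS algorithms.
import numpy as np

def fuse_estimates(Phi, y, estimates, K):
    """Phi: (m, n) measurement matrix, y: (m,) measurements,
    estimates: list of (n,) sparse estimates from participating algorithms."""
    support = np.array(sorted(set().union(*(set(np.flatnonzero(x)) for x in estimates))), dtype=int)
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    fused = np.zeros(Phi.shape[1])
    fused[support] = coeffs
    keep = np.argsort(np.abs(fused))[-K:]          # prune to the K largest-magnitude entries
    pruned = np.zeros_like(fused)
    pruned[keep] = fused[keep]
    return pruned

# Toy example: two imperfect estimates that each recover part of the true support.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 100))
x = np.zeros(100); x[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = Phi @ x
est_a = np.zeros(100); est_a[[3, 17, 60]] = 1.0
est_b = np.zeros(100); est_b[[17, 42, 80]] = 1.0
x_fused = fuse_estimates(Phi, y, [est_a, est_b], K=3)
```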
The proposed fusion framework requires employing multiple sparse reconstruction algorithms for sparse signal reconstruction. We also propose an iterative framework and algorithm called IFSRA to improve the performance of a given arbitrary sparse reconstruction algorithm. We theoretically analyse IFSRA and derive convergence guarantees under signal and measurement perturbations. Numerical experiments on synthetic and real-world data confirm the efficacy of IFSRA. The proposed fusion algorithms and IFSRA are general in nature and do not require any modification of the participating algorithm(s).
|
14 |
Performance Evaluation of Fan-beam and Cone-beam Reconstruction Algorithms with No Backprojection Weight on Truncated Data Problems. Sumith, K. 07 1900
This work focuses on using linear-prediction-based projection completion for fan-beam and cone-beam reconstruction algorithms with no backprojection weight. Truncated data problems have been addressed in computed tomography research; however, perfect image reconstruction from truncated data has not yet been achieved, and only approximately accurate solutions have been obtained, so research in this area continues to strive for results close to the ideal. Linear prediction techniques are adopted for truncation completion in this work because previous research on truncated data problems has shown that they work well compared with other techniques such as polynomial fitting and iterative methods. Linear prediction is a model-based technique; the autoregressive (AR) and moving-average (MA) models are the two important models, along with the autoregressive moving-average (ARMA) model. The AR model is used in this work because of the simplicity it provides in calculating the prediction coefficients. The order of the model is chosen based on the partial autocorrelation function of the projection data, as established in previous research in this area. Truncated projection completion using linear prediction and windowed linear prediction shows that reasonably accurate reconstruction is achieved. Windowed linear prediction provides a better estimate of the missing data; the reason for this is given in the literature and is restated in this work for the reader's convenience.
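A minimal sketch of AR-based extrapolation for completing a truncated projection row is given below. The least-squares fit of the AR coefficients, the chosen order, and the toy signal are assumptions; the thesis selects the order from the partial autocorrelation function and also uses a windowed variant.

```python
# Hedged sketch: AR-model-based extrapolation of a truncated 1-D projection.
# Fitting by least squares and the toy data are illustrative assumptions.
import numpy as np

def fit_ar_coefficients(x, order):
    """Fit AR(order) coefficients to the 1-D signal x by least squares."""
    x = np.asarray(x, dtype=float)
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]   # x[n] ~ a . [x[n-1],...,x[n-order]]
    A = np.vstack(rows)
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def extrapolate(x, order, n_missing):
    """Extend x by n_missing samples using the fitted AR model."""
    a = fit_ar_coefficients(x, order)
    y = list(x)
    for _ in range(n_missing):
        y.append(float(np.dot(a, y[-1:-order - 1:-1])))
    return np.asarray(y)

# Example: complete a truncated sinusoidal "projection" beyond the detector edge.
t = np.linspace(0, 4 * np.pi, 128)
truncated = np.sin(t)[:100]                      # last 28 samples assumed missing
completed = extrapolate(truncated, order=8, n_missing=28)
```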
The advantages of fan-beam reconstruction algorithms with no backprojection weight over the fan-beam reconstruction algorithm with backprojection weight motivated us to use the no-backprojection-weight algorithm for reconstructing the truncation-completed projection data. The results obtained are compared with previous work that used conventional fan-beam reconstruction algorithms with backprojection weight. The intensity plots and the noise-performance results show improvements resulting from using the fan-beam reconstruction algorithm with no backprojection weight. The work is also extended to the Feldkamp, Davis, and Kress (FDK) reconstruction algorithm with no backprojection weight for the helical scanning geometry, and the results obtained are compared with the FDK reconstruction algorithm with backprojection weight for the same geometry.
|
15 |
Performance Evaluation of Stereo Reconstruction Algorithms on NIR Images. Vidas, Dario. January 2016
Stereo vision is one of the most active research areas in computer vision. While hundreds of stereo reconstruction algorithms have been developed, little work has been done on the evaluation of such algorithms, and almost none on evaluation on Near-Infrared (NIR) images. Of the almost one hundred algorithms examined, we selected a set of 15 stereo algorithms, mostly with real-time performance, which were then categorized and evaluated on several NIR image datasets, including single-stereo-pair and stream datasets. The accuracy and run time of each algorithm are measured and compared, giving an insight into which categories of algorithms perform best on NIR images and which algorithms may be candidates for real-time applications. Our comparison indicates that adaptive support-weight and belief-propagation algorithms have the highest accuracy of all fast methods, but also longer run times (2-3 seconds). On the other hand, faster algorithms (that achieve 30 or more fps on a single thread) usually perform an order of magnitude worse when measuring the percentage of incorrectly computed pixels.
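For reference, the accuracy figure used here, the percentage of incorrectly computed pixels, can be sketched as below; the error threshold and masking convention are assumptions rather than the exact protocol of the evaluation.

```python
# Hedged sketch of the "percentage of bad pixels" metric common in stereo evaluation:
# pixels whose disparity error exceeds a threshold, counted over valid ground truth.
import numpy as np

def bad_pixel_rate(disparity, ground_truth, threshold=1.0, valid_mask=None):
    """Percentage of valid pixels whose absolute disparity error exceeds threshold."""
    disparity = np.asarray(disparity, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    if valid_mask is None:
        valid_mask = np.isfinite(ground_truth)
    err = np.abs(disparity - ground_truth)
    bad = (err > threshold) & valid_mask
    return 100.0 * bad.sum() / max(valid_mask.sum(), 1)
```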
|
16 |
Advanced three dimensional digital tomosynthesis studies for breast imaging. Μαλλιώρη, Ανθή. 07 July 2015
The current thesis is focused on the study of tomosynthesis techniques applied to breast imaging, in order to improve the detection of breast lesions. Breast Tomosynthesis (BT) is a pseudo-three-dimensional (3D) x-ray imaging technique that provides reconstructed tomographic images from a set of angular projections taken over a limited arc around the breast, with dose levels similar to those of two-view conventional mammography. Simulation studies and clinical trials suggest that BT is very useful for imaging the breast in an attempt to optimize the detection and characterization of lesions, particularly in dense breasts, and has the potential to reduce the recall rate. Reconstruction algorithms and acquisition parameters are critical for the quality of the reconstructed slices.
The aim of this research is to explore tomosynthesis modalities for breast imaging and evaluate them against existing mammographic techniques, as well as to investigate the effect of reconstruction algorithms and acquisition parameters on the image quality of tomosynthetic slices. A specific aim and innovation of the study was to demonstrate the feasibility of combining BT and monochromatic radiation for 3D breast imaging, an approach that had not yet been studied thoroughly.
For the purposes of this study a computer-based platform has been developed in Matlab incorporating reconstruction algorithms and filtering techniques for BT applications. It is fully parameterized and has a modular architecture for easy addition of new algorithms. Simulation studies with the XRayImaging Simulator and experimental work at the ELETTRA synchrotron facility in Trieste, Italy, have been performed using software and complex hardware phantoms of realistic shape and size, consisting of materials mimicking breast tissue. The work has been carried out in comparison to conventional BT and mammography and demonstrates the feasibility of the studied new technique and the potential advantages of using BT with a synchrotron modality for the detection of low- and high-contrast breast lesions such as masses and microcalcifications (μCs).
Evaluations of both simulation and experimental tomograms demonstrated superior visibility of all reconstructed features using appropriately optimized filtered algorithms. Moreover, image quality and evaluation metrics improve as the acquisition arc is extended for masses; the visualization of μCs was found to be less sensitive to this parameter owing to their high inherent contrast. Breast tomosynthesis shows advantages in visualizing features of small size within phantoms of increased thickness, and especially in bringing into focus and localizing low-contrast masses hidden in a highly heterogeneous background with superimposed structures. Monochromatic beams can result in better tissue differentiation and, in combination with BT, can improve feature visibility, detail, and contrast. Monochromatic BT provided improved image quality at lower incident exposures compared with conventional mammography, concerning mass detection and the visibility of borders, which is important for mass characterization, especially when the masses are spiculated. Overall, it has been shown that monochromatic beams combined with BT improve image quality while reducing the radiation dose. These findings are encouraging for the development of a tomosynthesis system based on monochromatic beams.
|
17 |
Investigation of mm-wave imaging and radar systems. Zeitler, Armin. 11 January 2013
In the last decade, microwave and millimeter-wave systems have gained importance in civil and security applications. Due to the increasing maturity and availability of circuits and components, these systems are becoming more compact while costing less. Furthermore, quantitative imaging has been conducted at lower frequencies using computationally intensive inverse-problem algorithms. Due to the ill-posed character of the inverse problem, these algorithms are, in general, very sensitive to noise: the key to their successful application to experimental data is the precision of the measurement system. Only a few research teams investigate systems for imaging in the W-band. In this manuscript such a system is presented, designed to provide scattered-field data to quantitative reconstruction algorithms. The manuscript is divided into six chapters. Chapter 2 describes the theory for numerically computing the scattered fields of known objects. In Chapter 3, the W-band measurement setup in the anechoic chamber is shown and preliminary measurement results are analyzed. Relying on the measurement results, the error sources are studied and corrected by post-processing. The final results are used for the qualitative reconstruction of all three targets of interest and to image the small cylinder quantitatively. The reconstructed images are compared in detail in Chapter 4. Close-range imaging has been investigated using a vector analyzer and a radar system; this is described in Chapter 5 and is based on a future application, the detection of foreign object debris (FOD) on airport runways. Chapter 6 concludes and discusses some future investigations.
|
18 |
A multimodal and multiobjective approach for phylogenetic tree reconstruction. Silva, Ana Estela Antunes da. 12 December 2007
Advisor: Fernando Jose Von Zuben. Doctoral dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação, 2007.
The reconstruction of phylogenetic trees can be interpreted as a systematic process of proposing an arborean description of the relative dissimilarities observed among sets of homologous genetic attributes of the species being compared. The resulting phylogenetic tree presents a certain topology, or ancestrality pattern, and the lengths of the edges of the tree indicate the number of evolutionary changes since the divergence from the common ancestor. Both topology and edge lengths are descriptive hypotheses of non-observable and conditional events, which implies the existence of diverse high-quality hypotheses for the reconstruction, as well as multiple performance criteria. This thesis (i) deals with unrooted trees; (ii) emphasizes the least-squares, minimum-evolution, and maximum-likelihood criteria; (iii) proposes an extension to the Neighbor Joining algorithm which offers multiple high-quality reconstruction hypotheses; and (iv) describes and uses a new tool for multiobjective optimization in the context of phylogenetic reconstruction. Artificial and real datasets are considered in the presentation of results, which point to some advantages and distinctive aspects of the proposed methodologies.
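A brief sketch of the Q-matrix criterion at the core of Neighbor Joining is given below; ranking several candidate joins instead of keeping only the minimizer hints at how multiple reconstruction hypotheses can be generated, but this is only an illustration of the principle, not the thesis's extended algorithm.

```python
# Hedged sketch of the Neighbor Joining selection criterion:
# Q(i, j) = (n - 2) * D[i, j] - row_sum[i] - row_sum[j], smaller is better.
import numpy as np

def nj_q_matrix(D):
    """Compute the NJ Q-matrix for a symmetric distance matrix D."""
    n = D.shape[0]
    r = D.sum(axis=1)
    Q = (n - 2) * D - r[:, None] - r[None, :]
    np.fill_diagonal(Q, np.inf)          # never join a taxon with itself
    return Q

def ranked_join_candidates(D, top=3):
    """Return the `top` taxon pairs with the smallest Q values."""
    Q = nj_q_matrix(np.asarray(D, dtype=float))
    ranked = sorted(((Q[i, j], (int(i), int(j))) for i, j in zip(*np.triu_indices_from(Q, k=1))))
    return [pair for _, pair in ranked[:top]]

# Example distance matrix over four taxa.
D = np.array([[0, 5, 9, 9],
              [5, 0, 10, 10],
              [9, 10, 0, 8],
              [9, 10, 8, 0]], dtype=float)
print(ranked_join_candidates(D))
```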
|
19 |
Adaptive Sampling Pattern Design Methods for MR Imaging. Chennakeshava, K. January 2016
MRI is a very useful imaging modality for both diagnostic and functional studies. It provides excellent soft-tissue contrast in several diagnostic studies and is widely used to study the functional aspects of the brain and the diffusion of water molecules across tissues. Image acquisition in MR is slow due to long data acquisition times and gradient ramp-up and stabilization delays. Repetitive scans are also needed to overcome artefacts due to patient motion and field inhomogeneity and to improve the signal-to-noise ratio (SNR). Scanning becomes difficult in the case of claustrophobic patients, and in younger or older patients who are unable to cooperate and are prone to uncontrollable motion inside the scanner. New MR procedures and advanced research in neuro- and functional imaging demand better resolution and scan speed, which implies a need to acquire more data in a shorter time frame. The hardware approach to faster k-space scanning involves efficient pulse sequence and gradient waveform design methods; such methods have reached a physical and physiological limit. Alternatively, methods have been proposed to reduce the scan time by undersampling the k-space data. Since the advent of Compressive Sensing (CS), there has been tremendous interest in developing undersampling matrices for MRI. Mathematical assumptions on the probability distribution function (pdf) of k-space have led researchers to come up with efficient undersampling matrices for sampling MR k-space data. Recent approaches adaptively sample the k-space, using the k-space of a reference image as the probability distribution instead of a mathematical distribution, to arrive at an efficient undersampling scheme. In general, these methods use a deterministic central circular/square region and probabilistic sampling of the rest of the k-space. In these methods, the sampling distribution may not follow the selected pdf, and the selection of the deterministic and probabilistic sampling distribution parameters is heuristic in nature.
Two novel adaptive Variable Density Sampling (VDS) methods are proposed to address the heuristic nature of k-space sampling, so that the selected pdf matches the k-space energy distribution of a given fully sampled reference k-space or MR image. The proposed methods use a novel approach of binning the pdf derived from the fully sampled k-space energy distribution of a reference image. The normalized k-space magnitude spectrum of the reference image is taken as a 2D probability distribution function, which is divided into a number of exponentially weighted magnitude bins obtained from the corresponding histogram of the k-space magnitude spectrum.
In the first method, the normalized k-space histogram is binned exponentially, and the resulting exponentially binned 2D pdf is used with a suitable control parameter to obtain a sampling pattern with the desired undersampling ratio. The resulting sampling pattern is an adaptive VDS pattern mimicking the energy distribution of the original k-space.
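A simplified sketch of drawing such a variable-density mask from a reference k-space is shown below; the exponential binning and control parameter are collapsed here into a single power-law exponent, an assumption made for brevity rather than the exact method.

```python
# Hedged sketch: variable-density sampling mask drawn from the normalized magnitude
# spectrum of a reference image. The power-law exponent stands in for the thesis's
# exponential binning and control parameter (an assumption).
import numpy as np

def vds_mask_from_reference(reference_image, undersampling_ratio=0.2, exponent=1.0, seed=0):
    """Return a boolean k-space sampling mask with the requested fraction of samples."""
    kspace = np.fft.fftshift(np.fft.fft2(reference_image))
    weights = np.abs(kspace) ** exponent + 1e-12       # keep all probabilities positive
    pdf = (weights / weights.sum()).ravel()
    n_samples = int(round(undersampling_ratio * pdf.size))
    rng = np.random.default_rng(seed)
    chosen = rng.choice(pdf.size, size=n_samples, replace=False, p=pdf)
    mask = np.zeros(pdf.size, dtype=bool)
    mask[chosen] = True
    return mask.reshape(reference_image.shape)

# Example: 20% sampling mask from a toy reference image.
ref = np.outer(np.hanning(64), np.hanning(64))
mask = vds_mask_from_reference(ref, undersampling_ratio=0.2)
```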
In the second method, the binning of the magnitude spectrum of k-space is followed by ranking the bins by their spectral energy content. A cost function is defined to evaluate the k-space energy captured by each bin. The samples are selected from the energy-rank-ordered bins using a knapsack constraint. The energy ranking and the knapsack criterion result in the selection of sampling points from the most relevant bins and give a very robust sampling grid with a well-defined sparsity level.
Finally, the feasibility of developing a single adaptive VDS sampling pattern for organ-specific or multi-slice MR imaging, using the concept of binning the magnitude spectrum of k-space, is investigated. Based on the premise that the k-space of different organs has a different energy distribution structure, MR images of organs can be classified according to their spectral content, and a single adaptive VDS sampling pattern can be developed for imaging an organ or multiple slices of the same organ. The classification is done using the k-space bin histograms as feature vectors and k-means clustering. Based on the nearest distance to the centroid of the organ cluster, a template image is selected to generate the sampling grid for the organ under consideration.
Using state-of-the-art MR reconstruction algorithms, the performance of the proposed adaptive VDS methods is evaluated with image quality measures and compared with other VDS methods. The reconstructions show significant quantitative improvement in image quality parameters and a visible reduction in artefacts at 20%, 15%, 10%, and 5% undersampling ratios.
|
20 |
Low Complexity Iterative Algorithms for Channel Coding and Compressed Sensing. Danjean, Ludovic. 29 November 2012
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are used for decoding low-density parity-check (LDPC) codes, a popular class of error-correction codes that are now widely used for their exceptional error-rate performance. In a more recent field known as compressed sensing, iterative algorithms are used as a method of reconstruction to recover a sparse signal from a linear set of measurements. This thesis primarily deals with the development of low-complexity iterative algorithms for the two aforementioned fields, namely, the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm called the Interval-Passing Algorithm (IPA) for compressed sensing.

In the first part of this thesis, we address the area of decoding algorithms for LDPC codes. It is well known that LDPC codes suffer from the error-floor phenomenon in spite of their exceptional performance, where traditional iterative decoders based on belief propagation (BP) fail for certain low-noise configurations. Recently, a novel class of decoders called finite alphabet iterative decoders (FAIDs) was proposed that is capable of surpassing BP in the error floor at much lower complexity. In this work, we focus on the problem of selecting particularly good FAIDs for column-weight-three codes over the binary symmetric channel (BSC). Traditional methods for decoder selection use asymptotic techniques such as density evolution, which do not guarantee good performance on finite-length codes, especially in the error-floor region. Instead, we propose a methodology for selection that relies on the knowledge of potentially harmful topologies that could be present in a code, using the concept of noisy trapping sets. Numerical results are provided to show that FAIDs selected based on our methodology outperform BP in the error floor on several codes.

In the second part of this thesis, we address the area of iterative reconstruction algorithms for compressed sensing. Iterative algorithms have been proposed for compressed sensing in order to tackle the complexity of the LP reconstruction method. In this work, we modify and analyze a low-complexity reconstruction algorithm called the IPA, which uses sparse matrices as measurement matrices. Similar to what has been done for decoding algorithms in the area of coding theory, we analyze the failures of the IPA and link them to the stopping sets of the binary representation of the sparse measurement matrices used. The performance of the IPA makes it a good trade-off between the complex L1-minimization reconstruction and the very simple verification decoding.
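To illustrate the flavor of low-complexity iterative decoding discussed in the first part, the sketch below implements a classic hard-decision bit-flipping decoder for an LDPC-style parity-check matrix over the BSC; it is neither belief propagation nor a FAID, and the toy parity-check matrix and received word are assumptions.

```python
# Hedged sketch: Gallager-style hard-decision bit-flipping decoding over the BSC.
# Shown only to illustrate low-complexity iterative decoding; not the thesis's FAIDs.
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """H: (m, n) binary parity-check matrix, y: (n,) hard-decision received bits."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x, True                         # all parity checks satisfied
        unsatisfied = H.T @ syndrome               # unsatisfied-check count per bit
        worst = unsatisfied.max()
        if worst == 0:
            break
        x = np.where(unsatisfied == worst, 1 - x, x)   # flip the most-suspect bits
    return x, False

# Toy (7,4) Hamming-style example with a single bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] = 1
decoded, ok = bit_flip_decode(H, received)
```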
|