21

Deep learning and quantum annealing methods in synthetic aperture radar

Kelany, Khaled 08 October 2021 (has links)
Mapping of earth resources, environmental monitoring, and many other applications require high-resolution, wide-area imaging. Because images often have to be captured at night or in inclement weather, Synthetic Aperture Radar (SAR) provides this capability. SAR systems exploit the long-range propagation of radar signals and use digital electronics to process the resulting complex data, enabling high-resolution imagery. This gives SAR systems an advantage over optical imaging systems, since, unlike optical imaging, SAR is effective at any time of day and in any weather conditions. Moreover, an advanced technique called Interferometric Synthetic Aperture Radar (InSAR) can exploit the phase information of SAR images to measure ground surface deformation. However, given the current state of technology, the quality of InSAR products can be degraded at several processing stages, such as image co-registration, interferogram generation, phase unwrapping, and geocoding. Image co-registration aligns two or more images so that the same pixel in each image corresponds to the same point of the target scene. Super-Resolution (SR), on the other hand, is the process of generating high-resolution (HR) images from low-resolution (LR) ones. SR influences co-registration quality and therefore could potentially be used to enhance later stages of SAR image processing. Our research resulted in two major contributions towards the enhancement of SAR processing. The first is a new learning-based SR model that can be applied to SAR and similar applications. The second is the use of the devised model to improve SAR co-registration and InSAR interferogram generation, together with methods for evaluating the quality of the resulting images. In the case of phase unwrapping, the process of recovering unambiguous phase values from a two-dimensional array of phase values known only modulo $2\pi$ rad, our research produced a third major contribution: the finding that quantum annealers can solve the phase unwrapping problem. Although other solutions exist, based on network programming for example, such techniques do not scale well to larger images. We formulated the phase unwrapping problem as a quadratic unconstrained binary optimization (QUBO) problem, which can be solved using a quantum annealer. Since quantum annealers are limited in the number of qubits they can process, currently available devices cannot handle large SAR images. To resolve this limitation, we developed a novel method that recursively partitions the image and recursively unwraps each partition until the whole image is unwrapped. We tested the new approach with various software-based QUBO solvers and various images, both synthetic and real. We also experimented with a quantum annealer from D-Wave Systems, the first and only commercial supplier of quantum annealers, and developed an embedding method to map the problem onto the D-Wave 2000Q_6, which improved the resulting images significantly. With our method, we were able to achieve high-quality solutions comparable to those of state-of-the-art phase-unwrapping solvers. / Graduate
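To make the QUBO idea concrete, the sketch below restricts each pixel's integer ambiguity to {0, 1} (the thesis handles larger ranges through recursive partitioning, which is not reproduced here), builds the quadratic form from wrapped neighbour differences, and minimizes it by brute force instead of on an annealer. The helper names and the toy 2x3 image are purely illustrative.

```python
import numpy as np
from itertools import product

def wrap(x):
    """Wrap phase values into a 2*pi interval around zero."""
    return (x + np.pi) % (2 * np.pi) - np.pi

def qubo_from_wrapped_phase(phi):
    """Build a QUBO for binary phase unwrapping: psi = phi + 2*pi*k, k in {0, 1}.

    Minimizes, over 4-neighbour pairs (i, j),
        ((phi_i + 2*pi*k_i) - (phi_j + 2*pi*k_j) - wrap(phi_i - phi_j))**2,
    expanded into a quadratic form in the binary variables k.
    """
    rows, cols = phi.shape
    n = rows * cols
    Q = np.zeros((n, n))              # quadratic coefficients (upper triangle)
    const = 0.0                       # constant offset, irrelevant to the argmin
    idx = lambda r, c: r * cols + c

    pairs = [((r, c), (r, c + 1)) for r in range(rows) for c in range(cols - 1)]
    pairs += [((r, c), (r + 1, c)) for r in range(rows - 1) for c in range(cols)]
    for a, b in pairs:
        i, j = idx(*a), idx(*b)
        d = (phi[a] - phi[b]) - wrap(phi[a] - phi[b])   # residual gradient error
        # Expand ((2*pi)*(k_i - k_j) + d)^2 over binary k_i, k_j (k^2 = k):
        Q[i, i] += (2 * np.pi) ** 2 + 2 * (2 * np.pi) * d
        Q[j, j] += (2 * np.pi) ** 2 - 2 * (2 * np.pi) * d
        Q[i, j] += -2 * (2 * np.pi) ** 2
        const += d ** 2
    return Q, const

def brute_force_qubo(Q):
    """Exhaustively minimise x^T Q x over x in {0,1}^n (tiny n only)."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Tiny demonstration: a 2x3 surface whose true integer offsets are 0 or 1.
true_psi = np.array([[0.0, 2.0, 4.0], [1.0, 3.0, 5.0]])
phi = wrap(true_psi)
Q, _ = qubo_from_wrapped_phase(phi)
k, _ = brute_force_qubo(Q)
psi = phi + 2 * np.pi * k.reshape(phi.shape)
print(psi)   # approximately equal to true_psi
```

On annealing hardware or a software sampler, the same Q matrix would simply be handed to the solver in place of the exhaustive search.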
22

Fourier Transform Interferometry for 3D Mapping of Rough and Discontinuous Surfaces

Lally, Evan M. 07 June 2010 (has links)
Of the wide variety of existing optical techniques for non-contact 3D surface mapping, Fourier Transform Interferometry (FTI) is the method that most elegantly combines simplicity with high speed and high resolution. FTI generates continuous-phase surface maps from a projected optical interference pattern, which is generated with a simple double-pinhole source and collected in a single snapshot using conventional digital camera technology. For enhanced stability and reduced system size, the fringe source can be made from a fiber optic coupler. Unfortunately, many applications require mapping of surfaces that contain challenging features not ideally suited for reconstruction using FTI. Rough and discontinuous surfaces, commonly seen in applications requiring imaging of rock particles, present a unique set of obstacles that cannot be overcome using existing FTI techniques. This work is based on an original analysis of the limitations of FTI and the means by which errors are generated by the particular features encountered in the aggregate mapping application. Several innovative solutions have been developed to enable the use of FTI on rough and discontinuous surfaces. Through filter optimization and the development of a novel phase unwrapping and referencing technique, the Method of Multiple References (MoMR), this work has enabled surface error correction and simultaneous imaging of multiple particles using FTI. A complete aggregate profilometry system has been constructed, including a MoMR-FTI software package and graphical user interface, to implement these concepts. The system achieves better than 22 µm z-axis resolution, and comprehensive testing has proven it capable of handling a wide variety of particle surfaces. A range of additional features has been developed, such as error correction, particle boundary mapping, and automatic data quality windowing, to enhance the usefulness of the system in its intended application. Because of its high accuracy, high speed, and ability to map varied particles, the developed system is ideally suited for large-scale aggregate characterization in highway research laboratories. Additionally, the techniques developed in this work are potentially useful in a large number of applications in which surface roughness or discontinuities pose a challenge. / Ph. D.
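For readers unfamiliar with FTI, the following 1D sketch shows the core carrier-fringe demodulation step (the Takeda-style Fourier analysis that FTI builds on): a phase-modulated cosine fringe pattern is band-pass filtered in the Fourier domain, the carrier is removed, and the argument gives the surface phase. It is a generic illustration with made-up parameters, not the MoMR processing developed in this work.

```python
import numpy as np

# 1D Fourier-transform fringe analysis: a cosine fringe with carrier frequency
# f0 is phase-modulated by the surface profile; the phase is recovered by
# isolating the positive-frequency carrier lobe.  All parameters are illustrative.
N = 1024
x = np.arange(N)
f0 = 50.0 / N                                      # carrier (integer number of cycles)
phi = 2.0 * np.exp(-((x - N / 2) / 120.0) ** 2)    # synthetic surface phase (rad)
fringes = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * x + phi)

F = np.fft.fft(fringes)
freqs = np.fft.fftfreq(N)
mask = (freqs > f0 / 2) & (freqs < 3 * f0 / 2)     # keep only the carrier lobe
analytic = np.fft.ifft(F * mask)

# Remove the carrier, take the argument, and unwrap along the line.
wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))
recovered = np.unwrap(wrapped)
recovered -= recovered[0] - phi[0]                 # fix the arbitrary constant offset
print(np.max(np.abs(recovered - phi)[32:-32]))     # small residual away from the edges
```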
23

Multi-Scale, Multi-Modal, High-Speed 3D Shape Measurement

Yatong An (6587408) 10 June 2019 (has links)
With robots expanding their applications into more and more scenarios, practical problems from different scenarios are challenging current 3D measurement techniques. For instance, infrastructure inspection robots need large-scale, high-spatial-resolution 3D data for crack and defect detection; medical robots need 3D data well registered with temperature information; and warehouse robots need multi-resolution 3D shape measurement to adapt to different tasks. In the past decades, a lot of progress has been made in improving the performance of 3D shape measurement methods. Yet measurement scale and speed, and the fusion of multiple modalities of 3D shape measurement techniques, remain vital aspects to be improved for robots to have a more complete perception of the real scene. In this dissertation, we focus on the digital fringe projection technique, which usually achieves high-accuracy 3D data, and expand its capability for complicated robot applications by 1) extending the measurement scale, 2) registering with multi-modal information, and 3) improving the measurement speed of the digital fringe projection technique.

The measurement scale of the digital fringe projection technique has mainly been limited to small scales, from several centimeters to tens of centimeters, due to the lack of a flexible and convenient calibration method for a large-scale digital fringe projection system. In this study, we first developed a flexible and convenient large-scale calibration method and then extended the measurement scale of the digital fringe projection technique to several meters. The meter scale is needed in many large-scale robot applications, including large infrastructure inspection. Our proposed method includes two steps: 1) accurately calibrate the intrinsics (i.e., focal lengths and principal points) with a small calibration board at close range, where both the camera and projector are out of focus, and 2) calibrate the extrinsic parameters (translation and rotation) from camera to projector with the assistance of a low-accuracy, large-scale 3D sensor (e.g., Microsoft Kinect). The two-step strategy avoids fabricating a large and accurate calibration target, which is usually expensive and inconvenient for pose adjustments. With a small calibration board and a low-cost 3D sensor, we calibrated a large-scale 3D shape measurement system with a FOV of (1120 x 1900 x 1000) mm^3 and verified the correctness of our method.

Multi-modal information is required in applications such as medical robotics, which may need both to capture the 3D geometry of objects and to monitor their temperature. To allow robots to have a more complete perception of the scene, we further developed a hardware system that achieves real-time 3D geometry and temperature measurement. Specifically, we proposed a holistic approach to calibrate both a structured light system and a thermal camera under exactly the same world coordinate system, even though the two sensors do not share the same wavelength, and a computational framework to determine the sub-pixel corresponding temperature for each 3D point as well as to discard occluded points. Since the thermal 2D imaging and visible-light 3D imaging systems do not share the same spectrum, they can perform sensing simultaneously in real time. The resulting hardware system achieved real-time 3D geometry and temperature measurement at 26 Hz with 768 x 960 points per frame.

In dynamic applications, where the measured object or the 3D sensor may be in motion, measurement speed becomes an important factor. Previously, additional fringe patterns were projected for absolute phase unwrapping, which slowed down the measurement. To achieve higher measurement speed, we developed a method to unwrap the phase pixel by pixel solely using the geometric constraints of the structured light system, without requiring additional image acquisition. Specifically, an artificial absolute phase map $\Phi_{min}$, at a given virtual depth plane $z = z_{min}$, is created from the geometric constraints of the calibrated structured light system, such that the wrapped phase can be unwrapped pixel by pixel by referring to $\Phi_{min}$. Since $\Phi_{min}$ is defined in the projector space, the unwrapped phase obtained from this method is an absolute phase for each pixel. Experimental results demonstrate the success of this novel absolute-phase unwrapping method. However, the geometric-constraint-based phase unwrapping method using a virtual plane is confined to a certain depth range. This depth limitation causes difficulties in two measurement scenarios: measuring an object with large depth variation, and measuring a dynamic object that could move beyond the depth range. To address this problem, we further propose to take advantage of an additional 3D scanner and use the external information it provides to extend the maximum measurement range of the pixel-wise phase unwrapping method. The additional 3D scanner provides a more detailed reference phase map $\Phi_{ref}$ that assists absolute phase unwrapping without the depth constraint. Experiments demonstrate that our method, assisted by an additional 3D scanner, works over a large depth range, and that the maximum speed of the low-cost 3D scanner is not necessarily an upper bound on the speed of the structured light system. Assisted by a Kinect V2, our structured light system achieved 53 Hz with a resolution of 1600 x 1000 pixels when measuring dynamic objects moving over a large depth range.

In summary, we significantly advanced 3D shape measurement technology for robots to have a more complete perception of the scene by enhancing the digital fringe projection technique in measurement scale (space domain), speed (time domain), and fusion with other modalities. This research can potentially enable robots to better understand the scene for more complicated tasks, and broadly impact many other academic studies and industrial practices.
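As a rough illustration of the pixel-wise unwrapping idea (not the calibrated $\Phi_{min}$/$\Phi_{ref}$ construction described above, which comes from projector geometry or an auxiliary scanner), the sketch below shifts each wrapped value by the multiple of 2π that brings it closest to a reference phase; the reference here is synthetic and the function name is hypothetical.

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, phi_ref):
    """Pixel-wise absolute phase unwrapping against a reference phase map.

    Each pixel is shifted by the integer multiple of 2*pi that brings the
    wrapped phase closest to the reference (e.g. a Phi_min-like map from
    geometric constraints, or a Phi_ref-like map from an auxiliary scanner).
    A sketch only; the real system derives the reference from calibration.
    """
    k = np.round((phi_ref - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k

# Toy demonstration with a synthetic phase ramp.
true_phase = np.linspace(0, 40, 200)                     # absolute phase (rad)
wrapped = np.angle(np.exp(1j * true_phase))              # wrapped to (-pi, pi]
reference = true_phase + np.random.uniform(-2, 2, 200)   # coarse, noisy reference
recovered = unwrap_with_reference(wrapped, reference)
print(np.allclose(recovered, true_phase))                # True while ref error < pi
```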
24

Extraction de hauteurs d'eau géolocalisées par interférométrie radar dans le cas de SWOT / Water height estimation using radar interferometry for SWOT

Desroches, Damien 14 March 2016 (has links)
La mission SWOT (Surface Water and Ocean Topography), menée par le CNES et le JPL et dont le lancement est prévu pour 2020, marque un tournant majeur pour l'altimétrie spatiale, à la fois en océanographie et en hydrologie continentale. Il s'agit de la première mission interférométrique SAR dont l'objectif spécifique est la mesure de la hauteur des eaux. L'instrument principal de la mission, KaRIn, un radar interférométrique en bande Ka, présente des caractéristiques particulières : angle de visée proche du nadir (0.6 à 3.9°), faible longueur d'onde (8.6 mm) et courte base stéréoscopique (10 m). Ces spécificités techniques entrainent des particularités propres à SWOT, à la fois en termes de phénoménologie et de traitement des données. Par ailleurs, du fait de la nature et du grand volume des données, de nouvelles méthodes de traitement sont envisagées, qui se distinguent de celles des missions interférométriques antérieures. Pour le mode " Low Rate " (LR) dédié à l'océanographie, une grande partie du traitement se déroulera à bord pour limiter le volume de données à transmettre au sol. Le mode " High Rate" (HR) visant principalement l'hydrologie continentale, présente lui aussi des originalités en termes de traitement, essentiellement réalisé au sol, de par la grande diversité de structure des surfaces d'eau qui seront observées. Pour les deux modes, la stratégie d'inversion de la phase en hauteurs géolocalisées ne peut être calquée sur celles des missions antérieures, fondées sur le déroulement spatial de la phase interférométrique. L'approche retenue est d'utiliser, autant que possible, un modèle numérique de terrain (MNT) de référence pour lever l'ambiguïté de phase et procéder directement à l'inversion de hauteur. Ceci permet à la fois de gagner en temps de traitement et de s'affranchir de l'utilisation des points de contrôle, difficiles à obtenir sur les océans comme sur les continents, du fait des variations de niveau d'eau et un rapport signal à bruit très faible sur les zones terrestres. Dans les cas où la précision du MNT de référence n'est pas suffisante pour assurer correctement le déroulement de la phase, des méthodes visant à détecter et réduire les erreurs sont proposées. Afin de faciliter l'utilisation des hauteurs géolocalisées issues de la phase l'interférométrique en mode HR, nous proposons une méthode qui permet d'améliorer considérablement la géolocalisation des produits, sans dégrader l'information de hauteur d'eau. / The SWOT mission (Surface Water and Ocean Topography), conducted by CNES and JPL, and scheduled for launch in 2020, is a major step forward for spaceborne altimetry, both for oceanography and continental hydrology. It is the first interferometric SAR mission whose specific objective is the measurement of water surface height. The main instrument of the mission, KaRIn, a Ka-band Radar Interferometer, has particular characteristics: very low incidence angle (from 0.6 to 3.9°), short wavelength (8.6 mm), and short baseline (10 m). This technical configuration leads to properties that are specific to SWOT, both in terms of phenomenology and data processing. Moreover, due to the nature and the huge volume of data, new processing methods, different from those used in previous interferometric mission, are considered. For the Low Rate (LR) mode dedicated to oceanography, a large part of the processing will take place onboard to limit the data volume transmitted to ground. 
The High Rate (HR) mode, mainly targeting continental hydrology, also presents original characteristics in terms of processing, essentially conducted on the ground, due to the large diversity in the structure of the observed water surfaces. In both modes, the strategy for converting phase into geolocated heights cannot be directly based on those of previous missions, which rely on spatial phase unwrapping. The approach retained here is to use, as far as possible, a reference Digital Terrain Model (DTM) to remove the phase ambiguity and proceed directly to height inversion. This both reduces computing time and avoids the need for ground control points, which are difficult to obtain over oceans as well as over continental surfaces, due to varying water levels and a very low signal-to-noise ratio over land. For cases where the precision of the reference DTM is not sufficient to ensure correct phase unwrapping, methods to detect and reduce the errors are proposed. To facilitate the use of the geolocated heights derived from the interferometric phase in HR mode, we propose a method that significantly improves the geolocation of the products without degrading the water height information.
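A heavily simplified sketch of the DTM-assisted strategy is given below: the reference height predicts an expected phase, which fixes the per-pixel 2π ambiguity, after which the height follows directly from the absolute phase. The single phase-to-height sensitivity used here collapses the real flat-Earth and geometric terms, and all numbers are illustrative, not KaRIn parameters.

```python
import numpy as np

def absolute_phase_from_dtm(phi_wrapped, h_ref, phase_per_meter):
    """Resolve the 2*pi ambiguity of an interferometric phase with a reference DTM.

    phi_wrapped     : measured interferometric phase, wrapped to (-pi, pi]
    phase_per_meter : local sensitivity d(phase)/d(height), assumed known from the
                      geometry (baseline, range, incidence); illustrative here
    The DTM height h_ref predicts phi_pred = phase_per_meter * h_ref, and the
    integer number of cycles is chosen so the unwrapped phase is closest to it.
    """
    phi_pred = phase_per_meter * h_ref
    k = np.round((phi_pred - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k

# Toy example: true water heights, a coarse DTM, and a made-up sensitivity.
rng = np.random.default_rng(0)
h_true = 100.0 + rng.normal(0, 0.5, 1000)        # true heights (m)
sens = 0.8                                       # rad of phase per metre (illustrative)
phi_wrapped = np.angle(np.exp(1j * sens * h_true))
h_dtm = h_true + rng.normal(0, 1.0, 1000)        # DTM error well below ambiguity height
phi_abs = absolute_phase_from_dtm(phi_wrapped, h_dtm, sens)
h_est = phi_abs / sens                           # direct height inversion
print(np.max(np.abs(h_est - h_true)))            # close to 0: ambiguity resolved
```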
25

Développements algorithmiques pour l’amélioration des résultats de l’interférométrie RADAR en milieu urbain / Algorithmic developments for improving the results of RADAR interferometry in urban environments

Tlili, Ayoub 10 1900 (has links)
Le suivi des espaces urbanisés et de leurs dynamiques spatio-temporelles représente un enjeu important pour la population urbaine, autant sur le plan environnemental, économique et social. Avec le lancement des satellites portant des radars à synthèse d’ouverture de la nouvelle génération (TerraSAR-X, COSMO-SkyMed, ALOS, RADARSAT-2,Sentinel-1, Constellation RADARSAT), il est possible d’obtenir des séries temporelles d’images avec des résolutions spatiales et temporelles fines. Ces données multitemporelles aident à mieux analyser et décrire les structures urbaines et leurs variations dans l’espace et dans le temps. L’interférométrie par satellite est effectuée en comparant les phases des images RSO prises à différents passages du satellite au-dessus du même territoire. En optant pour des positions du satellite séparées d’une longue ligne de base, l’InSAR mène à la création des modèles numériques d’altitude (MNA). Si cette ligne de base est courte et à la limite nulle, nous avons le cas de l’interférométrie différentielle (DInSAR) qui mène à l’estimation du mouvement possible du terrain entre les deux acquisitions. Pour toutes les deux applications de l’InSAR, deux opérations sont importantes qui garantissent la génération des interférogrammes de qualité. La première est le filtrage du bruit omniprésent dans les phases interférométriques et la deuxième est le déroulement des phases. Ces deux opérations deviennent particulièrement complexes en milieu urbain où au bruit des phases s’ajoutent des fréquents sauts et discontinuités des phases dus à la présence des bâtiments et d’autres structures surélevées. L’objectif de cette recherche est le développement des nouveaux algorithmes de filtrage et de déroulement de phase qui puissent mieux performer que les algorithmes considérés comme référence dans ce domaine. Le but est d’arriver à générer des produits InSAR de qualité en milieu urbain. Concernant le filtrage, nous avons établi un algorithme qui est une nouvelle formulation du filtre Gaussien anisotrope adaptatif. Quant à l’algorithme de déroulement de phase, il est fondé sur la minimisation de l’énergie par un algorithme génétique ayant recours à une modélisation contextuelle du champ de phase. Différents tests ont été effectués avec des images RSO simulées et réelles qui démontrent le potentiel de nos algorithmes qui dépasse à maints égards celui des algorithmes standard. Enfin, pour atteindre le but de notre recherche, nous avons intégré nos algorithmes dans l’environnement du logiciel SNAP et appliqué l’ensemble de la procédure pour générer un MNA avec des images RADARSAT-2 de haute résolution d’un secteur de la Ville de Montréal (Canada) ainsi que des cartes des mouvements du terrain dans la région de la Ville de Mexico (Mexique) avec des images de Sentinel-1 de résolution plutôt moyenne. La comparaison des résultats obtenus avec des données provenant des sources externes de qualité a aussi démontré le fort potentiel de nos algorithmes. / The monitoring of urban areas and their spatiotemporal dynamics is an important issue for the urban population, at the environmental, economic, as well as social level. With the launch of satellites carrying next-generation synthetic aperture radars (TerraSAR-X, COSMO-SkyMed, ALOS, RADARSAT-2, Sentinel-1, Constellation RADARSAT), it is possible to obtain time series of images with fine temporal and spatial resolutions. These multitemporal data help to better analyze and describe urban structures, and their variations in space and time. 
Satellite interferometry is performed by comparing the phases of SAR images taken at different satellite passes over the same territory. By opting for satellite positions separated by a long baseline, InSAR leads to the creation of digital elevation models (DEM). If this baseline is short or, in the limit, zero, we have the case of differential interferometry (DInSAR), which leads to the estimation of possible ground movement between the two acquisitions. In both InSAR applications, two operations are essential to ensure the generation of quality interferograms. The first is the filtering of the ubiquitous noise in the interferometric phases and the second is the unwrapping of the phases. These two operations become particularly complex in urban areas, where frequent phase jumps and discontinuities caused by buildings and other elevated structures compound the phase noise. The objective of this research is the development of new filtering and phase unwrapping algorithms that can perform better than the algorithms considered as references in this field. The goal is to generate quality InSAR products in urban areas. Regarding filtering, we have established an algorithm that is a new formulation of the adaptive anisotropic Gaussian filter. As for the phase unwrapping algorithm, it is based on energy minimization by a genetic algorithm using contextual modelling of the phase field. Various tests carried out with simulated and real SAR images demonstrate the potential of our algorithms, which in many respects exceeds that of standard algorithms. Finally, to achieve the goal of our research, we integrated our algorithms into the SNAP software environment and applied the entire procedure to generate a DEM from high-resolution RADARSAT-2 images of an area of the City of Montreal (Canada), as well as maps of ground movement in the Mexico City region (Mexico) from medium-resolution Sentinel-1 images. Comparison of the results with quality data from external sources also demonstrated the strong potential of our algorithms.
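The adaptive anisotropic filter itself is not reproduced here, but the sketch below shows the trick that interferometric phase filtering generally relies on: smooth the complex phasor rather than the phase values, so that 2π wraps are not blurred. It uses a plain isotropic Gaussian from scipy.ndimage and synthetic fringes as a sanity check.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_interferometric_phase(phase, sigma=2.0):
    """Smooth a noisy wrapped interferometric phase.

    The phase is filtered through its complex phasor exp(1j*phase) so that
    the 2*pi discontinuities of the wrapped phase are not smeared.  This is a
    plain isotropic Gaussian; an adaptive anisotropic filter would additionally
    adjust its shape and orientation to the local fringe geometry.
    """
    phasor = np.exp(1j * phase)
    smoothed = gaussian_filter(phasor.real, sigma) + 1j * gaussian_filter(phasor.imag, sigma)
    return np.angle(smoothed)

# Synthetic wrapped fringes plus noise, as a quick sanity check.
y, x = np.mgrid[0:256, 0:256]
clean = np.angle(np.exp(1j * (0.05 * x + 0.02 * y)))             # wrapped fringe pattern
noisy = np.angle(np.exp(1j * (clean + 0.6 * np.random.randn(256, 256))))
filtered = filter_interferometric_phase(noisy, sigma=2.0)
err = np.angle(np.exp(1j * (filtered - clean)))                  # wrapped residual
print(np.std(err))                                               # well below 0.6
```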
26

Stochastic Nested Aggregation for Images and Random Fields

Wesolkowski, Slawomir Bogumil 27 March 2007 (has links)
Image segmentation is a critical step in building a computer vision algorithm that is able to distinguish between separate objects in an image scene. Image segmentation is based on two fundamentally intertwined components: pixel comparison and pixel grouping. In the pixel comparison step, pixels are determined to be similar or different from each other. In pixel grouping, those pixels which are similar are grouped together to form meaningful regions which can later be processed. This thesis makes original contributions to both of those areas. First, given a Markov Random Field framework, a Stochastic Nested Aggregation (SNA) framework for pixel and region grouping is presented and thoroughly analyzed using a Potts model. This framework is applicable in general to graph partitioning and discrete estimation problems where pairwise energy models are used. Nested aggregation reduces the computational complexity of stochastic algorithms such as Simulated Annealing to order O(N) while at the same time allowing local deterministic approaches such as Iterated Conditional Modes to escape most local minima in order to become a global deterministic optimization method. SNA is further enhanced by the introduction of a Graduated Models strategy which allows an optimization algorithm to converge to the model via several intermediary models. A well-known special case of Graduated Models is the Highest Confidence First algorithm which merges pixels or regions that give the highest global energy decrease. Finally, SNA allows us to use different models at different levels of coarseness. For coarser levels, a mean-based Potts model is introduced in order to compute region-to-region gradients based on the region mean and not edge gradients. Second, we develop a probabilistic framework based on hypothesis testing in order to achieve color constancy in image segmentation. We develop three new shading invariant semi-metrics based on the Dichromatic Reflection Model. An RGB image is transformed into an R'G'B' highlight invariant space to remove any highlight components, and only the component representing color hue is preserved to remove shading effects. This transformation is applied successfully to one of the proposed distance measures. The probabilistic semi-metrics show similar performance to vector angle on images without saturated highlight pixels; however, for saturated regions, as well as very low intensity pixels, the probabilistic distance measures outperform vector angle. Third, for interferometric Synthetic Aperture Radar image processing we apply the Potts model using SNA to the phase unwrapping problem. We devise a new distance measure for identifying phase discontinuities based on the minimum coherence of two adjacent pixels and their phase difference. As a comparison we use the probabilistic cost function of Carballo as a distance measure for our experiments.
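As background for the Potts-model machinery used throughout, here is a minimal pairwise Potts energy on a 4-neighbour grid minimized by plain Iterated Conditional Modes; the Stochastic Nested Aggregation scheme and the coherence-based discontinuity cost introduced in the thesis are not reproduced, and the toy data term and prototypes are invented for illustration.

```python
import numpy as np

def potts_energy(labels, image, k, beta=1.0):
    """Pairwise Potts energy: data term plus a penalty beta per disagreeing neighbour pair."""
    protos = np.linspace(0, 1, k)                    # toy label prototypes
    data = np.sum((image - protos[labels]) ** 2)
    smooth = np.sum(labels[:, 1:] != labels[:, :-1]) + np.sum(labels[1:, :] != labels[:-1, :])
    return data + beta * smooth

def icm_sweep(labels, image, k, beta=1.0):
    """One Iterated Conditional Modes sweep: greedily relabel each pixel in turn."""
    protos = np.linspace(0, 1, k)
    rows, cols = labels.shape
    for r in range(rows):
        for c in range(cols):
            best_l, best_e = labels[r, c], np.inf
            for l in range(k):
                e = (image[r, c] - protos[l]) ** 2
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and labels[rr, cc] != l:
                        e += beta
                if e < best_e:
                    best_l, best_e = l, e
            labels[r, c] = best_l
    return labels

# Two-region toy image with noise.
rng = np.random.default_rng(1)
img = np.zeros((40, 40))
img[:, 20:] = 1.0
img += rng.normal(0, 0.3, img.shape)
lab = rng.integers(0, 2, img.shape)
for _ in range(5):
    lab = icm_sweep(lab, img, k=2, beta=0.5)
print(potts_energy(lab, img, k=2, beta=0.5))
```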
28

Digitální metody zpracování trojrozměrného zobrazení v rentgenové tomografii a holografické mikroskopii / The Three-Dimensional Digital Imaging Methods for X-ray Computed Tomography and Digital Holographic Microscopy

Kvasnica, Lukáš January 2015 (has links)
This dissertation deals with methods for processing image data in X-ray microtomography and digital holographic microscopy. The work aims to achieve significant acceleration of the algorithms for tomographic reconstruction and image reconstruction in holographic microscopy by means of optimization and the use of massively parallel GPUs. In the field of microtomography, new GPU (graphics processing unit) accelerated implementations of filtered back projection and of back projection filtration of derived data are presented. Also presented is a technique for orientation normalization and evaluation of 3D tomographic data. In the part devoted to holographic microscopy, the individual steps of the complete image processing procedure are described. This part introduces a new, original technique for phase unwrapping and for correcting image phase damaged by the occurrence of optical vortices in the wrapped phase. The implementation of methods for compensating phase deformation and for tracking cells is then described. In conclusion, the Q-PHASE software is briefly introduced, a complete bundle of all the algorithms necessary for holographic microscope control and holographic image processing.
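The thesis's vortex-correction procedure is not reproduced here, but the standard residue test such methods build on is easy to sketch: summing the wrapped phase differences around every elementary 2x2 loop yields ±2π wherever a phase singularity (optical vortex) sits, and zero elsewhere. The synthetic single-vortex field below is purely illustrative.

```python
import numpy as np

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

def residues(phase):
    """Locate phase residues (optical vortices) in a wrapped phase map.

    For every elementary 2x2 loop, sum the four wrapped phase differences;
    the sum is 0 in clean regions and +-2*pi at a singularity.  Returns an
    integer charge map of shape (rows - 1, cols - 1).
    """
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge, left -> right
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge, top -> bottom
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge, right -> left
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge, bottom -> top
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A synthetic vortex: the phase circulates once around a point near the centre.
y, x = np.mgrid[-64:64, -64:64]
vortex = np.angle((x + 0.5) + 1j * (y + 0.5))    # wrapped phase with one singularity
charge = residues(vortex)
print(charge.sum(), np.argwhere(charge != 0))    # total charge 1, at the central loop
```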
29

Phase Unwrapping MRI Flow Measurements / Fasutvikning av MRT-flödesmätningar

Liljeblad, Mio January 2023 (has links)
Magnetic resonance images (MRI) are acquired by sampling the current produced by an induced electromotive force (EMF). The EMF is induced by the changing flux of the net magnetic field produced by coherent nuclear spins with intrinsic magnetic dipole moments. The spins are excited by (non-ionizing) radio-frequency electromagnetic radiation in conjunction with stationary and gradient magnetic fields. These images reveal detailed internal morphological structures and enable functional assessment of the body, which can help diagnose a wide range of medical conditions. The aim of this project was to unwrap phase contrast cine magnetic resonance images, targeting the great vessels. Velocities are encoded in the angular phase range [-π, π] radians, up to the maximum encoded velocity (venc). This may result in aliasing if the venc is set too low by the MRI personnel. Aliased images yield inaccurate cardiac stroke volume measurements and therefore require acquisition retakes. The retakes might be avoided if the images could instead be unwrapped in post-processing. Using computer vision methods, the angular phase of flow measurements, as well as that of retrospectively wrapped image sets, was unwrapped. The performance of three algorithms was assessed: the Laplacian algorithm, sequential tree-reweighted message passing, and iterative graph cuts. The associated energy formulation was also evaluated. Iterative graph cuts was shown to be the most robust with respect to the number of wraps, and its energies correlated with the errors. This thesis shows that there is potential to reduce the number of acquisition retakes, although MRI personnel still need to verify that the unwrapping performance is satisfactory. Given the promising results of iterative graph cuts, it would next be valuable to investigate the performance of a globally optimal surface estimation algorithm.
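As a minimal illustration of the aliasing problem (this is not one of the three algorithms evaluated in the thesis), the sketch below wraps a parabolic velocity profile whose peak exceeds the venc and recovers it with NumPy's period-aware unwrap (available from NumPy 1.21); it only works along a profile where neighbouring samples differ by less than the venc.

```python
import numpy as np

# Velocity is encoded as phase: phi = pi * v / venc, so any |v| > venc wraps
# (aliases) back into the range (-venc, venc].  This 1D sketch wraps a
# parabolic through-plane velocity profile and undoes the wrapping.
venc = 60.0                                   # cm/s, deliberately set too low
r = np.linspace(-1.0, 1.0, 201)               # normalised vessel radius
v_true = 100.0 * (1.0 - r ** 2)               # parabolic profile, peak 100 cm/s

phi = np.pi * v_true / venc                   # velocity-to-phase encoding
phi_wrapped = np.angle(np.exp(1j * phi))      # acquisition wraps phase to (-pi, pi]
v_aliased = venc * phi_wrapped / np.pi        # aliased velocities in (-venc, venc]

# Neighbouring samples are assumed to differ by less than venc, so jumps of
# about 2*venc are recognised as wraps and corrected.
v_unwrapped = np.unwrap(v_aliased, period=2 * venc)
print(np.max(np.abs(v_unwrapped - v_true)))   # close to 0: profile recovered
```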
