1 |
Boundary-constrained inverse consistent image registration and its applications
Kumar, Dinesh, 01 May 2011 (has links)
This dissertation presents a new inverse consistent image registration (ICIR) method
called boundary-constrained inverse consistent image registration (BICIR).
ICIR algorithms jointly estimate the
forward and reverse transformations between two images while minimizing
the inverse consistency error (ICE).
The ICE at a point is defined as the distance between
the starting and ending location of a point mapped through the forward
transformation and then the reverse transformation.
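
As a concrete illustration of this definition, the following is a minimal sketch (not the dissertation's code): it computes the pointwise ICE for a pair of dense 2-D displacement fields, where the array layout, the use of linear interpolation, and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(fwd, rev):
    """Pointwise ICE: distance between x and x mapped forward, then backward.
    fwd, rev: (2, H, W) displacement fields in row/column units (assumed layout)."""
    _, h, w = fwd.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = np.stack([rows, cols]).astype(float)               # identity grid
    y = x + fwd                                             # forward map: y = x + u(x)
    rev_at_y = np.stack([map_coordinates(rev[d], y, order=1, mode="nearest")
                         for d in range(2)])               # reverse displacement sampled at y
    round_trip = y + rev_at_y                               # map back toward the start
    return np.linalg.norm(round_trip - x, axis=0)

# Toy example: a small random field paired with its negation (an approximate inverse).
fwd = 0.5 * np.random.randn(2, 64, 64)
print("mean ICE:", inverse_consistency_error(fwd, -fwd).mean())
```
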
The novelty of the BICIR method is that a region of interest (ROI) in one
image is registered with its corresponding ROI in the other image. This is accomplished
by first registering the boundaries of the ROIs and then matching the
interiors of the ROIs using intensity registration.
The advantages of this approach include better registration
at the boundary of the ROI, elimination of registration errors caused by
registering regions outside the ROI, and, in principle, reduced
computation time since only the ROIs are registered.
The first step of the BICIR algorithm is to inverse consistently
register the boundaries of the ROIs. The resulting forward and reverse
boundary transformations are extended to the entire ROI domains
using the Element Free Galerkin Method (EFGM). The transformations
produced by the EFGM are then made inverse consistent by iteratively
minimizing the ICE. These transformations are used as initial conditions
for inverse-consistent intensity-based registration of the ROI interiors.
Weighted extended B-splines (WEB-splines) are used to parameterize the
transformations. WEB-splines are used instead of B-splines since
WEB-splines can be defined over an arbitrarily shaped ROI.
Results are presented showing that the BICIR method provides better
registration of 2D and 3D anatomical images than the small-deformation,
inverse-consistent, linear-elastic (SICLE) image registration algorithm which
registers entire images. Specifically, the BICIR method produced
registration results with lower similarity cost, reduced boundary
matching error, increased ROI relative overlap,
and lower inverse consistency error than the SICLE algorithm.
|
2 |
A correspondence framework for surface matching algorithms
Planitz, Brigit Maria, January 2004 (has links)
Computer vision tasks such as three-dimensional (3D) registration, 3D modelling, and 3D object recognition are becoming more and more useful in industry, with applications such as reverse CAD engineering and robot navigation. Each of these applications uses correspondence algorithms as part of its process. Correspondence algorithms are required to compute accurate mappings between artificial surfaces that represent actual objects or scenes. In industry, inaccurate correspondence incurs costs in time and labour, and can also compromise safety. Therefore, it is essential to select an appropriate correspondence algorithm for a given surface matching task. However, current research in the area of surface correspondence is hampered by an abundance of application-specific algorithms, and by the lack of a uniform terminology or consistent model for selecting and/or comparing algorithms.
This dissertation presents a correspondence framework for surface matching algorithms. The framework is a conceptual model that is implementable. It is designed to assist in the analysis, comparison, development, and implementation of correspondence algorithms, which are essential tasks when selecting or creating an algorithm for a particular application.
The primary contribution of the thesis is the correspondence framework presented as a conceptual model for surface matching algorithms. The model provides a systematic method for analysing, comparing, and developing algorithms. The dissertation demonstrates that dividing correspondence computation into five stages (region definition, feature extraction, feature representation, local matching, and global matching) makes the task smaller and more manageable. It also shows that the same stages of different algorithms are directly comparable. Furthermore, novel algorithms can be created by simply connecting compatible stages of different algorithms. Finally, new ideas can be synthesised by creating only the stages to be tested, without developing a whole new correspondence algorithm.
The secondary contribution is the correspondence framework presented as a software design tool for surface matching algorithms. The framework is shown to reduce the complexity of implementing existing algorithms within it. This is done by encoding algorithms in a stage-wise procedure, whereby an algorithm is separated into the five stages of the framework. The software design tool is shown to validate the integrity of existing algorithms restructured within it, and also to provide an efficient basis for creating new algorithms.
The third contribution is the specification of a quality metric for algorithm comparison. The metric is used to assess the accuracy of the outcomes of a number of correspondence algorithms, which are used to match a wide variety of input surface pairs. The metric is used to demonstrate that each algorithm is application specific, and to highlight the types of surfaces that can be matched by each algorithm. Thus, it is shown that algorithms implemented within the framework can be selected for particular surface correspondence tasks.
The final contribution made in this dissertation is the expansion of the correspondence framework beyond the surface matching domain. The correspondence framework is maintained in its original form and is used for image matching algorithms. Existing algorithms from three image matching applications are implemented and modified using the framework.
It is shown how the framework provides a consistent means and uniform terminology for developing both surface and image matching algorithms. In summary, this thesis presents a correspondence framework for surface matching algorithms. The framework is general, encompassing a comprehensive set of algorithms, and flexible, expanding beyond surface matching to major image matching applications.
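
As a hedged illustration of the five-stage decomposition described above (not the thesis' software; all names, signatures, and placeholder stages below are chosen purely for exposition), the stages can be modelled as interchangeable callables composed into a pipeline:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CorrespondencePipeline:
    # One callable per stage; stages from different algorithms can be mixed freely.
    region_definition: Callable[[Any], Any]
    feature_extraction: Callable[[Any], Any]
    feature_representation: Callable[[Any], Any]
    local_matching: Callable[[Any, Any], Any]
    global_matching: Callable[[Any], Any]

    def run(self, surface_a, surface_b):
        ra, rb = self.region_definition(surface_a), self.region_definition(surface_b)
        fa, fb = self.feature_extraction(ra), self.feature_extraction(rb)
        da, db = self.feature_representation(fa), self.feature_representation(fb)
        candidates = self.local_matching(da, db)   # per-feature candidate matches
        return self.global_matching(candidates)   # globally consistent mapping

# Placeholder stages; real stages (e.g., a spin-image representation or an
# ICP-style global matcher) would plug in here without changing the pipeline.
identity = lambda x: x
pipeline = CorrespondencePipeline(identity, identity, identity,
                                  lambda a, b: list(zip(a, b)), identity)
print(pipeline.run([1, 2, 3], [4, 5, 6]))
```
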
|
3 |
Automated parcellation on the surface of human cerebral cortex generated from MR images
Li, Wen, 01 May 2012 (has links)
The human cerebral cortex is a highly folded structure that supports the complex cognitive abilities of humans. The cortex is divided into regions according to its cytoarchitectural characteristics, which can be approximated by the folding pattern of the cortex. Psychiatric and neurological diseases, such as Huntington's disease or schizophrenia, are often associated with structural changes in the cerebral cortex. Detecting structural changes in different regions of the cerebral cortex can provide insight into disease biology, progression, and response to treatment. The delineation of anatomical regions on the cerebral cortex is time-intensive if performed manually; therefore, automated methods are needed to perform this delineation. Magnetic Resonance Imaging (MRI) is commonly used to explore structural changes in patients with psychiatric and neurological diseases.
This dissertation proposes a fast and reliable method to automatically parcellate the cortical surface generated from MR images. A fully automated pipeline has been built to process MR images and generate cortical surfaces associated with parcellation labels. First, genus-zero cortical surfaces for each hemisphere of a subject are generated from the MR images. The surface is generated at the parametric boundary between gray matter and white matter. Geometry features are calculated for each cortical surface as scalar values to drive a multi-resolution spherical registration that aligns two cortical surfaces in the spherical domain. Then, the labels on a subject's cortical surface are estimated by registering the subject's cortical surface with a population atlas and combining the prior probabilities on the atlas with the subject's geometry features. The automated parcellation has been tested on a group of subjects with varied cerebral cortex structures. The results show that the proposed method is fast (about 3 hours to parcellate one hemisphere) and accurate (weighted average Dice ~0.86). This dissertation is organized as follows: the first chapter is the introduction, covering the motivation, background, and significance of the study. The second chapter describes the whole pipeline of the automated surface parcellation and focuses on the technical details of every method used in the pipeline. The third chapter presents the results achieved in this study, and the fourth chapter discusses the results and draws conclusions.
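
The following sketch is illustrative only (the array shapes, the Gaussian feature likelihood, and all names are assumptions, not the dissertation's implementation): it shows the two ingredients referred to above, choosing each vertex label from atlas priors combined with a geometry-feature likelihood, and scoring the result against a manual parcellation with a weighted Dice overlap.

```python
import numpy as np

def map_labels(priors, features, means, stds):
    """priors: (L, V) atlas prior per label and vertex; features: (V,) scalar
    geometry feature; means/stds: (L,) per-label feature statistics (assumed Gaussian)."""
    likelihood = np.exp(-0.5 * ((features[None, :] - means[:, None]) / stds[:, None]) ** 2)
    return np.argmax(priors * likelihood, axis=0)   # maximum a posteriori label per vertex

def weighted_dice(auto, manual, n_labels):
    """Dice overlap per label, averaged with weights proportional to region size."""
    dices, weights = [], []
    for l in range(n_labels):
        a, m = auto == l, manual == l
        denom = a.sum() + m.sum()
        if denom == 0:
            continue
        dices.append(2.0 * np.logical_and(a, m).sum() / denom)
        weights.append(m.sum())
    return np.average(dices, weights=weights)

# Tiny example with 3 labels and 6 vertices.
priors = np.array([[.6, .6, .2, .1, .1, .1],
                   [.3, .3, .7, .7, .2, .2],
                   [.1, .1, .1, .2, .7, .7]])
features = np.array([0.1, 0.2, 1.0, 1.1, 2.0, 1.9])
auto = map_labels(priors, features, means=np.array([0., 1., 2.]), stds=np.array([.5, .5, .5]))
print(auto, weighted_dice(auto, np.array([0, 0, 1, 1, 2, 2]), 3))
```
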
|
4 |
Part-based recognition of 3-D objects with application to shape modeling in hearing aid manufacturing
Zouhar, Alexander, 12 January 2016 (has links) (PDF)
To meet the needs of people with hearing loss, hearing aids today are custom designed. Increasingly accurate 3-D scanning technology has contributed to the transition from conventional production scenarios to software-based processes. Nonetheless, a tremendous amount of manual work is involved in transforming an input 3-D surface mesh of the outer ear into a final hearing aid shape. This manual work is often cumbersome and requires a lot of experience, which is why automatic solutions are of high practical relevance.
This work is concerned with the recognition of 3-D surface meshes of ear implants. In particular, we present a semantic part-labeling framework which significantly outperforms existing approaches for this task. We make at least three contributions which may also prove useful for other classes of 3-D meshes.
Firstly, we validate the discriminative performance of several local descriptors and show that the majority of them perform poorly on our data, with the exception of 3-D shape contexts. The reason for this is that many local descriptor schemes are not rich enough to capture subtle variations in the form of bends, which are typical of organic shapes.
Secondly, based on the observation that the left and right outer ears of an individual look very similar, we ask how similar ear shapes are among arbitrary individuals. In this work, we define a notion of distance between ear shapes as a building block of a non-parametric shape model of the ear, in order to better handle the anatomical variability in ear implant labeling.
Thirdly, we introduce a conditional random field model with a variety of label priors to facilitate the semantic part-labeling of 3-D meshes of ear implants. In particular, we introduce the concept of a global parametric transition prior to enforce transition boundaries between adjacent object parts with an a priori known parametric form. In this way we are able to overcome the issue of inadequate geometric cues (e.g., ridges, bumps, concavities) as natural indicators for the presence of part boundaries.
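
As a rough, hypothetical sketch of such a labeling energy (the plane-shaped transition prior, the weights, and the toy data below are assumptions made for illustration, not the thesis' actual model), a cost with per-face unary terms and a pairwise term that favours label transitions close to an a-priori parametric boundary could look like this:

```python
import numpy as np

def labeling_energy(labels, unary, edges, centroids, plane_n, plane_d, lam=1.0):
    """labels: (F,) int per face; unary: (F, L) label costs; edges: list of (i, j)
    adjacent faces; centroids: (F, 3); plane_n, plane_d: the parametric transition boundary."""
    e = unary[np.arange(len(labels)), labels].sum()
    for i, j in edges:
        if labels[i] != labels[j]:
            midpoint = 0.5 * (centroids[i] + centroids[j])
            dist = abs(np.dot(plane_n, midpoint) + plane_d)  # distance to the prior boundary
            e += lam * (1.0 + dist)                          # transitions are cheap near the prior
    return e

# Toy example: 4 faces in a row, with the boundary prior between faces 1 and 2.
unary = np.array([[0.1, 2.0], [0.2, 1.5], [1.8, 0.1], [2.0, 0.2]])
edges = [(0, 1), (1, 2), (2, 3)]
centroids = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
print(labeling_energy(np.array([0, 0, 1, 1]), unary, edges, centroids,
                      plane_n=np.array([1., 0., 0.]), plane_d=-1.5))
```
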
The last part of this work offers an outlook on possible extensions of our methods, in particular the development of 3-D descriptors that are fast to compute while at the same time rich enough to capture the characteristic differences between objects of the same class.
|
5 |
Registro automático de superfícies usando spin-image / Automatic surface registration using spin-images
Vieira, Thales Miranda de Almeida, 06 February 2007 (has links)
This work describes a method based on three stages for reconstructing a model from a
given set of scanned meshes obtained from 3D scanners. Meshes scanned from different
scanner viewpoints are represented in local coordinate systems. Therefore,
for final model reconstruction, an alignment of the meshes is required. The most popular
algorithm for point cloud registration is the ICP algorithm. However, ICP requires an
initial estimate of the mesh alignment, which is often done manually. To automate
this process, this work uses a surface representation called spin-images to identify overlap
areas between the meshes and to estimate their alignment. After this initial registration,
the alignment is refined by the ICP algorithm, and finally the model is reconstructed
using a method called VRIP. / Fundação de Amparo a Pesquisa do Estado de Alagoas
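
The refinement stage can be illustrated with a minimal ICP sketch (not the code of this work; the closed-form SVD update and the toy data are assumptions). Here the identity stands in for the coarse spin-image estimate, and the alignment is improved by alternating nearest-neighbour matching with a rigid update:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, iters=20):
    """Return (R, t) that approximately aligns `source` onto `target`."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                 # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)    # cross-covariance (Kabsch)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: the recovered (R, t) should approximately undo a small perturbation.
rng = np.random.default_rng(0)
target = rng.normal(size=(200, 3))
angle = np.pi / 12
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.1, -0.2, 0.05])
R, t = icp_refine(source, target)
print(np.round(R, 3), np.round(t, 3))
```
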
|
6 |
Contributions aux problèmes de l'étalonnage extrinsèque d'affichages semi-transparents pour la réalité augmentée et de la mise en correspondance dense d'images / Contributions to the problems of extrinsic calibration of semi-transparent displays for augmented reality and dense image matching
Braux-Zin, Jim, 26 September 2014 (has links)
Augmented reality is the process of inserting virtual elements into a real scene, observed through a screen. Augmented reality systems can take different forms to reach the desired balance between three criteria: accuracy, latency and robustness. Three main components can be identified: localization, reconstruction and display. The contributions of this thesis focus on display and reconstruction. Most augmented reality systems use non-transparent screens as they are widely available. However, for critical applications such as surgery or driving assistance, the user can never be isolated from reality. We address this problem by proposing a new “augmented tablet” system with a semi-transparent screen. Such a system needs a suitable calibration scheme: to correctly align the displayed augmentations and reality, one needs to know at every moment the poses of the user and of the observed scene with respect to the screen. Two tracking devices (user and scene) are thus necessary, and the system calibration aims to compute the poses of those devices with respect to the screen. The calibration process set up in this thesis is as follows: the user indicates the apparent on-screen projections of reference points from a known 3D object; the poses to estimate should then minimize the 2D on-screen distance between those projections and the ones computed by the system. This is a non-convex problem that is difficult to solve without a good initialization. We develop a direct estimation method based on computing the extrinsic parameters of virtual cameras. These are defined by their optical centers, which coincide with the user positions, and by their common focal plane, the screen plane. The user-entered projections are then the 2D observations of the reference points in those virtual cameras. A symmetric argument allows one to define virtual cameras centered on the reference points and “looking at” the user positions. These initial estimates can then be refined with a bundle adjustment. 
Meanwhile, 3D reconstruction is based on the triangulation of matches between images. Those matches can be sparse, when computed by detection and description of image features, or dense, when computed through the minimization of a cost function over the whole image. A dense correspondence field is preferable because it makes it possible to reconstruct a 3D surface, which is useful in particular for realistic handling of occlusions in augmented reality. However, such a field is usually estimated with variational methods that minimize a convex cost function using local information. Those methods are accurate but subject to local minima, and thus limited to small deformations. In contrast, sparse matches can be made very robust by using adequately discriminative descriptors. We propose to combine the advantages of the two approaches by adding a feature-based term to a dense variational method. It helps prevent the optimization from falling into local minima without degrading the final accuracy. 
Our feature-based term is suited to features with non-integer coordinates and can handle point or line-segment matches while implicitly filtering false matches. We also introduce comprehensive handling of occlusions so as to support large deformations; in particular, we have adapted and generalized a local method for detecting self-occlusions. Results on 2D optical flow and wide-baseline stereo disparity estimation are competitive with the state of the art, with a simpler and usually faster method. This shows that our contributions enable new applications of variational methods without degrading their accuracy. Moreover, the weak coupling between the components allows great flexibility and generality. This allowed us to also transpose the proposed method to the problem of non-rigid surface registration, where it outperforms state-of-the-art methods.
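
A hedged sketch of such a combined objective follows (the quadratic forms, weights, and bilinear interpolation are illustrative assumptions, not the exact functional of the thesis): a dense brightness-constancy data term, a smoothness term on the flow gradients, and a sparse term that pulls the flow toward precomputed feature matches at non-integer positions. Only the energy evaluation is shown; a variational solver would minimize it over the flow field.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def energy(flow, im1, im2, matches, alpha=0.1, beta=1.0):
    """flow: (2, H, W) in (row, col) units; matches: list of ((r1, c1), (r2, c2))
    point pairs in im1 and im2, possibly at non-integer coordinates."""
    h, w = im1.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    warped = map_coordinates(im2, [rows + flow[0], cols + flow[1]], order=1, mode="nearest")
    data = np.sum((warped - im1) ** 2)                        # brightness constancy
    smooth = sum(np.sum(np.gradient(flow[d], axis=(0, 1))[k] ** 2)
                 for d in range(2) for k in range(2))         # quadratic smoothness penalty
    sparse = 0.0
    for (r1, c1), (r2, c2) in matches:                        # feature-based term
        u = [map_coordinates(flow[d], [[r1], [c1]], order=1)[0] for d in range(2)]
        sparse += (u[0] - (r2 - r1)) ** 2 + (u[1] - (c2 - c1)) ** 2
    return data + alpha * smooth + beta * sparse

# Toy example: a one-column shift and the flow that explains it.
im1 = np.random.rand(32, 32)
im2 = np.roll(im1, 1, axis=1)
flow = np.zeros((2, 32, 32))
flow[1] = 1.0
print(energy(flow, im1, im2, matches=[((10.5, 10.5), (10.5, 11.5))]))
```
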
|
7 |
Structural Surface Mapping for Shape Analysis
Razib, Muhammad, 19 September 2017 (has links)
Natural surfaces are usually associated with feature graphs, such as the cortical surface with its anatomical atlas structure. Such a feature graph subdivides the whole surface into meaningful sub-regions. Existing brain mapping and registration methods do not integrate anatomical atlas structures; as a result, it is difficult to visualize and compare the atlas structures with existing brain mappings. Moreover, existing brain registration methods cannot guarantee the best possible alignment of the cortical regions, which would enable more accurate shape similarity metrics for neurodegenerative disease analysis, e.g., Alzheimer’s disease (AD) classification. Also, little attention has been paid to tackling surface parameterization and registration with graph constraints in a rigorous way, even though such constrained mappings have many applications in graphics, e.g., surface and image morphing.
This dissertation explores structural mappings for shape analysis of surfaces using the feature graphs as constraints. (1) First, we propose structural brain mapping which maps the brain cortical surface onto a planar convex domain using Tutte embedding of a novel atlas graph and harmonic map with atlas graph constraints to facilitate visualization and comparison between the atlas structures. (2) Next, we propose a novel brain registration technique based on an intrinsic atlas-constrained harmonic map which provides the best possible alignment of the cortical regions. (3) After that, the proposed brain registration technique has been applied to compute shape similarity metrics for AD classification. (4) Finally, we propose techniques to compute intrinsic graph-constrained parameterization and registration for general genus-0 surfaces which have been used in surface and image morphing applications.
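
As a small, hypothetical sketch of the Tutte-embedding step (uniform weights, a toy graph, and all names below are assumptions rather than the dissertation's implementation): boundary vertices are pinned to a convex polygon and interior vertices are placed at the average of their neighbours by solving a sparse linear system, yielding an embedding in a planar convex domain.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def tutte_embedding(n_vertices, edges, boundary):
    """Planar embedding with uniform (Tutte) weights: the boundary cycle is pinned
    to the unit circle; interior vertices become averages of their neighbours."""
    adj = sp.lil_matrix((n_vertices, n_vertices))
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    adj = adj.tocsr()
    deg = np.asarray(adj.sum(axis=1)).ravel()
    L = (sp.diags(deg) - adj).tocsr()                    # graph Laplacian
    theta = 2 * np.pi * np.arange(len(boundary)) / len(boundary)
    pos = np.zeros((n_vertices, 2))
    pos[boundary] = np.column_stack([np.cos(theta), np.sin(theta)])
    interior = np.setdiff1d(np.arange(n_vertices), boundary)
    A = L[interior][:, interior].tocsc()
    b = -L[interior][:, boundary] @ pos[boundary]        # boundary terms moved to the RHS
    for d in range(2):                                   # solve x and y coordinates separately
        pos[interior, d] = spla.spsolve(A, b[:, d])
    return pos

# Toy atlas-like graph: a 4-cycle boundary around one interior vertex.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
print(tutte_embedding(5, edges, boundary=np.array([0, 1, 2, 3])))
```
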
|