Traditional open interventions have progressively been replaced by minimally invasive techniques.
Most notably, direct visual feedback has given way to indirect, image-based feedback,
leading to the wide use of image-guided interventions (IGIs). One process essential to all IGIs is
aligning 3D data with 2D images of the patient through a procedure called 3D-2D registration, performed
during the intervention to provide better guidance and richer information. When 3D data are unavailable,
a realistic 3D patient-specific model must be constructed from a few 2D images.
The dominant methods, which use only image intensity, have a narrow convergence range and are
not robust to foreign objects that appear in the 2D images but are absent from the 3D data. Feature-based
methods partly address these problems, but most of them rely heavily on a set of "best" paired
correspondences and require clean image features. Moreover, the optimization procedures used in
both kinds of methods are not efficient.
In this dissertation, two topics are studied and novel algorithms proposed: contour
extraction from X-ray images and feature-based rigid/deformable 3D-2D registration.
Inspired by biological and neuropsychological characteristics of the primary visual cortex (V1), a
contour detector is proposed for simultaneously extracting edges and lines in images. The synergy
of V1 neurons is mimicked using phase congruency and tensor voting. Evaluations and comparisons
showed that the proposed method outperforms several commonly used methods and that its results are
consistent with human perception. Moreover, the cumbersome "fine-tuning" of parameter values is
not always necessary in the proposed method.
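The phase-congruency component can be illustrated with a minimal, generic 1-D sketch. This follows Kovesi's classic formulation rather than the dissertation's exact detector, and the log-Gabor filter-bank parameters (`num_scales`, `min_wavelength`, `sigma_on_f`) are illustrative assumptions: phase congruency marks points where local phase agrees across filter scales, so it responds to both step edges and lines, independently of contrast.

```python
import numpy as np

def log_gabor_bank(n, num_scales=4, min_wavelength=4.0, mult=2.0, sigma_on_f=0.55):
    """Frequency-domain 1-D log-Gabor filters over a few scales (illustrative values)."""
    abs_f = np.abs(np.fft.fftfreq(n))
    abs_f[0] = 1.0                      # avoid log(0); DC is zeroed below
    bank = []
    wavelength = min_wavelength
    for _ in range(num_scales):
        f0 = 1.0 / wavelength
        g = np.exp(-(np.log(abs_f / f0) ** 2) / (2.0 * np.log(sigma_on_f) ** 2))
        g[0] = 0.0                      # no DC response
        bank.append(g)
        wavelength *= mult
    return bank

def phase_congruency_1d(signal, eps=1e-4):
    """PC(x) = |sum_s E_s(x)| / (sum_s |E_s(x)| + eps), where E_s is the
    quadrature (analytic) log-Gabor response at scale s.  PC is near 1
    where local phase agrees across scales (edges, lines) and near 0 in
    smooth regions, regardless of contrast."""
    n = len(signal)
    F = np.fft.fft(signal)
    analytic = np.where(np.fft.fftfreq(n) > 0, 2.0, 0.0)  # keep positive freqs only
    resp_sum = np.zeros(n, dtype=complex)
    amp_sum = np.zeros(n)
    for g in log_gabor_bank(n):
        e = np.fft.ifft(F * g * analytic)   # complex quadrature response
        resp_sum += e
        amp_sum += np.abs(e)
    return np.abs(resp_sum) / (amp_sum + eps)

# a step edge yields high phase congruency at the discontinuity,
# whatever the step's height (contrast invariance)
x = np.zeros(256)
x[128:] = 1.0
pc = phase_congruency_1d(x)
# pc peaks near index 128 (and at the circular wrap-around near index 0)
```

The dissertation additionally feeds such responses into tensor voting to enforce the contour continuity that V1 neuron interactions provide; that stage is omitted here.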
An extensible feature-based 3D-2D registration framework is proposed by rigorously formulating
the registration as a probability density estimation problem and solving it via a generalized expectation-maximization
algorithm. It optimizes the transformation directly and treats correspondences
as nuisance parameters. This differs significantly from almost all feature-based methods in the
literature, which first single out a set of "best" correspondences and then estimate a transformation
from them. This property frees the proposed algorithm from reliance on paired correspondences
and thus makes it inherently robust to outliers. The framework can be adapted as a point-based method
whose major advantages are 1) independence from paired correspondences, 2) accurate registration using
a single image, and 3) robustness to initialization and to a large number of outliers. Extended to
a contour-based method, it differs from other contour-based methods mainly in that 1) it does not
rely on correspondences and 2) it incorporates gradient information via a statistical model instead of
a weighting function. Adapted to model-based deformable registration and surface reconstruction,
our method solves the problem using maximum penalized likelihood estimation. Unlike almost
all other methods, which handle the registration and the deformation separately and optimize them sequentially,
our method optimizes them simultaneously. The framework was evaluated in two example
clinical applications and a simulation study for the point-based, contour-based, and surface-reconstruction
variants, respectively. Experiments showed sub-degree and sub-millimeter registration accuracy and
superiority over state-of-the-art methods.
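The core idea of treating correspondences as nuisance parameters inside EM can be sketched for the simplest (rigid, point-based) case. This is a minimal CPD-style illustration, not the dissertation's algorithm: the E-step computes soft correspondence probabilities under a Gaussian mixture with a uniform outlier class, and the M-step solves the rigid transform in closed form, so no hard pairing is ever required. All names and parameters (`em_rigid_register`, `w`, the iteration count) are assumptions for this sketch.

```python
import numpy as np

def em_rigid_register(X, Y, n_iters=100, w=0.1):
    """Rigid registration of point set Y onto X with latent correspondences.
    E-step: responsibilities P[m, n] under a GMM plus a uniform outlier
    class of weight w.  M-step: weighted Procrustes for (R, t), then the
    Gaussian variance sigma^2.  Correspondences are never hard-assigned."""
    N, M, D = len(X), len(Y), X.shape[1]
    R, t = np.eye(D), np.zeros(D)
    sigma2 = ((X[None] - Y[:, None]) ** 2).sum() / (D * M * N)
    for _ in range(n_iters):
        TY = Y @ R.T + t
        # E-step: soft correspondence probabilities (M x N)
        d2 = ((X[None] - TY[:, None]) ** 2).sum(axis=2)
        num = np.exp(-d2 / (2.0 * sigma2))
        c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
        P = num / (num.sum(axis=0, keepdims=True) + c)
        Np = P.sum()
        # M-step: weighted Procrustes (Kabsch) under the soft weights
        mu_x = P.sum(axis=0) @ X / Np
        mu_y = P.sum(axis=1) @ Y / Np
        Xh, Yh = X - mu_x, Y - mu_y
        A = Xh.T @ P.T @ Yh                                     # D x D
        U, _, Vt = np.linalg.svd(A)
        S = np.diag([1.0] * (D - 1) + [np.linalg.det(U @ Vt)])  # keep det(R) = +1
        R = U @ S @ Vt
        t = mu_x - R @ mu_y
        TY = Y @ R.T + t
        sigma2 = max(((P * ((X[None] - TY[:, None]) ** 2).sum(axis=2)).sum()
                      / (Np * D)), 1e-9)
    return R, t

# synthetic check: recover a known rigid transform without any pairing
rng = np.random.default_rng(0)
Y = rng.uniform(-1.0, 1.0, size=(60, 2))
theta = np.deg2rad(15.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.3, -0.2])
X = Y @ R_true.T + t_true
R_est, t_est = em_rigid_register(X, Y)
```

Because the E-step marginalizes over all correspondences, shuffling or contaminating one point set does not require re-pairing; this is the property that makes the framework robust to outliers and independent of "best" matches. The dissertation's contour-based and deformable extensions replace the rigid M-step with gradient-aware and penalized-likelihood updates.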
It is expected that our algorithms, when thoroughly validated, can serve as valuable tools for
image-guided interventions.
Version | published_or_final_version |
Department | Orthopaedics and Traumatology |
Degree | Doctoral, Doctor of Philosophy |
Identifier | oai:union.ndltd.org:HKU/oai:hub.hku.hk:10722/183632 |
Date | January 2011 |
Creators | Kang, Xin, 康欣 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Source Sets | Hong Kong University Theses |
Language | English |
Detected Language | English |
Type | PG_Thesis |
Source | http://hub.hku.hk/bib/B48079625 |
Rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. Creative Commons: Attribution 3.0 Hong Kong License |
Relation | HKU Theses Online (HKUTO) |