11

Development of statistical shape and intensity models of eroded scapulae to improve shoulder arthroplasty

Sharif Ahmadian, Azita 22 December 2021 (has links)
Reverse Total Shoulder Arthroplasty (RTSA) is an effective surgical alternative to conventional total shoulder arthroplasty for patients with severe rotator cuff tears and glenoid erosion. To help optimize RTSA design, it is necessary to gain insight into the geometry of glenoid erosions and consider their unique morphology across the entire bone. One of the most powerful tools for systematically quantifying and visualizing the variation of bone geometry throughout a population is Statistical Shape Modeling (SSM); this method assesses variation in the full shape of a bone, rather than in discrete anatomical features, which is very useful for identifying abnormalities, planning surgeries, and improving implant designs. Many scapula SSMs have recently been presented in the literature; however, each has been created using normal, healthy bones. Creating a scapula SSM derived exclusively from patients exhibiting complex glenoid bone erosions is therefore both critical and significantly challenging. In addition, several studies have quantified scapular bone properties in patients with complex glenoid erosion; however, because of their discrete nature, these analyses cannot serve as the basis for Finite Element Modeling (FEM). Thus, a need exists to systematically quantify the variation of bone properties in a glenoid erosion patient population using a method that captures variation across the entire bone. This can be achieved using Statistical Intensity Modeling (SIM), which can then generate scapula FEMs with realistic bone properties for the evaluation of orthopaedic implants. Using an SIM enables researchers to generate models with bone properties that represent a specific, known portion of the population variation, making the findings more generalizable.
Accordingly, the main purposes of this research are to develop an SSM that systematically and mathematically quantifies the variation in geometry of scapulae with severe glenoid erosion, and an SIM that determines the main modes of variation in bone property distribution for use in future FEM studies. To draw meaningful statistical conclusions from the dataset, we need to compare and relate corresponding parts of the scapula. To achieve this correspondence, 3D triangulated mesh models of 61 scapulae were created from pre-operative CT scans of patients treated with RTSA, and a Non-Rigid (NR) registration method was then used to morph one atlas point cloud to the shapes of all other bones. However, the more complex the shape, the more difficult it is to maintain good correspondence. To overcome this challenge, we adapted and optimized a NR Iterative Closest Point (ICP) method and applied it to the 61 eroded scapulae, resulting in each bone shape having an identical mesh structure (i.e., the same number and anatomical location of points). To assess the quality of the proposed algorithm, the resulting correspondence error was evaluated by comparing the positions of ground-truth points with the corresponding point locations produced by the algorithm. The average correspondence error across all anatomical landmarks and both observers was 2.74 mm, with inter- and intra-observer reliabilities of ±0.31 and ±0.06 mm. Moreover, the Root-Mean-Square (RMS) and Hausdorff errors of geometric registration between the original and deformed models were 0.25±0.04 mm and 0.76±0.14 mm, respectively. After registration, Principal Component Analysis (PCA) was applied to the deformed models as a group to describe independent modes of variation in the dataset. The robustness of the SSM was also evaluated using three standard metrics: compactness, generality, and specificity.
Regarding compactness, the first 9 principal modes of variation accounted for 95% of the variability, while the model's generality error and the specificity calculated over 10,000 instances were 2.6 mm and 2.99 mm, respectively. The SIM results showed that the first mode of variation accounts for overall changes in intensity across the entire bone, while the second mode represents localized changes in glenoid vault bone quality. The third mode showed changes in intensity at the posterior and inferior glenoid rim associated with posteroinferior glenoid rim erosion, which suggests avoiding fixation in this region and preferentially placing screws in the anterosuperior region of the glenoid to improve implant fixation.
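The SSM pipeline described above — corresponded meshes reduced to a mean shape plus principal modes via PCA — follows a standard recipe that can be sketched as follows. This is an illustrative Python/NumPy sketch, not the thesis's actual code; the array sizes and the `generate_instance` helper are invented for demonstration.

```python
import numpy as np

def build_ssm(shapes):
    """Build a statistical shape model from corresponded shapes.

    shapes: (n_shapes, n_points * 3) array -- each row is one mesh with
    identical point ordering (correspondence already established).
    Returns the mean shape, principal modes, and per-mode variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data matrix gives the PCA modes directly,
    # sorted by decreasing variance.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (shapes.shape[0] - 1)
    return mean, Vt, variances

def generate_instance(mean, modes, variances, b):
    """Synthesize a shape: mean + sum_k b_k * sqrt(var_k) * mode_k."""
    b = np.asarray(b, dtype=float)
    k = len(b)
    return mean + (b * np.sqrt(variances[:k])) @ modes[:k]

# Toy example: 10 "shapes" of 5 points in 3D (flattened to length 15)
rng = np.random.default_rng(0)
shapes = rng.normal(size=(10, 15))
mean, modes, var = build_ssm(shapes)
inst = generate_instance(mean, modes, var, [2.0, -1.0])  # a +/-2 SD instance
```

Compactness, as evaluated above, is simply the fraction of total variance captured by the first k entries of `var`.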
12

Deformation-based morphometry of the brain for the development of surrogate markers in Alzheimer's disease

Lorenzi, Marco 20 December 2012 (has links) (PDF)
The aim of the present thesis is to provide an effective computational framework for the analysis and quantification of the longitudinal structural changes in Alzheimer's disease (AD). The framework is based on diffeomorphic non-rigid registration parameterized by stationary velocity fields (SVFs), and is hierarchically developed to account for the different levels of variability which characterize the longitudinal observations of T1 brain magnetic resonance images (MRIs). We developed an efficient and robust method for the quantification of the structural changes observed between pairs of MRIs. For this purpose, we propose the LCC-Demons registration framework, which uses the local correlation coefficient as its similarity metric, and we derived consistent and numerically stable measures of volume change and boundary shift for the regional assessment of brain atrophy. In order to consistently analyze group-wise longitudinal evolutions, we then investigated the parallel transport of subject-specific deformation trajectories across different anatomical references. Based on the SVF parametrization of diffeomorphisms, we relied on Lie group theory to propose new and effective strategies for the parallel transport of SVFs, with particular interest in their practical application to the registration setting. These contributions are the basis for the definition of qualitative and quantitative analyses of the pathological evolution of AD. We proposed several analysis frameworks which addressed the differentiation of pathological evolutions between clinical populations, the statistically powered evaluation of regional volume changes, and clinical diagnosis at the early/prodromal disease stages.
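The SVF parameterization mentioned above turns a stationary velocity field into a diffeomorphism via the group exponential, commonly computed by "scaling and squaring": scale the field down, then repeatedly compose it with itself. A minimal 2D sketch (a toy illustration assuming displacement fields in voxel units, not the thesis's implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def exp_svf(v, n_steps=6):
    """Exponentiate a stationary velocity field by scaling and squaring.

    v: (2, H, W) velocity field in voxel units. Returns the displacement
    field u such that phi(x) = x + u(x) approximates exp(v)(x).
    """
    u = v / (2 ** n_steps)                  # scale: nearly infinitesimal step
    H, W = v.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)
    for _ in range(n_steps):                # square: compose field with itself
        coords = grid + u                   # where each voxel lands
        # u_new(x) = u(x) + u(x + u(x)), sampled by linear interpolation
        u = u + np.stack([
            map_coordinates(u[i], coords, order=1, mode='nearest')
            for i in range(2)
        ])
    return u
```

A useful sanity check: a spatially constant velocity field is a pure translation, so its exponential equals the field itself.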
13

A General System for Supervised Biomedical Image Segmentation

Chen, Cheng 15 March 2013 (has links)
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibration before being used in a different application. We describe a system that, with few modifications, can be used in a variety of image segmentation problems. The system is based on a supervised learning strategy that uses intensity neighborhoods to assign each pixel in a test image its correct class based on training data. In summary, we make several innovations: (1) a general framework for such a system is proposed, in which rotations and variations of intensity neighborhoods across scales are modeled, and a multi-scale classification framework is used to segment unknown images; (2) a fast algorithm for training data selection and pixel classification is presented, in which a majority-voting criterion is proposed for selecting a small subset from the raw training set; combined with a 1-nearest-neighbor (1-NN) classifier, this algorithm provides decent classification accuracy at reasonable computational complexity; (3) a general deformable model for optimizing segmented regions is proposed, which takes the decision values from the preceding pixel classification process as input and optimizes the segmented regions in a partial differential equation (PDE) framework. We show that the performance of this system in several different biomedical applications, such as tissue segmentation in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation in fluorescence microscopy images, is similar to or better than that of several algorithms specifically designed for each of these applications. In addition, we describe another general segmentation system for biomedical applications where a strong prior on shape is available (e.g., cells, nuclei).
The idea is based on template matching and supervised learning, and we show examples of segmenting cells and nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures in a given dataset to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting cells and nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software achieves high accuracy across all the imaging modalities studied. The results also demonstrate that, relative to several existing methods, the proposed template-based method is more robust: it better handles variations in illumination and in texture across imaging modalities, produces smoother and more accurate segmentation borders, and copes better with cluttered cells and nuclei.
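The matching criterion above, normalized cross-correlation (NCC), slides a template over the image and scores each position by the correlation of mean-subtracted patches. A brute-force sketch (illustrative only; production implementations use FFT-based correlation, and the toy image and blob here are invented for demonstration):

```python
import numpy as np

def ncc_match(image, template):
    """Return the NCC score map of a template slid over an image.

    The argmax of the score map gives the best-matching location.
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    H, W = image.shape
    scores = np.full((H - th + 1, W - tw + 1), -1.0)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = image[i:i+th, j:j+tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            if denom > 0:
                # correlation of mean-subtracted patch and template, in [-1, 1]
                scores[i, j] = (p * t).sum() / denom
    return scores

# Toy example: find a bright 3x3 blob planted in a noisy image
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, size=(20, 20))
blob = np.array([[0, 1, 0], [1, 2, 1], [0, 1, 0]], float)
img[7:10, 12:15] += blob
scores = ncc_match(img, blob)
loc = np.unravel_index(np.argmax(scores), scores.shape)  # near (7, 12)
```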
14

Non-rigid image registration for deep brain stimulation surgery

Khan, Muhammad Faisal 05 November 2008 (has links)
Deep brain stimulation (DBS) surgery, a type of microelectrode-guided surgery, is an effective treatment for patients with movement disorders who can no longer be treated with medication. New rigid and non-rigid image registration methods were developed for patients with movement disorders who underwent DBS surgery. These methods help study and analyze brain shift during DBS surgery and perform atlas-based segmentation of the deep brain structures for DBS surgery planning and navigation. A diploë-based rigid registration method for intra-operative brain shift analysis during DBS surgery was developed. The proposed method ensures rigid registration based only on the diploë, which, unlike brain tissue, can be treated as a rigid structure. The results show that brain shift during DBS surgery is comparable to the size of the DBS targets and should not be neglected; it may further lengthen and complicate the surgery, contrary to the common belief that brain shift during DBS surgery is not considerable. We also developed an integrated electrophysiological and anatomical atlas with eleven deep brain structures segmented by an expert, and electrophysiological data of four implant locations obtained from post-operative MRI data of twenty patients who underwent DBS surgery. This atlas MR image is then non-rigidly registered with the pre-operative patient MR image, which provides an initial DBS target location along with the segmented deep brain structures that can be used for guidance during the microelectrode mapping of the stereotactic procedure. The atlas-based approach predicts the target automatically, as opposed to the manual selection currently used. The results showed that 85% of the time, this automatic selection of the target location was closer to the target than the currently used technique.
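Atlas-based segmentation, as used above, propagates the expert's label map through the computed registration: once the atlas image is warped onto the patient image, the same transformation is applied to the labels with nearest-neighbour interpolation so labels stay integral. A toy 2D sketch (assuming a dense displacement field in voxel units; not the thesis's pipeline):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(labels, u):
    """Propagate atlas labels through a displacement field.

    labels: (H, W) integer label image.
    u:      (2, H, W) displacement field; output pixel x is read
            from atlas position x + u(x).
    Nearest-neighbour sampling (order=0) keeps labels integral.
    """
    H, W = labels.shape
    grid = np.mgrid[0:H, 0:W].astype(float)
    coords = grid + u
    return map_coordinates(labels, coords, order=0, mode='nearest')
```

With a zero field the labels pass through unchanged; a unit vertical shift reads each row from the row below.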
15

On a Divide-and-Conquer Approach for Sensor Network Localization

Sanyal, Rajat January 2017 (has links) (PDF)
Advancements in micro-electro-mechanics and wireless communication have proliferated the deployment of large-scale wireless sensor networks. Due to cost, size, and power constraints, at most a few sensor nodes can be equipped with a global positioning system; such nodes (whose positions can be accurately determined) are referred to as anchors. However, one can determine the distance between two nearby sensors using some form of local communication. The problem of computing the positions of the non-anchor nodes from the inter-sensor distances and anchor positions is referred to as sensor network localization (SNL). In this dissertation, our aim is to develop an accurate, efficient, and scalable localization algorithm which can operate both in the presence and in the absence of anchors. It has been demonstrated in the literature that divide-and-conquer approaches can be used to localize large networks without compromising localization accuracy. The core idea of such approaches is to partition the network into overlapping subnetworks, localize each subnetwork using the available distances (and anchor positions), and finally register the subnetworks in a single coordinate system. In this regard, the contributions of this dissertation are as follows: We study the global registration problem and formulate a necessary "rigidity" condition for uniquely recovering the global sensor locations. In particular, we present a method for efficiently testing rigidity, and a heuristic for augmenting the partitioned network to enforce rigidity. We present a mechanism for partitioning the network into smaller subnetworks using cliques. Each clique is efficiently localized using multidimensional scaling. Finally, we use a recently proposed semidefinite program (SDP) to register the localized subnetworks. We develop a scalable ADMM solver for the SDP in question.
We present simulation results on random and structured networks to demonstrate that the proposed methods perform better than state-of-the-art methods in terms of run-time, accuracy, and scalability.
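The per-clique localization step above uses classical multidimensional scaling: double-center the squared distance matrix to recover a Gram matrix, then factor it by eigendecomposition. A minimal sketch (illustrative, not the dissertation's implementation):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover point coordinates (up to a rigid motion) from distances.

    D: (n, n) matrix of pairwise Euclidean distances.
    Double-centering the squared distances gives the Gram matrix of the
    centered points; its top eigenvectors yield the embedding.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centered points
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # keep the largest `dim` modes
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

Since each clique is recovered only up to rotation, reflection, and translation, the subsequent registration step is what places all cliques in one global coordinate system.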
16

Rigid registration based on local geometric dissimilarity

Cejnog, Luciano Walenty Xavier 21 September 2015 (has links)
This work aims to enhance a classic method for the rigid registration problem, the ICP (Iterative Closest Point), by modifying one of its main steps, the closest-point search, so that it considers approximate information about the local geometry of each point combined with the originally used Euclidean distance. This requires a preprocessing stage in which the local geometry is estimated as second-order orientation tensors. We define the CTSF, a similarity factor between tensors. Our method uses a linear combination of this factor and the Euclidean distance to establish correspondences between two point clouds, with a strategy of varying the relative weights of the two factors. This increases the probability of convergence for larger angles with respect to the original ICP, making the method comparable to state-of-the-art techniques. To confirm the improvement, exhaustive tests were performed on point clouds with varied geometric features, at different levels of additive noise and outliers and in partial-overlap situations, also varying the parameters of the tensor estimation method.
A new dataset of synthetic point clouds was defined for the experiments, as well as a statistical protocol for quantitative evaluation. The results were analyzed to determine good parameter ranges for point clouds with different characteristics, and to establish how these parameters affect the quality of the method in situations with additive noise, outliers, and partial overlap.
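For context, the baseline this work modifies is vanilla ICP: alternate a Euclidean nearest-neighbour correspondence search with a closed-form rigid update (Kabsch/SVD). The sketch below shows that baseline only; the CTSF tensor-similarity term the thesis adds to the matching step is omitted, and the brute-force search is illustrative rather than efficient.

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation + translation mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, n_iter=30):
    """Vanilla ICP with Euclidean-only closest-point matching."""
    dim = src.shape[1]
    R_tot, t_tot = np.eye(dim), np.zeros(dim)
    cur = src.copy()
    for _ in range(n_iter):
        # closest-point search (brute force over all pairs)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(1)]
        R, t = best_rigid(cur, matches)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # accumulate transform
    return R_tot, t_tot
```

The thesis's contribution is precisely to replace the `d2` matching cost with a weighted combination of Euclidean distance and tensor similarity, which is what extends the convergence basin to larger initial rotations.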
17

A Shape-based weighting strategy applied to the covariance estimation on the ICP

Yamada, Fernando Akio de Araujo 15 March 2016 (has links)
In the pairwise rigid registration problem we need to find a rigid transformation that aligns two point clouds. The classical and most common solution is the Iterative Closest Point (ICP) algorithm. However, the ICP and many of its variants require that the point clouds already be coarsely aligned. We present in this work a method named Shape-based Weighting Covariance Iterative Closest Point (SWC-ICP), an improvement over the classical ICP. Our approach increases the likelihood of correctly aligning two point clouds, regardless of the initial pose, even when there is only partial overlap between them, or in the presence of noise and outliers. It benefits from the local geometry of the points, encoded in second-order orientation tensors, to provide a second set of correspondences to the ICP. The cross-covariance matrix computed from this set is combined with the usual cross-covariance matrix following a heuristic strategy. In order to compare our method with recent approaches, we present a detailed evaluation protocol for rigid registration.
The results show that SWC-ICP is among the best methods compared, with superior performance under wide angular displacement, even in the presence of noise and outliers.
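The core mechanism above is that two correspondence sets each yield a cross-covariance matrix, and the rigid transform is extracted from a combination of the two. The sketch below illustrates that idea with a fixed linear blend; the `alpha` weight and the blending rule are invented for illustration and are not the thesis's actual heuristic.

```python
import numpy as np

def rigid_from_blended_covariance(P, Q_euc, Q_feat, alpha=0.5):
    """Rigid transform from two correspondence sets for the same source P.

    Q_euc:  targets from Euclidean nearest-neighbour matching.
    Q_feat: targets from feature-based (e.g. tensor) matching.
    The two cross-covariance matrices are blended with weight `alpha`
    (hypothetical fixed weight) before the usual SVD extraction.
    """
    cp = P.mean(0)
    cq = (1 - alpha) * Q_euc.mean(0) + alpha * Q_feat.mean(0)
    # cross-covariance from each correspondence set, then blend
    H1 = (P - cp).T @ (Q_euc - Q_euc.mean(0))
    H2 = (P - cp).T @ (Q_feat - Q_feat.mean(0))
    H = (1 - alpha) * H1 + alpha * H2
    # rotation via SVD (Kabsch), with reflection guard
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp
```

When both correspondence sets agree, the blend reduces to standard Kabsch; the benefit appears when Euclidean matching is unreliable (large initial rotation) and the feature-based set pulls the estimate toward the correct pose.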
18

Reduced representation of segmentation and tracking in cardiac images for group-wise longitudinal analysis

Rohé, Marc-Michel 03 July 2017 (has links)
This thesis presents image-based methods for the analysis of cardiac motion to enable group-wise statistics, automatic diagnosis, and longitudinal study. This is achieved by combining advanced medical image processing with machine learning methods and statistical modelling.
The first axis of this work is to define an automatic method for the segmentation of the myocardium. We develop a very fast registration method based on convolutional neural networks, trained to learn inter-subject heart registration. We then embed this registration method into a multi-atlas segmentation pipeline. The second axis focuses on improving cardiac motion tracking methods in order to define relevant low-dimensional representations. Two different methods are developed: one relying on Barycentric Subspaces built on reference frames of the sequence, and another based on a reduced-order representation of the motion using polyaffine transformations. Finally, in the last axis, we apply the previously defined representations to the problems of diagnosis and longitudinal analysis. We show that these representations encode relevant features, allowing the diagnosis of infarcted and Tetralogy of Fallot patients versus controls, and the analysis of the evolution over time of the cardiac motion of patients with either cardiomyopathy or obesity. These three axes form an end-to-end framework for the study of cardiac motion, from the acquisition of the medical images to their automatic analysis. Such a framework could be used for diagnosis and therapy planning, improving clinical decision-making through more personalised computer-aided medicine.
19

Medical Image Registration and Application to Atlas-Based Segmentation

Guo, Yujun 01 May 2007 (has links)
No description available.
20

Non-rigid image registration using statistical variational approaches: application to the analysis and modelling of myocardial function in MRI

Petitjean, Caroline 01 September 2003 (has links) (PDF)
Quantitative analysis of myocardial contractile function is a major challenge for the screening, treatment, and follow-up of cardiovascular diseases, the leading cause of death in developed countries. In this context, Magnetic Resonance Imaging (MRI) stands out as a privileged modality for dynamic exploration of the heart: on the one hand it provides information on wall motion (cine MRI), and on the other it gives access to kinematic information within the myocardium (tagged MRI). Quantitative exploitation of these data is currently limited, however, by the near absence of reliable, robust, and reproducible methodologies for non-rigid motion estimation from image sequences acquired in this modality.

This thesis sets out to demonstrate that statistical non-rigid registration techniques provide an appropriate framework for estimating myocardial deformations in MRI, quantifying them for diagnostic purposes, and modelling them so as to establish a numerical reference of normality. Its contributions are:

1. a robust, unsupervised method for estimating myocardial displacements from tagged MRI sequences. It yields reliable motion measurements at every point of the myocardium, at every instant of the cardiac cycle, and for arbitrary slice orientations.

2. a tool for dynamic quantification of deformations at the pixel and myocardial-segment scales, incorporating an automatic cardiac segmentation step based on registration of cine MR images acquired jointly with the tagged data. For the healthy heart, comparison of the measurements obtained with reference values drawn from a thorough survey of the medical literature shows excellent agreement. For pathological hearts, the experiments carried out demonstrated the relevance of a multiparametric quantitative analysis for localizing and characterizing the affected regions.

3. the construction of a statistical model (atlas) of the contraction of a healthy heart. This atlas provides local and segmental quantitative reference models for the deformation parameters. Integrating it, as a motion model, into the registration of tagged MRI data additionally leads to a very compact description of myocardial displacements with no notable loss of precision.
