51

Morphometry of the human hippocampus from conventional and high field MRI

Gerardin, Emilie 13 December 2012 (has links) (PDF)
The hippocampus is a gray matter structure in the temporal lobe that plays a key role in memory processes and in many diseases (Alzheimer's disease, epilepsy, depression, ...). The development of morphometric models is essential for studying its functional anatomy and the structural alterations associated with different pathologies. The objective of this thesis is to develop and validate methods for morphometry of the hippocampus in two contexts: the study of the external shape of the hippocampus from conventional MRI (1.5T or 3T) with millimeter resolution, and the study of its internal structure from 7T MRI with high spatial resolution. These two settings correspond to the two main parts of the thesis.

In the first part, we propose a method for the automatic classification of patients from shape descriptors. The method combines a spherical harmonic decomposition with a support vector machine (SVM) classifier. It is evaluated on the automatic classification of patients with Alzheimer's disease (AD), patients with mild cognitive impairment (MCI), and healthy elderly subjects, compared to other approaches, and validated more comprehensively on a population of 509 subjects from the ADNI database. Finally, we present another application of morphometry: the study of structural alterations associated with Gilles de la Tourette syndrome.

The second part of the thesis is devoted to the morphometry of the internal structure of the hippocampus from MRI at 7 Tesla. Indeed, the internal structure of the hippocampus is rich and complex but inaccessible to conventional MRI. We first propose an atlas of the internal structure of the hippocampus from postmortem data acquired at 9.4T. Then, we model Ammon's horn and the subiculum as a skeleton together with a local thickness measure. To do this, we introduce a variational method based on reproducing kernel Hilbert spaces. The method is validated on the postmortem atlas and evaluated on in vivo data from healthy subjects and patients with epilepsy acquired at 7T.
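The classification pipeline of the first part (a rotation-invariant shape descriptor fed to a classifier) can be sketched in miniature. The snippet below is an illustrative stand-in, not the thesis's method: it replaces the spherical harmonic decomposition with Fourier-coefficient magnitudes of a sampled radial profile and the SVM with a nearest-centroid rule; all names and parameters are hypothetical.

```python
import numpy as np

def shape_descriptor(radii, n_coeffs=8):
    """Rotation-tolerant descriptor from a sampled radial profile.

    Hypothetical simplification: magnitudes of Fourier coefficients
    stand in for the spherical harmonic coefficients of the thesis.
    """
    c = np.fft.rfft(radii)
    return np.abs(c[:n_coeffs])

def nearest_centroid_fit(X, y):
    """Toy stand-in for the SVM classifier: one centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == k].mean(axis=0) for k in classes])

def nearest_centroid_predict(model, X):
    """Assign each descriptor to the class of its nearest centroid."""
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

On synthetic "healthy" vs. "atrophied" radial profiles this toy pipeline separates the two groups cleanly; the real method works on full 3D surfaces and a trained SVM.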
52

Statistical methods for feature extraction in shape analysis and bioinformatics

Le Faucheur, Xavier Jean Maurice 05 April 2010 (has links)
The presented research explores two different problems of statistical data analysis. In the first part of this thesis, a method for 3D shape representation, compression and smoothing is presented. First, a technique for encoding non-spherical surfaces using second generation wavelet decomposition is described. Second, a novel model is proposed for wavelet-based surface enhancement. This part of the work aims to develop an efficient algorithm for removing irrelevant and noise-like variations from 3D shapes. Surfaces are encoded using second generation wavelets, and the proposed methodology consists of separating noise-like wavelet coefficients from those contributing to the relevant part of the signal. The empirical Bayesian models developed in this thesis threshold wavelet coefficients in an adaptive and robust manner. Once thresholding is performed, the irrelevant coefficients are removed and the inverse wavelet transform is applied to the clean set of wavelet coefficients. Experimental results show the efficiency of the proposed technique for surface smoothing and compression. The second part of this thesis proposes using a non-parametric clustering method for studying RNA (RiboNucleic Acid) conformations. The local conformation of RNA molecules is an important factor in determining their catalytic and binding properties. RNA conformations can be characterized by a finite set of parameters that define the local arrangement of the molecule in space. Their analysis is particularly difficult due to the large number of degrees of freedom, such as torsion angles and inter-atomic distances among interacting residues. In order to understand and analyze the structural variability of RNA molecules, this work proposes a methodology for detecting repetitive conformational sub-structures along RNA strands. Clusters of similar structures in the conformational space are obtained using a nearest-neighbor search method based on the statistical mechanical Potts model. The proposed technique is a mostly automatic clustering algorithm and, in contrast to many other clustering techniques, may be applied to problems where there is no prior knowledge of the structure of the data space. First, results are reported both for single-residue conformations, where the parameter set of the data space includes four to seven torsion angles, and for base pair geometries. For both types of data sets, a very good match is observed between the results of the proposed clustering method and other known classifications, with only a few exceptions. Second, new results are reported for base stacking geometries. In this case, the proposed classification is validated with respect to specific geometrical constraints, while the content and geometry of the new clusters are fully analyzed.
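The wavelet-thresholding idea of the first part — separate noise-like coefficients from signal-carrying ones, zero the former, invert the transform — can be illustrated in one dimension. This is a minimal sketch with a hand-rolled single-level Haar transform and plain soft thresholding, not the empirical Bayesian rule or the second generation surface wavelets of the thesis; the threshold value is an assumption.

```python
import numpy as np

def haar_step(x):
    """Single-level Haar analysis of an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_step."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(signal, threshold):
    """Soft-threshold the detail coefficients, then reconstruct."""
    a, d = haar_step(np.asarray(signal, dtype=float))
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_inverse(a, d)
```

On a noisy step signal, thresholding the details reduces the reconstruction error while the coarse structure survives in the approximation band; the same principle applies to surface wavelet coefficients.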
53

Statistical and geometric methods for visual tracking with occlusion handling and target reacquisition

Lee, Jehoon 17 January 2012 (has links)
Computer vision is the science that studies how machines understand scenes and automatically make decisions based on meaningful information extracted from an image or from multi-dimensional data of the scene, much like human vision. One common and well-studied field of computer vision is visual tracking, a challenging and active research area in the computer vision community. Visual tracking is the task of continuously estimating the pose of an object of interest against the background in consecutive frames of an image sequence. It is a ubiquitous task and a fundamental technology of computer vision that provides low-level information used in high-level applications such as visual navigation, human-computer interaction, and surveillance systems. The focus of the research in this thesis is visual tracking and its applications. More specifically, the objective of this research is to design a reliable tracking algorithm for a deformable object that is robust to clutter and capable of occlusion handling and target reacquisition in realistic tracking scenarios, using statistical and geometric methods. To this end, the approaches developed in this thesis make extensive use of region-based active contours and particle filters in a variational framework. In addition, to deal with occlusion and target reacquisition problems, we exploit the benefits of coupling the 2D and 3D information of an image and an object. First, we present an approach for tracking a moving object based on 3D range information in stereoscopic temporal imagery by combining particle filtering and geometric active contours. Range information is weighted by the proposed Gaussian weighting scheme to improve the segmentation achieved by active contours. In addition, this work presents an online shape learning method based on principal component analysis to reacquire track of an object that disappears from the field of view and reappears later. Second, we propose an approach to jointly track a rigid object in a 2D image sequence and estimate its pose in 3D space. In this work, we take advantage of a known 3D model of the object and employ particle filtering to generate and propagate the translation and rotation parameters in a decoupled manner. Moreover, to continuously track the object in the presence of occlusions, we propose an occlusion detection and handling scheme based on controlling the degree of dependence between the predictions and measurements of the system. Third, we introduce a fast level-set-based algorithm applicable to real-time applications, in which a contour-based tracker is improved in terms of computational complexity and performs real-time curve evolution for detecting multiple windows. Lastly, we deal with rapid human motion in the context of object segmentation and visual tracking. Specifically, we introduce a model-free and marker-less approach for human body tracking based on a dynamic color model and geometric information of a human body from a monocular video sequence.

The contributions of this thesis are summarized as follows: 1. A reliable algorithm to track deformable objects in a sequence consisting of 3D range data by combining particle filtering and statistics-based active contour models. 2. An effective handling scheme, based on the object's 2D shape information, for challenging situations in which the tracked object disappears completely from the image domain during tracking. 3. A robust 2D-3D pose tracking algorithm using a 3D shape prior and particle filters on SE(3). 4. An occlusion handling scheme based on the degree of trust between the predictions and measurements of the tracking system, controlled in an online fashion. 5. Fast level-set-based active contour models applicable to real-time object detection. 6. A model-free and marker-less approach for tracking rapid human motion based on a dynamic color model and geometric information of a human body.
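The particle-filtering machinery used throughout the thesis can be sketched in its simplest form: a one-dimensional bootstrap filter with predict, update, and resample steps. This is a generic textbook sketch, not the thesis's coupling of particle filters with active contours or SE(3) pose parameters; the motion and measurement models are assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         motion_std=0.5, meas_std=1.0):
    """One predict/update/resample cycle of a 1D bootstrap particle filter."""
    # Predict: propagate particles through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.size)
    # Update: reweight particles by a Gaussian measurement likelihood.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling to counter weight degeneracy.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)
```

Iterating the step with a fixed measurement drives the particle cloud toward the target state; in the thesis the state is far richer (contours, poses) but the cycle is the same.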
54

Computer-aided diagnosis for mammographic microcalcification clusters

Tembey, Mugdha. January 2003 (has links)
Thesis (M.S.C.S.)--University of South Florida, 2003. Includes bibliographical references. / ABSTRACT: Breast cancer is the second leading cause of cancer deaths among women in the United States, and microcalcification clusters are one of the most important indicators of breast disease. Computer methodologies help in the detection and differentiation between benign and malignant lesions and have the potential to improve radiologists' performance and breast cancer diagnosis significantly. A Computer-Aided Diagnosis (CAD-Dx) algorithm had been previously developed to assist radiologists in the diagnosis of mammographic clusters of calcifications, with the modules: (a) detection of all calcification-like areas, (b) false-positive reduction and segmentation of the detected calcifications, (c) selection of morphological and distributional features, and (d) classification of the clusters. Classification was based on an artificial neural network (ANN) with 14 input features and assigned a likelihood of malignancy to each cluster. The purpose of this work was threefold: (a) optimize the existing algorithm and test it on a large database, (b) rank the classification features and select the best feature set, and (c) determine the impact of single- and two-view feature estimation on classification and feature ranking. Classification performance was evaluated with the NevProp4 artificial neural network trained with the leave-one-out resampling technique. Sequential forward selection was used for feature selection and ranking. Mammograms from 136 patients, containing single or two views of a breast with a calcification cluster, were digitized at 60 microns and 16 bits per pixel. 260 regions of interest (ROIs) centered on the calcification clusters were defined to build the single-view dataset. 100 of the 136 patients had a two-view mammogram, which yielded 202 ROIs that formed the two-view dataset. Classification and feature selection were evaluated with both of these datasets. To decide on the optimal features for two-view feature estimation, several combinations of CC and MLO view features were attempted. On the single-view dataset the classifier achieved an Az of 0.8891 with 88% sensitivity and 77% specificity at an operating point of 0.4; 12 features were selected as the most important. With the two-view dataset, the classifier achieved a higher performance, with an Az of 0.9580 and sensitivity and specificity of 98% and 80%, respectively, at an operating point of 0.4; 10 features were selected as the most important.
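The Az figure of merit reported above is the area under the ROC curve, which can be computed directly from classifier scores via the Mann-Whitney statistic, and the leave-one-out protocol is a short loop. The sketch below shows both in generic form; it does not reproduce the NevProp4 network or the sequential forward selection of the thesis.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve (the Az index) via the Mann-Whitney
    statistic: the fraction of (positive, negative) score pairs ranked
    correctly, counting ties as half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

def leave_one_out(X, y, fit, predict):
    """Leave-one-out resampling: train on n-1 cases, score the held-out one."""
    scores = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        scores.append(predict(model, X[i]))
    return np.array(scores)
```

Feeding the held-out scores from `leave_one_out` into `auc` yields an (nearly) unbiased Az estimate for a small dataset, which is exactly why the protocol suits a 136-patient study.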
55

Verification of sequential and concurrent libraries

Deshmukh, Jyotirmoy Vinay 02 August 2011 (has links)
The goal of this dissertation is to present new and improved techniques for fully automatic verification of sequential and concurrent software libraries. In most cases, automatic software verification is plagued by undecidability, while in many others it suffers from prohibitively high computational complexity. Model checking -- a highly successful technique used for verifying finite-state hardware circuits against logical specifications -- has been less widely adopted for software, as software verification tends to involve reasoning about potentially infinite state spaces. Two of the biggest culprits making software model checking hard are heap-allocated data structures and concurrency.

In the first part of this dissertation, we study the problem of verifying shape properties of sequential data structure libraries. Such libraries are implemented as collections of methods that manipulate the underlying data structure. Examples of such methods include: methods to insert, delete, and update data values of nodes in linked lists, binary trees, and directed acyclic graphs; methods to reverse linked lists; and methods to rotate balanced trees. Well-written methods are accompanied by documentation that specifies the observational behavior of these methods in terms of pre/post-conditions. A pre-condition [phi] for a method M characterizes the state of a data structure before the method acts on it, and the post-condition [psi] characterizes the state of the data structure after the method has terminated. In a certain sense, we can view the method as a function that operates on an input data structure, producing an output data structure. Examples of such pre/post-conditions include shape properties such as acyclicity, sortedness, tree-ness, reachability of particular data values, and reachability of pointer values, as well as data-structure-specific properties such as "no red node has a red child" and "there is no node with data value 'a' in the data structure". Moreover, methods are often expected not to violate certain safety properties, such as the absence of dangling pointers, null pointer dereferences, and memory leaks. We often assume such specifications as implicit, and say that a method is incorrect if it violates them. We model data structures as directed graphs, and use the two terms interchangeably. Verifying the correctness of methods operating on graphs is an instance of the parameterized verification problem: for every input graph that satisfies [phi], we wish to ensure that the corresponding output graph satisfies [psi]. Control structures such as loops and recursion allow an arbitrary method to simulate a Turing machine; hence, the parameterized verification problem for arbitrary methods is undecidable. One of the main contributions of this dissertation is in identifying mathematical conditions on a programming language fragment for which parameterized verification is not only decidable, but also efficient from a complexity perspective. The decidable fragment we consider can be broadly subdivided into two categories: the class of iterative methods, which use loops as a control flow construct to traverse a data structure, and the class of recursive methods, which use recursion to traverse the data structure. We show that for an iterative method operating on a directed graph, if the number of destructive updates the method performs is bounded (by a constant, i.e., O(1)) and the method is guaranteed to terminate, then its correctness can be checked in time polynomial in the size of the method and its specifications. Further, we provide a well-defined syntactic fragment for recursive methods operating on tree-like data structures, which ensures that any method in this fragment can be verified in time polynomial in the size of the method and its specifications. Our approach draws on the theory of tree automata: we show that parameterized correctness can be reduced to the emptiness of finite-state, nondeterministic tree automata that operate on infinite trees. We then leverage efficient algorithms for checking the emptiness of such tree automata to obtain a tractable verification framework. Our prototype tool demonstrates the low theoretical complexity of our technique by efficiently verifying common methods that operate on data structures.

In the second part of the dissertation, we tackle another obstacle to tractable software verification: concurrency. In particular, we explore the application of a static analysis technique based on interprocedural dataflow analysis to predict and document deadlocks in concurrent libraries, and to analyze deadlocks in clients that use such libraries. The kinds of deadlocks we focus on result from circular dependencies in the acquisition of shared resources (such as locks). Well-written applications that use several locks implicitly assume a certain partial order in which locks are acquired by threads. A cycle in the lock acquisition order is an indicator of a possible deadlock within the application. Methods in object-oriented concurrent libraries often encapsulate internal synchronization details. As a result of information hiding, clients calling the library methods may cause thread safety violations by invoking methods in a manner that violates the partial ordering between lock acquisitions that is implicit within the library. Given a concurrent library, we present a technique for inferring interface contracts that specify permissible concurrent method calls and patterns of aliasing among method arguments that guarantee deadlock-free execution of the library's methods. The contracts also help client developers by documenting the required assumptions about the library methods. Alternatively, the contracts can be statically enforced in the client code to detect potential deadlocks in the client. Our technique combines static analysis with a symbolic encoding for tracking lock dependencies, allowing us to synthesize contracts using a satisfiability modulo theories (SMT) solver. Additionally, we investigate extensions of our technique to reason about deadlocks in libraries that employ signalling primitives, such as wait-notify, for cooperative synchronization. We demonstrate its scalability and efficiency with a prototype tool that analyzed over a million lines of code from some widely used open-source Java libraries in less than 50 minutes. Furthermore, the contracts inferred by our approach have been able to pinpoint real bugs, i.e., deadlocks that have been reported by users of these libraries.
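The "cycle in the lock acquisition order signals a possible deadlock" idea can be made concrete with a small sketch: build a lock-order graph from acquisition traces (an edge a -> b when some thread acquires b while holding a) and test it for cycles. This is a simplified, trace-based illustration; the dissertation's technique is static, interprocedural, and SMT-backed, none of which is reproduced here.

```python
from collections import defaultdict

def lock_order_graph(traces):
    """Build the lock-acquisition-order graph from per-thread traces.

    Each trace lists locks in the order a thread acquires them while
    holding all earlier ones (a deliberately simplified model)."""
    graph = defaultdict(set)
    for trace in traces:
        held = []
        for lock in trace:
            for h in held:
                graph[h].add(lock)  # edge: h held while acquiring lock
            held.append(lock)
    return graph

def has_cycle(graph):
    """DFS three-color cycle check; a cycle indicates a possible deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def visit(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return True          # back edge: cycle found
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))
```

Two threads acquiring locks a then b, versus b then a, produce the classic cycle a -> b -> a, which this check flags.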
56

Segmentation of 3D image data using advanced textural and shape features

Novosadová, Michaela January 2014 (has links)
This thesis first describes the theory behind a range of textural and shape analysis methods. Several published articles in which some of these methods are used for the automatic detection of spinal lesions in CT images are briefly reviewed. The next part of the thesis describes various classifiers used for the classification of feature vectors. The practical part consists of the design and implementation of a segmentation solution for image data (metastatic lesions in vertebrae) based on the classification of feature vectors composed of textural and shape features. The thesis also addresses the selection of significant features for segmentation. The segmentation algorithm is tested on medical data.
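The abstract does not name the specific textural features used, but a common choice for CT texture analysis is the gray-level co-occurrence matrix (GLCM) with Haralick-style statistics. The sketch below is therefore an assumption-laden illustration of "textural features", not the thesis's feature set.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy).

    `image` must contain integer gray levels in [0, levels)."""
    h, w = image.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Classical Haralick-style statistics from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])
```

A uniform patch yields zero contrast and maximal energy, while a checkerboard maximizes contrast; such statistics form part of the feature vector handed to a classifier.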
57

Probability on the spaces of curves and the associated metric spaces via information geometry; radar applications

Le Brigant, Alice 04 July 2017 (has links)
We are concerned with comparing the shapes of open smooth curves that take their values in a Riemannian manifold M. To this end, we introduce a reparameterization-invariant Riemannian metric on the infinite-dimensional manifold of these curves, modeled as smooth immersions in M. We derive the geodesic equation and solve the boundary value problem using geodesic shooting. The quotient structure induced by the action of the reparameterization group on the space of curves is studied. Using a canonical decomposition of a path in a principal bundle, we propose an algorithm that computes the horizontal geodesic between two curves and yields an optimal matching. In a second step, restricting to base manifolds of constant sectional curvature, we introduce a discretization of our Riemannian structure, which is itself a Riemannian metric on the finite-dimensional manifold M^(n+1) of "discrete curves" given by n + 1 points. We show the convergence of the discrete model to the continuous model and study the induced geometry. Results of simulations in the sphere, the plane, and the hyperbolic half-plane are presented. Finally, we give the mathematical framework needed to apply shape analysis of manifold-valued curves to radar signal processing, where locally stationary radar signals are represented by curves in the Poincaré polydisk via information geometry.
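To give a concrete flavor of reparameterization-invariant curve comparison in the simplest setting (planar curves, flat metric), a related classical construction is the square-root velocity (SRV) transform, under which an elastic metric becomes the flat L2 metric. The sketch below is not the thesis's metric on manifold-valued curves and omits the optimal-matching (horizontal geodesic) step entirely; it is only an illustrative discrete analogue.

```python
import numpy as np

def srv(curve):
    """Square-root velocity representation of a discrete planar curve.

    `curve` has shape (n + 1, 2); velocities are finite differences
    scaled to a unit parameter interval."""
    v = np.diff(curve, axis=0) * (curve.shape[0] - 1)
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(np.maximum(speed, 1e-12))

def elastic_distance(c1, c2):
    """Discrete L2 distance between SRV representations: a simplified
    stand-in for a geodesic distance, with no reparameterization search."""
    q1, q2 = srv(c1), srv(c2)
    return np.sqrt(np.sum((q1 - q2) ** 2) / q1.shape[0])
```

The distance vanishes for identical curves and grows with shape dissimilarity; the thesis's contribution is the analogous (and far richer) construction for curves valued in a curved manifold M.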
58

Multiscale shape analysis based on Hough transform statistics

Ramos, Lucas Alexandre [UNESP] 12 August 2016 (has links)
Currently, given the widespread use of computers, the task of recognizing visual patterns is increasingly automated, in particular to handle the large and growing amount of digital images available. Applications in many areas, such as biometrics, content-based image retrieval, and medical diagnosis, make use of image processing, as well as of techniques for extracting and analyzing image characteristics, in order to identify persons, objects, gestures, texts, etc. The basic features used for image analysis are color, texture, and shape. Recently, a new shape descriptor called HTS (Hough Transform Statistics) was proposed, which is based on the Hough space to represent and recognize objects in images by their shapes. The results obtained by HTS on public image databases have shown that this new shape descriptor, besides achieving high accuracy, better than many traditional shape descriptors proposed in the literature, is fast, since it has an algorithm of linear complexity. In this dissertation we explored the possibilities of a multiscale and scale-space representation of this new shape descriptor. Scale is a key parameter in Computer Vision, and scale-space theory refers to the space formed when the spatial aspects of an image are observed simultaneously at several scales, with scale as the third dimension. The multiscale HTS methods were evaluated on the same databases, and their performances were compared with the main shape descriptors found in the literature and with the monoscale HTS. Experimental results showed that these new descriptors are faster and can also be more accurate in some cases. / FAPESP: 2014/10611-0
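The core idea of "statistics of the Hough space" can be sketched: map a shape's contour points into a line Hough accumulator, then summarize the accumulator with simple statistics to form a feature vector. This is a loose illustration of the concept, not the published HTS algorithm; the bin counts and statistics chosen here are assumptions.

```python
import numpy as np

def hough_accumulator(points, n_theta=32, n_rho=32, rho_max=None):
    """Normalized line Hough transform of a set of 2D contour points."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta) for every (point, theta) pair.
    rhos = points @ np.stack([np.cos(thetas), np.sin(thetas)])
    if rho_max is None:
        rho_max = np.abs(rhos).max() + 1e-9
    acc = np.zeros((n_rho, n_theta))
    bins = ((rhos / rho_max + 1.0) / 2.0 * (n_rho - 1)).round().astype(int)
    for t in range(n_theta):
        np.add.at(acc[:, t], bins[:, t], 1.0)
    return acc / acc.sum()

def hts_descriptor(acc):
    """Feature vector from per-angle mean and spread of the Hough space
    (an illustrative simplification of the HTS idea)."""
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0)])
```

A multiscale variant, in the spirit of the dissertation, would compute such descriptors from contours smoothed at several scales and concatenate or compare them across the scale dimension.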
59

3D shape analysis using 1D, 2D and 3D wavelets

Sílvia Cristina Dias Pinto 24 October 2005 (has links)
This work presents new methods for three-dimensional shape analysis in the context of computer vision, emphasizing the use of 1D, 2D and 3D wavelet transforms, which provide a multiscale analysis of the studied shapes. The analyzed shapes are divided into three types, depending on their mathematical representation: f(t) = (x(t), y(t), z(t)), f(x, y) = z, and f(x, y, z) = w. Each type of shape is analyzed by the most suitable method. Such shapes first undergo a pre-processing procedure, followed by characterization using the 1D, 2D or 3D wavelet transform, depending on the representation. This allows the extraction of features that are invariant to rotation and translation, based on mathematical concepts from differential geometry. We also emphasize that it is not necessary to use a parameterized version of the 2D and 3D shapes. Experimental results obtained from shapes extracted from medical and biological images, which corroborate the introduced methods, are presented.
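For the 1D case, a minimal example of multiscale wavelet characterization is a multi-level Haar decomposition of a shape signature, keeping the detail energy at each level as a compact feature vector. This is a generic sketch under the assumption of a Haar basis and a dyadic-length signature; the thesis's actual transforms and invariance construction are not reproduced.

```python
import numpy as np

def haar_multiscale(signal, levels=3):
    """Per-level detail energies of a multi-level Haar decomposition.

    `signal` (e.g. a centroid-distance function of a contour) must have
    length divisible by 2**levels. The energy vector is a small
    multiscale feature: fine-scale detail first, coarse last."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation
        d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail
        energies.append(float(np.sum(d ** 2)))
        x = a
    return np.array(energies)
```

A constant signature produces zero detail energy at every level, while irregular shapes spread energy across scales, which is what makes such vectors discriminative.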
60

Study on the cerebrospinal fluid volumes

Lebret, Alain 05 December 2013 (has links)
This work aims to address the lack of computational methods for medical image analysis and diagnosis in the study of cerebrospinal fluid volumes. In the first part, we focus on the volume assessment of the fluid spaces, from whole-body images, in a population consisting of healthy adults and hydrocephalus patients. To facilitate segmentation, these images, obtained with a recent "tissue-specific" magnetic resonance imaging sequence, highlight the cerebrospinal fluid against its neighboring structures. We propose automatic segmentation and separation methods for the different spaces, which allow efficient and reproducible quantification. We show that the ratio of the total subarachnoid space volume to the ventricular volume is a proportionality constant in healthy adults, supporting a stable intracranial pressure. In contrast, this ratio decreases and varies significantly among patients suffering from hydrocephalus, providing a reliable physiological index to aid in the diagnosis of hydrocephalus. The second part of this work is dedicated to the analysis of the fluid volume distribution within the superior cortical subarachnoid space. Owing to its anatomical complexity, this space remains poorly studied. We propose two complementary methods to visualize the fluid volume distribution, both of which produce two-dimensional images, called relief maps, from the original ones. These maps are used to characterize the fluid volume distribution and the fluid network, to classify healthy adults and patients with hydrocephalus, and to perform patient monitoring before and after surgery.
