21

Tumour vessel structural analysis and its application in image analysis

Wang, Po January 2010
Abnormal vascular structure has been identified as one of the major characteristics of tumours. In this thesis, we carry out quantitative analysis of different tumour vascular structures and investigate the relationship between vascular structure and its transport efficiency. We first study segmentation methods to extract binary vessel representations from microscope images, and find that local phase-hysteresis thresholding is able to segment vessel objects from noisy microscope images. We also study methods to extract the centre lines of segmented vessel objects, a process termed skeletonization. We modified the conventional thinning method to regularize the extremely asymmetrical structures found in the segmented vessel objects, and found that this method is capable of producing vessel skeletons with satisfactory accuracy. We have developed software for 3D vessel structural analysis. This software consists of four major parts: image segmentation, vessel skeletonization, skeleton modification and structure quantification. It implements the local phase-hysteresis thresholding and structure regularization-thinning methods. A GUI was introduced to enable users to alter the skeleton structures based on their subjective judgements. Radius and inter-branch length quantification can be conducted based on the segmentation and skeletonization results. The accuracy of the segmentation, skeletonization and quantification methods has been tested on several synthesized data sets. The change in tumour vascular structure after drug treatment was then investigated. We proposed metrics to quantify tumour vascular geometry and statistically analysed the effect of the tested drugs on normalizing tumour vascular structure. Finally, we developed a spatio-temporal model to simulate the delivery of oxygen and 3-[18F]fluoro-1-(2-nitro-1-imidazolyl)-2-propanol (FMISO), the hypoxia tracer that produces the PET signal in FMISO PET scanning. This model is based on compartmental models, but also considers the spatial diffusion of oxygen and FMISO. We validated our model on in vitro spheroid data and simulated the oxygen and FMISO distribution on the segmented vessel images. We contend that the tumour FMISO distribution (as observed in FMISO PET imaging) is caused by the abnormal tumour vascular structure, which in turn arises from the tumour angiogenesis process. We outlined a modelling framework to investigate the relationships between tumour angiogenesis, vessel structure and FMISO distribution, which will be the focus of our future work.
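As an illustration of the segmentation-then-skeletonization pipeline this abstract describes, the following sketch applies hysteresis thresholding and thinning to a synthetic noisy vessel image using scikit-image. It is a minimal stand-in, not the thesis's implementation: the local-phase step is omitted, and the thresholds and test image are invented for the example.

```python
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import skeletonize

# Synthetic noisy "vessel": a bright curved stripe on a dark background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
vessel = np.exp(-((yy - 64 - 20 * np.sin(xx / 20.0)) ** 2) / 18.0)
image = vessel + 0.15 * rng.standard_normal(vessel.shape)

# Hysteresis thresholding: pixels above `high` seed the mask, and any
# connected pixels above `low` are kept, which suppresses isolated noise.
mask = apply_hysteresis_threshold(image, low=0.2, high=0.6)

# Thinning reduces the binary vessel to a one-pixel-wide centre line.
skeleton = skeletonize(mask)
print(mask.sum(), skeleton.sum())
```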
22

Détection automatique d'objets géologiques à partir de données numériques d'affleurements 3D / Automatic detection of geological objects from 3D digital outcrop data

Kudelski, Dimitri 08 December 2011
For a few years now, LIDAR technology has been employed in geology to capture outcrop geometries as point clouds and surfaces. The objective of this thesis is to develop techniques for processing these data automatically, and in particular for interpreting geological structures on digital outcrops. This work is funded by ENI and fits into a larger research project devoted to creating methodologies for integrating outcrop data into geological models. The thesis focuses on the extraction of geological objects (i.e., traces of fractures or stratigraphic limits), represented as polylines, from 3D digital outcrop models. The fundamental idea is to consider these geological entities as ravine lines (i.e., lines of high concavity), a problem that belongs to the larger domain of feature-line detection in computer graphics. We first propose a method based on third-order differential properties of the surface (i.e., curvature derivatives), with an additional step that integrates a priori knowledge so as to extract only the objects oriented in a particular direction. The rough, erratic geometry of outcrops, however, exposes several limits of this kind of approach. We therefore present two alternative algorithms that detect the targeted geological objects more reliably: in contrast to existing feature-detection techniques, they rely on a vertex labelling established from second-order differential properties, followed by a skeletonization step. In a final part, we validate all the developed methods and propose various applications to underline the genericity of the approaches.
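A minimal sketch of the label-then-skeletonize idea described above, transposed to a 2D height field for simplicity: concave points are labelled from a second-order quantity (here a discrete Laplacian stands in for the surface curvature used in the thesis), and the labelled band is thinned into ravine lines. The synthetic valley and the threshold are invented for the example.

```python
import numpy as np
from scipy.ndimage import laplace
from skimage.morphology import skeletonize

# Synthetic height field with a ravine: a V-shaped valley along a sine curve.
yy, xx = np.mgrid[0:200, 0:200]
height = np.abs(yy - 100 - 30 * np.sin(xx / 25.0))

# Second-order vertex labelling: a strongly positive Laplacian of the
# height field marks concave points (valley bottoms).
concavity = laplace(height)
labelled = concavity > 0.5 * concavity.max()

# Skeletonization thins the labelled band into one-pixel-wide ravine lines.
ravine_lines = skeletonize(labelled)
print(int(ravine_lines.sum()), "ravine-line pixels")
```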
23

Skeleton-based visualization of massive voxel objects with network-like architecture

Prohaska, Steffen January 2007
This work introduces novel internal and external memory algorithms for computing voxel skeletons of massive voxel objects with complex network-like architecture and for converting these voxel skeletons to piecewise linear geometry, that is, triangle meshes and piecewise straight lines. The presented techniques help to tackle the challenge of visualizing and analyzing 3D images of increasing size and complexity, which are becoming more and more important in, for example, biological and medical research. Section 2.3.1 contributes to the theoretical foundations of thinning algorithms with a discussion of homotopic thinning in the grid cell model. The grid cell model explicitly represents a cell complex built of faces, edges, and vertices shared between voxels. Characterizing the pairs of cells that may be deleted is much simpler than earlier characterizations of simple voxels. The grid cell model resolves topologically unclear voxel configurations at junctions and locked voxel configurations that cause, for example, interior voxels in sets of non-simple voxels. A general conclusion is that the grid cell model is superior to indecomposable voxels for algorithms that need detailed control of topology. Section 2.3.2 introduces a noise-insensitive measure, based on the geodesic distance along the boundary, for computing two-dimensional skeletons. The measure is able to retain thin object structures if they are geometrically important while ignoring noise on the object's boundary; no other known measure combines these properties. The measure is also used to guide erosion in a thinning process from the boundary towards lines centred within plate-like structures. Geodesic-distance-based quantities seem to be well suited to robustly identifying one- and two-dimensional skeletons; a theoretical justification of this observation is still pending. Chapter 6 applies the method to the visualization of bone micro-architecture. Chapter 3 describes a novel geometry generation scheme for representing voxel skeletons, which retracts voxel skeletons to piecewise linear geometry per dual cube. The generated triangle meshes and graphs provide a link to geometry processing and efficient rendering of voxel skeletons. The scheme creates non-closed surfaces with boundaries, which contain fewer triangles than a representation of voxel skeletons using closed surfaces like small cubes or iso-surfaces. A conclusion is that thinking specifically about voxel skeleton configurations, instead of generic voxel configurations, helps to deal with the topological implications. The geometry generation is one foundation of the applications presented in Chapter 6. Chapter 5 presents a novel external memory algorithm for distance-ordered homotopic thinning. The presented method extends known algorithms for computing chamfer distance transformations and thinning to execute I/O-efficiently when the input is larger than the available main memory. The applied block-wise decomposition schemes are quite simple, yet it was necessary to carefully analyze the effects of block boundaries to devise globally correct external memory variants of known algorithms; in general, doing so is superior to naive block-wise processing that ignores boundary effects. Chapter 6 applies the algorithms in a novel method, based on confocal microscopy, for the quantitative study of micro-vascular networks in the field of microcirculation.
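The external memory thinning of Chapter 5 builds on chamfer distance transforms. The sketch below shows the classic in-core two-pass 3-4 chamfer transform that such algorithms extend; the block-wise, I/O-efficient variant described in the thesis is not reproduced here.

```python
import numpy as np

def chamfer_dt(mask, a=3, b=4):
    """Two-pass 3-4 chamfer distance transform of a 2D binary mask.
    `a` weighs axial steps, `b` diagonal steps; distances are in chamfer
    units (divide by 3 to approximate Euclidean pixel distances)."""
    INF = 10**9
    h, w = mask.shape
    d = np.where(mask, INF, 0).astype(np.int64)
    # Forward raster scan: propagate distances from top-left neighbours.
    for y in range(h):
        for x in range(w):
            if d[y, x] == 0:
                continue
            best = d[y, x]
            if x > 0:               best = min(best, d[y, x-1] + a)
            if y > 0:               best = min(best, d[y-1, x] + a)
            if y > 0 and x > 0:     best = min(best, d[y-1, x-1] + b)
            if y > 0 and x < w-1:   best = min(best, d[y-1, x+1] + b)
            d[y, x] = best
    # Backward raster scan: propagate from bottom-right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            best = d[y, x]
            if x < w-1:             best = min(best, d[y, x+1] + a)
            if y < h-1:             best = min(best, d[y+1, x] + a)
            if y < h-1 and x < w-1: best = min(best, d[y+1, x+1] + b)
            if y < h-1 and x > 0:   best = min(best, d[y+1, x-1] + b)
            d[y, x] = best
    return d

mask = np.zeros((7, 7), bool); mask[1:6, 1:6] = True
print(chamfer_dt(mask))
```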
/ Die vorliegende Arbeit führt I/O-effiziente Algorithmen und Standard-Algorithmen zur Berechnung von Voxel-Skeletten aus großen Voxel-Objekten mit komplexer, netzwerkartiger Struktur und zur Umwandlung solcher Voxel-Skelette in stückweise-lineare Geometrie ein. Die vorgestellten Techniken werden zur Visualisierung und Analyse komplexer drei-dimensionaler Bilddaten, beispielsweise aus Biologie und Medizin, eingesetzt. Abschnitt 2.3.1 leistet mit der Diskussion von topologischem Thinning im Grid-Cell-Modell einen Beitrag zu den theoretischen Grundlagen von Thinning-Algorithmen. Im Grid-Cell-Modell wird ein Voxel-Objekt als Zellkomplex dargestellt, der aus den Ecken, Kanten, Flächen und den eingeschlossenen Volumina der Voxel gebildet wird. Topologisch unklare Situationen an Verzweigungen und blockierte Voxel-Kombinationen werden aufgelöst. Die Charakterisierung von Zellpaaren, die im Thinning-Prozess entfernt werden dürfen, ist einfacher als bekannte Charakterisierungen von so genannten "Simple Voxels". Eine wesentliche Schlussfolgerung ist, dass das Grid-Cell-Modell atomaren Voxeln überlegen ist, wenn Algorithmen detaillierte Kontrolle über Topologie benötigen. Abschnitt 2.3.2 präsentiert ein rauschunempfindliches Maß, das den geodätischen Abstand entlang der Oberfläche verwendet, um zweidimensionale Skelette zu berechnen, welche dünne, aber geometrisch bedeutsame, Strukturen des Objekts rauschunempfindlich abbilden. Das Maß wird im weiteren mit Thinning kombiniert, um die Erosion von Voxeln auf Linien zuzusteuern, die zentriert in plattenförmigen Strukturen liegen. Maße, die auf dem geodätischen Abstand aufbauen, scheinen sehr geeignet zu sein, um ein- und zwei-dimensionale Skelette bei vorhandenem Rauschen zu identifizieren. Eine theoretische Begründung für diese Beobachtung steht noch aus. In Abschnitt 6 werden die diskutierten Methoden zur Visualisierung von Knochenfeinstruktur eingesetzt. Abschnitt 3 beschreibt eine Methode, um Voxel-Skelette durch kontrollierte Retraktion in eine stückweise-lineare geometrische Darstellung umzuwandeln, die als Eingabe für Geometrieverarbeitung und effizientes Rendering von Voxel-Skeletten dient. Es zeigt sich, dass eine detaillierte Betrachtung der topologischen Eigenschaften eines Voxel-Skeletts einer Betrachtung von allgemeinen Voxel-Konfigurationen für die Umwandlung zu einer geometrischen Darstellung überlegen ist. Die diskutierte Methode bildet die Grundlage für die Anwendungen, die in Abschnitt 6 diskutiert werden. Abschnitt 5 führt einen I/O-effizienten Algorithmus für Thinning ein. Die vorgestellte Methode erweitert bekannte Algorithmen zur Berechung von Chamfer-Distanztransformationen und Thinning so, dass diese effizient ausführbar sind, wenn die Eingabedaten den verfügbaren Hauptspeicher übersteigen. Der Einfluss der Blockgrenzen auf die Algorithmen wurde analysiert, um global korrekte Ergebnisse sicherzustellen. Eine detaillierte Analyse ist einer naiven Zerlegung, die die Einflüsse von Blockgrenzen vernachlässigt, überlegen. In Abschnitt 6 wird, aufbauend auf den I/O-effizienten Algorithmen, ein Verfahren zur quantitativen Analyse von Mikrogefäßnetzwerken diskutiert.
24

Synthèse de modèles de plantes et reconstructions de baies à partir d’images / Analysis and 3D reconstruction of natural objects from images

Guénard, Jérôme 04 October 2013
Plants are essential elements of the world around us, so if we want to create virtual environments that are both pleasant and realistic, an effort must be made to model plants. Although mature computer vision techniques allow the reconstruction of ever more complicated objects from images, plants remain difficult to reconstruct because of the complexity of their topology, and dedicated methods must be devised. This thesis is divided into two parts. The first part focuses on the modelling of biologically realistic plants from a single image. We generate a 3D model of a plant using an analysis-by-synthesis method that combines a priori information about the plant species with a single image. First, a dedicated 2D skeletonisation algorithm, based on vector fields and a non-deterministic partitioning of the foliage segmentation, generates possible branching structures from the image. Then, we build a 3D generative model, based on L-systems, that parameterises branching systems for different types of plants while taking botanical knowledge into account, so that the resulting skeleton follows the hierarchical organisation of natural branching structures. By varying the parameter values of the generative model (main branching structure and foliage), we produce a series of candidate models, and a Bayesian process selects the final 3D model as the one maximising a posterior criterion composed of a likelihood term, which measures the similarity between the image and the reprojection of the 3D model, and a prior probability, which measures the realism of the model. After generating plant models, fruit models must be created. As we worked mainly on vines, we developed a method for reconstructing a grape cluster from at least two views, where each berry is modelled as an ellipsoid of revolution; the resulting method can be adapted to any type of fruit whose shape is close to a quadric of revolution. The second part of this thesis focuses on the reconstruction of quadrics of revolution from one or several views. The reconstruction of quadrics and, more generally, of 3D surfaces is a classical problem in computer vision that has given rise to much work. We recall the necessary background in the projective geometry of quadrics and in computer vision, and review existing methods for reconstructing quadratic surfaces. We then detail a first algorithm that recovers the images of the principal foci of a quadric of revolution from a "calibrated" view, i.e., one for which the intrinsic parameters of the camera are known, and show how to use this result to reconstruct, through a linear triangulation scheme, any type of quadric of revolution from at least two views. Finally, we show how the 3D pose of a quadric of revolution with known parameters can be recovered from a single occluding contour. We evaluate the performance of our methods and show some possible applications.
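The occluding contour of a quadric of revolution projects to a conic, so contour-based reconstruction starts from a conic fit. The following sketch shows a standard least-squares conic fit to sampled contour points; it illustrates only this first step, not the thesis's focal-point recovery algorithm, and the sample ellipse is invented for the example.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
    The coefficient vector is the right singular vector of the design
    matrix associated with the smallest singular value."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]  # (a, b, c, d, e, f), defined up to scale

# Sample the ellipse ((x-1)/4)^2 + ((y+3)/2)^2 = 1 and recover its conic.
t = np.linspace(0, 2 * np.pi, 100)
pts = np.column_stack([4 * np.cos(t) + 1, 2 * np.sin(t) - 3])
coef = fit_conic(pts)
# Normalised coefficients: expect approximately [1, 0, 4, -2, 24, 21].
print(coef / coef[0])
```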
25

Squelettes pour la reconstruction 3D : de l'estimation de la projection du squelette dans une image 2D à la triangulation du squelette en 3D / Skeletons for 3D reconstruction : from the estimation of the skeleton projection in a 2D image to the triangulation of the 3D skeleton

Durix, Bastien 12 December 2017
3D reconstruction consists in acquiring images of an object and using them to estimate a 3D model of that object. In this manuscript, we develop a reconstruction method based on a particular model, the skeleton. The main advantages of our approach are that the reconstructed model is a complete (i.e., closed) virtual object which, thanks to the skeleton structure, is easily editable; moreover, the object does not need to be textured, and between 3 and 5 images are sufficient for the reconstruction. In the first part, we focus on the 2D aspects of the work. Estimating a 3D skeleton requires studying how the silhouette of the object is formed from its skeleton, and thus the properties of the skeleton's perspective projection, called the perspective skeleton. Our first contribution is an algorithm that estimates the perspective projection of a curvilinear 3D skeleton consisting of a set of curves. Like most skeletonisation algorithms, however, it tends to generate uninformative branches, in particular on a rasterised image. Our second contribution is therefore a 2D skeleton estimation algorithm that takes into account the discretisation noise on the contour of the 2D shape and avoids these uninformative branches. This algorithm, first designed to estimate a classical skeleton, is then generalised to compute a perspective skeleton. In the second part, we estimate the 3D skeleton of an object from its projections. We first assume that the skeleton of the object to be reconstructed is curvilinear, so that each estimated perspective skeleton corresponds to the projection of the object's 3D skeleton from a different viewpoint. Since the topology of the skeleton is affected by the projection, our third contribution is the estimation of the topology of the 3D skeleton from the set of its projections. Once this topology is estimated, the projection of each 3D branch of the skeleton is identified on each image, i.e., on each of the perspective skeletons. From this identification we triangulate the branches of the 3D skeleton, which constitutes our fourth contribution: we are thus able to estimate a curvilinear skeleton associated with a set of images of an object. However, not all 3D skeletons consist solely of curves: some also have surface parts. Our last contribution, for reconstructing 3D skeletons with surface parts, is a new approach to estimating a general 3D skeleton from images, whose principle is to grow the 3D skeleton under the constraints given by the different images of the object.
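The fourth contribution triangulates matched skeleton branches from calibrated views. The sketch below shows the underlying linear (DLT) triangulation for a single pair of matched 2D points; the camera matrices and test point are invented for the example, and the branch matching itself is not shown.

```python
import numpy as np

def triangulate_point(P1, P2, u1, u2):
    """Linear (DLT) triangulation: recover the 3D point X from its
    projections u1, u2 under 3x4 camera matrices P1, P2.
    Each view contributes two rows of A, with A @ X_homogeneous = 0."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point under camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and a 1-unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
u1, u2 = project(P1, X_true), project(P2, X_true)
print(triangulate_point(P1, P2, u1, u2))  # ~ [0.3, -0.2, 4.0]
```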
26

Estudo e desenvolvimento de algoritmos de esqueletização com aplicação em redes vasculares ósseas / Study and development of skeletonization algorithms with application to bone vascular networks

Abreu, Andrêssa Finzi de 02 September 2016
Although radiotherapy is a widespread technique, it can damage bone repair, for example by reducing vascularisation. The bone vascular network, however, plays an important role in the bones' capacity for regeneration, since it supplies oxygen and essential nutrients; tools that support the analysis of these networks are therefore important for the study of the various therapies that affect bone tissue. To analyse such networks, we performed three-dimensional reconstructions of images collected by sectioning the femurs of a mouse whose left femur had received radiation doses, while the right femur was not irradiated and was used as a control. To aid the analysis of these volumes, skeletonization techniques were used to reduce the amount of information in the objects and make the analysis more accurate and efficient. There are, however, several types of skeletonization algorithms: thinning-based, geometric, and based on the distance transform, on force fields, or on wave propagation. To determine which of them produces the best results on vascular-network volumes, one implementation of each type was chosen to be tested and analysed on the vascular-network volumes. Furthermore, the algorithm chosen to represent the wave-propagation methods was developed and proposed in this work specifically to extract the channels of vascular networks. In the end, the skeletons of the vascular networks reproduced the studied network clearly and supported conclusions about the impact of radiotherapy on vascular topology. In addition, the comparison between the types of skeletonization algorithms enabled an in-depth study of the subject and of the various characteristics of curve skeletons that can be used to classify and compare the methods in the literature. / Master's dissertation.
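A minimal sketch of the wave-propagation idea behind the algorithm proposed in this work: a breadth-first wavefront is propagated from a seed through the binary vessel mask, each front is split into connected clusters, and cluster centroids sampled along the propagation yield centreline nodes that split where the vessel forks. The toy 2D mask and the sampling step are invented for the example; the thesis operates on 3D volumes.

```python
import numpy as np
from collections import deque
from scipy.ndimage import label

def wave_centreline(mask, seed):
    """Propagate BFS wavefronts from `seed` through the binary mask; each
    front shell is split into connected clusters, and cluster centroids
    sampled along the propagation give the centreline nodes."""
    dist = np.full(mask.shape, -1, int)
    dist[seed] = 0
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and dist[ny, nx] < 0):
                dist[ny, nx] = dist[y, x] + 1
                q.append((ny, nx))
    nodes = []
    for d in range(0, int(dist.max()) + 1, 3):  # sample every 3rd front
        shell, n = label(dist == d)
        for i in range(1, n + 1):
            ys, xs = np.nonzero(shell == i)
            nodes.append((d, ys.mean(), xs.mean()))
    return nodes

# Toy vessel: a horizontal tube that forks into two branches.
mask = np.zeros((40, 60), bool)
mask[18:22, 2:30] = True     # main channel
mask[10:14, 30:58] = True    # upper branch
mask[26:30, 30:58] = True    # lower branch
mask[10:30, 28:32] = True    # junction connecting the branches
print(wave_centreline(mask, (20, 3))[:5])
```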
27

Discrete shape analysis for global illumination / Analyse de formes pour l'illumination globale

Noel, Laurent 15 December 2015
Nowadays, computer-generated images are found everywhere, in a wide range of applications such as video games, cinema, architecture, publicity, artistic design, virtual reality, scientific visualization and lighting engineering. Consequently, the need for visual realism and fast rendering keeps growing. Realistic rendering requires estimating the global illumination of a scene through light transport simulation, a time-consuming process whose convergence rate generally decreases as the complexity of the input virtual 3D scene increases. In particular, occlusions and strong indirect illumination are global features of the scene that existing techniques struggle to handle efficiently. This thesis addresses this problem through the application of discrete shape analysis to rendering. Our main tool is a curvilinear skeleton of the empty space of the scene: a sparse graph containing important geometric and topological information about the structure of the scene. Taking advantage of this skeleton, we propose new methods to improve both real-time and off-line rendering. For real-time rendering, we exploit the geometric information carried by the skeleton to approximate the shadows cast by a large set of virtual point lights representing the indirect illumination of the 3D scene. For off-line rendering, our work focuses on algorithms based on path sampling, the main paradigm of state-of-the-art physically based rendering. Our skeleton leads to new efficient path sampling strategies guided by topological and geometric features. Addressing the same problem, we also propose a sampling strategy based on a second tool from discrete shape analysis: the opening function of the empty space of the scene, which describes the local thickness of that space at each point. Our contributions improve on existing approaches and clearly indicate that discrete shape analysis offers many opportunities for the development of new rendering techniques.
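A crude sketch of the opening function mentioned above, approximated here as a local-thickness map: every ball inscribed in the empty space paints its radius onto the pixels it covers, and each pixel keeps the largest radius seen. The toy room-and-corridor scene is invented for the example, and the brute-force loop is far slower than what a real scene would require.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def local_thickness(mask):
    """Assign each point of the empty space the diameter of the largest
    inscribed ball covering it, by painting every ball (centre c, radius
    DT(c)) onto the grid and keeping the maximum radius per pixel."""
    dt = distance_transform_edt(mask)
    thick = np.zeros_like(dt)
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    for cy, cx in zip(*np.nonzero(mask)):
        r = dt[cy, cx]
        ball = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
        thick[ball] = np.maximum(thick[ball], r)
    return 2.0 * thick  # thickness reported as a diameter

# Empty space of a toy "scene": a wide room joined by a narrow corridor.
mask = np.zeros((40, 80), bool)
mask[5:35, 5:35] = True      # room: locally thick
mask[18:22, 35:75] = True    # corridor: locally thin
t = local_thickness(mask)
print(t.max(), t[20, 50])    # large in the room, ~4 px in the corridor
```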
28

Computation with continuous mode CMOS circuits in image processing and probabilistic reasoning

Mroszczyk, Przemyslaw January 2014
The objective of the research presented in this thesis is to investigate alternative ways of information processing employing asynchronous, data-driven, and analogue computation in massively parallel cellular processor arrays, with applications in machine vision and artificial intelligence. The use of cellular processor architectures, with only local neighbourhood connectivity, is considered in VLSI realisations of trigger-wave propagation in binary image processing and in Bayesian inference. Design issues critical to computational precision and system performance are extensively analysed, accounting for the non-ideal operation of MOS devices caused by second-order effects, noise and parameter mismatch. In particular, CMOS hardware solutions for two specific tasks, binary image skeletonization and the sum-product algorithm for belief propagation in factor graphs, are considered, targeting efficient design in terms of processing speed, power, area and computational precision. The major contributions of this research are in the area of continuous-time and discrete-time CMOS circuit design, with applications in moderate-precision analogue and asynchronous computation, accounting for parameter variability. Various analogue and digital circuit realisations, operating in the continuous-time and discrete-time domains, are analysed in theory and verified using combined Matlab-Hspice simulations, providing a versatile framework suitable for custom analyses, verification and optimisation of the designed systems. Novel solutions exhibiting reduced impact of parameter variability on circuit operation are presented and applied in the design of arithmetic circuits for matrix-vector operations and in data-driven asynchronous processor arrays for binary image processing. Several mismatch optimisation techniques are demonstrated, based on the use of a switched-current approach in the design of a current-mode Gilbert multiplier circuit, a novel biasing scheme in the design of tunable delay gates, and an averaging technique applied to analogue continuous-time circuit realisations of Bayesian networks. The most promising circuit solutions were implemented on the PPATC test chip, fabricated in a standard 90 nm CMOS process, and verified in experiments.
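As a numeric reference for what the analogue sum-product circuits compute, the following sketch runs belief propagation on the smallest useful factor graph: two binary variables with unary evidence factors and one pairwise factor. The factor values are invented for the example; on this tree-structured graph a single message pass yields exact marginals.

```python
import numpy as np

# Factor graph: binary variables x1, x2 with unary factors f1, f2
# and a pairwise factor g that favours agreement.
f1 = np.array([0.9, 0.1])            # evidence on x1
f2 = np.array([0.4, 0.6])            # evidence on x2
g = np.array([[0.8, 0.2],
              [0.2, 0.8]])           # g[x1, x2]

# Sum-product messages: a variable-to-factor message is the product of the
# other incoming messages; a factor-to-variable message sums the factor
# over the other variable.
m_x1_to_g = f1                       # x1's only other neighbour is f1
m_g_to_x2 = g.T @ m_x1_to_g          # marginalise over x1
m_x2_to_g = f2
m_g_to_x1 = g @ m_x2_to_g            # marginalise over x2

b1 = f1 * m_g_to_x1; b1 /= b1.sum()  # exact marginal of x1
b2 = f2 * m_g_to_x2; b2 /= b2.sum()  # exact marginal of x2
print(b1, b2)                        # ~[0.876, 0.124], ~[0.655, 0.345]
```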
29

Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data

Schilling, Anita 10 October 2014
Research on forest ecosystems receives high attention, especially nowadays with regard to sustainable management of renewable resources and climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also for commercial applications. Conventional methods of measuring geometric plant features are labour- and time-intensive, and for detailed analysis trees have to be cut down, which is often undesirable. Terrestrial Laser Scanning (TLS) provides a particularly attractive tool here because of its contactless measurement technique: the object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparatively high stand density and the many occlusions that result from it. The varying level of detail of TLS data poses a big challenge. We present two fully automatic methods, with complementary properties, for obtaining skeletal structures from scanned trees. The first method retrieves the entire tree skeleton from the 3D data of co-registered scans. The branching structure is obtained from a voxel-space representation by searching paths from branch tips to the trunk, the trunk having been determined in advance from the 3D points; the skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement, which is exploited by the second method, operating on individual scans. Furthermore, we introduce a novel concept for managing TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure for retrieving the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into subcomponents, and a principal curve is computed from the 3D point set associated with each subcomponent. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there exists no clear definition of what the true skeleton should be with respect to a given point set. Consequently, we cannot assess the correctness of the methods quantitatively, but have to rely on visual assessment of the results, and we provide a thorough discussion of the particularities of both methods. We present experimental results for both methods. The first method efficiently retrieves full skeletons of trees, which approximate the branching structure; the level of detail is mainly governed by the voxel space, so smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy. It is sensitive to noise in the boundary, but the results are very promising, and there are plenty of possibilities to enhance the method's robustness. The combination of the strengths of both presented methods needs to be investigated further and may lead to a robust way of obtaining complete tree skeletons from TLS data automatically.
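A toy sketch of the first method's core step, searching paths from branch tips to the trunk in a voxel representation: a breadth-first search rooted at a trunk voxel records parents, and each tip is traced back to the trunk, giving the branches of a 3D line graph. The hand-made voxel set, 6-connectivity and unweighted search are simplifications invented for the example.

```python
from collections import deque

def tip_to_trunk_paths(occ, trunk, tips):
    """BFS over the set of occupied voxels rooted at `trunk`; each branch
    tip is traced back along BFS parents, and the union of the traced
    paths forms the tree's line-graph skeleton."""
    parent = {trunk: trunk}
    q = deque([trunk])
    while q:
        v = q.popleft()
        for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
            if n in occ and n not in parent:
                parent[n] = v
                q.append(n)
    paths = []
    for tip in tips:
        path, v = [tip], tip
        while v != trunk:          # walk back to the root
            v = parent[v]
            path.append(v)
        paths.append(path)
    return paths

# Toy voxelised tree: a vertical trunk with one side branch.
occ = {(0, 0, z) for z in range(6)} | {(x, 0, 5) for x in range(4)}
print(tip_to_trunk_paths(occ, (0, 0, 0), [(3, 0, 5)]))
```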
30

Optimization Methods for Patient Positioning in Leksell Gamma Knife Perfexion

Ghobadi, Kimia 21 July 2014
We study inverse treatment planning approaches for stereotactic radiosurgery using Leksell Gamma Knife Perfexion (PFX, Elekta, Stockholm, Sweden) to treat patients with brain cancers and tumours. PFX is a dedicated head-and-neck radiation delivery device that is commonly used in clinics. In a PFX treatment, the patient lies on a couch and radiation beams are emitted from eight banks of radioactive sources around the patient's head, focused at a single spot called an isocentre. Radiation delivery in PFX is step-and-shoot: the couch is stationary while radiation is delivered at an isocentre location and moves only when no beam is being emitted. To find a set of well-positioned isocentres in tumour volumes, we explore fast geometry-based algorithms, including skeletonization and hybrid grassfire and sphere-packing approaches. For the selected set of isocentres, the optimal beam durations that deliver a high prescription dose to the tumour are then found using a penalty-based optimization model. We next extend our grassfire and sphere-packing isocentre selection method to treatments with homogeneous dose distributions. Dose homogeneity is required in multi-session plans, where a larger volume is treated to account for daily setup errors and large overlaps with the surrounding healthy tissue may therefore exist. For multi-session plans, we explicitly consider the healthy-tissue overlaps in our algorithms and strategically select many isocentres in adjacent volumes to avoid hotspots. There is also interest in treating patients with continuous couch motion, to decrease the total treatment time and increase plan quality, so we investigate continuous dose delivery treatment plans for PFX. We present path selection methods, based on Hamiltonian path techniques, for choosing the paths along which the dose is delivered, and develop mixed-integer and linear approximation models to determine the beam configuration and radiation duration along the paths. Our optimization models consider several criteria, including machine speed constraints and movement accuracy, preference for single or multiple paths, and smoothness of movement. Plans produced by all the proposed approaches are tested on seven clinical cases; they meet or exceed clinical guidelines and usually outperform the clinical treatments.
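A minimal sketch of a hybrid grassfire/sphere-packing isocentre selection in the spirit described above: the deepest voxel of the remaining target (the grassfire maximum, found via a Euclidean distance transform) becomes the next isocentre, its inscribed sphere is carved out, and the process repeats. The toy elongated target and stopping radius are invented for the example; the beam-duration optimization is not shown.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pack_isocentres(target, min_radius=1.0):
    """Repeatedly place an isocentre at the deepest remaining voxel of the
    target volume, carve out the inscribed sphere, and recompute the
    distance transform until no sufficiently deep voxel remains."""
    remaining = target.copy()
    zz, yy, xx = np.mgrid[0:target.shape[0],
                          0:target.shape[1],
                          0:target.shape[2]]
    isocentres = []
    while True:
        dt = distance_transform_edt(remaining)
        r = dt.max()                 # grassfire maximum: deepest voxel
        if r < min_radius:
            break
        c = np.unravel_index(dt.argmax(), dt.shape)
        isocentres.append((c, r))
        sphere = ((zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2) <= r * r
        remaining &= ~sphere         # carve out the packed sphere
    return isocentres

# Toy elongated "tumour": a 30x10x10 box inside a 34^3 grid.
target = np.zeros((34, 34, 34), bool)
target[2:32, 12:22, 12:22] = True
for c, r in pack_isocentres(target, min_radius=2.0):
    print(c, round(float(r), 1))
```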
