21 |
Modèles statistiques morphométriques et structurels du cortex pour l'étude du développement cérébral / Statistical morphometric and structural models of the cortex for the study of brain development
Cachia, Arnaud 11 1900 (has links) (PDF)
The search for anatomical variations of the cortex, complementary to functional investigations, has been strongly stimulated in recent years by the development of brain image analysis methods. These new possibilities have led to the creation of vast anatomo-functional mapping projects of the human brain, comparable in their potential scope to genome mapping projects. During the 1990s, the neuroimaging community chose to approach this problem by developing a technique called spatial normalization: each brain is endowed with a coordinate system (surface-based or volume-based) that indicates a location in a reference brain. This system is obtained by warping each new brain so that it matches the reference brain as closely as possible. However, morphometry based on spatial normalization has its limits. It is widely acknowledged that it cannot precisely handle the very high variability of cortical folding, and that it gives access only to the most pronounced anatomical differences. These considerations have motivated the development of new morphometry tools that allow a fine-grained analysis of cortical structures. Until recently, such structural morphometry, which takes into account the individual anatomical particularities of each cortex, was limited by the difficulty and tedium of the 'manual' work required. The recent development of new image analysis tools that automatically extract and recognize the cortical sulci in anatomical MR images has changed this state of affairs and opened the way to large-scale structural morphometry studies. From an anatomo-functional point of view, however, the basic structure of the cortex is the gyrus, not the sulcus.
Yet, while the literature now offers numerous methods dedicated to cortical sulci, none is specific to gyri, essentially because of their very high morphological variability. The first line of work of this thesis is the development of a fully automatic method for segmenting gyri that takes their individual anatomy into account. This method provides a generic formalism for defining each gyrus from the set of bounding sulci that delimit it; a distance criterion, underlying the Voronoi diagram used to parcellate the cortical surface, extrapolates this definition in areas where the sulci are interrupted. The study of the mechanisms at work in the folding of the cortex during its development, both ante- and post-natal, is a key point for analyzing and understanding variations of cortical anatomy, normal or otherwise, and for characterizing their links with brain function. Recent work suggests the existence of a stable sulcal proto-organization, visible in the fetal brain, which leaves an imprint in the adult cortical relief. For the second line of work of this thesis, we tried to recover the traces of these buried structures, the sulcal roots, inscribed in the cortical folds. To this end we developed an original model of the cortex, the primal sketch of curvatures, which provides a multi-scale, structural description of cortical curvature. This description is derived from a surface-based smoothing of the (2D) curvature map, obtained by implementing the heat equation computed geodesically on the cortical surface mesh. It allowed us to recover the two putative sulcal roots buried in the central sulcus, and the four roots of the superior temporal sulcus.
In parallel, we initiated a direct study of the first sulcal folds through three-dimensional reconstruction of the fetal brain in utero.
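The surface-based heat-equation smoothing behind the primal sketch of curvatures can be mimicked, very loosely, by diffusing a per-vertex scalar over a mesh adjacency graph. The sketch below is a hedged illustration only: it uses a uniform-weight umbrella-operator Laplacian rather than true geodesic diffusion, and all names and parameters are ours, not the thesis'.

```python
import numpy as np

def smooth_on_mesh(values, edges, n_steps=50, dt=0.2):
    """Explicit heat-equation smoothing of a per-vertex scalar (e.g. a curvature
    map) along mesh edges, using the umbrella-operator graph Laplacian."""
    n = len(values)
    # build adjacency lists from the edge set
    nbrs = [[] for _ in range(n)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    v = np.asarray(values, float).copy()
    for _ in range(n_steps):
        # umbrella Laplacian: mean of neighbours minus the vertex value
        lap = np.array([np.mean(v[nb]) - v[i] if nb else 0.0
                        for i, nb in enumerate(nbrs)])
        v += dt * lap
    return v
```

On a real cortical mesh the Laplacian would be weighted by edge lengths (e.g. cotangent weights) to approximate geodesic diffusion, and increasing `n_steps` plays the role of moving up in scale.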
|
22 |
Shape Analysis Using Contour-based And Region-based Approaches
Ciftci, Gunce 01 January 2004 (has links) (PDF)
The user of an image database often wishes to retrieve all images similar to the one (s)he already has. In this thesis, shape analysis methods for shape retrieval are investigated. Shape analysis methods can be classified into two groups, contour-based and region-based, according to the shape information used. Under such a classification, the curvature scale space (CSS) representation and the angular radial transform (ART) are promising contour-based and region-based methods, respectively, for shape similarity retrieval. The CSS representation operates by decomposing the shape contour into convex and concave sections. The CSS descriptor is extracted from the curvature zero-crossing behaviour of the shape boundary while the boundary is smoothed with a Gaussian filter. The ART decomposes the shape region over a number of orthogonal 2-D basis functions defined on a unit disk, and the ART descriptor is extracted from the magnitudes of the ART coefficients. These methods are implemented for similarity comparison of binary images, and the retrieval performance of the descriptors is investigated for a varying number of boundary sampling points and a varying order of ART coefficients. The experiments are done using 1000 images from the MPEG-7 Core Experiment Shape-1 data set. Results show that different descriptors are more successful for different classes of shape. When the choice of approach is made according to the properties of the query shape, similarity retrieval performance increases.
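The CSS construction can be sketched as follows: smooth the closed boundary with Gaussians of increasing width and track the curvature zero-crossings, which disappear as concavities are smoothed away. This is a simplified illustration of the mechanism, not the exact MPEG-7 descriptor; the contour, sampling and scales below are arbitrary choices of ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_zero_crossings(x, y, sigmas):
    """For each smoothing scale, count curvature zero-crossings of a closed
    contour given by sample coordinates (x, y)."""
    counts = []
    for s in sigmas:
        # smooth the boundary coordinates with a periodic Gaussian filter
        xs = gaussian_filter1d(x, s, mode="wrap")
        ys = gaussian_filter1d(y, s, mode="wrap")
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        # signed curvature of a planar parametric curve
        kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
        signs = np.sign(kappa)
        counts.append(int(np.sum(signs != np.roll(signs, 1))))
    return counts
```

The CSS image proper records, for each boundary position, the scale at which a pair of zero-crossings vanishes; the maxima of that image form the actual descriptor.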
|
23 |
Interest Point Matching Across Arbitrary Views
Bayram, Ilker 01 June 2004 (links) (PDF)
Making a computer 'see' is certainly one of the greatest challenges of today. Apart from possible applications, a solution may also shed light on, or at least give some idea of, how biological vision actually works. Many problems faced en route to successful algorithms require finding corresponding tokens in different views, which is termed the correspondence problem. For instance, given two images of the same scene from different views, if the camera positions and their internal parameters are known, it is possible to obtain the 3-dimensional coordinates of a point in space, relative to the cameras, provided the same point can be located in both images. Interestingly, the camera positions and internal parameters may be extracted solely from the images if a sufficient number of corresponding tokens can be found. In this sense, two subproblems are examined: the choice of the tokens and how to match them. Due to the arbitrariness of the image pairs, invariant schemes for extracting and matching interest points, which were taken as the tokens to be matched, are utilised. In order to appreciate the ideas behind these schemes, topics such as scale-space, rotational and affine invariants are introduced. The geometry of the problem is briefly reviewed and the epipolar constraint is imposed using statistical outlier rejection methods. Despite the satisfactory matching performance of simple correlation-based matching schemes on small-baseline pairs, the simulation results show the improvements obtained when the mentioned invariants are used in the cases for which they are strictly necessary.
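A baseline for the correlation-based matching mentioned above is mutual-best normalized cross-correlation between patches cut around the interest points. The following is a toy sketch; the score threshold and the mutual-consistency check are our assumptions, not necessarily the thesis' exact scheme.

```python
import numpy as np

def ncc_match(patches_a, patches_b, min_score=0.8):
    """Match two sets of image patches by normalised cross-correlation,
    keeping only mutual best matches above a score threshold."""
    def normalise(p):
        # flatten, remove mean, scale to unit norm -> dot product becomes NCC
        p = p.reshape(len(p), -1).astype(float)
        p = p - p.mean(1, keepdims=True)
        return p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-12)
    A, B = normalise(patches_a), normalise(patches_b)
    S = A @ B.T                         # NCC score matrix
    ab, ba = S.argmax(1), S.argmax(0)   # best b for each a, best a for each b
    return [(i, int(j)) for i, j in enumerate(ab)
            if ba[j] == i and S[i, j] >= min_score]
```

On wide-baseline pairs this baseline degrades quickly, which is exactly why the invariant extraction and matching schemes of the thesis are needed.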
|
24 |
Análise multiescala de formas planas baseada em estatísticas da transformada de Hough / Multiscale shape analysis based on the Hough transform statistics
Ramos, Lucas Alexandre [UNESP] 12 August 2016
Currently, given the widespread use of computers throughout society, the task of recognizing visual patterns is increasingly being automated, in particular to cope with the large and growing amount of digital images available. Applications in many areas, such as biometrics, content-based image retrieval, and medical diagnosis, make use of image processing, as well as of techniques for the extraction and analysis of image characteristics, in order to identify persons, objects, gestures, texts, etc. The basic features used for image analysis are color, texture and shape. Recently, a new shape descriptor called HTS (Hough Transform Statistics) was proposed, which relies on the Hough space to represent and recognize objects in images by their shapes. The results obtained by HTS on public image databases have shown that this shape descriptor, besides reaching high accuracy levels, better than many traditional shape descriptors proposed in the literature, is fast, since it has an algorithm of linear complexity. In this dissertation we explored the possibilities of a multiscale and scale-space representation of this new shape descriptor. Scale is a key parameter in Computer Vision, and scale-space theory refers to the space formed when the spatial aspects of an image are observed simultaneously at several scales, with scale as the third dimension. The multiscale HTS methods were evaluated on the same databases, and their performances were compared with the main shape descriptors found in the literature and with the monoscale HTS. Experimental results showed that these new descriptors are faster and can also be more accurate in some cases. / FAPESP: 2014/10611-0
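The gist of a Hough-space shape descriptor can be illustrated as follows: each contour point votes for every line (rho, theta) passing through it, and simple statistics of the resulting accumulator serve as features that are translation- and scale-invariant thanks to normalization. This is only a loose sketch inspired by the idea; the published HTS feature set differs, and the choice of per-row and per-column standard deviations as the statistics here is ours.

```python
import numpy as np

def hough_stats_descriptor(points, n_theta=64, n_rho=32):
    """Accumulate (rho, theta) line votes from 2D contour points, then use the
    standard deviation of each accumulator row and column as a feature vector."""
    pts = points - points.mean(0)          # translation normalisation
    pts = pts / np.abs(pts).max()          # scale normalisation
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    # rho = x*cos(theta) + y*sin(theta), one row per point, one column per angle
    rho = pts[:, 0, None] * np.cos(thetas) + pts[:, 1, None] * np.sin(thetas)
    rho_idx = np.clip(((rho + np.sqrt(2)) / (2 * np.sqrt(2)) * n_rho).astype(int),
                      0, n_rho - 1)
    acc = np.zeros((n_rho, n_theta))
    for j in range(n_theta):
        np.add.at(acc[:, j], rho_idx[:, j], 1)
    acc /= acc.sum()
    return np.concatenate([acc.std(axis=0), acc.std(axis=1)])
```

The linear complexity claimed for HTS is visible here: each of the N contour points contributes one vote per angle, so the accumulator is filled in O(N) for a fixed discretization.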
|
25 |
Three dimensional object recognition for robot conveyor picking
Wikander, Gustav January 2009
Shape-based matching (SBM) is a method for matching objects in greyscale images. It extracts edges from search images and matches them to a model using a similarity measure. In this thesis we extend SBM to find the tilt and height position of the object in addition to the z-plane rotation and x-y position. The search is conducted using a scale pyramid to improve the search speed. A 3D matching can be done for small tilt angles by using SBM on height data and extending it with additional steps to calculate the tilt of the object. The full pose is useful for picking objects with an industrial robot. The tilt of the object is calculated using a RANSAC plane estimator. After the 2D search, the differences in height between all corresponding points of the model and the live image are calculated. By fitting a plane to these differences, the tilt of the object can be calculated. Using the tilt, the model edges are tilted in order to improve the matching at the next scale level. The problems that arise with occlusion and missing data have been studied. Missing and erroneous data have been thresholded manually, after tests showed that automatic filling of missing data did not noticeably improve the matching: the automatic filling could introduce new false edges and remove true ones, thus lowering the score. Experiments have been conducted in which objects were placed at increasing tilt angles. The results show that the matching algorithm is object dependent, and correct matches are almost always found for tilt angles of less than 10 degrees. This is very similar to the original 2D SBM, because the model edges do not change much for such small angles. For tilt angles up to about 25 degrees most objects can be matched, and for well-behaved objects correct matches can be made at tilt angles of up to 40 degrees.
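The RANSAC plane-estimation step, fitting a plane to the model/image height differences while ignoring outliers caused by occlusion and missing data, might look like the following generic sketch (not the thesis code; the iteration count and inlier tolerance are illustrative).

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.01, rng=None):
    """Robustly fit z = a*x + b*y + c to an (N, 3) array of points."""
    rng = np.random.default_rng(rng)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        # hypothesise a plane from three random points
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]
        try:
            coef = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue                     # degenerate (collinear) sample
        # count points whose height residual is within tolerance
        resid = np.abs(points[:, :2] @ coef[:2] + coef[2] - points[:, 2])
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = coef, inliers
    return best, best_inliers
```

The recovered (a, b) coefficients give the surface gradient, from which the tilt angles of the object follow directly.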
|
26 |
Representação em múltiplas escalas para identificação automática de estruturas em imagens médicas / Multiscale representation for automatic identification of structures in medical images
Rebelo, Marina de Fatima de Sa 14 October 2005
Advisors: Lincoln de Assis Moura Junior, Sergio Shiguemi Furuie, Eduardo Tavares Costa / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
The identification of structures is an important step in several applications in the field of medical imaging. The purpose of this thesis is to contribute to that field; its main goal is to propose a generic methodology for the identification of structures using a multiresolution approach, the scale-space. We evaluate the use of a data representation that allows the inclusion of a priori knowledge about the structures at several scales, and we also develop the idea of performing the processing at an appropriate scale. The proposed methodology comprises the following steps: (i) creation of an image representation at several scales using scale-space theory; (ii) inspection of the images at all scales and extraction of the relevant features, the output of this step being a tree structure that maps the relations of the detected features throughout the scale-space; this representation serves as a guide to the subsequent high-level processing step, in which a priori knowledge about the desired structure is modeled and included in the representation; (iii) matching between the elements present in the built structure and a known pattern that describes the structure of interest. The methodology is generic, and the type of information to be used depends strongly on the application. In this thesis we built a prototype application that uses geometric information for the identification of organs in 2D images of a phantom reproducing human anatomy. The results of applying this method to a set of images with different noise and contrast levels were quite satisfactory. The two initial steps of the method have also been implemented for 3D images, and new parameters can easily be included in the matching step for extension to three-dimensional applications. / Doctorate / Biomedical Engineering / Doutor em Engenharia Elétrica
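The merging of structures across scales that the tree representation records can be demonstrated with a minimal experiment: count the bright connected components at each level of a Gaussian scale-space and watch nearby blobs fuse as scale grows. This is a toy illustration with an arbitrary threshold, not the thesis' feature detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def blob_counts(img, sigmas, thresh=0.2):
    """Number of bright connected components at each Gaussian scale level.
    As sigma increases, nearby structures merge and the count drops."""
    counts = []
    for s in sigmas:
        sm = gaussian_filter(img.astype(float), s)
        _, n = label(sm > thresh * sm.max())   # components above a relative threshold
        counts.append(int(n))
    return counts
```

In the thesis' terms, each drop in the count corresponds to a merge event, i.e. an internal node of the scale-space tree linking two fine-scale features to one coarse-scale feature.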
|
27 |
A Hardware Architecture for Scale-space Extrema Detection
Ijaz, Hamza January 2012
Vision-based object recognition and localization have been studied widely in recent years. Often the initial step in such tasks is the detection of interest points in a grey-level image. The current state-of-the-art algorithms in this domain, like the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), suffer from low execution speeds on GPU (graphics processing unit) based systems. The performance of these algorithms on a GPU is generally below real-time due to their high computational complexity and data-intensive nature, and comes with elevated power consumption. Since real-time performance is desirable in many vision-based applications, hardware-based feature detection is an emerging solution that exploits the inherent parallelism in such algorithms to achieve significant speed gains. The efficient utilization of resources remains a challenge that directly affects the cost of the hardware. This work proposes a novel hardware architecture for the scale-space extrema detection part of the SIFT algorithm. The implementation of the proposed architecture for a Xilinx Virtex-4 FPGA and its evaluation are also presented. The implementation is sufficiently generic and can be adapted efficiently to different design parameters according to the requirements of the application. The achieved system performance exceeds real-time requirements (30 frames per second) on 640 x 480 images. Synthesis results show efficient resource utilization when compared with existing known implementations.
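For reference, the scale-space extrema detection stage that such an architecture implements can be sketched in software as a difference-of-Gaussians (DoG) stack whose 3x3x3 local extrema are the candidate keypoints. This is a simplified floating-point sketch; the parameter values and the contrast threshold here are illustrative, not taken from the thesis or the FPGA design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(img, sigma0=1.6, k=2 ** 0.5, n_scales=4, thresh=0.03):
    """Detect scale-space extrema in a difference-of-Gaussians stack
    (the front end of SIFT). Returns (scale, row, col) triples."""
    blurred = [gaussian_filter(img.astype(float), sigma0 * k ** i)
               for i in range(n_scales + 1)]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(n_scales)])
    # a voxel is an extremum if it equals the max or min of its 3x3x3 neighbourhood
    maxs = maximum_filter(dog, size=3)
    mins = minimum_filter(dog, size=3)
    is_ext = ((dog == maxs) | (dog == mins)) & (np.abs(dog) > thresh)
    is_ext[0] = is_ext[-1] = False   # need scale neighbours above and below
    s, y, x = np.nonzero(is_ext)
    return list(zip(s.tolist(), y.tolist(), x.tolist()))
```

The hardware gains come from pipelining exactly this structure: the Gaussian filters, the subtractions, and the 3x3x3 comparisons can all run in parallel on streaming pixel data.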
|
28 |
Inverse geometry : from the raw point cloud to the 3d surface : theory and algorithms / Géométrie inverse : du nuage de points brut à la surface 3D : théorie et algorithmes
Digne, Julie 23 November 2010
Many laser devices directly acquire 3D objects and reconstruct their surface. Nevertheless, the final reconstructed surface is usually smoothed out as a result of the scanner's internal de-noising process and the offsets between different scans. This thesis, working on results from high-precision scans, adopts the somewhat extreme conservative position of not losing or altering any raw sample throughout the whole processing pipeline, and of attempting to visualize them all. Indeed, this is the only way to discover all surface imperfections (holes, offsets). Furthermore, since high-precision data can capture the slightest surface variation, any smoothing and any sub-sampling can incur the loss of textural detail. The thesis attempts to prove that one can triangulate the raw point cloud with almost no sample loss. It solves the exact visualization problem on large data sets of up to 35 million points made up of 300 or more different scan sweeps. Two major problems are addressed. The first is the orientation of the complete raw point set and the building of a high-precision mesh. The second is the correction of the tiny scan misalignments which can cause strong high-frequency aliasing and completely hamper a direct visualization. The second development of the thesis is a general low/high-frequency decomposition algorithm for any point cloud. Classic image analysis tools, the level-set tree and the MSER representation, are thus extended to meshes, yielding an intrinsic mesh segmentation method. The underlying mathematical development focuses on an analysis of a half dozen discrete differential operators acting on raw point clouds which have been proposed in the literature. By considering the asymptotic behavior of these operators on a smooth surface, a classification by their underlying curvature operators is obtained. This analysis leads to the development of a discrete operator consistent with mean curvature motion (the intrinsic heat equation), defining a remarkably simple and robust numerical scale space. In this scale space, all of the above-mentioned problems (point set orientation, raw point set triangulation, scan merging, segmentation), usually addressed by separate techniques, are solved in a unified framework.
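The flavor of such a point-cloud scale space can be conveyed by the simplest barycenter iteration: move each raw point part-way toward the barycenter of its nearest neighbors, a shift that is asymptotically proportional to mean curvature along the normal. The following is a crude O(n^2) sketch under our own simplifying assumptions (brute-force neighbor search, fixed k and step), not the operator analyzed in the thesis.

```python
import numpy as np

def barycenter_smooth(points, k=8, n_iter=10, step=0.5):
    """Iteratively move each point toward the barycentre of its k nearest
    neighbours -- a crude numerical analogue of mean curvature motion."""
    P = np.asarray(points, float).copy()
    for _ in range(n_iter):
        # brute-force pairwise distances (fine for small clouds)
        d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # k nearest, excluding self
        P += step * (P[idx].mean(axis=1) - P)
    return P
```

Running the iteration longer corresponds to moving up in scale: fine texture and noise vanish first, while the coarse shape persists, which is what makes the scale space usable for orientation, merging and segmentation alike.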
|
29 |
Real Time 3D Surface Feature Extraction On FPGA
Tellioglu, Zafer Hasim 01 July 2010 (has links) (PDF)
Three dimensional (3D) surface feature extraction based on mean (H) and Gaussian (K) curvature analysis of range maps, also known as depth maps, is an important tool for machine vision applications such as object detection, registration and recognition. Mean and Gaussian curvature calculation algorithms have already been implemented and examined in software. In this thesis, hardware-based digital curvature processors are designed. Two types of real-time surface feature extraction and classification hardware are developed, which perform mean and Gaussian curvature analysis at different scale levels. The techniques use different gradient approximations. A fast square-root algorithm using both a LUT (look-up table) and a linear fitting technique is developed to calculate the H and K values of the surface described by the 3D range map, which is represented in fixed-point numbers. The proposed methods are simulated in MATLAB and implemented on different FPGAs using the VHDL hardware description language. Calculation times, outputs and power analyses of these techniques are compared to CPU-based 64-bit floating-point calculations.
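In software, the H and K maps that such hardware computes can be written directly from finite-difference gradients of the range map. Below is a floating-point numpy reference of the standard Monge-patch formulas, with our own choice of gradient approximation; it stands in for, but is not, the fixed-point FPGA pipeline.

```python
import numpy as np

def mean_gauss_curvature(Z, h=1.0):
    """Mean (H) and Gaussian (K) curvature of a range map z = f(x, y),
    sampled on a regular grid with spacing h, via finite differences."""
    Zy, Zx = np.gradient(Z, h)          # first derivatives
    Zxy, Zxx = np.gradient(Zx, h)       # second derivatives
    Zyy, _ = np.gradient(Zy, h)
    g = 1 + Zx ** 2 + Zy ** 2           # metric determinant term
    K = (Zxx * Zyy - Zxy ** 2) / g ** 2
    H = ((1 + Zx ** 2) * Zyy - 2 * Zx * Zy * Zxy
         + (1 + Zy ** 2) * Zxx) / (2 * g ** 1.5)
    return H, K
```

The signs of H and K at each pixel classify the local surface type (peak, pit, ridge, valley, saddle, flat), which is the classification the thesis' hardware produces in real time.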
|
30 |
A Statistical Approach to Feature Detection and Scale Selection in Images / Eine Statistische Methode zur Merkmalsextraktion und Skalenselektion in Bildern.
Majer, Peter 07 July 2000
No description available.
|