21

Automated Machine Learning Based Analysis of Intravascular Optical Coherence Tomography Images

Shalev, Ronny Y. 31 May 2016
No description available.
22

Object Surface Exploration Using a Tactile-Enabled Robotic Fingertip

Monteiro Rocha Lima, Bruno 16 December 2019
Exploring surfaces is an essential ability for humans, allowing them to interact with a large variety of objects in their environment. This ability is also of major interest for a new generation of humanoid robots, which requires the development of more efficient artificial tactile sensing techniques. The details perceived by statically touching object surfaces not only improve robotic hand performance in force-controlled grasping tasks but also enable the sensing of vibrations on touched surfaces. This thesis presents an extensive experimental study of object surface exploration using biologically-inspired tactile-enabled robotic fingers. A new multi-modal tactile sensor, embedded in both versions of the robotic fingertips (similar to the human distal phalanx), is capable of measuring the heart rate with a mean absolute error of 1.47 bpm through static exploration of the human skin. A two-phalanx articulated robotic finger with a new miniaturized tactile sensor embedded in the fingertip was developed to detect and classify surface textures. This classification is performed through dynamic exploration of touched object surfaces, using two types of movements: one-dimensional (1D) and two-dimensional (2D). Several machine learning techniques (Support Vector Machine (SVM), Multilayer Perceptron (MLP), Random Forest, Extra Trees, and k-Nearest Neighbors (kNN)) were tested to find the most efficient one for classifying the recovered texture patterns. A precision of 95% was achieved with the Extra Trees technique for the classification of the 1D recovered texture patterns. Experimental results confirmed that 2D textured-surface exploration using a hemispheric tactile-enabled finger was superior to 1D exploration. Three exploratory velocities were used for the 2D exploration: 30 mm/s, 35 mm/s, and 40 mm/s. The best classification accuracies for the 2D recovered texture patterns were 99.1% and 99.3%, obtained with the SVM classifier at the two lower exploratory velocities (30 mm/s and 35 mm/s, respectively). For the 40 mm/s velocity, the Extra Trees classifier provided a classification accuracy of 99.4%. The experimental results presented in this thesis provide a basis for the future development of tactile surface exploration in robotic systems.
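As an illustration of the classifier comparison reported in this abstract, the sketch below trains the same five techniques with scikit-learn. The synthetic 1D tactile signals, the hand-picked spectral features, and the default hyperparameters are assumptions for the sake of a runnable example; they are not the sensor data or feature pipeline used in the thesis.

```python
# Minimal sketch, assuming synthetic 1D tactile signals and simple spectral
# features; the thesis's real sensor data, feature set, and hyperparameters
# are not specified here.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def synthetic_sweep(base_freq, n=512):
    """Toy stand-in for a 1D tactile sweep over a textured surface."""
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2 * np.pi * base_freq * t) + 0.3 * rng.standard_normal(n)

def features(signal):
    """Simple time- and frequency-domain statistics used as texture descriptors."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.array([signal.std(), np.abs(np.diff(signal)).mean(),
                     spectrum.argmax(), spectrum.max() / (spectrum.sum() + 1e-9)])

# Three hypothetical texture classes, distinguished by their dominant vibration frequency.
X = np.array([features(synthetic_sweep(f)) for f in (5, 20, 60) for _ in range(40)])
y = np.repeat([0, 1, 2], 40)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Extra Trees": ExtraTreesClassifier(random_state=0),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:>13}: {scores.mean():.3f} +/- {scores.std():.3f}")
```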
23

Quantitative follow-up of pulmonary diseases using deep learning models / Suivi quantitatif de pathologies pulmonaires à base de modèles d'apprentissage profond

Tarando, Sebastian Roberto 16 May 2018
Infiltrative lung diseases (ILDs) comprise a large group of irreversible lung disorders that require regular follow-up with computed tomography (CT) imaging. A quantitative assessment is mandatory to establish the (regional) disease progression and/or the therapeutic impact. This implies the development of automated computer-aided diagnosis (CAD) tools for pathological lung tissue segmentation, a problem addressed as pixel-based texture classification. Traditionally, such classification relies on a two-dimensional analysis of axial CT images by means of handcrafted features. Recently, deep learning techniques, especially convolutional neural networks (CNNs), have shown great improvements over handcrafted, heuristics-based methods for visual tasks. However, "classic" CNN architectures have shown limitations when applied to texture datasets, whose intrinsic dimensionality is higher than that of handwritten digits or other object recognition datasets, implying the need to redesign the network or enrich the system so that it learns meaningful textural features from the input data. This work addresses an automated quantitative assessment of different disorders based on lung texture classification. The proposed approach exploits a cascade of CNNs (specially redesigned for texture categorization) for a hierarchical classification, together with a specific preprocessing of the input data based on locally connected filtering (applied to the lung images to attenuate the vessel densities while preserving the high opacities related to pathologies). The classification, targeting the whole lung parenchyma, achieves an average accuracy of 84% (75.8% for normal tissue, 90% for emphysema and fibrosis, 81.5% for ground glass).
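To make the patch-classification idea concrete, here is a minimal PyTorch sketch of a small CNN that labels 2D lung-CT texture patches into the four classes reported above (normal tissue, emphysema, fibrosis, ground glass). It is not the cascade architecture nor the locally connected filtering developed in the thesis; the patch size, layer sizes, and random input data are illustrative assumptions.

```python
# Minimal sketch (not the thesis's cascade) of a small CNN texture-patch classifier.
import torch
import torch.nn as nn

class TexturePatchCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # For 32x32 input patches the feature map is 32 channels of 8x8.
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TexturePatchCNN()
patches = torch.randn(8, 1, 32, 32)   # batch of preprocessed CT patches (dummy data)
logits = model(patches)
print(logits.shape)                    # torch.Size([8, 4])
```

In a cascade along the lines described in the abstract, several such networks would be chained so that each stage refines the decision of the previous one; that chaining, like the vessel-attenuating pre-filtering, is specific to the thesis and is not reproduced here.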
24

Développement d’un modèle d’analyse de texture multibande / New model for multiband texture analysis

Safia, Abdelmounaime January 2014
In remote sensing, texture helps identify surface classes based on the similarity of the spatial organization of pixels. In multispectral images, texture is typically extracted independently in each band using existing grayscale texture methods. However, reducing the texture of a multispectral image to a set of independent grayscale textures ignores inter-band spatial interactions, which can be a valuable source of information alongside the classical intra-band texture. The main obstacle to characterizing texture as intra- and inter-band spatial interactions is that the required calculations are cumbersome. In the first part of this PhD thesis, a new texture model named the Compact Texture Unit (C-TU) model is proposed. The C-TU model takes the texture spectrum model as its starting point and significantly reduces its computational complexity: it provides a general solution for coding texture using only occurrence information, without the structural information used by the texture spectrum model. The proposed model was evaluated through a new monoband C-TU descriptor in the context of texture classification and image retrieval. Results showed that the monoband C-TU descriptor provides performance equivalent to that of the texture spectrum model, but with much lower complexity. The computational efficiency of the C-TU model is exploited in the second part of this thesis to propose a new descriptor for multiband texture characterization. This descriptor, named multiband C-TU, uses a multiband neighborhood to compare the central pixel with its neighbors in the same band and in the other spectral bands, extracting texture as a set of intra- and inter-band spatial interactions simultaneously; it is simple to extract and computationally efficient. The proposed descriptor was compared with three strategies commonly adopted in remote sensing: extracting texture from panchromatic data; extracting texture separately from a few new bands obtained by a principal component transform; and extracting texture separately in each spectral band. These strategies were applied using the co-occurrence matrix and the monoband compact texture descriptor. For all experiments, the proposed descriptor provided the best results. In the last part of this thesis, a new color texture image database, named the Multiband Brodatz Texture database, is developed for validating multiband texture methods. Images from this database have two important characteristics: their chromatic content, although rich, has no discriminative value on its own, yet it contributes to forming the texture; and their textural content exhibits high intra- and inter-band variation. These two characteristics make this database ideal for multiband texture analysis without the influence of color information.
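The sketch below illustrates the general idea behind the compact coding: the texture spectrum model assigns each pixel one of 3^8 codes from the pattern of comparisons with its eight neighbours, whereas a compact variant keeps only occurrence counts (how many neighbours are lower, equal, or higher), which drastically shrinks the code space. The exact C-TU definition and its multiband extension (which also compares the central pixel with neighbours in the other spectral bands) may differ from this interpretation of the abstract.

```python
# Hedged sketch of occurrence-only texture coding; the thesis's exact C-TU
# definition may differ.
import numpy as np

def compact_texture_histogram(image, tol=0):
    """Histogram of per-pixel codes built from neighbour-comparison counts only."""
    img = image.astype(np.int32)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    center = img[1:-1, 1:-1]
    n_higher = np.zeros_like(center)
    n_equal = np.zeros_like(center)
    for dy, dx in shifts:
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        diff = neigh - center
        n_higher += (diff > tol)
        n_equal += (np.abs(diff) <= tol)
    # Each pixel gets a code from the (n_higher, n_equal) pair: at most 9*9 = 81
    # codes instead of 3^8 = 6561 for the full texture spectrum.
    codes = n_higher * 9 + n_equal
    hist = np.bincount(codes.ravel(), minlength=81).astype(float)
    return hist / hist.sum()

texture = np.random.randint(0, 256, size=(64, 64))   # dummy grayscale texture
descriptor = compact_texture_histogram(texture)
print(descriptor.shape)                               # (81,)
```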
25

Data compression models for texture classification and segmentation / Modelos de compressão de dados para classificação e segmentação de texturas

Honório, Tatiane Cruz de Souza 31 August 2010
This work analyzes methods for the classification and segmentation of texture images using models from lossless data compression algorithms. Two data compression algorithms are evaluated: Prediction by Partial Matching (PPM) and Lempel-Ziv-Welch (LZW), which had been applied to texture classification in previous works. The textures are pre-processed using histogram equalization. The classification method is divided into two stages. In the learning (training) stage, the compression algorithm builds statistical models for the horizontal and vertical structures of each class. In the classification stage, samples of the textures to be classified are compressed using the models built in the learning stage, sweeping each sample horizontally and vertically. A sample is assigned to the class that obtains the highest average compression. The classifiers were tested using the Brodatz texture album, for various context sizes (in the PPM case), numbers of samples, and training sets. For some combinations of these parameters, the classifiers achieved 100% correct classification. Texture segmentation was performed only with PPM. Initially, the horizontal models are created using eight texture samples of size 32 x 32 pixels for each class, with a PPM context of maximum size 1. The images to be segmented are compressed using the class models, initially in blocks of size 64 x 64 pixels. If none of the models achieves a compression ratio within a predetermined interval, the block is divided into four blocks of size 32 x 32. The process is repeated until some model reaches a compression ratio within the interval defined for the block size in question. If a block reaches size 4 x 4, it is classified as belonging to the class of the model that achieved the highest compression ratio.
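The classification scheme described above can be illustrated with a short sketch that uses zlib as a stand-in for PPM/LZW. Unlike the thesis, which builds explicit per-class statistical models during training, a general-purpose compressor can only approximate this by compressing a test sample together with the concatenated training data of each class and measuring how many extra bytes the sample costs; the toy texture classes below are assumptions, not the Brodatz album.

```python
# Minimal sketch using zlib as a stand-in for PPM/LZW; the per-class "model"
# is simply the concatenated training data, and a sample is assigned to the
# class whose training data lets it compress best when appended.
import zlib
import numpy as np

rng = np.random.default_rng(1)
prototypes = {c: rng.integers(0, 256, size=(32, 32)) for c in (0, 1)}

def make_patch(cls, flip_fraction=0.05):
    """A noisy copy of the class prototype: a few pixels overwritten at random."""
    patch = prototypes[cls].copy()
    n_flip = int(flip_fraction * patch.size)
    idx = rng.integers(0, patch.size, size=n_flip)
    patch.flat[idx] = rng.integers(0, 256, size=n_flip)
    return patch

def to_bytes(patch):
    """Serialize a 2D texture patch row by row (a horizontal 'sweep')."""
    return patch.astype(np.uint8).tobytes()

def extra_bytes(train_bytes, sample_bytes):
    """Extra compressed size needed to encode the sample after the class's training data."""
    base = len(zlib.compress(train_bytes, 9))
    joint = len(zlib.compress(train_bytes + sample_bytes, 9))
    return joint - base

# "Training": eight samples per class, concatenated into one byte string per class.
train = {c: b"".join(to_bytes(make_patch(c)) for _ in range(8)) for c in (0, 1)}

correct, n_tests = 0, 20
for _ in range(n_tests):
    true_cls = int(rng.integers(0, 2))
    sample = to_bytes(make_patch(true_cls))
    predicted = min(train, key=lambda c: extra_bytes(train[c], sample))
    correct += (predicted == true_cls)
print(f"accuracy on toy data: {correct / n_tests:.2f}")
```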
