  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

REGION-COLOR BASED AUTOMATED BLEEDING DETECTION IN CAPSULE ENDOSCOPY VIDEOS

2014 June (has links)
Capsule Endoscopy (CE) is a unique technique that enables non-invasive, practical visualization of the entire small intestine, and it has attracted a critical mass of studies aimed at improving it. Among the many ongoing studies in capsule endoscopy, considerable effort is devoted to software algorithms that identify clinically important frames in CE videos. This thesis presents a computer-assisted method that performs automated detection of CE video frames containing bleeding. Specifically, a methodology is proposed to classify the frames of CE videos into bleeding and non-bleeding frames. It is a supervised method based on a Support Vector Machine (SVM) that classifies frames using color features derived from image regions, where each region is characterized by statistical features. With 15 candidate features available, an exhaustive feature selection is performed to obtain the best feature subset: the combination of features with the highest bleeding discrimination ability, as determined by three performance metrics: accuracy, sensitivity, and specificity. A ground-truth label annotation method is also proposed to partially automate the delineation of bleeding regions for training the classifier. The method produced promising results, with sensitivity and specificity values up to 94%. All experiments were performed separately for the RGB and HSV color spaces. Experimental results show that the combination of the mean values of the red and green planes is the best feature subset in RGB (Red-Green-Blue) color space, and the combination of the mean values of all three planes is the best feature subset in HSV (Hue-Saturation-Value).
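The exhaustive feature selection described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the region features and values are made up (three of the 15 candidates), and a nearest-centroid classifier stands in for the SVM so the sketch stays self-contained.

```python
from itertools import combinations
from statistics import mean

# Hypothetical region-level color features (3 of the 15 candidates)
# with made-up values; labels mark bleeding vs. non-bleeding regions.
regions = [
    ({"mean_R": 0.85, "mean_G": 0.20, "mean_B": 0.25}, "bleeding"),
    ({"mean_R": 0.80, "mean_G": 0.25, "mean_B": 0.30}, "bleeding"),
    ({"mean_R": 0.40, "mean_G": 0.45, "mean_B": 0.40}, "non-bleeding"),
    ({"mean_R": 0.35, "mean_G": 0.50, "mean_B": 0.45}, "non-bleeding"),
]

def accuracy(subset, data):
    # Nearest-centroid classifier as a lightweight stand-in for the SVM.
    centroids = {}
    for label in ("bleeding", "non-bleeding"):
        pts = [[f[k] for k in subset] for f, lab in data if lab == label]
        centroids[label] = [mean(col) for col in zip(*pts)]
    correct = 0
    for feats, label in data:
        x = [feats[k] for k in subset]
        pred = min(centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(x, centroids[lab])))
        correct += pred == label
    return correct / len(data)

# Exhaustive search: score every non-empty subset, keep the best one.
names = ["mean_R", "mean_G", "mean_B"]
subsets = [s for r in range(1, len(names) + 1)
           for s in combinations(names, r)]
best = max(subsets, key=lambda s: accuracy(s, regions))
```

With 15 candidates the same loop scans 2^15 − 1 = 32767 subsets, which is still tractable for per-subset training of a small classifier.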
193

Dynamic Gesture Recognition

Quiroga, Facundo January 2014 (has links)
The goal of this thesis is to study, develop, analyze, and compare different machine learning techniques applicable to the automatic recognition of dynamic gestures. To that end, a model of the gestures to be recognized was defined, a test gesture database called LNHG was generated, and classifiers based on support vector machines (SVM), feedforward neural networks (FF), and competitive neural networks (CPN) were studied and implemented, using local and global representations to characterize the gestures. In addition, a new gesture recognition model is proposed: the competitive neural classifier (CNC). The gestures to be recognized are hand movements, with invariance to speed, rotation, scale, and translation. The gesture data used to build the database were captured with the Kinect device and its SDK, which recognizes body parts and determines their positions in real time. The classifiers were trained on these data to determine whether a sequence of hand positions constitutes a gesture. A classifier library was implemented with the aforementioned methods, along with the transformations needed to map a sequence of positions to a representation suitable for recognition. Experiments were carried out on the LNHG database, composed of gestures representing digits and letters, and on another author's database of typical interaction gestures, with satisfactory results.
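Two of the invariances required above, translation and scale, can be obtained by normalizing each trajectory before classification. A minimal sketch with hypothetical 2-D hand positions; rotation and speed invariance would need further steps (e.g. resampling and angular alignment) that are omitted here.

```python
def normalize_gesture(points):
    """Center a hand trajectory at its centroid and rescale it to the
    unit box, making it invariant to translation and scale."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    # Largest coordinate magnitude; guard against a degenerate gesture.
    scale = max(max(abs(x), abs(y)) for x, y in centered) or 1.0
    return [(x / scale, y / scale) for x, y in centered]

# Two versions of the same square gesture, shifted and enlarged:
a = normalize_gesture([(0, 0), (2, 0), (2, 2), (0, 2)])
b = normalize_gesture([(10, 10), (14, 10), (14, 14), (10, 14)])
```

After normalization both trajectories map to the same point sequence, so a classifier sees one canonical gesture.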
194

Local Part Model for Action Recognition in Realistic Videos

Shi, Feng 27 May 2014 (has links)
This thesis presents a framework for automatic recognition of human actions in uncontrolled, realistic video data such as movies, internet, and surveillance videos. The human action recognition problem is addressed from the perspective of local spatio-temporal features and bag-of-features representations. The bag-of-features model contains only statistics of unordered low-level primitives; any information concerning temporal ordering and spatial structure is lost. To address this issue, we propose a novel multiscale local part model that maintains both structure information and the ordering of local events for action recognition. The model includes a coarse primitive-level root feature covering event-content statistics and higher-resolution overlapping part features incorporating local structure and temporal relationships. To extract the local spatio-temporal features, we investigate a random sampling strategy for efficient action recognition and introduce the idea of using very high sampling density for efficient and accurate classification. We further explore the potential of the method by jointly optimizing two constraints: classification accuracy and efficiency. On the performance side, we propose a new local descriptor, called GBH, based on spatial and temporal gradients. It significantly improves the action recognition performance of the purely spatial gradient-based HOG descriptor while preserving high computational efficiency. We also show that the performance of the state-of-the-art MBH descriptor can be improved with a discontinuity-preserving optical flow algorithm. In addition, a new method based on the histogram intersection kernel is introduced to combine multiple channels of different descriptors. This method improves recognition accuracy with multiple descriptors while speeding up the classification process.
On the efficiency side, we applied PCA to reduce the feature dimension which resulted in fast bag-of-features matching. We also evaluated the FLANN method on real-time action recognition. We conducted extensive experiments on real-world videos from challenging public action datasets. We showed that our methods achieved the state-of-the-art with real-time computational potential, thus highlighting the effectiveness and efficiency of the proposed methods.
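The histogram intersection kernel used above to combine descriptor channels can be sketched as follows. The per-channel averaging rule here is one plausible fusion scheme and may differ from the thesis's exact formulation; the histograms are made up.

```python
def hist_intersection(h1, h2):
    # Histogram intersection kernel: sum of bin-wise minima.
    return sum(min(a, b) for a, b in zip(h1, h2))

def combined_kernel(channels_a, channels_b):
    # Fuse multiple descriptor channels by averaging their
    # per-channel intersection kernels.
    k = [hist_intersection(a, b) for a, b in zip(channels_a, channels_b)]
    return sum(k) / len(k)

# Two descriptor channels per sample (e.g. a GBH histogram and an MBH
# histogram), with made-up L1-normalized bin values.
sample_a = [[0.5, 0.5], [1.0, 0.0]]
sample_b = [[0.3, 0.7], [0.5, 0.5]]
similarity = combined_kernel(sample_a, sample_b)
```

For L1-normalized histograms the kernel value lies in [0, 1], with 1 meaning identical histograms, which makes the per-channel scores directly comparable before averaging.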
195

Intelligent Recognition of Texture and Material Properties of Fabrics

Wang, Xin 02 November 2011 (has links)
Fabrics are unique materials with a variety of properties affecting their performance and end uses. Computerized fabric property evaluation and analysis plays a crucial role not only in the textile industry but also in scientific research. Accurate analysis and measurement of fabric properties provides a powerful tool for gauging product quality, assuring regulatory compliance, and assessing the performance of textile materials. This thesis investigated solutions for applying computerized methods to evaluate and intelligently interpret the texture and material properties of fabric in an inexpensive and efficient way. First, a method is proposed that automatically recognizes the basic weave pattern and precisely measures the yarn count. The yarn crossed-areas are segmented by a spatial-domain integral projection approach. Combining fuzzy c-means (FCM) and principal component analysis (PCA) on grey level co-occurrence matrix (GLCM) feature vectors extracted from the segments allows the detected segments to be classified into two clusters. Based on analysis of texture orientation features, the yarn crossed-area states are determined automatically. An autocorrelation method is used to find weave repeats and correct detection errors. The method was validated on computer-simulated woven samples and real woven fabric images. The test samples have various yarn counts, appearances, and weave types. All weave patterns of the tested fabric samples were successfully recognized, and the computed yarn counts are consistent with manual counts. Second, we present a methodology for using high-resolution 3D surface data of fabric samples to measure surface roughness in a nondestructive and accurate way. A parameter FDFFT, the fractal dimension estimated from the 2D FFT of the 3D surface scan, is proposed as the indicator of surface roughness.
The robustness of FDFFT, namely its rotation invariance and scale invariance, is validated on a number of computer-simulated fractal Brownian images. Then, to evaluate the usefulness of FDFFT, a novel method of calculating standard roughness parameters from the 3D surface scan is introduced. According to the test results, FDFFT is a fast and reliable parameter for measuring fabric roughness from 3D surface data. We also develop a neural network model using the back-propagation algorithm and FDFFT to predict the standard roughness parameters; the proposed model performs well experimentally. Finally, an intelligent approach for the interpretation of fabric objective measurements is proposed using support vector machine (SVM) techniques. Human expert assessments of fabric samples are used during the training phase to adapt the general system into an applicable model. Since the target output of the system is clear, the uncertainty inherent in current subjective fabric evaluation does not affect the performance of the proposed model. The support vector machine is one of the best solutions for handling high-dimensional data classification, and it deals effectively with the complexity of the fabric property problem. The generalization ability of SVM allows the user to implement and design the components separately. Extensive cross-validations are performed to test the performance of the system.
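The FDFFT idea can be sketched as follows: take the 2-D FFT power spectrum of the height map, radially average it, fit a power law P(f) ~ f^(-beta), and use the fractal Brownian surface relation beta = 8 − 2D to recover the dimension D. This is a rough sketch under those assumptions; the thesis's estimator may differ in windowing, binning, and fitting details.

```python
import numpy as np

def fdfft(surface):
    """Estimate the fractal dimension of a square 2-D height map from
    the radially averaged power spectrum of its 2-D FFT."""
    n = surface.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(surface))) ** 2
    # Integer radius of each frequency bin from the spectrum center.
    y, x = np.indices(spec.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), spec.ravel()) / counts
    # Fit log P(f) = -beta * log f + c over the usable frequency range.
    freqs = np.arange(1, n // 2)
    beta = -np.polyfit(np.log(freqs), np.log(radial[1:n // 2]), 1)[0]
    # Fractal Brownian surface relation: beta = 8 - 2 * D.
    return (8 - beta) / 2
```

A rougher surface has a flatter spectrum (smaller beta), hence a larger estimated dimension, which is what makes the slope usable as a roughness indicator.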
196

Segmentation of Multiple Sclerosis Lesions in Brain MRI

Abdullah, Bassem A 17 February 2012 (has links)
Multiple Sclerosis (MS) is an autoimmune disease of the central nervous system. It may produce a variety of symptoms, from blurred vision to severe muscle weakness and degradation, depending on the affected regions of the brain. To better understand this disease and quantify its evolution, magnetic resonance imaging (MRI) is increasingly used. Manual delineation of MS lesions in MR images by a human expert is time-consuming, subjective, and prone to inter-expert variability, so automatic segmentation is needed as an alternative. However, MS lesions progress with considerable variability and show temporal changes in shape, location, and area, both between patients and even within the same patient, which makes automatic segmentation of MS lesions a challenging problem. In this dissertation, a set of segmentation pipelines is proposed for automatic segmentation of MS lesions from brain MRI data. These techniques use a trained support vector machine (SVM) to discriminate between blocks in MS-lesion regions and blocks in non-lesion regions, based mainly on textural features aided by other features. The main contribution of this set of frameworks is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineating the lesions. In addition, the technique introduces the concept of multi-sectional-view segmentation to produce verified segmentations. The multi-sectional-view pipeline is customized to improve segmentation performance and to exploit the properties and nature of MS lesions in MRI. These customizations and enhancements lead to the development of the customized MV-T-SVM.
The MRI datasets that were used in the evaluation of the proposed pipelines are simulated MRI datasets (3 subjects) generated using the McGill University BrainWeb MRI Simulator, real datasets (51 subjects) publicly available at the workshop of MS Lesion Segmentation Challenge 2008 and real MRI datasets (10 subjects) for MS subjects acquired at the University of Miami. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI.
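The block-based stage described above can be sketched as follows: partition a 2-D slice into non-overlapping blocks and compute per-block statistics that a classifier such as the SVM would consume. The mean and variance used here are simple placeholders for the dissertation's richer textural features.

```python
def block_features(image, block):
    """Split a 2-D intensity image (list of rows) into non-overlapping
    block x block tiles and return ((row, col), (mean, variance)) per tile."""
    rows, cols = len(image), len(image[0])
    feats = []
    for i in range(0, rows - block + 1, block):
        for j in range(0, cols - block + 1, block):
            vals = [image[i + di][j + dj]
                    for di in range(block) for dj in range(block)]
            m = sum(vals) / len(vals)
            v = sum((x - m) ** 2 for x in vals) / len(vals)
            feats.append(((i, j), (m, v)))
    return feats

# A toy 4x4 "slice" made of four homogeneous 2x2 blocks.
slice_2d = [[1, 1, 2, 2],
            [1, 1, 2, 2],
            [3, 3, 4, 4],
            [3, 3, 4, 4]]
features = block_features(slice_2d, 2)
```

Each (mean, variance) pair, or its richer textural analogue, then becomes one training or test sample for the block classifier.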
197

Feature Based Learning for Point Cloud Labeling and Grasp Point Detection

Olsson, Fredrik January 2018 (has links)
Robotic bin picking is the problem of emptying a bin of randomly distributed objects through a robotic interface. This thesis examines an SVM approach to extract grasping points for a vacuum-type gripper. The SVM is trained on synthetic data and used to classify the points of a non-synthetic 3D-scanned point cloud as either graspable or non-graspable. The classified points are then clustered into graspable regions from which the grasping points are extracted. The SVM models and the algorithm as a whole are trained and evaluated against cubic and cylindrical objects. Separate SVM models are trained for each type of object, in addition to one model trained on a dataset containing both types of objects. It is shown that the performance of the SVM in terms of accuracy depends on the objects and their geometrical properties. Further, it is shown that the algorithm is reasonably robust at successfully picking objects, regardless of their scale.
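The clustering step above, grouping graspable points into regions and extracting one grasp point per region, can be sketched as follows. This is a simplified stand-in (breadth-first grouping by a distance threshold, centroid as the grasp point); the thesis's actual clustering and extraction may differ.

```python
def cluster_graspable(points, labels, radius):
    """Group points labelled graspable into connected regions (points
    within `radius` of each other) and return each region's centroid
    as a candidate grasp point."""
    graspable = [p for p, keep in zip(points, labels) if keep]
    unseen = set(range(len(graspable)))
    grasp_points = []
    while unseen:
        seed = unseen.pop()
        queue, region = [seed], [graspable[seed]]
        while queue:
            i = queue.pop()
            near = [j for j in unseen
                    if sum((a - b) ** 2 for a, b in
                           zip(graspable[i], graspable[j])) <= radius ** 2]
            for j in near:
                unseen.discard(j)
                region.append(graspable[j])
            queue.extend(near)
        grasp_points.append(tuple(sum(c) / len(region) for c in zip(*region)))
    return grasp_points

# Two tight clusters of 3D points, all classified graspable.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
grasps = cluster_graspable(pts, [True, True, True, True], radius=1.0)
```

A real pipeline would rank the resulting centroids (e.g. by region size or surface flatness) before sending one to the gripper.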
198

Automation of Stopword Generation

Krupník, Jiří January 2014 (has links)
This diploma thesis focuses on the automation of stopword generation as one method of pre-processing textual documents. It analyzes the influence of stopword removal on the results of data mining tasks (classification and clustering). First, text mining techniques and frequently used algorithms are described. Methods of creating domain-specific stopword lists are then described in detail. Finally, the results of testing on large collections of text files, together with the implementation methods, are presented and discussed.
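One common heuristic for generating a domain-specific stopword list, words that appear in a large share of the documents carry little discriminative value, can be sketched as follows. The thesis compares several generation methods; the document-frequency threshold used here is only one of them, and the ratio 0.8 is an arbitrary illustration.

```python
from collections import Counter

def generate_stopwords(documents, min_doc_ratio=0.8):
    """Return words whose document frequency is at least `min_doc_ratio`,
    i.e. words so common across the corpus they are poor discriminators."""
    doc_freq = Counter()
    for doc in documents:
        # Count each word once per document (document frequency, not term
        # frequency).
        doc_freq.update(set(doc.lower().split()))
    n = len(documents)
    return {w for w, c in doc_freq.items() if c / n >= min_doc_ratio}

docs = ["the cat sat", "the dog ran", "the bird flew", "a the end"]
stopwords = generate_stopwords(docs)
```

Lowering `min_doc_ratio` makes the list more aggressive; the effect on downstream classification and clustering is exactly what the thesis measures.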
199

Classification of Swedish News Articles Using Support Vector Machines

Blomberg, Jossefin, Jansson Martén, Felicia January 2018 (has links)
The aim of this thesis is to reduce the reach of influence campaigns by means of the Support Vector Machine (SVM) machine learning model. The work comprises a literature study and two experiments. The literature study provides a frame of reference for text classification with Support Vector Machines. The first experiment involved training an SVM to classify Swedish news articles by reliability. The second experiment compared the trained SVM model with other standard methods in text classification. The results indicate that SVM is an effective tool for classifying Swedish news articles, but also that several other models are suitable for the same task.
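Before an SVM can classify news articles, the text must be vectorized; a TF-IDF bag-of-words front end is the usual choice. A minimal sketch, assuming whitespace tokenization (real Swedish-text preprocessing would add tokenization, casing, and possibly stemming rules):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn raw documents into TF-IDF vectors over a sorted vocabulary.
    Returns (vocab, vectors), one dense vector per document."""
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents each word occurs.
    df = Counter(w for toks in tokenized for w in set(toks))
    vocab = sorted(df)
    n = len(docs)
    idf = {w: math.log(n / df[w]) for w in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[w] / len(toks) * idf[w] for w in vocab])
    return vocab, vectors

vocab, vectors = tfidf_vectors(["spam spam ham", "ham eggs"])
```

Words occurring in every document get IDF 0 and thus drop out, while rarer, more discriminative words dominate, which is what makes the representation a good fit for a linear SVM.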
200

Classification of Pedestrians in Degraded Images

Costa, André Fonseca 25 November 2013 (has links)
A basic pedestrian detector generally has two main components: one that selects image regions possibly containing a pedestrian (candidate generator) and another that classifies these regions into pedestrian and non-pedestrian groups (classifier). These classifiers are usually based on feature extractors: transformations that convert the original intensity or color of an image's pixels into a new representation that highlights some kind of knowledge about the image content. In an uncontrolled environment, external factors can hurt the classifier's performance. Low resolution, noise, blur, and occlusion are among the effects these factors can produce, degrading the quality of the captured images and, consequently, of the extracted features. This dissertation evaluates how feature extractors behave in this kind of environment. These types of degradation were simulated on the image datasets used in the experiments: INRIA Person and Caltech Pedestrian. Since we are interested only in the classification stage, the images were converted into fixed-size windows during preprocessing. The experiments use combinations of feature extractors (HOG, LBP, CSS, LGIP, and LTP) and learning models (AdaBoost and linear SVM) to build classifiers.
The classifiers were trained on intact images and tested on images at various levels of degradation. HOG (42%) and LTP (54%) outperformed the others in roughly half of the test points on INRIA Person and Caltech Pedestrian, respectively. The known drop in LBP performance under noise was confirmed, and LGIP and LTP were shown to mitigate it. CSS proved robust to noise but yields weak features overall. Finally, classifiers combining more than one feature extractor outperformed individual extractors in most test points. Combining all extractors yields a classifier superior in 95.8% of situations to one built only with the overall best extractor (HOG on INRIA, LTP on Caltech).
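One of the degradations simulated above, blur, can be reproduced with a simple box filter over a grayscale image. A minimal sketch (edge pixels clamped); the dissertation's simulation protocol and blur kernel may differ.

```python
def box_blur(image, k):
    """Apply a k x k box (mean) filter to a 2-D grayscale image given as
    a list of rows, clamping coordinates at the borders."""
    rows, cols = len(image), len(image[0])
    half = k // 2
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [image[min(max(i + di, 0), rows - 1)]
                         [min(max(j + dj, 0), cols - 1)]
                    for di in range(-half, half + 1)
                    for dj in range(-half, half + 1)]
            out[i][j] = sum(vals) / len(vals)
    return out

# A single bright pixel gets spread over its 3x3 neighborhood.
impulse = [[0, 0, 0],
           [0, 9, 0],
           [0, 0, 0]]
blurred = box_blur(impulse, 3)
```

Testing classifiers trained on intact windows against such synthetically blurred windows is exactly the kind of degradation sweep the experiments perform.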
