  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Diagnóstico de falhas em motores de indução trifásicos baseado em decomposição em componentes ortogonais e aprendizagem de máquinas / Fault diagnosis in three-phase induction motors based on orthogonal component decomposition and machine learning

Liboni, Luisa Helena Bartocci 05 June 2017 (has links)
O objetivo principal desta tese consiste no desenvolvimento de ferramentas matemáticas e computacionais dedicadas a um sistema de diagnóstico de barras quebradas no rotor de Motores de Indução Trifásicos. O sistema proposto é baseado em um método matemático de decomposição de sinais elétricos, denominado de Decomposição em Componentes Ortogonais, e ferramentas de aprendizagem de máquinas. Como uma das principais contribuições desta pesquisa, realizou-se um aprofundamento do entendimento da técnica de Decomposição em Componentes Ortogonais e de sua aplicabilidade como ferramenta de processamento de sinais para sistemas elétricos e eletromecânicos. Redes Neurais Artificiais e Support Vector Machines, tanto para classificação multi-classes quanto para detecção de novidades, foram configurados para receber índices advindos do processamento de sinais elétricos de motores, e a partir deles, identificar os padrões normais e os padrões com falhas. Além disso, a severidade da falha também é diagnosticada, a qual é representada pelo número de barras quebradas no rotor. Para a avaliação da metodologia, considerou-se o acionamento de motores de indução pela tensão de alimentação da rede e por inversores de frequência, operando sob diversas condições de torque de carga. Os resultados alcançados demonstram a eficácia das ferramentas matemáticas e computacionais desenvolvidas para o sistema de diagnóstico, sendo que os índices criados se mostraram altamente correlacionados com o fenômeno da falha. Mais especificamente, foi possível criar índices monotônicos com a severidade da falha e com baixa variabilidade, demonstrando-se que as ferramentas são eficientes extratores de características. / This doctoral thesis consists of the development of mathematical and computational tools dedicated to a diagnostic system for broken rotor bars in Three Phase Induction Motors. 
The proposed system is based on a mathematical method for decomposing electrical signals, named the Orthogonal Components Decomposition, and machine learning tools. As one of the main contributions of this research, an in-depth investigation of the decomposition technique and its applicability as a signal processing tool for electrical and electromechanical systems was carried out. Artificial Neural Networks and Support Vector Machines for multi-class classification and novelty detection were configured to receive indices derived from the processing of electrical signals and then identify normal motors and faulty motors. In addition, the fault severity is also diagnosed, which is represented by the number of broken rotor bars. Experimental data were used to evaluate the proposed method. Signals were obtained from induction motors operating with different torque levels and driven either directly by the grid or by frequency inverters. The results demonstrate the effectiveness of the mathematical and computational tools developed for the diagnostic system, since the indices created are highly correlated with the fault phenomenon. More specifically, it was possible to create indices that are monotonic with the fault severity and have low variability, which shows that the tools are efficient feature extractors.
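The indexing idea described above — projecting the measured current onto an orthogonal basis and treating the residual as a fault indicator — can be illustrated with a minimal sketch. This is not the thesis's actual Orthogonal Components Decomposition: the sinusoidal basis (supply frequency plus two harmonics) and the 50 Hz sideband used to mimic a broken-bar signature are illustrative assumptions.

```python
import math

def orthogonal_projection_index(signal, fs, f0=60.0, harmonics=(1, 5, 7)):
    """Project a sampled stator-current signal onto an orthogonal sinusoidal
    basis (supply frequency and selected harmonics) and return the mean
    residual energy, used here as a fault-sensitive index."""
    n = len(signal)
    t = [i / fs for i in range(n)]
    residual = list(signal)
    for h in harmonics:
        for wave in (math.sin, math.cos):
            basis = [wave(2 * math.pi * h * f0 * ti) for ti in t]
            norm = sum(b * b for b in basis)
            coef = sum(r * b for r, b in zip(residual, basis)) / norm
            # subtract the component explained by this basis vector
            residual = [r - coef * b for r, b in zip(residual, basis)]
    return sum(r * r for r in residual) / n
```

A healthy motor drawing a nearly sinusoidal current yields an index close to zero, while sideband components left in the residual raise it; in the thesis, such indices are then fed to the neural or SVM classifiers.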
422

Reconhecimento multibiométrico baseado em imagens de face parcialmente ocluídas / Multibiometric Recognition Based on Partially Occluded Face Images

Jozias Rolim de Araújo Junior 28 May 2018 (has links)
Com o avanço da tecnologia, as estratégias tradicionais para identificação de pessoas se tornaram mais suscetíveis a falhas. De forma a superar essas dificuldades, algumas abordagens vêm sendo propostas na literatura. Dentre estas abordagens destaca-se a Biometria. O campo da Biometria abarca uma grande variedade de tecnologias usadas para identificar ou verificar a identidade de uma pessoa por meio da mensuração e análise de aspectos físicos e/ou comportamentais do ser humano. Em função disso, a biometria tem um amplo campo de aplicações em sistemas que exigem uma identificação segura de seus usuários. Os sistemas biométricos mais populares são baseados em reconhecimento facial ou em impressões digitais. Entretanto, existem sistemas biométricos que utilizam a íris, varredura de retina, voz, geometria da mão e termogramas faciais. Atualmente, tem havido progresso significativo em reconhecimento automático de face em condições controladas. Em aplicações do mundo real, o reconhecimento facial sofre de uma série de problemas nos cenários não controlados. Esses problemas são devidos, principalmente, a diferentes variações faciais que podem mudar muito a aparência da face, incluindo variações de expressão, de iluminação, alterações da pose, assim como oclusões parciais. Em comparação com o grande número de trabalhos na literatura em relação aos problemas de variação de expressão/iluminação/pose, o problema de oclusão é relativamente negligenciado pela comunidade científica. Embora tenha sido dada pouca atenção ao problema de oclusão na literatura de reconhecimento facial, a importância deste problema deve ser enfatizada, pois a presença de oclusão é muito comum em cenários não controlados e pode estar associada a várias questões de segurança. Por outro lado, a Multibiométria é uma abordagem relativamente nova para representação de conhecimento biométrico que visa consolidar múltiplas fontes de informação visando melhorar a performance do sistema biométrico.
Multibiométria é baseada no conceito de que informações obtidas a partir de diferentes modalidades ou da mesma modalidade capturada de diversas formas se complementam. Consequentemente, uma combinação adequada dessas informações pode ser mais útil que o uso de informações obtidas a partir de qualquer uma das modalidades individualmente. A fim de melhorar a performance dos sistemas biométricos faciais na presença de oclusões parciais, será investigado o emprego de diferentes técnicas de reconstrução de oclusões parciais de forma a gerar diferentes imagens de face, as quais serão combinadas no nível de extração de característica e utilizadas como entrada para um classificador neural. Os resultados demonstram que a abordagem proposta é capaz de melhorar a performance dos sistemas biométricos baseados em faces parcialmente ocluídas. / With the advancement of technology, traditional strategies for identifying people have become more susceptible to failures. In order to overcome these difficulties, some approaches have been proposed in the literature. Among these approaches, Biometrics stands out. The field of biometrics covers a wide range of technologies used to identify or verify a person's identity by measuring and analyzing physical and/or behavioral aspects of the human being. As a result, biometrics has a wide field of applications in systems that require a secure identification of their users. The most popular biometric systems are based on facial recognition or fingerprints. However, there are biometric systems that use the iris, retinal scan, voice, hand geometry, and facial thermograms. Currently, there has been significant progress in automatic face recognition under controlled conditions. In real world applications, facial recognition suffers from a number of problems in uncontrolled scenarios.
These problems are mainly due to different facial variations that can greatly change the appearance of the face, including variations in expression, illumination and pose, as well as partial occlusions. Compared with the large number of papers in the literature regarding problems of expression/illumination/pose variation, the occlusion problem is relatively neglected by the research community. Although little attention has been paid to the occlusion problem in the facial recognition literature, its importance should be emphasized, since the presence of occlusion is very common in uncontrolled scenarios and may be associated with several security issues. On the other hand, multibiometrics is a relatively new approach to biometric knowledge representation that aims to consolidate multiple sources of information to improve the performance of the biometric system. Multibiometrics is based on the concept that information obtained from different modalities, or from the same modality captured in different ways, is complementary. Accordingly, a suitable combination of such information may be more useful than the use of information obtained from any of the individual modalities. In order to improve the performance of facial biometric systems in the presence of partial occlusion, the use of different partial occlusion reconstruction techniques was investigated in order to generate different face images, which were combined at the feature extraction level and used as input for a neural classifier. The results demonstrate that the proposed approach is capable of improving the performance of biometric systems based on partially occluded faces.
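The feature-level fusion step described above can be sketched minimally: feature vectors obtained from several reconstructed versions of an occluded face are concatenated into one vector and matched against enrolled templates. The nearest-template matching and all names below are illustrative assumptions; the thesis feeds the fused vector to a neural classifier instead.

```python
def fuse_features(feature_sets):
    """Feature-level fusion: concatenate the per-image feature vectors
    (one per occlusion-reconstruction technique) into a single vector."""
    return [v for features in feature_sets for v in features]

def nearest_identity(fused, gallery):
    """Match a fused vector against a gallery of enrolled templates
    (label -> fused vector) using squared Euclidean distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(gallery, key=lambda label: d2(gallery[label], fused))
```

The design point being illustrated is that fusion happens before classification, so complementary reconstructions contribute to a single decision.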
423

Análise da influência de funções de distância para o processamento de consultas por similaridade em recuperação de imagens por conteúdo / Analysis of the influence of distance functions to answer similarity queries in content-based image retrieval.

Bugatti, Pedro Henrique 16 April 2008 (has links)
A recuperação de imagens baseada em conteúdo (Content-based Image Retrieval - CBIR) embasa-se sobre dois aspectos primordiais, um extrator de características o qual deve prover as características intrínsecas mais significativas dos dados e uma função de distância a qual quantifica a similaridade entre tais dados. O grande desafio é justamente como alcançar a melhor integração entre estes dois aspectos chaves com intuito de obter maior precisão nas consultas por similaridade. Apesar de inúmeros esforços serem continuamente despendidos para o desenvolvimento de novas técnicas de extração de características, muito pouca atenção tem sido direcionada à importância de uma adequada associação entre a função de distância e os extratores de características. A presente Dissertação de Mestrado foi concebida com o intuito de preencher esta lacuna. Para tal, foi realizada a análise do comportamento de diferentes funções de distância com relação a tipos distintos de vetores de características. Os três principais tipos de características intrínsecas às imagens foram analisados, com respeito a distribuição de cores, textura e forma. Além disso, foram propostas duas novas técnicas para realização de seleção de características com o desígnio de obter melhorias em relação à precisão das consultas por similaridade. A primeira técnica emprega regras de associação estatísticas e alcançou um ganho de até 38% na precisão, enquanto que a segunda técnica utilizando a entropia de Shannon alcançou um ganho de aproximadamente 71% ao mesmo tempo em que reduz significantemente a dimensionalidade dos vetores de características. O presente trabalho também demonstra que uma adequada utilização das funções de distância melhora efetivamente os resultados das consultas por similaridade. 
Conseqüentemente, desdobra novos caminhos para realçar a concepção de sistemas CBIR. / The retrieval of images by visual content relies on a feature extractor to provide the most meaningful intrinsic characteristics (features) from the data, and a distance function to quantify the similarity between them. A challenge in content-based image retrieval (CBIR) systems that answer similarity queries is how best to integrate these two key aspects. There is plenty of research on algorithms for feature extraction from images. However, little attention has been paid to the importance of using a well-suited distance function associated with a feature extractor. This Master Dissertation was conceived to fill in this gap. Therefore, the behavior of different distance functions with regard to distinct feature vector types was investigated herein. The three main types of image features were evaluated, regarding color distribution, texture and shape. Two new techniques were also proposed to perform feature selection over the feature vectors, in order to improve the precision when answering similarity queries. The first technique employed statistical association rules and achieved a gain of up to 38% in precision, while the second, employing the Shannon entropy, achieved a gain of approximately 71% while significantly reducing the dimensionality of the feature vectors. This work also showed that the proper use of a distance function effectively improves the similarity query results. Therefore, it opens new ways to enhance the design of CBIR systems.
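The interplay between feature vectors and distance functions can be illustrated with a toy retrieval loop. The three distances below (L1, L2, Canberra) are common choices in the CBIR literature; the tiny database and its vectors are illustrative assumptions, not data from the dissertation.

```python
def l1(a, b):
    """Manhattan (L1) distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def l2(a, b):
    """Euclidean (L2) distance."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def canberra(a, b):
    """Canberra distance; terms where both coordinates are zero are
    skipped to avoid division by zero."""
    return sum(abs(x - y) / (abs(x) + abs(y))
               for x, y in zip(a, b) if x or y)

def similarity_query(database, query, k, dist):
    """Return the names of the k images whose feature vectors are closest
    to the query under the chosen distance function."""
    ranked = sorted(database, key=lambda item: dist(item[1], query))
    return [name for name, vec in ranked][:k]
```

Swapping `dist` while keeping the extractor fixed is exactly the kind of experiment the dissertation reports: the ranking, and hence the precision, can change with the distance function alone.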
424

Identificação de espécies vegetais por meio de análise de imagens microscópicas de folhas / Identification of vegetal species by analysis of microscope images of leaves

Sá Junior, Jarbas Joaci de Mesquita 18 April 2008 (has links)
A taxonomia vegetal atualmente exige um grande esforço dos botânicos, desde o processo de aquisição do espécime até a morosa comparação com as amostras já catalogadas em um herbário. Nesse contexto, o projeto TreeVis surge como uma ferramenta para a identificação de vegetais por meio da análise de atributos foliares. Este trabalho é uma ramificação do projeto TreeVis e tem o objetivo de identificar vegetais por meio da análise do corte transversal de uma folha ampliado por um microscópio. Para tanto, foram extraídas assinaturas da cutícula, epiderme superior, parênquima paliçádico e parênquima lacunoso. Cada assinatura foi avaliada isoladamente por uma rede neural pelo método leave-one-out para verificar a sua capacidade de discriminar as amostras. Uma vez selecionados os vetores de características mais importantes, os mesmos foram combinados de duas maneiras. A primeira abordagem foi a simples concatenação dos vetores selecionados; a segunda, mais elaborada, reduziu a dimensionalidade (três atributos apenas) de algumas das assinaturas componentes antes de fazer a concatenação. Os vetores finais obtidos pelas duas abordagens foram testados com rede neural via leave-one-out para medir a taxa de acertos alcançada pelo sinergismo das assinaturas das diferentes partes da folha. Os experimentos consistiram na identificação de oito espécies diferentes e na identificação da espécie Gochnatia polymorpha nos ambientes Cerrado e Mata Ciliar, nas estações Chuvosa e Seca, e sob condições de Sol e Sombra / Currently, taxonomy demands a great effort from botanists, ranging from the process of acquiring the specimen to the laborious comparison with the species already classified in the herbarium. For this reason, TreeVis is a project created to identify vegetal species using leaf attributes. This work is a part of the TreeVis project and aims at identifying vegetal species by analysing cross-sections of leaves magnified under a microscope.
Signatures were extracted from the cuticle, adaxial epidermis, palisade parenchyma and spongy parenchyma. Each signature was analysed in isolation by a neural network with the leave-one-out method to verify its ability to discriminate the samples. Once the most important feature vectors were selected, two different approaches were adopted. The first was a simple concatenation of the selected feature vectors. The second, more elaborate approach consisted of reducing the dimensionality (three attributes only) of some component signatures before the feature vector concatenation. The final vectors obtained by these two approaches were tested by a neural network with leave-one-out to measure the accuracy reached by the synergism of the signatures of different leaf regions. The experiments consisted of identifying eight different species and of identifying the species Gochnatia polymorpha in Cerrado and Gallery Forest environments, in Wet and Dry seasons, and under Sun and Shade conditions.
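The leave-one-out protocol used to evaluate each signature can be sketched as follows. A 1-nearest-neighbour rule stands in for the neural network here purely to keep the sketch self-contained; the loop structure is the same regardless of the classifier.

```python
def nn_label(train, sample):
    """1-nearest-neighbour: return the label of the closest training sample."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda pair: d2(pair[0], sample))[1]

def leave_one_out_accuracy(samples, labels, classify=nn_label):
    """Hold each sample out in turn, train on the rest, classify the
    held-out sample, and report the overall accuracy."""
    hits = 0
    for i, (sample, label) in enumerate(zip(samples, labels)):
        train = [pair for j, pair in enumerate(zip(samples, labels)) if j != i]
        hits += classify(train, sample) == label
    return hits / len(samples)
```

Leave-one-out is attractive when, as here, only a few specimens per species are available: every sample is used for testing exactly once and for training in all other folds.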
425

Probing sequence-level instructions for gene expression / Etude des instructions pour l’expression des gènes présentes dans la séquence ADN

Taha, May 28 November 2018 (has links)
La régulation des gènes est fortement contrôlée afin d’assurer une large variété de types cellulaires ayant des fonctions spécifiques. Ces contrôles prennent place à différents niveaux et sont associés à différentes régions génomiques régulatrices. Il est donc essentiel de comprendre les mécanismes à la base des régulations géniques dans les différents types cellulaires, dans le but d’identifier les régulateurs clés. Plusieurs études tentent de mieux comprendre les mécanismes de régulation en modulant l’expression des gènes par des approches épigénétiques. Cependant, ces approches sont basées sur des données expérimentales limitées à quelques échantillons, et sont à la fois couteuses et chronophages. Par ailleurs, les constituants nécessaires à la régulation des gènes au niveau des séquences ne peuvent pas être capturés par ces approches. L’objectif principal de cette thèse est d’expliquer l’expression des ARNm en se basant uniquement sur les séquences d’ADN. Dans une première partie, nous utilisons le modèle de régression linéaire avec pénalisation Lasso pour prédire l’expression des gènes par l’intermédiaire des caractéristiques de l’ADN comme la composition nucléotidique et les sites de fixation des facteurs de transcription. La précision de cette approche a été mesurée sur plusieurs données provenant de la base de données TCGA et nous avons trouvé des performances similaires à celles des modèles ajustés aux données expérimentales. Nous avons montré que la composition nucléotidique a un impact majeur sur l’expression des gènes. De plus, l’influence de chaque région régulatrice est évaluée et l’effet du corps de gène, spécialement les introns, semble être clé dans la prédiction de l’expression. En seconde partie, nous présentons une tentative d’amélioration des performances du modèle. D’abord, nous considérons inclure dans le modèle les interactions entre les différentes variables et appliquer des transformations non linéaires sur les variables prédictives.
Cela induit une légère augmentation des performances du modèle. Pour aller plus loin, des modèles d’apprentissage profond sont étudiés. Deux types de réseaux de neurones sont considérés : les perceptrons multicouches et les réseaux de convolution. Les paramètres de chaque réseau sont optimisés. Les performances des deux types de réseaux semblent être plus élevées que celles du modèle de régression linéaire pénalisé par Lasso. Les travaux de cette thèse nous ont permis (i) de démontrer l’existence des instructions au niveau de la séquence en relation avec l’expression des gènes, et (ii) de fournir différents cadres de travail basés sur des approches complémentaires. Des travaux complémentaires sont en cours, en particulier sur le deep learning, dans le but de détecter des informations supplémentaires présentes dans les séquences. / Gene regulation is tightly controlled to ensure a wide variety of cell types and functions. These controls take place at different levels and are associated with different genomic regulatory regions. A key challenge is to understand how the gene regulation machinery works in each cell type and to identify the most important regulators. Several studies attempt to understand the regulatory mechanisms by modeling gene expression using epigenetic marks. Nonetheless, these approaches rely on experimental data which are limited to some samples, costly and time-consuming. Besides, the important component of gene regulation based at the sequence level cannot be captured by these approaches. The main objective of this thesis is to explain mRNA expression based only on DNA sequence features. In a first work, we use Lasso penalized linear regression to predict gene expression using DNA features such as transcription factor binding sites (motifs) and nucleotide compositions. We measured the accuracy of our approach on several datasets from the TCGA database and found performance similar to that of models fitted with experimental data.
In addition, we show that the nucleotide compositions of different regulatory regions have a major impact on gene expression. Furthermore, we rank the influence of each regulatory region and show a strong effect of the gene body, especially introns. In a second part, we try to increase the performance of the model. We first consider adding interactions between nucleotide compositions and applying non-linear transformations on predictive variables. This induces a slight increase in model performance. To go one step further, we then learn deep neural networks. We consider two types of neural networks: multilayer perceptrons and convolutional networks. Hyperparameters of each network are optimized. The performance of both types of networks appears slightly higher than that of a Lasso penalized linear model. In this thesis, we were able to (i) demonstrate the existence of sequence-level instructions for gene expression and (ii) provide different frameworks based on complementary approaches. Additional work is ongoing, in particular in the deep learning direction, with the aim of detecting additional information present in the sequence.
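The first modelling step — Lasso-penalised linear regression on sequence-level features — can be sketched with a tiny coordinate-descent solver. The nucleotide-composition helper and the toy data below are illustrative assumptions: the thesis also uses transcription-factor binding-site motifs and fits far larger feature matrices.

```python
def nucleotide_composition(seq):
    """Fraction of each base in a DNA string: a simple sequence-level feature."""
    n = len(seq)
    return [seq.count(b) / n for b in "ACGT"]

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso: minimise ||y - Xw||^2 / (2n) + lam * ||w||_1.
    Soft-thresholding drives uninformative weights exactly to zero."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding update for coordinate j
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w
```

The sparsity induced by the L1 penalty is what makes the fitted weights interpretable as a ranking of which sequence features (and hence which regulatory regions) matter.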
426

Information spotting in huge repositories of scanned document images / Localisation d'information dans des très grands corpus de documents numérisés

Dang, Quoc Bao 06 April 2018 (has links)
Ce travail vise à développer un cadre générique qui est capable de produire des applications de localisation d’informations à partir d’une caméra (webcam, smartphone) dans des très grands dépôts d’images de documents numérisés et hétérogènes via des descripteurs locaux. Ainsi, dans cette thèse, nous proposons d’abord un ensemble de descripteurs qui puissent être appliqués sur des contenus aux caractéristiques génériques (composés de textes et d’images) dédié aux systèmes de recherche et de localisation d’images de documents. Nos descripteurs proposés comprennent SRIF, PSRIF, DELTRIF et SSKSRIF, qui sont construits à partir de l’organisation spatiale des points d’intérêt les plus proches autour d’un point-clé pivot. Tous ces points sont extraits à partir des centres de gravité des composantes connexes de l’image. À partir de ces points d’intérêt, des caractéristiques géométriques invariantes aux dégradations sont considérées pour construire nos descripteurs. SRIF et PSRIF sont calculés à partir d’un ensemble local des m points d’intérêt les plus proches autour d’un point d’intérêt pivot. Quant aux descripteurs DELTRIF et SSKSRIF, cette organisation spatiale est calculée via une triangulation de Delaunay formée à partir d’un ensemble de points d’intérêt extraits dans les images. Cette seconde version des descripteurs permet d’obtenir une description de forme locale sans paramètres. En outre, nous avons également étendu notre travail afin de le rendre compatible avec les descripteurs classiques de la littérature qui reposent sur l’utilisation de points d’intérêt dédiés de sorte qu’ils puissent traiter la recherche et la localisation d’images de documents à contenu hétérogène. La seconde contribution de cette thèse porte sur un système d’indexation de très grands volumes de données à partir d’un descripteur volumineux. Ces deux contraintes viennent peser lourd sur la mémoire du système d’indexation.
En outre, la très grande dimensionnalité des descripteurs peut amener à une réduction de la précision de l’indexation, réduction liée au problème de dimensionnalité. Nous proposons donc trois techniques d’indexation robustes, qui peuvent toutes être employées sans avoir besoin de stocker les descripteurs locaux dans la mémoire du système. Cela permet, in fine, d’économiser la mémoire et d’accélérer le temps de recherche de l’information, tout en s’abstrayant d’une validation de type distance. Pour cela, nous avons proposé trois méthodes s’appuyant sur des arbres de décision : le « randomized clustering tree indexing », qui hérite des propriétés des kd-trees, des kmean-trees et des random forests afin de sélectionner de manière aléatoire les K dimensions combinées avec la dimension de plus grande variance expliquée pour chaque nœud de l’arbre. Nous avons également proposé une fonction de hachage étendue pour l’indexation de contenus hétérogènes provenant de plusieurs couches de l’image. Comme troisième contribution de cette thèse, nous avons proposé une méthode simple et robuste pour calculer l’orientation des régions obtenues par le détecteur MSER, afin que celui-ci puisse être combiné avec des descripteurs dédiés. Comme la plupart de ces descripteurs visent à capturer des informations de voisinage autour d’une région donnée, nous avons proposé un moyen d’étendre les régions MSER en augmentant le rayon de chaque région. Cette stratégie peut également être appliquée à d’autres régions détectées afin de rendre les descripteurs plus distinctifs. Enfin, afin d’évaluer les performances de nos contributions, et en nous fondant sur l’absence d’ensemble de données publiquement disponible pour la localisation d’information hétérogène dans des images capturées par une caméra, nous avons construit trois jeux de données qui sont disponibles pour la communauté scientifique.
/ This work aims at developing a generic framework able to produce camera-based applications for information spotting in huge repositories of heterogeneous-content document images via local descriptors. The targeted systems take as input a portion of an image acquired with a camera as a query, and return the focused portions of database images that best match the query. We firstly propose a set of generic feature descriptors for camera-based document image retrieval and spotting systems. Our proposed descriptors comprise SRIF, PSRIF, DELTRIF and SSKSRIF, which are built from the spatial organisation of the nearest keypoints around a pivot keypoint, where keypoints are extracted from the centroids of connected components. From these keypoints, geometrical features invariant to degradations are used to build the descriptors. SRIF and PSRIF are computed from a local set of the m nearest keypoints around a pivot keypoint, while DELTRIF and SSKSRIF compute this spatial organisation via a Delaunay triangulation formed from the set of keypoints extracted from a document image, yielding a parameter-free local shape description. Furthermore, we propose a framework to compute the descriptors from dedicated keypoints, e.g. SURF, SIFT or ORB, so that they can deal with heterogeneous-content camera-based document image retrieval and spotting. In practice, storing the enormous number of descriptors of a large-scale indexing system places a heavy burden on memory. In addition, the high dimensionality of the descriptors can reduce indexing accuracy. We propose three robust indexing frameworks that can be employed without storing local descriptors in memory, saving memory and speeding up retrieval by discarding distance validation. The randomized clustering tree indexing inherits from kd-trees, kmean-trees and random forests the way K dimensions are selected at random and combined with the highest-variance dimension at each node of the tree.
We also proposed a weighted Euclidean distance between two data points that is oriented along the highest-variance dimension. The second proposed method relies on hashing: it employs a single simple hash table for indexing and retrieval without storing the database descriptors. Besides, we propose an extended hashing-based method for indexing multiple kinds of features coming from multiple layers of the image. Along with the proposed descriptors and indexing frameworks, we proposed a simple and robust way to compute the shape orientation of MSER regions so that they can be combined with dedicated descriptors (e.g. SIFT, SURF, ORB) in a rotation-invariant way. Since such descriptors capture neighborhood information around MSER regions, we propose a way to extend MSER regions by increasing the radius of each region. This strategy can also be applied to other detected regions in order to make the descriptors more distinctive. Moreover, we employed the extended hashing-based method for indexing multiple kinds of features from multiple layers of the image; the system applies not only to a uniform feature type but also to multiple feature types from separate layers. Finally, in order to assess the performance of our contributions, and given that no public dataset exists for camera-based document image retrieval and spotting systems, we built a new dataset which has been made freely and publicly available to the scientific community. This dataset contains portions of document images acquired with a camera as queries. It is composed of three kinds of information: textual content, graphical content and heterogeneous content.
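The spatial-organisation idea behind the SRIF family can be illustrated with a toy descriptor built from the distance ratios and angles of the m nearest keypoints around a pivot; such quantities are invariant to uniform scaling and rotation. The exact feature set of SRIF/PSRIF differs, so everything below is a sketch under that assumption.

```python
import math

def srif_like_descriptor(pivot, keypoints, m=4):
    """Toy SRIF-style descriptor: encode the spatial organisation of the
    m nearest keypoints around a pivot using distance ratios (scale
    invariant) and angles between consecutive neighbours (rotation
    invariant)."""
    others = sorted((p for p in keypoints if p != pivot),
                    key=lambda p: math.dist(p, pivot))[:m]
    dists = [math.dist(p, pivot) for p in others]
    base = dists[0] or 1.0
    ratios = [d / base for d in dists]
    angles = []
    for a, b in zip(others, others[1:]):
        va = (a[0] - pivot[0], a[1] - pivot[1])
        vb = (b[0] - pivot[0], b[1] - pivot[1])
        dot = va[0] * vb[0] + va[1] * vb[1]
        na, nb = math.hypot(*va), math.hypot(*vb)
        angles.append(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return ratios + angles
```

Because only ratios and angles are kept, the same keypoint layout photographed at a different zoom level produces the same descriptor, which is the property the retrieval system relies on.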
427

Obstacle detection and emergency exit sign recognition for autonomous navigation using camera phone

Mohammed, Abdulmalik January 2017 (has links)
In this research work, we develop an obstacle detection and emergency exit sign recognition system on a mobile phone by extending the features from accelerated segment test (FAST) detector with a Harris corner filter. The first step often required for many vision-based applications is the detection of objects of interest in an image. Hence, in this research work, we introduce an emergency exit sign detection method using colour histograms. The hue and saturation components of an HSV colour model are processed into features to build a 2D colour histogram. We backproject the 2D colour histogram to detect an emergency exit sign in a captured image, as the first task required before performing emergency exit sign recognition. The classification results show that the 2D histogram is fast and can discriminate between objects and background accurately. One of the challenges confronting object recognition methods is the type of image feature to compute. In this work, therefore, we present two feature detector and descriptor methods based on the features from accelerated segment test detector with a Harris corner filter. The first method is called Upright FAST-Harris and Binary detector (U-FaHB), while the second is called Scale Interpolated FAST-Harris and Binary (SIFaHB). In both methods, feature points are extracted using the accelerated segment test detector and a Harris filter to return the strongest corner points as features. However, in the case of SIFaHB, the extraction of feature points is done across the image plane and along the scale-space. The modular design of these detectors allows for the integration of descriptors of any kind. Therefore, we combine these detectors with a binary test descriptor like BRIEF to compute feature regions. These detectors and the combined descriptor are evaluated using different images observed under various geometric and photometric transformations, and their performance is compared with that of other detectors and descriptors.
The results obtained show that our proposed feature detector and descriptor method is fast and performs better than other methods such as SIFT, SURF, ORB, BRISK and CenSurE. Based on the potential of the U-FaHB detector and descriptor, we extended it for use in optical flow computation, in what we termed the Nearest-flow method. This method has the potential of computing flow vectors for use in obstacle detection. As with any new method, we evaluated the Nearest-flow method using real and synthetic image sequences and compared its performance with that of other methods such as Lucas-Kanade, Farnebäck and SIFT-flow. The results obtained show that our Nearest-flow method is faster to compute and performs better on real scene images than the other methods. In the final part of this research, we demonstrate the application potential of our proposed methods by developing an obstacle detection and exit sign recognition system on a camera phone; the results obtained show that the methods have the potential to solve this vision-based object detection and recognition problem.
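The colour-histogram backprojection step can be sketched in a few lines. Pixels here are (hue, saturation) pairs normalised to [0, 1), and the bin count is an assumption; the thesis applies the same idea to full camera images.

```python
def hs_histogram(pixels, bins=8):
    """Build a normalised 2D hue-saturation histogram from (h, s) pairs
    taken from example emergency-exit-sign pixels."""
    hist = [[0.0] * bins for _ in range(bins)]
    for h, s in pixels:
        hist[int(h * bins) % bins][int(s * bins) % bins] += 1.0
    total = float(len(pixels))
    return [[count / total for count in row] for row in hist]

def backproject(pixels, hist, bins=8):
    """Replace each scene pixel by the model probability of its (h, s)
    bin, highlighting regions whose colour matches the sign model."""
    return [hist[int(h * bins) % bins][int(s * bins) % bins]
            for h, s in pixels]
```

Thresholding the backprojected map then yields candidate sign regions, which are passed on to the recognition stage.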
428

Diagnóstico de falhas em motores de indução trifásicos baseado em decomposição em componentes ortogonais e aprendizagem de máquinas / Fault diagnosis in three-phase induction motors based on orthogonal component decomposition and machine learning

Luisa Helena Bartocci Liboni 05 June 2017 (has links)
O objetivo principal desta tese consiste no desenvolvimento de ferramentas matemáticas e computacionais dedicadas a um sistema de diagnóstico de barras quebradas no rotor de Motores de Indução Trifásicos. O sistema proposto é baseado em um método matemático de decomposição de sinais elétricos, denominado de Decomposição em Componentes Ortogonais, e ferramentas de aprendizagem de máquinas. Como uma das principais contribuições desta pesquisa, realizou-se um aprofundamento do entendimento da técnica de Decomposição em Componentes Ortogonais e de sua aplicabilidade como ferramenta de processamento de sinais para sistemas elétricos e eletromecânicos. Redes Neurais Artificiais e Support Vector Machines, tanto para classificação multi-classes quanto para detecção de novidades, foram configurados para receber índices advindos do processamento de sinais elétricos de motores, e a partir deles, identificar os padrões normais e os padrões com falhas. Além disso, a severidade da falha também é diagnosticada, a qual é representada pelo número de barras quebradas no rotor. Para a avaliação da metodologia, considerou-se o acionamento de motores de indução pela tensão de alimentação da rede e por inversores de frequência, operando sob diversas condições de torque de carga. Os resultados alcançados demonstram a eficácia das ferramentas matemáticas e computacionais desenvolvidas para o sistema de diagnóstico, sendo que os índices criados se mostraram altamente correlacionados com o fenômeno da falha. Mais especificamente, foi possível criar índices monotônicos com a severidade da falha e com baixa variabilidade, demonstrando-se que as ferramentas são eficientes extratores de características. / This doctoral thesis consists of the development of mathematical and computational tools dedicated to a diagnostic system for broken rotor bars in Three Phase Induction Motors. 
The proposed system is based on a mathematical method for decomposing electrical signals, named the Orthogonal Components Decomposition, and on machine learning tools. As one of the main contributions of this research, an in-depth investigation of the decomposition technique and of its applicability as a signal processing tool for electrical and electromechanical systems was carried out. Artificial Neural Networks and Support Vector Machines for multi-class classification and novelty detection were configured to receive indices derived from the processing of electrical signals and to identify normal and faulty motors from them. In addition, the fault severity, represented by the number of broken rotor bars, is also diagnosed. Experimental data were used to evaluate the proposed method: signals were obtained from induction motors operating at different torque levels and driven either directly by the grid or by frequency inverters. The results demonstrate the effectiveness of the mathematical and computational tools developed for the diagnostic system, since the indices created are highly correlated with the fault phenomenon. More specifically, it was possible to create indices that are monotonic with the fault severity and have low variability, which shows that the solution is an efficient fault-specific feature extractor.
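The novelty-detection stage described above is SVM-based; as a schematic of the underlying idea only — flagging motors whose fault indices fall outside the cluster of healthy examples — a simple distance-based sketch might look like this. The index values, threshold choice, and Mahalanobis distance are illustrative assumptions, not the thesis's SVM configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fault indices extracted from decomposed current signals:
# healthy motors cluster tightly around some operating point.
healthy = rng.normal(loc=[1.0, 0.5], scale=0.05, size=(200, 2))

mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def novelty_score(x):
    """Squared Mahalanobis distance of an index vector to the healthy cluster."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Threshold taken from the healthy data itself (its 99th percentile score)
threshold = np.percentile([novelty_score(x) for x in healthy], 99)

def is_faulty(x):
    """Flag a motor whose indices drift far from the healthy cluster."""
    return novelty_score(x) > threshold
```

The appeal of the novelty-detection formulation is that only healthy data is needed at training time — any drift of the indices away from the healthy cluster, such as that caused by broken rotor bars, is flagged.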
429

Identificação automatizada de espécies de abelhas através de imagens de asas. / Automated bee species identification through wing images.

Felipe Leno da Silva 19 February 2015 (has links)
Diversas pesquisas focam no estudo e conservação das abelhas, em grande parte por sua importância para a agricultura. Entretanto, a identificação de espécies de abelhas vem sendo um impedimento para a condução de novas pesquisas, já que demanda tempo e um conhecimento muito especializado. Apesar de existirem diversos métodos para realizar esta tarefa, muitos deles são excessivamente custosos, restringindo sua aplicabilidade. Por serem facilmente acessíveis, as asas das abelhas vêm sendo amplamente utilizadas para a extração de características, já que é possível aplicar técnicas morfométricas utilizando apenas uma foto da asa. Como a medição manual de diversas características é tediosa e propensa a erros, sistemas foram desenvolvidos com este propósito. Entretanto, os sistemas ainda possuem limitações e não há um estudo voltado às técnicas de classificação que podem ser utilizadas para este fim. Esta pesquisa visa avaliar as técnicas de extração de características e classificação de modo a determinar o conjunto de técnicas mais apropriado para a discriminação de espécies de abelhas. Nesta pesquisa foi demonstrado que o uso de uma conjunção de características morfométricas e fotométricas obtém melhores resultados que o uso de somente características morfométricas. Também foram analisados os melhores algoritmos de classificação tanto usando somente características morfométricas, quanto usando uma conjunção de características morfométricas e fotométricas, os quais são, respectivamente, o Naïve Bayes e o classificador Logístico. Os resultados desta pesquisa podem guiar o desenvolvimento de novos sistemas para identificação de espécies de abelha, objetivando auxiliar pesquisas conduzidas por biólogos. / Much research has focused on the study and conservation of bees, largely because of their importance for agriculture. However, the identification of bee species has been hampering new studies, since it demands highly specialized knowledge and is time-consuming. 
Although there are several methods to accomplish this task, many of them are excessively costly, which restricts their applicability. Because they are easily accessible, bee wings have been widely used for feature extraction, since morphometric techniques can be applied using a single image of the wing. As the manual measurement of various features is tedious and error-prone, systems have been developed for this purpose. However, these systems still have limitations, and there is no study of the classification techniques that can be used for this task. This research evaluates feature extraction and classification techniques in order to determine the most appropriate combination of techniques for discriminating bee species. Our results indicate that using a conjunction of Morphometric and Pixel-based features is more effective than using Morphometric features alone. Our analysis also concluded that the best classification algorithms using only Morphometric features and using a conjunction of Morphometric and Pixel-based features are, respectively, the Naïve Bayes and the Logistic classifier. The results of this research can guide the development of new systems to identify bee species and thus assist research conducted by biologists.
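The Gaussian Naïve Bayes classifier singled out above for morphometric features can be sketched from first principles. This is a minimal illustration with synthetic wing measurements; the feature values are assumptions, not data from the study:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes over continuous wing measurements."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-9 for c in self.classes}
        self.prior = {c: float(np.mean(y == c)) for c in self.classes}
        return self

    def predict(self, X):
        def log_post(c):  # log prior + per-sample Gaussian log likelihood
            return (np.log(self.prior[c])
                    - 0.5 * np.sum(np.log(2 * np.pi * self.var[c]))
                    - 0.5 * np.sum((X - self.mu[c]) ** 2 / self.var[c], axis=1))
        scores = np.stack([log_post(c) for c in self.classes], axis=1)
        return self.classes[np.argmax(scores, axis=1)]

# Hypothetical morphometric features (e.g. vein-junction distances), two species
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2.0, 5.0], 0.1, (50, 2)),
               rng.normal([3.0, 4.0], 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

model = GaussianNB().fit(X, y)
pred = model.predict(np.array([[2.0, 5.0], [3.0, 4.0]]))
```

The per-feature independence assumption is what keeps the model cheap to train on small labelled sets, which matches the scenario in the abstract, where expert-labelled specimens are scarce.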
