1

Dynamic machine learning for supervised and unsupervised classification / Apprentissage automatique dynamique pour la classification supervisée et non supervisée

Sîrbu, Adela-Maria 06 June 2016 (has links)
The research direction we focus on in this thesis is applying dynamic machine learning models to solve supervised and unsupervised classification problems. We live in a dynamic environment where data is continuously changing, and the need to obtain fast and accurate solutions to our problems has become a real necessity. The particular problems that we decided to approach in the thesis are pedestrian recognition (a supervised classification problem) and clustering of gene expression data (an unsupervised classification problem). The approached problems are representative of the two main types of classification and are very challenging, with great importance in real life.

The first research direction that we approach in the field of dynamic unsupervised classification is the problem of dynamic clustering of gene expression data. Gene expression is the process by which the information from a gene is converted into functional gene products: proteins or RNA with different roles in the life of a cell. Modern microarray technology is nowadays used to experimentally detect the expression levels of thousands of genes, across different conditions and over time. Once the gene expression data has been gathered, the next step is to analyze it and extract useful biological information. One of the most popular techniques for analyzing gene expression data is clustering, which involves partitioning a given data set into groups where the components of each group are similar to each other. In the case of gene expression data sets, each gene is represented by its expression values (features) at distinct points in time, under the monitored conditions. Gene clustering is at the foundation of genomic studies that aim to analyze gene function, because it is assumed that genes with similar expression levels are also relatively similar in terms of biological function.

The problem that we address within the dynamic unsupervised classification research direction is the dynamic clustering of gene expression data. In our case, the term dynamic indicates that the data set is not static but subject to change. Still, as opposed to the incremental approaches from the literature, where the data set is enriched with new genes (instances) during the clustering process, our approaches tackle the case when new features (expression levels for new points in time) are added to the genes already existing in the data set. To the best of our knowledge, there are no approaches in the literature that deal with the problem of dynamic clustering of gene expression data defined as above. In this context, we introduced three dynamic clustering algorithms that are able to handle newly collected gene expression levels by starting from a previously obtained partition, without the need to re-run the algorithm from scratch. Experimental evaluation shows that our approach is faster and more accurate than applying the clustering algorithm from scratch on the feature-extended data set...
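As a rough illustration of the warm-start idea behind this kind of dynamic clustering, the sketch below extends a previously obtained partition when new time points arrive, instead of reclustering from scratch. It is a simplified stand-in (plain k-means seeded from the old partition, using NumPy and scikit-learn), not the three algorithms proposed in the thesis.

```python
# Minimal warm-start sketch (not the thesis algorithms): when new time points are
# appended to an already-clustered expression matrix, the previous partition seeds
# the centroids of the extended-feature run instead of restarting from scratch.
import numpy as np
from sklearn.cluster import KMeans

def recluster_with_new_timepoints(X_old, labels_old, X_new, n_clusters):
    """X_old      : (n_genes, t_old) expression matrix already clustered
    labels_old : (n_genes,) cluster labels of the previous partition
    X_new      : (n_genes, t_new) expression values for the new time points
    """
    X_full = np.hstack([X_old, X_new])  # feature-extended data set
    # Seed each centroid with the per-cluster mean over the extended features;
    # assumes every cluster of the previous partition is non-empty.
    seeds = np.vstack([X_full[labels_old == k].mean(axis=0)
                       for k in range(n_clusters)])
    km = KMeans(n_clusters=n_clusters, init=seeds, n_init=1)
    return km.fit_predict(X_full)
```

In practice such a warm start converges quickly when the new time points do not drastically reshuffle gene similarity, which is the intuition behind avoiding a full restart.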
2

Object detection and classification in outdoor environments for autonomous passenger vehicle navigation based on Data Fusion of Artificial Vision System and LiDAR sensor / Detecção e classificação de objetos em ambientes externos para navegação de um veículo de passeio autônomo utilizando fusão de dados de visão artificial e sensor laser

Roncancio Velandia, Henry 30 May 2014 (has links)
This research project took part in the SENA project (Autonomous Embedded Navigation System), developed at the Mobile Robotics Lab of the Mechatronics Group at the Engineering School of São Carlos, University of São Paulo (EESC - USP), in collaboration with the São Carlos Institute of Physics. The main motivation of the SENA project is the development of assistive and autonomous technologies that can meet the needs of different kinds of drivers (inexperienced, elderly, drivers with disabilities, etc.); it is envisioned that, in the near future, the large-scale application of this kind of technology will drastically reduce the number of people injured and killed in traffic accidents on highways and in urban environments. Aiming for autonomous behavior in the prototype vehicle, this dissertation focused on deploying machine learning algorithms to support its perception. These algorithms enabled the vehicle to execute artificial-intelligence tasks, such as prediction and memory retrieval for object classification. Even though autonomous navigation involves several perception, cognition and actuation tasks, this dissertation focused only on perception, which provides the vehicle control system with information about the environment around it. The most basic information to be provided is the existence of objects (obstacles) around the vehicle. Information about the sort of object is also provided, i.e., its classification among cars, pedestrians, stakes and the road, as well as the scale of such an object and its position in front of the vehicle.

The environmental data was acquired using a camera and a Velodyne LiDAR. A ceiling analysis of the object detection pipeline was used to simulate the proposed methodology. As a result, this analysis estimated that processing specific regions of the image (i.e., Regions of Interest, or RoIs), where an object is more likely to be found, would be the best way of improving the recognition system, a process referred to as image normalization. Consequently, experimental results with a data-fusion approach combining laser data and images, in which the RoIs were found using the LiDAR data, showed that the fusion approach provides better object detection and classification than using either the camera or the LiDAR alone. The data-fusion classification using the RoI method runs at 6 Hz, with 100% precision for pedestrians and 92.3% for cars. The fusion also enabled road estimation even when there were shadows and colored road markings in the image. The vision-based classifier supported by LiDAR data provided a good solution for multi-scale object detection and even for the non-uniform illumination problem.
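As a rough illustration of how LiDAR-proposed regions of interest can be combined with a vision-based classifier, the sketch below clusters the point cloud, projects each cluster into the image with the camera calibration, and classifies the resulting crop. The helpers `cluster_points` and `classifier` are hypothetical placeholders (any Euclidean clustering routine and any image classifier could fill those roles), and the calibration matrices are assumed to be available; this is not the dissertation's implementation.

```python
# Minimal sketch of the fusion idea: LiDAR clusters propose regions of interest,
# which are projected into the image and handed to a vision-based classifier.
import numpy as np

def detect_and_classify(points_xyz, image, K, T_cam_lidar, cluster_points, classifier):
    """points_xyz   : (N, 3) LiDAR points in the sensor frame
    image          : (H, W, 3) camera frame, assumed time-synchronised
    K              : (3, 3) camera intrinsic matrix
    T_cam_lidar    : (4, 4) extrinsic transform from the LiDAR to the camera frame
    cluster_points : callable grouping points into obstacle candidates (placeholder)
    classifier     : callable mapping an image crop to a class label (placeholder)
    """
    detections = []
    for cluster in cluster_points(points_xyz):          # e.g. Euclidean clustering
        # Transform the 3-D cluster into the camera frame.
        pts_h = np.hstack([cluster, np.ones((len(cluster), 1))])
        cam = (T_cam_lidar @ pts_h.T)[:3]
        if (cam[2] <= 0).any():                          # keep clusters in front of the camera
            continue
        # Pinhole projection gives the region of interest in the image.
        uv = (K @ cam)[:2] / cam[2]
        u0, v0 = np.floor(uv.min(axis=1)).astype(int)
        u1, v1 = np.ceil(uv.max(axis=1)).astype(int)
        roi = image[max(v0, 0):v1, max(u0, 0):u1]
        if roi.size:                                     # classify only non-empty crops
            detections.append(((u0, v0, u1, v1), classifier(roi)))
    return detections
```

Restricting the classifier to LiDAR-proposed RoIs is what keeps such a pipeline at interactive rates and avoids multi-scale sliding-window search, which is consistent with the 6 Hz figure and the multi-scale argument in the abstract.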
3

Vision-based moving pedestrian recognition from imprecise and uncertain data / Reconnaissance de piétons par vision à partir de données imprécises et incertaines

Zhou, Dingfu 05 December 2014 (has links)
Developing vision-based Advanced Driver Assistance Systems (ADAS) is a complex and challenging task in real-world traffic scenarios. An ADAS aims at perceiving and understanding the environment surrounding the ego-vehicle and providing the necessary assistance to the driver when facing an emergency. In this thesis, we focus on detecting and recognizing moving objects, because their dynamics make them more unpredictable and therefore more dangerous than static ones. Detecting these objects, estimating their positions and recognizing their categories are significantly important for ADAS and autonomous navigation. Consequently, we propose to build a complete system for moving object detection and recognition based on vision sensors only. The proposed approach can detect any kind of moving object from two adjacent frames only. The core idea is to detect the moving pixels by using the Residual Image Motion Flow (RIMF). The RIMF is defined as the residual apparent motion caused by moving objects once the camera motion has been compensated. In order to robustly detect all kinds of motion and remove false positive detections, uncertainties in the ego-motion estimation and the disparity computation should also be considered. The main steps of the algorithm are the following: first, the relative camera pose is estimated by minimizing the sum of the reprojection errors of matched features, and its covariance matrix is computed using a first-order error propagation strategy. Next, a motion likelihood for each pixel is obtained by propagating the uncertainties of the ego-motion and of the disparity to the RIMF. Finally, the motion likelihood and the depth gradient are used in a graph-cut-based approach to obtain the segmentation of the moving objects. At the same time, the bounding boxes of the moving objects are generated from the U-disparity map.
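A compact way to see the residual-flow idea is to predict, from the estimated ego-motion and the stereo depth, the flow that a static scene would produce, and to compare it with the measured optical flow. The Python sketch below illustrates this under simplifying assumptions (a known per-pixel depth map, a measured flow field from any optical-flow estimator, and a diagonal per-pixel uncertainty standing in for the propagated covariance); it is only an illustration of the RIMF normalisation, not the thesis formulation.

```python
# Illustrative residual-flow sketch: the flow that camera motion alone would induce
# on a static scene is predicted from depth and the relative pose, then subtracted
# from the measured optical flow; large residuals, normalised by a propagated
# per-pixel uncertainty, indicate independently moving pixels.
import numpy as np

def predicted_ego_flow(depth, K, R, t):
    """Per-pixel flow induced by the camera motion (R, t) on a static scene.
    depth : (H, W) metric depth (e.g. from stereo disparity); K : (3, 3) intrinsics."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW
    rays = np.linalg.inv(K) @ pix                 # back-project pixels to viewing rays
    P = rays * depth.reshape(1, -1)               # 3-D points in the first camera frame
    P2 = R @ P + t.reshape(3, 1)                  # same points in the second camera frame
    proj = K @ P2
    uv2 = proj[:2] / proj[2]                      # re-project into the second image
    return (uv2 - pix[:2]).T.reshape(H, W, 2)

def motion_score(flow_measured, flow_ego, sigma_flow):
    """Residual image motion flow normalised by its propagated uncertainty.
    sigma_flow : (H, W, 2) per-pixel standard deviations (diagonal simplification)."""
    residual = flow_measured - flow_ego           # the RIMF
    return np.sqrt(((residual / sigma_flow) ** 2).sum(axis=-1))
```

Pixels whose normalised residual is large would then feed the graph-cut segmentation described above.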
Enfin, l’algorithme de détection et de reconnaissances des objets en mouvement est testé sur les images du jeu de données KITTI et les résultats expérimentaux montrent que les méthodes proposées obtiennent de bonnes performances dans différents scénarios de conduite en milieu urbain. / Vision-based Advanced Driver Assistance Systems (ADAS) is a complex and challenging task in real world traffic scenarios. The ADAS aims at perceiving andunderstanding the surrounding environment of the ego-vehicle and providing necessary assistance for the drivers if facing some emergencies. In this thesis, we will only focus on detecting and recognizing moving objects because they are more dangerous than static ones. Detecting these objects, estimating their positions and recognizing their categories are significantly important for ADAS and autonomous navigation. Consequently, we propose to build a complete system for moving objects detection and recognition based on vision sensors. The proposed approach can detect any kinds of moving objects based on two adjacent frames only. The core idea is to detect the moving pixels by using the Residual Image Motion Flow (RIMF). The RIMF is defined as the residual image changes caused by moving objects with compensated camera motion. In order to robustly detect all kinds of motion and remove false positive detections, uncertainties in the ego-motion estimation and disparity computation should also be considered. The main steps of our general algorithm are the following : first, the relative camera pose is estimated by minimizing the sum of the reprojection errors of matched features and its covariance matrix is also calculated by using a first-order errors propagation strategy. Next, a motion likelihood for each pixel is obtained by propagating the uncertainties of the ego-motion and disparity to the RIMF. Finally, the motion likelihood and the depth gradient are used in a graph-cut-based approach to obtain the moving objects segmentation. At the same time, the bounding boxes of moving object are generated based on the U-disparity map. After obtaining the bounding boxes of the moving object, we want to classify the moving objects as a pedestrian or not. Compared to supervised classification algorithms (such as boosting and SVM) which require a large amount of labeled training instances, our proposed semi-supervised boosting algorithm is trained with only a few labeled instances and many unlabeled instances. Firstly labeled instances are used to estimate the probabilistic class labels of the unlabeled instances using Gaussian Mixture Models after a dimension reduction step performed via Principal Component Analysis. Then, we apply a boosting strategy on decision stumps trained using the calculated soft labeled instances. The performances of the proposed method are evaluated on several state-of-the-art classification datasets, as well as on a pedestrian detection and recognition problem.Finally, both our moving objects detection and recognition algorithms are tested on the public images dataset KITTI and the experimental results show that the proposed methods can achieve good performances in different urban scenarios.
