About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world.
181

Model development of Time dynamic Markov chain to forecast Solar energy production

Bengtsson, Angelica January 2023 (has links)
This study attempts to improve forecasts of solar energy production (SEP), so that energy trading companies can propose more accurate bids to Nord Pool. The aim is to make solar energy a more lucrative business and thereby encourage more investment in this green energy form. The model that is introduced is a hidden Markov model (HMM) that we call a time-dynamic Markov chain (TDMC). The TDMC is presented in general, but applied to the SE4 electricity area in southern Sweden. A simple linear regression model serves as a baseline for comparison. In terms of mean absolute error (MAE) and root-mean-square error (RMSE), the TDMC model outperforms the simple linear regression, both when the training data is relatively fresh and when it has not been updated in over 300 days. A paired t-test also shows a non-significant deviation from the true SEP per day, at the 0.05 significance level, when simulating the first two months of 2023 with the TDMC model. The simple linear regression model, by contrast, shows a significant difference from reality.
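As a hedged illustration of the idea (not the thesis's actual model), a time-dynamic Markov chain can be sketched as a discrete-state chain whose transition matrix depends on the hour of day; the states, hours and probabilities below are invented for illustration only:

```python
# Illustrative sketch of a time-dynamic Markov chain (TDMC): production is
# discretised into three states and the transition matrix depends on the
# hour of day, so night hours collapse the chain onto the zero state.
# All states and probabilities here are invented, not taken from the thesis.

STATES = [0.0, 0.5, 1.0]  # normalised production: none, half, full

def transition_matrix(hour):
    """Row-stochastic transition matrix that varies with the hour of day."""
    if hour < 6 or hour >= 20:           # night: no production possible
        return [[1.0, 0.0, 0.0]] * 3
    return [[0.6, 0.3, 0.1],             # daytime dynamics (illustrative)
            [0.2, 0.5, 0.3],
            [0.1, 0.3, 0.6]]

def forecast_distribution(p0, hours):
    """Propagate a state distribution through the hour-dependent chain."""
    p = list(p0)
    for h in hours:
        m = transition_matrix(h)
        p = [sum(p[i] * m[i][j] for i in range(3)) for j in range(3)]
    return p

def expected_production(p):
    """Point forecast: expectation of the production level."""
    return sum(pi * s for pi, s in zip(p, STATES))
```

A day-ahead forecast would chain `forecast_distribution` over the 24 hourly matrices and read off `expected_production` per hour.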
182

Feature Analysis in Online Signature Verification on Digital Whiteboards: an analysis of the performance of handwritten signature authentication using local and global features with hidden Markov models

Olander Sahlén, Simon January 2018 (has links)
The use of signatures for authentication is widely accepted, and signatures remain one of the most familiar biometrics in our society. Efforts to digitalise and automate the verification of these signatures are a hot topic in machine learning, and a plethora of tools and methods have been developed and adapted for this purpose. The intention of this report is to study the authentication of handwritten signatures on digital whiteboards, and how to most effectively set up a dual verification system based on hidden Markov models (HMMs) and global aggregate features such as average speed. The aim is to gauge which features are (a) suitable for determining that a signature is in fact genuine, (b) suitable for rejecting forgeries, and (c) unsuitable for gauging the authenticity of a signature altogether. In addition, we examine the configuration of the HMMs themselves, in order to find good choices for the number of components in the model, the type of covariance, and the threshold that separates a genuine signature from a forgery. For the research, we collected a total of 200 signatures and 400 forgeries from 10 different people on digital whiteboards. We concluded that the best configurations of our HMMs had 11 components, used a full covariance model, and observed about five features, of which pressure, angle and speed were the most important. Among the global features, we discarded 11 out of 35 because they either correlated strongly with other features or contained too little discriminatory information. The strongest global features were those pertaining to speed, acceleration, direction, and curvature. Using the combined verification we obtained an EER of 7 %, which is in the typical range of contemporary studies. We conclude that the best way to combine global feature verification with local HMM verification is to perform both separately and only accept signatures that are admissible by both, with tolerance levels of 1.2 and 2.5 standard deviations for the global and local verifications, respectively.
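The dual accept rule described above can be sketched as follows; simple z-score bands stand in for the real global-feature and HMM log-likelihood checks, and only the thresholds 1.2 and 2.5 come from the abstract — everything else is an illustrative assumption:

```python
import statistics

# Sketch of the combined decision rule: accept a signature only if every
# global feature AND the HMM log-likelihood fall within a tolerance band
# around the enrolment statistics. The HMM log-likelihoods are assumed to
# be produced elsewhere; this code only illustrates the accept/reject logic.

def within_band(value, samples, n_std):
    """True if value lies within n_std sample standard deviations of the mean."""
    mu = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return abs(value - mu) <= n_std * sd

def accept(global_feats, enrolled_feats, loglik, enrolled_logliks,
           t_global=1.2, t_local=2.5):
    """Dual verification: both the global and the local (HMM) check must pass."""
    global_ok = all(
        within_band(f, [e[i] for e in enrolled_feats], t_global)
        for i, f in enumerate(global_feats))
    local_ok = within_band(loglik, enrolled_logliks, t_local)
    return global_ok and local_ok
```

A signature is rejected as soon as either verifier objects, matching the "admissible by both" rule above.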
183

Intention recognition in human machine collaborative systems

Aarno, Daniel January 2007 (has links)
Robot systems have been used extensively during the last decades to provide automation solutions in a number of areas. The majority of currently deployed automation systems are limited in that the tasks they can solve must be repetitive and predictable. One reason for this is the inability of today's robot systems to understand and reason about the world. Therefore the robotics and artificial intelligence research communities have made significant efforts to produce more intelligent machines. Although significant progress has been made towards robots that can interact in a human environment, no current system comes close to the reasoning capabilities of humans. In order to reduce the complexity of the problem, some researchers have proposed an alternative to creating fully autonomous robots capable of operating in human environments: fusing human and machine capabilities. For example, using teleoperation a human can operate at a remote site, which may not be accessible to the operator for a number of reasons, by issuing commands to a remote agent that acts as an extension of the operator's body. Segmentation and recognition of operator-generated motions can be used to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online to improve performance in terms of execution time and overall precision. Acquiring, representing and modeling human skills are key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. One common approach is to divide the task the operator is executing into several sub-tasks to keep the modeling manageable. This thesis focuses on two aspects of human-machine collaborative systems: classification of an operator's motion into a predefined state of a manipulation task, and assistance during a manipulation task based on virtual fixtures. The particular applications considered consist of manipulation tasks where a human operator controls a robotic manipulator in a cooperative or teleoperative mode. A method for online task tracking using adaptive virtual fixtures is presented. Rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability that the operator is following a certain trajectory (sub-task) is estimated and used to automatically adjust the compliance of the virtual fixture, thus providing an online decision of how to fixture the movement. A layered hidden Markov model is used to model human skills. A gestem classifier that classifies the operator's motions into basic action primitives, or gestemes, is evaluated. The gestem classifiers are then used in a layered hidden Markov model to model a simulated teleoperated task. The classification performance is evaluated with respect to noise, the number of gestemes, the type of hidden Markov model and the number of available training sequences. The layered hidden Markov model is applied to data recorded during the execution of a trajectory-tracking task in 2D and 3D with a robotic manipulator in order to give qualitative as well as quantitative results for the proposed approach. The results indicate that the layered hidden Markov model is suitable for modeling trajectory-tracking tasks and is robust with respect to misclassifications in the underlying gestem classifiers.
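A minimal gestem classifier in this spirit scores an observation sequence under one discrete HMM per gesteme with the scaled forward algorithm and picks the best-scoring model; the toy single-state models used below are invented stand-ins for trained gesteme models:

```python
import math

# Scaled forward algorithm for a discrete-observation HMM, plus a
# maximum-likelihood gestem classifier over a dictionary of models.

def forward_loglik(obs, pi, A, B):
    """log P(obs | model) with per-step normalisation for numerical safety.
    pi: initial state probs, A: transitions, B[state][symbol]: emissions."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = math.log(sum(alpha))
    for o in obs[1:]:
        c = sum(alpha)
        alpha = [a / c for a in alpha]                      # normalise
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]                         # propagate + emit
        loglik += math.log(sum(alpha))
    return loglik

def classify_gestem(obs, models):
    """models: name -> (pi, A, B); return the best-scoring gesteme."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

In a layered model, the sequence of classified gestemes would itself feed a higher-level HMM over sub-tasks.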
184

Exploiting Cyclostationarity for Radio Environmental Awareness in Cognitive Radios

Kim, Kyou Woong 09 July 2008 (has links)
The tremendous ongoing growth of wireless digital communications has raised spectrum shortage and security issues. In particular, the need for new spectrum is the main obstacle to continuing this growth. Recent studies on radio spectrum usage have shown that pre-allocation of spectrum bands to specific wireless communication applications leads to poor utilization of those bands. Therefore, research into new techniques for efficient spectrum utilization is being aggressively pursued by academia, industry, and government. Such research efforts have given birth to two concepts: Cognitive Radio (CR) and the Dynamic Spectrum Access (DSA) network. CR is believed to be the key enabling technology for DSA network implementation. CR-based DSA (cDSA) networks utilize white spectrum for their operational frequency bands. White spectrum is the set of frequency bands temporarily unoccupied by the users holding first rights to the spectrum (called primary users). The main goal of cDSA networks is to access white spectrum. For proper access, CR nodes must identify the right cDSA network and the absence of primary users before initiating radio transmission. To solve the cDSA network access problem, methods are proposed to design unique second-order cyclic features using Orthogonal Frequency Division Multiplexing (OFDM) pilots. By generating distinct OFDM pilot patterns and measuring spectral correlation characteristics of the cyclostationary OFDM signal, CR nodes can detect and uniquely identify cDSA networks. For this purpose, the second-order cyclic features of OFDM pilots are investigated analytically and through computer simulation. Based on the analysis results, a general formula for estimating the dominant cycle frequencies is developed. This formula is used extensively in cDSA network identification and OFDM signal detection, as well as pilot pattern estimation. CR spectrum awareness is further enhanced when the radio can classify the modulation type of incoming signals at low and varying signal-to-noise ratio. Signal classification allows a CR to select a suitable demodulation process at the receiver and to establish a communication link. For this purpose, a threshold-based technique is proposed that utilizes the cycle-frequency domain profile for signal detection and feature extraction, and Hidden Markov Models (HMMs) are proposed for the signal classifier. The spectrum awareness capability of CR can also be undermined by spoofing radio nodes; automatic identification of malicious or malfunctioning radio signal transmitters is a major concern for CR information assurance. To minimize the threat from spoofing radio devices, radio signal fingerprinting using second-order cyclic features is proposed as an approach to Specific Emitter Identification (SEI). The feasibility of this approach is demonstrated by identifying IEEE 802.11a/g OFDM signals from different Wireless Local Area Network (WLAN) card manufacturers using HMMs.
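The second-order cyclic feature at the heart of this approach is the cyclic autocorrelation R_x^α(τ) = (1/N) Σ_t x(t) x*(t+τ) e^{−j2παt}, whose magnitude peaks only at the signal's cycle frequencies. A direct estimator, illustrated on a toy cosine rather than an OFDM signal, can be sketched as:

```python
import cmath
import math

# Direct estimate of the cyclic autocorrelation R_x^alpha(tau). For a
# cyclostationary signal the magnitude peaks at the cycle frequencies
# (e.g. those induced by pilot structure); away from them it is near zero.

def cyclic_autocorr(x, alpha, tau):
    n = len(x) - tau
    acc = 0j
    for t in range(n):
        acc += (x[t] * complex(x[t + tau]).conjugate()
                * cmath.exp(-2j * math.pi * alpha * t))
    return acc / n

# Toy example: x(t) = cos(2*pi*t/8) has x(t)^2 = 0.5 + 0.5*cos(2*pi*t/4),
# so the tau = 0 feature shows a line of magnitude 0.25 at cycle frequency 1/4.
signal = [math.cos(2 * math.pi * t / 8) for t in range(64)]
```

Scanning `alpha` over a grid and thresholding the resulting cycle-frequency profile is the flavour of detector the abstract describes.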
185

An integrated approach to feature compensation combining particle filters and Hidden Markov Models for robust speech recognition

Mushtaq, Aleem 19 September 2013 (has links)
The performance of automatic speech recognition systems often degrades in adverse conditions where there is a mismatch between training and testing conditions. This is true for most modern systems, which employ Hidden Markov Models (HMMs) to decode speech utterances. One strategy is to map the distorted features back to clean speech features that correspond well to the features used for training the HMMs. This can be achieved by treating the noisy speech as a distorted version of the clean speech of interest. Under this framework, we can track and consequently extract the underlying clean speech from the noisy signal and use this derived signal to perform utterance recognition. The particle filter is a versatile tracking technique that can be used where conventional techniques such as the Kalman filter often fall short. We propose a particle-filter-based algorithm that compensates the corrupted features according to an additive noise model, incorporating both the statistics from clean speech HMMs and the observed background noise to map noisy features back to clean speech features. Instead of using specific knowledge at the model and state levels of the HMMs, which is hard to estimate, we pool model states into clusters as side information. Since each cluster encompasses more statistics than the original HMM states, there is a higher chance that the newly formed probability density function at the cluster level can cover the underlying speech variation and generate appropriate particle filter samples for feature compensation. Additionally, a dynamic joint tracking framework that monitors the clean speech signal and the noise simultaneously is introduced to obtain good noise statistics. In this approach, the information available from clean speech tracking can be effectively used for noise estimation, and the availability of dynamic noise information enhances the robustness of the algorithm in case of large fluctuations in noise parameters within an utterance. Testing the proposed PF-based compensation scheme on the Aurora 2 connected-digit recognition task, we achieve an error reduction of 12.15% over the best multi-condition trained models using this integrated PF-HMM framework to estimate the cluster-based HMM state sequence information. Finally, we extend the PFC framework and evaluate it on a large-vocabulary recognition task, showing that PFC works well for large-vocabulary systems as well.
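A stripped-down version of the compensation idea: draw clean-feature particles from a cluster-level Gaussian prior and weight them by how well the residual y − x fits the noise statistics. This is a one-dimensional toy with invented distributions, not the thesis's PF-HMM framework:

```python
import math
import random

# One illustrative particle-filter compensation step for an additive model
# y = x + n: particles for the clean feature x come from a cluster-level
# Gaussian prior, and weights measure how well y - x matches the noise
# statistics. The weighted mean is the compensated (clean) feature estimate.

def gauss_pdf(v, mu, var):
    return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def compensate(y, prior_mu, prior_var, noise_mu, noise_var,
               n_particles=20000, seed=0):
    rng = random.Random(seed)
    xs = [rng.gauss(prior_mu, math.sqrt(prior_var)) for _ in range(n_particles)]
    ws = [gauss_pdf(y - x, noise_mu, noise_var) for x in xs]
    total = sum(ws)
    return sum(w * x for w, x in zip(ws, xs)) / total  # posterior-mean estimate
```

For Gaussian prior and noise the exact posterior mean is (prior_var·(y − noise_mu) + noise_var·prior_mu)/(prior_var + noise_var), which the particle estimate should approach as the particle count grows.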
186

Engineering system design for automated space weather forecast : designing automatic software systems for the large-scale analysis of solar data, knowledge extraction and the prediction of solar activities using machine learning techniques

Alomari, Mohammad Hani January 2009 (has links)
Coronal Mass Ejections (CMEs) and solar flares are energetic events taking place at the Sun that can affect the space weather or the near-Earth environment through the release of vast quantities of electromagnetic radiation and charged particles. Solar active regions are the areas where most flares and CMEs originate. Studying the associations among sunspot groups, flares, filaments, and CMEs is helpful in understanding the possible cause-and-effect relationships between these events and features. Forecasting space weather in a timely manner is important for protecting technological systems and human life on Earth and in space. The research presented in this thesis introduces novel, fully computerised, machine learning-based decision rules and models that can be used within a system design for automated space weather forecasting. The system design in this work consists of three stages: (1) designing computer tools to find the associations among sunspot groups, flares, filaments, and CMEs; (2) applying machine learning algorithms to the associations' datasets; and (3) studying the evolution patterns of sunspot groups using time-series methods. Machine learning algorithms provide computerised learning rules and models that enable the system to deliver automated predictions of CMEs, flares, and evolution patterns of sunspot groups. These numerical rules are extracted from the characteristics, associations, and time-series analysis of the available historical solar data. The training of the machine learning algorithms is based on datasets created by investigating the associations among sunspots, filaments, flares, and CMEs. Evolution patterns of sunspot areas and McIntosh classifications are analysed using a statistical machine learning method, namely the Hidden Markov Model (HMM).
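As a simplified stand-in for the evolution-pattern analysis (the thesis uses a full HMM; a plain, fully observed Markov chain is shown here for brevity), transition probabilities between classes can be estimated by counting, and the invented labels "A"/"B"/"C" below merely play the role of sunspot classes:

```python
from collections import defaultdict

# Maximum-likelihood transition estimates for a plain Markov chain over
# class labels -- a simplified stand-in for the HMM used in the thesis.

def fit_transitions(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(row.values()) for b, c in row.items()}
            for a, row in counts.items()}

def most_likely_next(model, state):
    """Predict the most probable next class given the current one."""
    return max(model[state], key=model[state].get)
```

An HMM generalises this by making the class sequence latent and adding an emission model over the observed measurements.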
187

Hidden Markov models and dynamic conditional correlation models: extensions and applications to stock market time series

Charlot, Philippe 25 November 2010 (has links)
The objective of this thesis is to study the modelling of regime changes in dynamic conditional correlation models, focusing on the Markov-switching approach. Unlike the standard approach based on the basic hidden Markov model (HMM), we use extensions of the HMM coming from the theory of probabilistic graphical models, a discipline that has proposed many derivations of the basic model for modelling complex structures. This thesis can thus be viewed at the interface of two disciplines: financial econometrics and probabilistic graphical models. The first essay presents a model constructed from a hierarchical hidden Markov structure which allows different levels of granularity to be defined for the regimes. It can be seen as a special case of the RSDC (Regime Switching for Dynamic Correlations) model. Based on the hierarchical HMM, our model can capture nuances of regimes that are ignored by the classical Markov-switching approach. The second contribution proposes a Markov-switching version of the DCC model built from the factorial HMM. While the classical Markov-switching approach assumes that all elements of the correlation matrix follow the same switching dynamic, our model allows each element of the correlation matrix to have its own switching dynamic. In the final contribution, we propose a DCC model built from a decision tree. The objective of this tree is to link the level of the individual volatilities with the level of the correlations. For this, we use a hidden Markov decision tree, which is an extension of the HMM.
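The regime-filtering machinery underlying any Markov-switching correlation model can be sketched in one step of the Hamilton filter; the two-regime transition matrix, likelihoods, and regime correlations below are illustrative numbers, not estimates from the thesis:

```python
# One step of the Hamilton filter for a two-regime Markov-switching model:
# propagate regime probabilities through the transition matrix, reweight by
# the regime-conditional likelihoods of the new observation, renormalise.
# The filtered probabilities then mix the regime-specific correlations.

def hamilton_step(prob, trans, lik):
    pred = [sum(prob[i] * trans[i][j] for i in range(2)) for j in range(2)]
    post = [pred[j] * lik[j] for j in range(2)]
    total = sum(post)
    return [p / total for p in post]

def filtered_correlation(prob, regime_corr):
    """Expected correlation under the filtered regime probabilities."""
    return sum(p * r for p, r in zip(prob, regime_corr))
```

Hierarchical and factorial HMMs replace the single regime variable with a tree or a set of independent chains, but the filter step has the same propagate-reweight-normalise shape.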
188

Estimation of the remaining useful life of systems in the presence of uncertainties

Delmas, Adrien 08 April 2019 (has links)
Predictive maintenance strategies can help reduce ever-growing maintenance costs, but their implementation represents a major challenge. Indeed, it requires evaluating the health state of the components of the system and prognosticating the occurrence of a future failure. This second step consists in estimating the remaining useful life (RUL) of the components, in other words, the time during which they will continue functioning properly. This RUL estimation carries high stakes because the precision and accuracy of the results influence the relevance and effectiveness of the maintenance operations. Many methods have been developed to prognosticate the remaining useful life of a component, each with its own particularities, advantages and drawbacks. The present work proposes a general methodology for component RUL estimation. The objective is to develop a method that can be applied to many different cases and situations without requiring major modifications. Moreover, several types of uncertainties are dealt with in order to improve the accuracy of the prognostic. The proposed methodology can support the maintenance decision-making process: the estimated RUL makes it possible to select the optimal moment for a required intervention, and dealing with the uncertainties provides additional confidence in the prognostic results.
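One common way to make such a prognostic concrete (a generic sketch, not the thesis's methodology) is Monte Carlo simulation of a stochastic degradation model up to a failure threshold, which yields a RUL distribution rather than a single value; drift and noise values below are invented:

```python
import random

# Monte Carlo RUL sketch: degradation follows a random walk with drift, and
# the RUL is the first-passage time to the failure threshold. Returning a
# mean plus a low percentile conveys the uncertainty in the prognostic.

def estimate_rul(level, threshold, drift, sigma, runs=2000, horizon=10**6, seed=1):
    rng = random.Random(seed)
    times = []
    for _ in range(runs):
        x, t = level, 0
        while x < threshold and t < horizon:
            x += drift + rng.gauss(0.0, sigma)
            t += 1
        times.append(t)
    times.sort()
    return {"mean": sum(times) / runs,
            "p10": times[int(0.10 * runs)]}  # conservative planning bound
```

Planning an intervention on the 10th percentile rather than the mean is one way the uncertainty treatment feeds the maintenance decision.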
189

Toward a 3D human motion capture system for a mobile robot moving in a cluttered environment

Dib, Abdallah 24 May 2016 (has links)
In this thesis we are interested in designing a mobile robot able to analyze the behavior and movement of a person in an indoor, cluttered environment, for example the home of an elderly person. More precisely, our goal is to equip the robot with visual perception of human posture, so that it can better handle situations that require understanding the intentions of the people it interacts with, detect risk situations such as falls, or analyze the motor skills of the people in its care. Tracking posture in a dynamic, cluttered environment raises several challenges, notably continuously learning the background of the scene and extracting the silhouette, which may be only partially observable when the person is partly occluded. These difficulties make posture tracking a hard task. Most existing methods assume that the scene is static and that the person is always fully visible; such approaches are not suited to realistic conditions. We propose a new motion capture system capable of tracking a person's posture under these real-world conditions. Our approach uses a 3D occupancy grid with a hidden Markov model to continuously learn the changing background of the scene and to extract the person's silhouette; a hierarchical particle filtering algorithm is then used to reconstruct the posture. We also propose a novel occlusion-management algorithm able to identify the hidden body parts and exclude them from the pose estimation process. Finally, we provide a database of RGB-D images with ground truth, intended as a new benchmark for evaluating motion capture systems in a real environment with occlusions. The ground truth is obtained from a high-precision marker-based motion capture system with eight infrared cameras; all data are available online.
The second contribution of this thesis is a new visual odometry method for localizing an RGB-D camera mounted on a robot moving in a dynamic environment. Since the motion capture system above is meant to equip a moving robot, estimating the robot's motion is essential to guarantee correct silhouette extraction for tracking. The major difficulty of localization in a dynamic environment is that moving objects in the scene induce additional motion that generates outlier pixels; these pixels must be excluded from the camera motion estimation process to produce an accurate and precise localization. We therefore propose an extension of the dense, optical-flow-based localization method that removes outlier pixels using the RANSAC algorithm.
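The thesis estimates camera motion from dense optical flow; as an illustrative sketch of the RANSAC outlier-rejection idea only (not the thesis's implementation), the toy example below fits a pure 2D translation to point correspondences, so that flow vectors belonging to a moving object are identified as outliers and excluded from the motion estimate. All names and data here are hypothetical.

```python
import random

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Robustly estimate a 2D camera translation from point correspondences,
    rejecting outliers caused by moving objects in the scene."""
    rng = random.Random(seed)
    best_inliers = set()
    for _ in range(iters):
        i = rng.randrange(len(src))               # minimal sample: one pair
        tx, ty = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = {
            j for j, ((sx, sy), (dx, dy)) in enumerate(zip(src, dst))
            if abs(dx - sx - tx) <= tol and abs(dy - sy - ty) <= tol
        }
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit the model on all inliers for the final estimate
    n = len(best_inliers)
    tx = sum(dst[j][0] - src[j][0] for j in best_inliers) / n
    ty = sum(dst[j][1] - src[j][1] for j in best_inliers) / n
    return (tx, ty), best_inliers

# Synthetic correspondences: four static background points shifted by the
# true camera motion (2, 1), plus two points on a moving object whose
# apparent motion is inconsistent with the camera's.
src = [(0, 0), (5, 3), (2, 8), (7, 1), (4, 4), (9, 9)]
dst = [(x + 2, y + 1) for x, y in src[:4]] + [(30, 40), (50, 60)]
t, inliers = ransac_translation(src, dst)
# t → (2.0, 1.0); inliers → the four background points {0, 1, 2, 3}
```

A real visual odometry pipeline would fit a 6-DoF rigid motion to dense flow rather than a translation to a handful of points, but the consensus step is the same: the model supported by most pixels is kept, and the dynamic-object pixels fall out as outliers.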
190

Feature Extraction and Image Analysis with the Applications to Print Quality Assessment, Streak Detection, and Pedestrian Detection

Xing Liu (5929994) 02 January 2019 (has links)
Feature extraction is the main driving force behind the advancement of image processing techniques in fields such as image quality assessment, object detection, and object recognition. In this work, we perform a comprehensive and in-depth study of feature extraction for the following applications: image macro-uniformity assessment, 2.5D printing quality assessment, streak defect detection, and pedestrian detection. Firstly, a set of multi-scale wavelet-based features is proposed, and a quality predictor is trained to predict the perceived macro-uniformity. Secondly, 2.5D printing quality is characterized by a set of merits that focus on the surface structure. Thirdly, a set of features is proposed to describe streaks, based on which two detectors are developed: the first uses a Support Vector Machine (SVM) to train a binary classifier that detects streaks; the second adopts a Hidden Markov Model (HMM) to incorporate the row-to-row dependency within a single streak. Finally, a novel set of pixel-difference features is proposed to develop a computationally efficient feature extraction method for pedestrian detection.
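The HMM-based streak detector exploits the dependency between consecutive rows of a streak. As a minimal sketch of that idea only, not the thesis's model, the example below decodes a two-state (clean/streak) HMM with the Viterbi algorithm over a binarized per-row feature; all probabilities and the thresholded row evidence are hypothetical.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence (log-domain Viterbi)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for state s at this row
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Binarized per-row evidence: 1 if the row's mean deviates from its
# neighbourhood, 0 otherwise (a hypothetical thresholded feature).
rows = [0, 0, 1, 1, 1, 0, 1, 0, 0]
states = ("clean", "streak")
start_p = {"clean": 0.9, "streak": 0.1}
# Sticky transitions encode the row-to-row dependency within a streak.
trans_p = {"clean": {"clean": 0.9, "streak": 0.1},
           "streak": {"clean": 0.2, "streak": 0.8}}
emit_p = {"clean": {0: 0.9, 1: 0.1},
          "streak": {0: 0.3, 1: 0.7}}
labels = viterbi(rows, states, start_p, trans_p, emit_p)
# rows 2-6 come out "streak": the isolated 0 at row 5 is bridged
```

The sticky transition matrix is what distinguishes this from classifying each row independently: the single clean-looking row inside the streak is absorbed into it, which is the row-dependency the detector is meant to capture.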
