41

Comparative Study of Vision Camera-based Vibration Analysis with the Laser Vibrometer Method

Muralidharan, Pradeep Kumar, Yanamadala, Hemanth January 2021 (has links)
Vibration analysis is a method that studies patterns in vibration data and measures vibration levels. It is usually performed directly on the time waveform of the vibration signal and on the frequency spectrum obtained by applying the Fourier transform to that waveform. Conventional vibration analysis methods are either expensive, require a complicated setup, or both. Non-contact measurement systems, such as high-speed cameras coupled with computer vision and motion magnification methods, are suitable options for monitoring the vibrations of any system. In this work, several classic and state-of-the-art computer vision tracking algorithms were compared. Videos at low and high frame rates were used to evaluate their ability to track the oscillatory movement that characterizes vibrations, and the trackers were benchmarked against the literature and an experimental study. Two sets of experiments were carried out, one using a cantilever and another using a robot. The resonance frequencies obtained with the vision camera method were compared to those from the laser vibrometer method, which is the industry standard, and the results show that the resonance frequencies of the two methods are close to each other. The limitations of the tracking-algorithm-based approach to vibration analysis are discussed at the end. Since the methods presented are generic, they can easily be adapted to other relevant applications.
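A minimal sketch of the spectral step described in this abstract, assuming a displacement signal has already been extracted from the video by a tracker; the signal, the 500 fps frame rate and the peak-picking are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

def resonance_frequency(displacement, frame_rate_hz):
    """Estimate the dominant vibration frequency from a tracked displacement signal."""
    signal = displacement - np.mean(displacement)          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))                 # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / frame_rate_hz)
    return freqs[np.argmax(spectrum[1:]) + 1]              # skip the zero-frequency bin

# Illustrative use: a 12 Hz oscillation sampled by a 500 fps high-speed camera.
t = np.arange(0.0, 2.0, 1.0 / 500.0)
y = np.sin(2 * np.pi * 12.0 * t) + 0.1 * np.random.randn(t.size)
print(resonance_frequency(y, 500.0))    # ~12 Hz
```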
42

Evaluation of the validity of IMU sensors measuring wrist angular velocity by comparison with an optical motion tracking system / Utvärdering av validiteten hos IMU-mätningar av handledshastighet genom jämförelse med ett optiskt mätsystem

Tesfaldet, Mogos Tseletu January 2020 (has links)
Because musculoskeletal disorders are common among workers, objective methods for measuring wrist angular velocity are needed for accurate risk assessments. The goal of this project was to validate the accuracy of inertial measurement unit (IMU) sensors for measuring wrist angular velocity. More specifically, the purpose of this master's thesis project was to apply an alternative algorithm for computing the marker velocity, different from the one used with the optical system by Jenny Wingqvist and Josephine Lantz. The project used experimental data from 10 participants collected in the previous project by Jenny Wingqvist and Josephine Lantz. To validate the accuracy, the angular velocity data from the sensors were compared with the angular velocity data from the markers. The lowest mean value of the root-mean-square differences was 23.5 degrees/s, during standard flexion and deviation movements at 40 BPM (beats per minute), and the maximum value was 110.5 degrees/s at 140 BPM. The mean values of the correlation coefficients between marker and sensor angular velocities in standard flexion and deviation movements were 0.85, 0.88 and 0.89 at 40 BPM, 90 BPM and 140 BPM, respectively. The smallest and largest mean values of the absolute difference in the 50th percentile were found at 40 BPM (19.4±11.3) and 140 BPM (51.2±28.5), respectively. The decorrelation coefficient between the subjects' 50th percentiles of angular velocity was 0.91 for the standard movements. The upper limit of agreement for the standard movements was 78.36 degrees/s, while the lower limit of agreement was -13.76 degrees/s. The results show that the error was too large, so further research is needed before IMU sensors can be used to measure wrist angular velocity.
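For illustration, the two comparison statistics reported in this abstract (root-mean-square difference and correlation between sensor- and marker-derived angular velocities) could be computed along the following lines; the array names and the synthetic signals are assumptions, not the thesis code:

```python
import numpy as np

def compare_angular_velocity(imu_deg_s, marker_deg_s):
    """RMS difference and Pearson correlation between two time-aligned signals (deg/s)."""
    imu = np.asarray(imu_deg_s, dtype=float)
    ref = np.asarray(marker_deg_s, dtype=float)
    rmsd = np.sqrt(np.mean((imu - ref) ** 2))
    r = np.corrcoef(imu, ref)[0, 1]
    return rmsd, r

# Illustrative use with synthetic, roughly similar signals (~40 movements per minute).
t = np.linspace(0.0, 10.0, 1000)
marker = 80.0 * np.sin(2 * np.pi * 0.67 * t)
imu = marker + np.random.normal(0.0, 20.0, t.size)   # simulated sensor noise
print(compare_angular_velocity(imu, marker))
```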
43

Hybrid marker-less camera pose tracking with integrated sensor fusion

Moemeni, Armaghan January 2014 (has links)
This thesis presents a framework for hybrid, model-free, marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation, a combination of an inertial measurement unit and a camera was chosen as the primary sensory input for a hybrid camera tracking system. After a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle-filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, forms the main component of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct past pose estimates. The corrected state is then propagated through to the current time in order to prevent sudden pose estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows an improved performance compared to existing techniques such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring and self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
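The switching behaviour described in this abstract can be illustrated with a toy fusion loop: propagate the pose with inertial dead reckoning at every step and snap back to the visual estimate whenever the vision module reports a sufficiently reliable measurement. This is a simplified switching sketch, not the recursive particle-filter fusion of the thesis, and the data class, reliability threshold and 100 Hz step are illustrative assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class PoseEstimate:
    position: np.ndarray      # metres, world frame
    velocity: np.ndarray      # metres/second, world frame

def fuse_step(state, accel_world, dt, vision_position=None, vision_reliability=0.0,
              reliability_threshold=0.5):
    """One hybrid tracking step: inertial prediction, optional visual correction."""
    # Inertial prediction (dead reckoning with world-frame acceleration).
    velocity = state.velocity + accel_world * dt
    position = state.position + state.velocity * dt + 0.5 * accel_world * dt ** 2
    # Visual correction only when the monitoring logic trusts the fresh vision data.
    if vision_position is not None and vision_reliability >= reliability_threshold:
        velocity = (vision_position - state.position) / dt
        position = vision_position
    return PoseEstimate(position, velocity)

# Illustrative use at 100 Hz with a single reliable visual fix.
state = PoseEstimate(np.zeros(3), np.zeros(3))
state = fuse_step(state, np.array([0.1, 0.0, 0.0]), 0.01)
state = fuse_step(state, np.array([0.1, 0.0, 0.0]), 0.01,
                  vision_position=np.array([0.001, 0.0, 0.0]), vision_reliability=0.9)
print(state.position)
```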
44

Ανίχνευση και παρακολούθηση κίνησης σε δίκτυα καμερών / Motion detection and tracking in camera networks

Ευσταθίου, Άρης 18 December 2013 (has links)
This thesis deals with the detection and tracking of human motion in camera networks. Its purpose is to implement a system for detecting, tracking and re-identifying people as they move through a camera network, and to propose a model for discovering the topology of the camera network. The main problem is divided into three sub-problems: the first deals with motion detection, the second tracks every person in the scene, and the third handles re-identification across cameras. The result is the path each person traced through the network. Motion detection is implemented with background subtraction, where the background is recovered dynamically at every frame using median selection. Tracking uses two features, the centroid and the color histogram. The network topology is discovered from a model that records entry and exit points associated with the corresponding camera; these points are matched to the critical regions of each camera, and the majority of their associations defines the communicating camera for those regions. Finally, trajectories are matched across cameras by checking spatio-temporal and appearance features. The system was implemented in Matlab and ran on an Intel i7 at 2.93 GHz with 8 GB of RAM. The algorithms performed well, producing very good results, and their output can be fed into a variety of higher-level applications aimed at human activity recognition and behavior understanding.
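A minimal OpenCV sketch of the per-camera pipeline described above: background subtraction, then a centroid and a color histogram for each detected blob. The MOG2 subtractor, the area threshold and the example video path are stand-ins (the thesis builds its own median-based background model), and the OpenCV 4 signature of findContours is assumed:

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def detect_people(frame, min_area=500):
    """Return (centroid, color histogram) for each foreground blob in one frame."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue
        m = cv2.moments(contour)
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
        x, y, w, h = cv2.boundingRect(contour)
        hist = cv2.calcHist([frame[y:y + h, x:x + w]], [0, 1, 2], None,
                            [8, 8, 8], [0, 256, 0, 256, 0, 256])
        detections.append((centroid, cv2.normalize(hist, hist).flatten()))
    return detections

# Illustrative use (hypothetical video file): loop over frames and collect detections.
# cap = cv2.VideoCapture("camera1.avi")
# ok, frame = cap.read()
# print(detect_people(frame))
```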
45

Analysis and simulation of multimodal cardiac images to study the heart function / Analyse et simulation des images multimodales du coeur pour l'étude de la fonction cardiaque

Prakosa, Adityo 21 January 2013 (has links)
This thesis focuses on the analysis of the cardiac electrical and kinematic function in heart failure patients. An expected outcome is a set of computational tools that may help a clinician understand, diagnose and treat patients suffering from cardiac motion asynchrony, a specific aspect of heart failure. Understanding the inverse electro-kinematic coupling relationship is the main task of this study. With this knowledge, the widely available cardiac image sequences acquired non-invasively in clinics could be used to estimate the cardiac electrophysiology (EP) without having to perform invasive cardiac EP mapping procedures. To this end, we use real clinical cardiac sequences and a cardiac electromechanical model to create controlled synthetic sequences, producing a training set with which to learn the cardiac electro-kinematic relationship. Creating a patient-specific database of synthetic sequences allows us to study this relationship using a machine learning approach. A first contribution of this work is a non-linear registration method applied to and evaluated on cardiac sequences to estimate cardiac motion. Second, a new approach to generating synthetic yet virtually realistic cardiac sequences, combining a biophysical model and clinical images, is developed. Finally, we present the estimation of cardiac electrophysiological activation times from medical images using a patient-specific database of synthetic image sequences.
46

Mapeamento 3-D para robôs / 3-D mapping for robots

Baptista Júnior, Antonio 14 November 2013 (has links)
In robotics, mapping the environment is an important task because it provides information for planning and executing the robot's movements. For this reason, the studies presented here aim at building 3-D maps and at techniques that aid the mapping task. Building 3-D maps enables other researchers and robotics companies to carry out analysis and path planning in all six degrees of freedom of the rigid body used to model a mobile robot, a manipulator or a mobile manipulator. With a 3-D representation of the environment, the accuracy of the robot's positioning relative to the environment increases, as does the accuracy of the positioning of objects within the robot's field of action. To address the mapping problem, theoretical techniques and their application in each case studied are presented. The experiments in this work adopt occupancy grid maps; building occupancy grids, however, assumes that the robot's position in the environment is known. Three experiments were conducted: reducing data affected by failures and redundant information using probabilistic techniques, motion detection using a background extraction technique, and 3-D mapping using the closest point technique. In the data reduction experiment, the number of points required to represent the environment was reduced to 4.43% using the proposed algorithm. The 3-D mapping algorithm, built on probabilistic models that are well established in the literature, is based on the probability of independent events and on the proposed formulation involving posterior probability. The motion detection experiment was implemented with OpenCV and with CUDA, both using a Gaussian mixture model (GMM); the processing time and the quality of each implementation's results were analyzed. For an accurate representation of the environment, an experiment using iterative closest point (ICP) was conducted with the Kinect motion sensor; the results were not satisfactory due to the volume of data acquired and the absence of a localization estimate.
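The occupancy grid idea adopted above can be illustrated with a tiny log-odds update for a known robot pose; the grid size, the hit/miss probabilities and the single beam are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

class OccupancyGrid2D:
    """Minimal log-odds occupancy grid; the robot pose is assumed to be known."""
    def __init__(self, width, height, p_hit=0.7, p_miss=0.4):
        self.log_odds = np.zeros((height, width))
        self.l_hit = np.log(p_hit / (1 - p_hit))
        self.l_miss = np.log(p_miss / (1 - p_miss))

    def update_cell(self, x, y, occupied):
        """Bayesian update of one cell from an independent measurement."""
        self.log_odds[y, x] += self.l_hit if occupied else self.l_miss

    def probability(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

# Illustrative use: a beam repeatedly passes through cell (2, 3) and hits an obstacle at (5, 3).
grid = OccupancyGrid2D(10, 10)
for _ in range(3):
    grid.update_cell(2, 3, occupied=False)
    grid.update_cell(5, 3, occupied=True)
print(grid.probability()[3, 2], grid.probability()[3, 5])
```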
48

1D LIDAR Speed and Motion for the Internet-of-Things : For Railroad Classification Yards / 1D LIDAR hastighet och rörelse för sakernas internet : För rangerbangårdar

Chancellor, Edward, Oikarinen, Kasper January 2021 (has links)
This thesis is an investigation into the feasibility of one-dimensional Light Detection and Ranging (LIDAR) sensors for tracking the position and motion of trains in railroad classification yards. Carefully monitoring railway traffic in these areas is important in order to avoid accidents, optimise logistical operations and hence reduce delays. However, existing technologies for tracking trains on regular stretches of train line, including Radio Frequency Identification (RFID) and the Global Positioning System (GPS), have various drawbacks when applied to classification yards. As such, it is pertinent to investigate the extent to which simple LIDAR sensors could be used for this purpose, as part of a basic Internet of Things (IoT) system. To tackle this problem, we considered different ways of positioning the sensors around railway tracks. We then proposed a floating average algorithm for calculating a target object's velocity using continuous LIDAR distance readings. To know when to apply the algorithm as a train passes the sensor, we observed how the distance readings varied as a model train passed the sensor. The data were used to construct a finite-state machine (FSM) that can fully describe the status of trains as they pass the sensor. In order to test our solution, we constructed a prototype sensor node implementing the FSM and evaluated its performance, first with a model train and then on actual commuter trains at an outdoor train platform. We found that one-dimensional LIDAR sensors could feasibly be deployed to monitor the position and motion of trains with a high degree of consistency and accuracy. However, LIDAR may need to be corroborated with other types of technology, such as RFID, so that trains can be distinguished from other moving objects.
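A small sketch of the kind of floating-average velocity estimate described above, computed from consecutive 1D LIDAR range readings; the window length, sample period and presence threshold are illustrative assumptions, not the values used in the thesis:

```python
from collections import deque

class LidarSpeedEstimator:
    """Estimate target speed from consecutive 1D LIDAR range readings."""
    def __init__(self, sample_period_s, window=5, presence_threshold_m=10.0):
        self.dt = sample_period_s
        self.threshold = presence_threshold_m   # readings below this mean "object present"
        self.speeds = deque(maxlen=window)      # floating-average window
        self.last_distance = None

    def update(self, distance_m):
        """Feed one range reading; return the averaged speed, or None if no target."""
        if distance_m > self.threshold:          # nothing in front of the sensor
            self.last_distance = None
            self.speeds.clear()
            return None
        if self.last_distance is not None:
            self.speeds.append((distance_m - self.last_distance) / self.dt)
        self.last_distance = distance_m
        return sum(self.speeds) / len(self.speeds) if self.speeds else None

# Illustrative use: readings every 20 ms while a target approaches at roughly 2 m/s.
estimator = LidarSpeedEstimator(sample_period_s=0.02)
for reading in [12.0, 8.0, 7.96, 7.92, 7.88, 7.84]:
    print(estimator.update(reading))
```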
49

Modèle et expériences pour la visite des musées en réalité augmentée sonore / Model and experience for audio augmented museums visit

Azough, Fatima-Zahra 16 May 2014 (has links)
The goal of this thesis is to explore the use of sound to enhance the museum visit. We aim to provide an audio guide that immerses the visitor in a soundstage consisting of ambient sounds and comments associated with the exhibits, while minimizing the effort required to discover these objects and interact with the sound environment. The first contribution of this thesis is the implementation of a proof of concept of SARIM (Sound Augmented Reality Interface for visiting Museum). This proof of concept was developed using wired and wireless position and orientation sensors. The second contribution concerns the modelling of the visit augmented by the sound dimension. After a review of existing models, the objective is to design a model that includes a representation of the visitor, the soundscape and the navigation, offering great flexibility in creating the sound environment. The purpose of this model is to facilitate the design of different types of visit scenarios based on the concept of an audibility zone. The third contribution of this thesis is the evaluation conducted in a real environment, the Museum of Arts and Crafts in Paris, which confirmed the usability, as well as the educational and playful benefits, that audio augmented reality in general, and the SARIM system in particular, brings to extending and enriching museum visits.
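The notion of an audibility zone mentioned above can be illustrated with a toy check that triggers an exhibit's commentary, attenuated with distance, when the tracked visitor enters the zone; the zone positions, radii and linear attenuation law are illustrative assumptions, not details of SARIM:

```python
import math
from dataclasses import dataclass

@dataclass
class AudibilityZone:
    name: str          # exhibit the zone is attached to
    x: float           # zone centre in the room plane (metres)
    y: float
    radius: float      # audible radius (metres)

def audible_exhibits(visitor_x, visitor_y, zones):
    """Return (exhibit name, volume in 0..1) for every zone the visitor is inside."""
    playing = []
    for zone in zones:
        distance = math.hypot(visitor_x - zone.x, visitor_y - zone.y)
        if distance <= zone.radius:
            volume = 1.0 - distance / zone.radius      # simple linear attenuation
            playing.append((zone.name, round(volume, 2)))
    return playing

# Illustrative use with two hypothetical exhibits in a gallery.
zones = [AudibilityZone("foucault_pendulum", 2.0, 3.0, 2.5),
         AudibilityZone("astrolabe", 6.0, 1.0, 1.5)]
print(audible_exhibits(3.0, 3.5, zones))   # visitor inside the first zone only
```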
50

Hand Motion Tracking System using Inertial Measurement Units and Infrared Cameras

O-larnnithipong, Nonnarit 07 November 2018 (has links)
This dissertation presents a novel approach to develop a system for real-time tracking of the position and orientation of the human hand in three-dimensional space, using MEMS inertial measurement units (IMUs) and infrared cameras. This research focuses on the study and implementation of an algorithm to correct the gyroscope drift, which is a major problem in orientation tracking using commercial-grade IMUs. An algorithm to improve the orientation estimation is proposed. It consists of: 1.) Prediction of the bias offset error while the sensor is static, 2.) Estimation of a quaternion orientation from the unbiased angular velocity, 3.) Correction of the orientation quaternion utilizing the gravity vector and the magnetic North vector, and 4.) Adaptive quaternion interpolation, which determines the final quaternion estimate based upon the current conditions of the sensor. The results verified that the implementation of the orientation correction algorithm using the gravity vector and the magnetic North vector is able to reduce the amount of drift in orientation tracking and is compatible with position tracking using infrared cameras for real-time human hand motion tracking. Thirty human subjects participated in an experiment to validate the performance of the hand motion tracking system. The statistical analysis shows that the error of position tracking is, on average, 1.7 cm in the x-axis, 1.0 cm in the y-axis, and 3.5 cm in the z-axis. The Kruskal-Wallis tests show that the orientation correction algorithm using gravity vector and magnetic North vector can significantly reduce the errors in orientation tracking in comparison to fixed offset compensation. Statistical analyses show that the orientation correction algorithm using gravity vector and magnetic North vector and the on-board Kalman-based orientation filtering produced orientation errors that were not significantly different in the Euler angles, Phi, Theta and Psi, with the p-values of 0.632, 0.262 and 0.728, respectively. The proposed orientation correction algorithm represents a contribution to the emerging approaches to obtain reliable orientation estimates from MEMS IMUs. The development of a hand motion tracking system using IMUs and infrared cameras in this dissertation enables future improvements in natural human-computer interactions within a 3D virtual environment.
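A minimal sketch of the first two steps listed above (static bias estimation, then quaternion integration of the unbiased angular velocity), using SciPy's rotation class; the gravity/magnetic-North correction and the adaptive quaternion interpolation are omitted, and the 100 Hz rate, array shapes and bias values are assumptions rather than details of the dissertation:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def estimate_gyro_bias(static_gyro_rad_s):
    """Step 1: average the gyroscope output over an interval where the sensor is static."""
    return np.mean(static_gyro_rad_s, axis=0)

def integrate_orientation(gyro_rad_s, dt, bias):
    """Step 2: propagate a quaternion orientation from bias-corrected angular velocity."""
    q = R.identity()
    quaternions = []
    for omega in gyro_rad_s:
        q = q * R.from_rotvec((omega - bias) * dt)   # small body-frame rotation per sample
        quaternions.append(q.as_quat())               # (x, y, z, w)
    return np.array(quaternions)

# Illustrative use at 100 Hz: a biased gyro turning at a constant 90 deg/s about z.
dt = 0.01
bias = estimate_gyro_bias(np.full((200, 3), [0.002, -0.001, 0.003]))
gyro = np.tile(bias + [0.0, 0.0, np.deg2rad(90.0)], (100, 1))    # 1 second of samples
track = integrate_orientation(gyro, dt, bias)
print(R.from_quat(track[-1]).as_euler("zyx", degrees=True))      # yaw ~ 90 degrees
```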
