11 |
Interaktive Initialisierung eines Echtzeit-3D-Trackings für Augmented Reality auf Smart Devices mit Tiefensensoren. Neges, Matthias; Siewert, Jan Luca. January 2016 (has links)
Abstract
Today's approaches to 3D tracking for registration in the real world for augmented reality can be divided into model-based and environment-based methods. Environment-based methods use the SLAM algorithm to generate three-dimensional point clouds of the environment in real time. Model-based methods have their origin in the Canny edge detector and use edge models derived from CAD models. Combining the model-based approach via edge detection with the environment-based approach via 3D point clouds yields a robust, hybrid 3D tracking. The corresponding algorithms of the various methods are already implemented in currently available AR frameworks. The present contribution demonstrates the efficiency of hybrid 3D tracking, but also the problem of the required geometric similarity between the ideal CAD model (or edge model) and the real object. With different assembly stages at different assembly stations and with changing users, for example, a new initialization is required. Hybrid 3D tracking therefore depends on numerous edge models, which must first be derived from the respective assembly stage. In addition, manufacturing causes geometric deviations which, depending on the size of the industry-specific tolerances, do not match the edge models derived from the ideal CAD models closely enough. The authors therefore propose the use of parametrically constructed master models, which are geometrically instantiated through an interactive initialization. A mobile depth sensor for smart devices is used, which, with the help of the user, relates the real geometric features to the ideal ones of the CAD model.
Furthermore, the presented concept proposes the use of special search algorithms based on geometric similarity, so that registration and instantiation are possible even without a stored master model. For validation, the contribution focuses on the interactive initialization using a concrete, application-oriented example, since the initialization is the foundation for the further development of the overall concept.
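As a rough illustration of the model-based half of the hybrid scheme described above: edge-based tracking typically scores a candidate pose by how well edges projected from the CAD-derived edge model align with edges detected in the camera image. A minimal sketch of such an alignment score (a one-directional chamfer distance; all data here is hypothetical and not taken from the contribution):

```python
import numpy as np

def chamfer_score(model_edges, image_edges):
    """Mean distance from each projected model edge point to its
    nearest detected image edge point; lower means better alignment.
    model_edges, image_edges: (N, 2) arrays of 2D pixel coordinates."""
    # Pairwise distances between projected model points and image edge points
    diff = model_edges[:, None, :] - image_edges[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    # For each model point, keep only the closest image edge point
    return float(dists.min(axis=1).mean())

# A pose that projects the model exactly onto the detected edges scores 0
edges = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
assert chamfer_score(edges, edges) == 0.0
# A one-pixel vertical offset raises the score to 1
shifted = edges + np.array([0.0, 1.0])
assert chamfer_score(shifted, edges) == 1.0
```

In a full tracker this score would be minimized over the pose parameters; the point-cloud half of the hybrid scheme contributes a complementary term from the SLAM map.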
|
12 |
Uso de interfaces naturais na modelagem de objetos virtuais. Oliveira, Fábio Henrique Monteiro. 05 August 2013 (has links)
Fundação de Amparo à Pesquisa do Estado de Minas Gerais / Research on gestural interfaces has grown significantly, particularly since the development of sensors that can accurately capture body movements. Consequently, many fields of application for these technologies have emerged. Among them is the 3D modeling industry, which is characterized by robust but complex software. This software often lacks an accessible human-computer interface, since interaction is usually mediated by a mouse offering only 2 degrees of freedom. Because of these limitations, common tasks such as rotating the scene viewpoint or moving an object are hard for users to assimilate. This discomfort with the usual, complex interface of 3D modeling software is one of the reasons users abandon it. In this context, Natural User Interfaces stand out by better exploiting natural human gestures to provide a more intuitive interface. This work presents a system that allows the user to perform 3D modeling using hand poses and gestures, providing an interface with 3 degrees of freedom. An evaluation was conducted with 10 people to validate the proposed strategy and application. The participants reported that, despite its limitations, the system has the potential to become an innovative interface. Overall, the hand-tracking approach to 3D modeling seems promising and deserves further investigation. / Mestre em Ciências
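The 3-degree-of-freedom interface described in the abstract maps tracked hand positions to operations in the modeling scene. A minimal sketch of one plausible mapping (the normalization, scale, and dead zone are illustrative assumptions, not values from the thesis):

```python
def hand_to_translation(hand_xyz, workspace_scale=10.0, dead_zone=0.05):
    """Map a normalized hand position (each axis in [-1, 1], as a hand
    tracker might report) to a 3-DOF translation in scene units.
    Displacements inside the dead zone are ignored to reduce jitter."""
    out = []
    for axis in hand_xyz:
        if abs(axis) < dead_zone:
            out.append(0.0)   # suppress small tracker noise
        else:
            out.append(axis * workspace_scale)
    return tuple(out)

# A hand half-way forward moves the object 5 scene units along z;
# a 0.02 wiggle on y falls inside the dead zone and is discarded.
assert hand_to_translation((0.0, 0.02, 0.5)) == (0.0, 0.0, 5.0)
assert hand_to_translation((-1.0, 0.0, 0.0)) == (-10.0, 0.0, 0.0)
```

A real system would add pose recognition (e.g. open hand vs. pinch) to switch between translation, rotation, and sculpting modes.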
|
13 |
Approximating material area, volume, and velocity for belt conveyor system applications using 3D depth sensor technology. Centing, Viktor. January 2023 (has links)
Time of Flight (ToF) technology describes products or systems that measure distance by calculating the distance emitted light travels before bouncing off its surroundings and returning to the sensor. Since the early 2000s, many advancements in ToF systems have been made, leading to widespread use of the ToF variant LiDAR. Alternative technologies are on the rise, one of which is 3D ToF depth sensors. This report explores ToF depth sensor technology in the setting of belt conveyor system (BCS) applications. More specifically, methods for area, volumetric, and velocity approximation are explored, and a comparison is also made against LiDAR. The aim of the report is twofold. One part is to compare the accuracy of a ToF depth sensor to a 2D LiDAR scanner. The second is to propose algorithms that, using only a ToF depth sensor, calculate the volume of material transported by a BCS and approximate the velocity at which said material is traveling. Testing consisted of strictly experimental setups in a controlled environment, where both technologies were used to collect data on selected scenes. Results indicate that ToF depth sensors can achieve accuracy equivalent to LiDAR sensors. ToF depth sensors can resolve the volume of objects with relatively good results, using algorithms that are not computationally complex. By implementing a proposed algorithm, the velocity of material traveling on a BCS could be approximated with up to 99% accuracy. However, effects of common sources of error are present in the results and hence have to be considered moving forward. Therefore, this report also highlights future improvements to establish a more robust methodology and reduce errors. The results can be used to improve current BCSs through an increased range of functionality, reduced costs, and raised quality control, while also aiding the enabling of Industry 4.0 implementation.
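The report's algorithms themselves are not reproduced in this abstract; the sketch below only illustrates, under hypothetical parameters, how a downward-looking depth sensor can turn per-frame belt cross-sections into a volume estimate:

```python
import numpy as np

def cross_section_area(belt_depth, loaded_depth, pixel_width):
    """Area of material in one cross-section. The sensor looks down, so
    material is closer than the empty belt: per-pixel material height is
    empty-belt depth minus loaded depth, integrated across the belt."""
    height = np.clip(belt_depth - loaded_depth, 0.0, None)
    return float(height.sum() * pixel_width)

def volume_flow(areas, belt_speed, frame_dt):
    """Approximate volume by integrating cross-section area along the
    travel direction: each frame contributes area * distance moved."""
    return float(sum(areas) * belt_speed * frame_dt)

# Hypothetical numbers: empty belt 1.0 m away, a 0.2 m-high pile over
# three pixels, each pixel 0.1 m wide across the belt.
belt = np.full(5, 1.0)
loaded = np.array([1.0, 0.8, 0.8, 0.8, 1.0])
area = cross_section_area(belt, loaded, pixel_width=0.1)
assert abs(area - 0.06) < 1e-9            # 0.2 m * 3 px * 0.1 m = 0.06 m^2
# Ten identical frames at 2 m/s belt speed and 30 fps
vol = volume_flow([area] * 10, belt_speed=2.0, frame_dt=1 / 30)
assert abs(vol - 0.04) < 1e-9             # 0.06 * 10 * 2 / 30 m^3
```

In practice the belt speed itself would be approximated from the data as well, e.g. by correlating successive depth profiles along the travel direction.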
|
14 |
Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces. Macknojia, Rizwan. 21 March 2013 (has links)
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that are rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect's depth sensor capabilities and performance. The study comprises an examination of the resolution, quantization error, and random distribution of the depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor: one designed to operate with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version.
The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces, in which multiple Kinect units are linked to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed that takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds within the range of depth measurement accuracy permitted by the Kinect technology. The method calibrates all Kinect units with respect to a reference Kinect. The internal calibration of the sensor, between the color and depth measurements, is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, thereby allowing seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system completes the comprehensive calibration of the reference Kinect with the robot. The latter can then be used to interact under visual guidance with large objects, such as vehicles, positioned within the significantly enlarged field of view created by the network of RGB-D sensors.
The proposed design and calibration method is validated in a real-world scenario in which five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles, from a set of individual acquisitions performed in a semi-controlled environment, namely an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
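Calibrating each Kinect with respect to a reference Kinect ultimately comes down to estimating a rigid transform between corresponding 3D points observed by both sensors. The thesis's specific method is not reproduced here; the sketch below shows the standard least-squares step (the Kabsch algorithm) that such a pairwise calibration relies on, exercised on synthetic data:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t,
    estimated from corresponding 3D points (Kabsch algorithm).
    src, dst: (N, 3) arrays of matched points from two sensors."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known 90-degree rotation about z plus a shift
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(pts, moved)
assert np.allclose(R, Rz) and np.allclose(t, [1.0, 2.0, 3.0])
```

With real sensors the correspondences would come from a calibration target visible to both units, and the estimate would be refined against measurement noise.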
|
16 |
Room layout estimation on mobile devices / Création de plans d’intérieur avec une tablette. Angladon, Vincent. 27 April 2018 (has links)
Room layout generation is the problem of generating a drawing or a digital model of an existing room from a set of measurements such as laser data or images. The generation of floor plans can find application in the building industry, to assess the quality and correctness of an ongoing construction with respect to the initial model, or to quickly sketch the renovation of an apartment. The real estate industry can rely on automatic generation of floor plans to ease the process of checking the livable surface and to propose virtual visits to prospective customers. As for the general public, the room layout can be integrated into mixed reality games to provide a more immersive experience, or used in other augmented reality applications such as room redecoration. The goal of this industrial (CIFRE) thesis is to investigate and take advantage of state-of-the-art mobile devices in order to automate the process of generating room layouts. Nowadays, modern mobile devices usually come with a wide range of sensors, such as an inertial measurement unit (IMU), RGB cameras and, more recently, depth cameras.
Moreover, tactile touchscreens offer a natural and simple way to interact with the user, thus favoring the development of interactive applications in which the user can be part of the processing loop. This work aims at exploiting the richness of such devices to address the room layout generation problem. The thesis has three major contributions. We first show how the classic problem of detecting vanishing points in an image can benefit from a prior given by the IMU sensor. We propose a simple and effective algorithm for detecting vanishing points relying on the gravity vector estimated by the IMU. A new public dataset containing images and the relevant IMU data is introduced to help assess vanishing point algorithms and foster further studies in the field. As a second contribution, we explore the state of the art of real-time localization and map optimization algorithms for RGB-D sensors. Real-time localization is a fundamental task for enabling augmented reality applications, and thus a critical component when designing interactive applications. We propose an evaluation of existing algorithms, developed largely for the desktop set-up, with respect to their employment on a mobile device. For each considered method, we assess the accuracy of the localization as well as the computational performance when ported to a mobile device. Finally, we present a proof-of-concept application able to generate the room layout relying on a Project Tango tablet equipped with an RGB-D sensor. In particular, we propose an algorithm that incrementally processes and fuses the 3D data provided by the sensor in order to obtain the layout of the room. We show how our algorithm can rely on user interactions in order to correct the generated 3D model during the acquisition process.
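The thesis's vanishing point algorithm is not reproduced in this listing, but the geometric relation it builds on is standard: the image of the point at infinity along the gravity direction is the vertical vanishing point, obtained by projecting the IMU's gravity vector through the camera intrinsics. A minimal sketch (the intrinsics values are hypothetical):

```python
import numpy as np

def vertical_vanishing_point(K, gravity_cam):
    """Project the gravity direction (expressed in camera coordinates,
    e.g. from the IMU accelerometer) through the intrinsics K: the
    vertical vanishing point is v ~ K @ g, up to scale."""
    v = K @ gravity_cam
    return v[:2] / v[2]  # dehomogenize to pixel coordinates

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# Gravity along the optical axis maps to the principal point
assert np.allclose(vertical_vanishing_point(K, np.array([0.0, 0.0, 1.0])),
                   [320.0, 240.0])
# A tilted gravity direction shifts the vanishing point down the image
assert np.allclose(vertical_vanishing_point(K, np.array([0.0, 1.0, 1.0])),
                   [320.0, 740.0])
```

Knowing this point from the IMU alone constrains the search for the remaining horizontal vanishing points, which is what makes the IMU prior valuable.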
|
18 |
Estimation de cartes d'énergie de hautes fréquences ou d'irrégularité de périodicité de la marche humaine par caméra de profondeur pour la détection de pathologies. Ndayikengurukiye, Didier. 04 1900 (has links)
This work presents two new and simple human gait analysis systems based on a depth camera (Microsoft Kinect) placed in front of a subject walking on a conventional treadmill, capable of distinguishing a healthy gait from an impaired one. The first system relies on the fact that a normal walk typically exhibits a smooth motion (depth) signal at each pixel, with less high-frequency spectral energy content than an abnormal walk. This permits estimating a map for that subject showing the location and amplitude of the high-frequency spectral energy (HFSE). The second system analyzes the patient's body parts that have an irregular movement pattern, in terms of periodicity, during walking. Here we assume that the gait of a healthy subject exhibits, anywhere in the body during the walking cycles, a depth signal with a periodic pattern and without noise. From each subject's video sequence, we estimate a saliency color map showing the areas of strong gait irregularities, also called aperiodic noise energy. Either the HFSE or the aperiodic noise energy shown in the map can be used as a good indicator of possible pathology in an early, fast and reliable diagnostic tool, or to provide information about the presence and extent of disease or the patient's (orthopedic, muscular or neurological) problems. Even if the maps obtained are informative and highly discriminant for direct visual classification, even by a non-specialist, the proposed systems allow us to automatically distinguish maps representing healthy individuals from those representing individuals with locomotor problems.
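The HFSE idea described above can be sketched compactly: treat each pixel's depth over time as a 1D signal and sum its spectral energy above a cutoff frequency. The cutoff and the toy signals below are illustrative choices, not values from the thesis:

```python
import numpy as np

def hfse_map(depth_video, cutoff_bin):
    """High-frequency spectral energy per pixel. depth_video has shape
    (T, H, W): one depth value per pixel per frame. For each pixel, the
    energy in DFT bins at or above cutoff_bin is summed; smooth (healthy)
    depth signals yield low values, irregular ones high values."""
    spectrum = np.fft.rfft(depth_video, axis=0)
    return (np.abs(spectrum[cutoff_bin:]) ** 2).sum(axis=0)

# Two synthetic pixels over 64 frames: one smooth gait-like sinusoid,
# one with an added fast component mimicking an irregular movement.
t = np.arange(64)
slow = np.sin(2 * np.pi * 2 * t / 64)
fast = slow + 0.5 * np.sin(2 * np.pi * 20 * t / 64)
video = np.stack([slow, fast], axis=-1)[:, None, :]  # shape (64, 1, 2)
m = hfse_map(video, cutoff_bin=10)
assert m[0, 0] < 1e-6 and m[0, 1] > 100.0  # only the irregular pixel lights up
```

Rendered as a color map over the body silhouette, these per-pixel energies give exactly the kind of visually discriminant image the abstract describes.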
|
19 |
Placement of Controls in Construction Equipment Using Operators’ Sitting Postures: Process and Recommendations. Jalkebo, Charlotte. January 2014 (has links)
An ergonomically designed work environment may decrease work-related musculoskeletal disorders, lead to fewer sick leaves, and increase production time for operators and companies all around the world. Volvo Construction Equipment wants to deepen its knowledge and investigate more carefully how operators actually sit while operating the machines, how this affects the placement of controls, and how control placements can be optimized accordingly. The purpose is to enhance the company's product development process by suggesting guidelines for control placement with improved ergonomics based on operators' sitting postures. The goal is to deliver a process that identifies and transfers sitting postures to RAMSIS and uses them for control placement recommendations in the cab and operator environments. Delimitations concern: physical ergonomics, 80% usability of the resulting process across the machine types, and the level of detail for controls and their placements. Research, analysis, interviews, test driving of machines, video recordings of operators, and the ergonomic software RAMSIS served as the basis for analysis. The analysis led to (i) the conclusion that sitting postures affect the optimal ergonomic placement of controls, though not the ISO standards; (ii) the conclusion that RAMSIS heavy-truck postures do not seem to correspond to Volvo CE's operators' sitting postures; and (iii) an advanced engineering project process suitable for all machine types and applicable in the product development process. The result can also be used for machines other than construction equipment. The resulting process consists of three independent sub-processes with step-by-step explanations and recommendations of: (i) what information needs to be gathered; (ii) how to identify and transfer sitting postures into RAMSIS; (iii) how to use RAMSIS to create a design aid for recommended control placement.
The thesis also contains additional enhancements to Volvo CE's product development process with a focus on ergonomics. One conclusion is that the use of motion capture could not be verified to work for Volvo Construction Equipment, though it was verified that, if motion capture works, the process works. Another conclusion is that it could not be verified that all of the suggested body landmarks are needed for this purpose, except for those needed for control placement; however, since they are based on previous sitting-posture identification in vehicles, and only those that also occur in RAMSIS are recommended, they can be used. This thesis also questions the parameters commonly treated as most important for interior vehicle design (hip and eye locations) and suggests that shoulder locations are just as important. The thesis concluded with five parameters for control categorization and added seven categories beyond those mentioned in the ISO standards. Other contradictions and loopholes in the ISO standards were identified, highlighted, and discussed. Suggestions for improving the ergonomic analyses in RAMSIS can also be found in this report. Suggested future research includes more detail on control placement as well as further study of sitting postures. If the resulting process is delimited to upper-body postures, other methods for posture identification may be used.
|