1 |
Recognition and Registration of 3D Models in Depth Sensor Data. Grankvist, Ola. January 2016 (has links)
Object recognition is the art of localizing predefined objects in image sensor data. In this thesis a depth sensor was used, which has the benefit that the 3D pose of the object can be estimated. This has applications in, for example, automatic manufacturing, where a robot picks up parts or tools with a robot arm. This master thesis presents an implementation and an evaluation of a system for object recognition of 3D models in depth sensor data. The system renders several depth images from a 3D model and describes their characteristics using so-called feature descriptors. These are then matched with the descriptors of a scene depth image to find the 3D pose of the model in the scene. The pose estimate is then refined iteratively using a registration method. Different descriptors and registration methods are investigated. One of the main contributions of this thesis is a comparison of two different types of descriptors, local and global, which has received little attention in research. This is done for two different scene scenarios, and for different types of objects and depth sensors. The evaluation shows that global descriptors are fast and robust for objects with a smooth visible surface, whereas local descriptors perform better for larger objects in clutter and occlusion. This thesis also presents a novel global descriptor, the CESF, which is observed to be more robust than other global descriptors. As for the registration methods, standard ICP is shown to be the most accurate, while point-to-plane ICP is more robust.
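The abstract does not specify the descriptors or parameters used; as a minimal sketch of the match-then-refine pipeline it describes, the snippet below pairs local FPFH descriptors (for the coarse pose) with point-to-plane ICP (for refinement) using Open3D. The voxel size and thresholds are illustrative assumptions, not the thesis's settings.

```python
import open3d as o3d

def estimate_pose(model_pcd, scene_pcd, voxel=0.005):
    """Coarse pose from local-descriptor matching, refined with ICP.
    Inputs are open3d.geometry.PointCloud objects (model and scene)."""
    reg = o3d.pipelines.registration

    def prep(pcd):
        # Downsample and compute normals (needed by FPFH and point-to-plane ICP)
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = reg.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src, src_f = prep(model_pcd)
    dst, dst_f = prep(scene_pcd)
    coarse = reg.registration_ransac_based_on_feature_matching(
        src, dst, src_f, dst_f, True, 1.5 * voxel,
        reg.TransformationEstimationPointToPoint(False), 3,
        [reg.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        reg.RANSACConvergenceCriteria(100000, 0.999))
    fine = reg.registration_icp(  # iterative refinement, as in the abstract
        src, dst, voxel, coarse.transformation,
        reg.TransformationEstimationPointToPlane())
    return fine.transformation  # 4x4 model-to-scene transform
```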
|
2 |
Machine Vision Based Inspection: Case Studies on 2D Illumination Techniques and 3D Depth Sensors. YAN, MICHAEL T. 01 March 2012 (has links)
This paper investigates two distinct but related topics in machine vision. The first is the effect of lighting on the performance of a 2D vision-based inspection system. The lighting component of machine vision has often been overlooked; an attempt was made to quantify its impact on existing machine vision algorithms. The second topic explores the applications of a data-rich 3D vision sensor that can provide depth data under a wide range of ambient lighting conditions for industrial applications. The focus is on inspection systems that use the depth data provided by the sensor.
Three basic lighting geometries were compared quantitatively based on discriminant analysis in an inspection task that checked for the presence of J-clips on an aluminum carrier. Two different LabVIEW® machine vision algorithms were used to evaluate backlight, bright field and dark field illumination on their ability to minimize the span of the pass (clip present) and fail (clip absent) sample sets, as well as maximize the separation between these sample sets. Results showed clear differences between the lighting geometries, with over a 30% change in performance. Although it has long been accepted that the choice of lighting for machine vision systems is not a trivial exercise, this paper provides a quantitative measure of the impact lighting has on the performance of feature-based machine vision.
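The thesis's exact scoring formula is not given in the abstract; one standard way to quantify the "span" and "separation" of the pass/fail sets is a Fisher-style discriminant ratio, sketched below with illustrative names.

```python
import numpy as np

def lighting_score(pass_feats, fail_feats):
    """Fisher-style criterion: squared distance between the pass/fail set
    means, divided by the summed set spans (variances). A higher score
    means the lighting geometry separates the two outcomes better."""
    separation = (np.mean(pass_feats) - np.mean(fail_feats)) ** 2
    span = np.var(pass_feats) + np.var(fail_feats)
    return separation / span
```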
The Microsoft Kinect® is a commercial vision sensor that simultaneously provides a colour video stream, comparable to current webcam technologies, and a depth stream that gives three-dimensional information about the camera’s field of view and is invariant to environmental lighting. An experiment was carried out to characterize the sensor’s accuracy and precision, and to evaluate its performance as an inspection system for determining the orientation of a wheel. Tests were also conducted to determine the effect of changes in the physical environment on performance, including camera height, lighting and surface material. Results show that the sensor has an average precision of ±0.12 cm and an average accuracy of 0.5 cm, both with less than a 30% change when the physical features were varied. A discriminant analysis was performed to measure inspection performance; it showed less than a 30% change in set separation, but not in set span, and no trends were apparent relating the change in set span to the change in physical features. / Thesis (Master, Mechanical and Materials Engineering) -- Queen's University, 2012-02-29
|
3 |
A novel algorithm for human fall detection using height, velocity and position of the subject from depth maps. Nizam, Y., Abdul Jamil, M.M., Mohd, M.N.H., Youseffi, Mansour, Denyer, Morgan C.T. 02 July 2018 (has links)
Human fall detection systems play an important role in daily life, because falls are the main obstacle to elderly people living independently and are also a major health concern for the aging population. Different approaches are used to develop human fall detection systems for the elderly and people with special needs. The three basic approaches are wearable devices, ambient sensor-based devices, and non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which users very often reject due to high false-alarm rates and the difficulty of carrying the devices during daily activities. This paper proposes a fall detection system based on the height, velocity and position of the subject, using depth information from a Microsoft Kinect sensor. Falls are distinguished from other activities of daily life using the height and velocity of the subject extracted from the depth information; finally, the position of the subject is identified to confirm the fall. In experiments, the proposed system achieved an average accuracy of 94.81% with a sensitivity of 100% and a specificity of 93.33%. / Partly sponsored by the Center for Graduate Studies. This work was funded under the project titled “Biomechanics computational modeling using depth maps for improvement on gait analysis”; Universiti Tun Hussein Onn Malaysia provided lab components, with GPPS sponsorship (Project Vot No. U462).
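The paper's tuned thresholds are not reproduced in the abstract; the sketch below illustrates the described two-stage height/velocity test with position-based confirmation, with all threshold values as assumptions.

```python
def detect_fall(heights_m, velocities_ms, torso_floor_gap_m,
                v_fall=-1.0, h_lying=0.4, gap_max=0.3):
    """Two-stage check: a rapid height drop flags a candidate fall, and
    the subject's final position near the floor confirms it. Inputs are
    per-frame head height, vertical velocity, and torso-to-floor distance
    extracted from Kinect depth maps; thresholds are illustrative."""
    candidate = any(v <= v_fall for v in velocities_ms)  # fast downward motion
    lying = heights_m[-1] < h_lying                      # no longer upright
    on_floor = torso_floor_gap_m < gap_max               # position confirms fall
    return candidate and lying and on_floor
```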
|
4 |
Interaktive Initialisierung eines Echtzeit 3D-Trackings für Augmented Reality auf Smart Devices mit Tiefensensoren / Interactive initialization of real-time 3D tracking for augmented reality on smart devices with depth sensors. Neges, Matthias, Siewert, Jan Luca. 10 December 2016 (has links) (PDF)
Today's approaches to 3D tracking for registration in the real world for augmented reality can be divided into model-based and environment-based methods. Environment-based methods use the SLAM algorithm to generate three-dimensional point clouds of the environment in real time. Model-based methods originate from the Canny edge detector and use edge models derived from CAD models. Combining the model-based approach via edge detection with the environment-based approach via 3D point clouds yields a robust, hybrid 3D tracking. The corresponding algorithms of the various methods are already implemented in the AR frameworks available today. This contribution shows the efficiency of hybrid 3D tracking, but also the problem of the required geometric similarity between the ideal CAD model (or edge model) and the real object. With different assembly stages at different assembly stations and with changing users, for example, re-initialization is required. Hybrid 3D tracking therefore depends on numerous edge models that must first be derived from the respective assembly stage. In addition, manufacturing causes geometric deviations which, depending on the size of the industry-specific tolerances, do not match the edge models derived from the ideal CAD models closely enough. The authors therefore propose the use of parametrically constructed master models, which are geometrically instantiated through an interactive initialization. A mobile depth sensor for smart devices is used here, which, with the help of the user, relates the real geometric features to the ideal ones of the CAD model. Furthermore, the presented concept proposes the use of special search algorithms based on geometric similarities, so that registration and instantiation are possible even without a stored master model. The validation in this contribution focuses on the interactive initialization using a concrete, application-oriented example, since the initialization forms the basis for the further development of the overall concept.
|
5 |
Estimação e análise automática de parâmetros de postura ergonômica usando sensor de profundidade / Estimation and automatic analysis of ergonomic posture parameters using depth sensor. QUINTANILHA, Darlan Bruno Pontes. 19 February 2013 (has links)
During a workday, a person can assume numerous postures and exert muscular effort that can cause work-related musculoskeletal disorders (MSDs). In this situation the joints wear down over a long period of time, causing fatigue, injuries or, in severe cases, permanent deformation. Postural analysis is therefore essential to evaluate a person's activity in a work environment; however, the traditional monitoring methods are manual, which can be exhausting, tedious and inefficient. An automated approach using depth sensors, by contrast, can provide valuable information about the behavior related to the person's activity. This work presents a methodology intended to assist the ergonomics professional in applying two postural assessment methods, 3DSSPP (Three Dimensional Static Strength Prediction Program) and RULA (Rapid Upper Limb Assessment), using a depth sensor to extract accurate posture information. The estimation and analysis of posture parameters based on the two chosen assessment methods produced good results; the RULA method showed an accuracy of 71.67%. / Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
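As an illustration of how one RULA sub-score can be computed from depth-sensor joint positions, the sketch below scores upper-arm flexion from 3D shoulder and elbow coordinates using RULA's published angle bands; it is a simplification for illustration, not the methodology implemented in the dissertation.

```python
import numpy as np

def rula_upper_arm_score(shoulder, elbow, trunk_down=np.array([0.0, -1.0, 0.0])):
    """Score upper-arm posture: angle between the trunk's downward axis
    and the shoulder-to-elbow vector (3D joints from the depth sensor)."""
    arm = elbow - shoulder
    cos_a = np.dot(arm, trunk_down) / (np.linalg.norm(arm) * np.linalg.norm(trunk_down))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    if angle <= 20:  return 1  # RULA band: up to 20 degrees of flexion
    if angle <= 45:  return 2  # 20-45 degrees
    if angle <= 90:  return 3  # 45-90 degrees
    return 4                   # more than 90 degrees
```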
|
6 |
MONOCULAR POSE ESTIMATION AND SHAPE RECONSTRUCTION OF QUASI-ARTICULATED OBJECTS WITH CONSUMER DEPTH CAMERA. Ye, Mao. 01 January 2014 (has links)
Quasi-articulated objects, such as human beings, are among the most commonly seen objects in our daily lives. Extensive research has been dedicated to 3D shape reconstruction and motion analysis for this type of object for decades, largely motivated by their wide applications in entertainment, surveillance and health care. Most existing studies rely on one or more regular video cameras. In recent years, commodity depth sensors have become widely available, and the geometric measurements they deliver provide significantly valuable information for these tasks. In this dissertation, we propose three algorithms for monocular pose estimation and shape reconstruction of quasi-articulated objects using a single commodity depth sensor. These three algorithms achieve shape reconstruction with increasing levels of granularity and personalization. We then further develop a method for highly detailed shape reconstruction based on our pose estimation techniques.
Our first algorithm takes advantage of a motion database acquired with an active marker-based motion capture system. It combines pose detection through nearest-neighbor search with pose refinement via non-rigid point cloud registration. It is capable of accommodating different body sizes and achieves more than twice the accuracy of a previous state-of-the-art method on a publicly available dataset.
The above algorithm performs frame-by-frame estimation and is therefore less prone to tracking failure. Nonetheless, it does not guarantee temporal consistency of the skeletal structure or the shape, which can be problematic for some applications. To address this, we develop a real-time model-based approach for quasi-articulated pose and 3D shape estimation based on the Iterative Closest Point (ICP) principle, with several novel constraints that are critical for the monocular scenario. In this algorithm, we further propose a novel method for automatic body size estimation that enables it to accommodate different subjects.
Due to its local-search nature, the ICP-based method can be trapped in local minima in the case of complex and fast motions. To address this issue, we explore the potential of a statistical model for soft point-correspondence association. To this end, we propose a unified framework based on a Gaussian Mixture Model for joint pose and shape estimation of quasi-articulated objects. This method achieves state-of-the-art performance on various publicly available datasets.
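The dissertation's exact formulation is not given in the abstract; a common way to obtain soft correspondences from a Gaussian mixture is the Coherent Point Drift E-step, sketched below, where each scene point is softly assigned to every model point instead of a single nearest neighbor as in ICP.

```python
import numpy as np

def soft_correspondences(model_pts, scene_pts, sigma2, w=0.1):
    """CPD-style E-step: P[m, n] is the posterior probability that scene
    point n was generated by the Gaussian centered at model point m,
    with a uniform component (weight w) absorbing outliers."""
    M, N = len(model_pts), len(scene_pts)
    D = model_pts.shape[1]
    d2 = np.sum((model_pts[:, None, :] - scene_pts[None, :, :]) ** 2, axis=2)
    num = np.exp(-d2 / (2.0 * sigma2))
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
    return num / (num.sum(axis=0, keepdims=True) + c)
```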
Based on our pose estimation techniques, we then develop a novel framework that achieves highly detailed shape reconstruction while only requiring the user to move naturally in front of a single depth sensor. Our experiments demonstrate reconstructed shapes with rich geometric details for various subjects wearing different apparel.
Last but not least, we explore the applicability of our methods in two real-world applications. First, we combine our ICP-based method with cloth simulation techniques for virtual try-on; our system delivers the first promising 3D-based virtual clothing system. Second, we explore extending our pose estimation algorithms to help physical therapists identify patients’ injury-related movement dysfunctions. Our preliminary experiments demonstrate promising results in comparison with a gold-standard active marker-based commercial system. Throughout the dissertation, we develop state-of-the-art algorithms for pose estimation and shape reconstruction of quasi-articulated objects by leveraging the geometric information from depth sensors, and we demonstrate their potential for different real-world applications.
|
7 |
Uživatelské rozhraní založené na zpracování hloubkové mapy / Depth-Based User Interface. Kubica, Peter. January 2013 (has links)
Conventional user interfaces are not always the most appropriate way to control an application. The objective of this work is to study the processing of Kinect sensor data and to analyze the possibilities of controlling applications through depth sensors, and then, using the knowledge obtained, to design a user interface for working with multimedia content that uses the Kinect sensor for interaction with the user.
|
8 |
Evaluation d’un système de détection surfacique ‘Kinect V2’ dans différentes applications médicales / "Kinect V2" surface detection system evaluation for medical use. Nazir, Souha. 18 December 2018 (links)
In recent years, one of the major technological innovations has been the introduction of depth cameras that can be used in a wide range of applications, including robotics, computer vision and automation. These devices have opened up new opportunities for scientific research applied to the medical field. In this thesis, we evaluate the potential use of the "Kinect V2" depth camera to address current clinical issues in radiotherapy and in the intensive care unit. Given that radiotherapy treatment is administered over several sessions, one key task is to reposition the patient daily in the same way as during the planning session; the precision of this repositioning is impacted by respiratory motion. In addition, the movements of the machine as well as possible movements of the patient can lead to machine/machine or machine/patient collisions. We propose a surface detection system for the management of inter- and intra-fraction motion in external radiotherapy, based on a rigid surface registration algorithm to estimate the treatment position and a real-time collision detection system to ensure patient safety during treatment. The results obtained are encouraging and show good agreement with available clinical systems. Concerning medical resuscitation, there is a need for new non-invasive, non-contact devices to optimize patient care. Non-invasive monitoring of spontaneous breathing in unstable patients is crucial in the intensive care unit, yet no remote monitoring system exists to date. In this context, we propose a non-contact measurement system capable of calculating a patient's ventilation parameters by observing thoracic morphological movements; it is the first system to address this issue. The developed method gives clinically acceptable measurement precision.
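The thesis's calibration and region-of-interest selection are not described in the abstract; a minimal sketch of extracting a volume-like respiratory signal from a thoracic region of the depth map follows, assuming a fixed per-pixel footprint (in reality it varies with depth).

```python
import numpy as np

def respiratory_signal(depth_frames_mm, roi_mask, pixel_area_m2):
    """Integrate chest-wall displacement (toward the camera) over a
    thoracic ROI of the Kinect V2 depth map to obtain a volume-like
    breathing waveform; the first frame serves as the baseline."""
    baseline = depth_frames_mm[0].astype(float)
    signal = []
    for frame in depth_frames_mm:
        lift_m = (baseline - frame.astype(float)) / 1000.0        # mm -> m
        signal.append(np.sum(lift_m[roi_mask]) * pixel_area_m2)   # ~m^3
    return np.array(signal)
```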
|
9 |
Large scale audience interaction with a Kinect sensor. Samini, Ali. January 2012
We present the investigation and design of a system that interacts with a large audience seated in a dimmed theater environment. The goal is to automatically detect audience members and some of their actions. Test results indicate that, because of the low-light conditions, we cannot rely on RGB camera footage in a dimmed environment, so we use the Microsoft Kinect sensor to collect data from the environment. The Kinect was designed for gaming with the Microsoft Xbox 360 and has both an RGB camera and an infrared depth camera; changes in the amount of visible light do not affect the depth data. The Kinect is not a powerful camera, however, and has limitations that must be dealt with: the viewing angles of the depth camera are limited to 43° vertically and 57° horizontally, the most accurate range of the depth camera is 1 to 4 meters, and surfaces that do not reflect infrared cause gaps in the depth data. The Dome 3D theater in Norrkoping Visualization Center C was selected as the environment in which to investigate and test the system, and we ran tests to find the camera placement and height giving the best coverage. Our system works with optimized image processing algorithms that use 3D depth data instead of regular RGB or grayscale images. We use "libfreenect", the OpenKinect library, to get the Kinect sensor up and running; C++ and OpenGL are used as the programming language and graphics interface, respectively, and GLUT (the OpenGL Utility Toolkit) is used for the system's user interface. Since it was not possible to use the Dome environment for every test during the development period, we recorded depth footage and used it for later tests. While evaluating the possibility of using the Kinect in the Dome environment, we realized that a voting system would make a good demonstration and test application: our system counts votes as audience members raise their hands.
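The thesis's detection algorithm is not detailed in the abstract; as a rough sketch, raised hands can be counted as connected blobs rising above the seated-head level, assuming the depth image has already been converted to a per-pixel height map using the camera pose. All names and thresholds below are illustrative.

```python
import numpy as np
from scipy import ndimage

def count_raised_hands(height_map_m, head_level_m=1.3, min_blob_px=40):
    """Count connected components above the seated-head level; the height
    map (meters above the theater floor) is assumed to be derived from
    the Kinect depth image and the calibrated camera pose."""
    mask = height_map_m > head_level_m
    labels, n = ndimage.label(mask)                      # connected blobs
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # blob areas (pixels)
    return int(np.sum(sizes >= min_blob_px))
```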
|
10 |
Room layout estimation on mobile devices. Angladon, Vincent. 27 April 2018 (links) (PDF)
Room layout generation is the problem of generating a drawing or a digital model of an existing room from a set of measurements, such as laser data or images. The generation of floor plans has applications in the building industry, to assess the quality and correctness of an ongoing construction with respect to the initial model, or to quickly sketch the renovation of an apartment. The real estate industry can rely on automatic generation of floor plans to ease the process of checking the livable surface and to offer virtual visits to prospective customers. For the general public, the room layout can be integrated into mixed reality games to provide a more immersive experience, or used in related augmented reality applications such as room redecoration. The goal of this industrial thesis (CIFRE) is to investigate and take advantage of state-of-the-art mobile devices in order to automate the process of generating room layouts. Modern mobile devices usually come with a wide range of sensors, such as an inertial measurement unit (IMU), RGB cameras and, more recently, depth cameras. Moreover, tactile touchscreens offer a natural and simple way to interact with the user, favoring the development of interactive applications in which the user is part of the processing loop. This work aims at exploiting the richness of such devices to address the room layout generation problem. The thesis makes three major contributions. We first show how the classic problem of detecting vanishing points in an image can benefit from an a priori estimate given by the IMU: we propose a simple and effective algorithm for detecting vanishing points relying on the gravity vector estimated by the IMU, and we introduce a new public dataset containing images and the relevant IMU data to help assess vanishing point algorithms and foster further studies in the field. As a second contribution, we explore the state of the art of real-time localization and map optimization algorithms for RGB-D sensors. Real-time localization is a fundamental task for augmented reality applications and thus a critical component when designing interactive applications. We evaluate existing algorithms designed for the common desktop setup for use on a mobile device, assessing for each method the accuracy of the localization as well as the computational performance when ported to a mobile device. Finally, we present a proof-of-concept application able to generate the room layout with a Project Tango tablet equipped with an RGB-D sensor. In particular, we propose an algorithm that incrementally processes and fuses the 3D data provided by the sensor in order to obtain the layout of the room, and we show how the algorithm can rely on user interactions to correct the generated 3D model during the acquisition process.
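The first contribution rests on a standard projective-geometry fact: the vanishing point of a 3D direction d expressed in camera coordinates is the homogeneous point K d, so the IMU's gravity vector directly yields the vertical vanishing point. The sketch below illustrates this prior only; it is not the thesis's full detection algorithm.

```python
import numpy as np

def vertical_vanishing_point(K, gravity_cam):
    """K is the 3x3 camera intrinsics matrix; gravity_cam is the IMU
    gravity direction rotated into the camera frame. The image of a 3D
    direction d under the pinhole model is the homogeneous point K @ d."""
    v = K @ gravity_cam
    return v[:2] / v[2]  # pixel coordinates of the vertical vanishing point
```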
|