  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A novel algorithm for human fall detection using height, velocity and position of the subject from depth maps

Nizam, Y., Abdul Jamil, M.M., Mohd, M.N.H., Youseffi, Mansour, Denyer, Morgan C.T. 02 July 2018 (has links)
Yes / Human fall detection systems play an important role in daily life, because falls are the main obstacle to independent living for elderly people and a major health concern in an aging population. Different approaches are used to develop human fall detection systems for the elderly and people with special needs. The three basic approaches rely on wearable devices, ambient sensors, or non-invasive vision-based devices using live cameras. Most such systems are based on wearable or ambient sensors, which users often reject because of high false-alarm rates and the difficulty of carrying the devices during daily activities. This paper proposes a fall detection system based on the height, velocity and position of the subject, using depth information from a Microsoft Kinect sensor. Falls are distinguished from other activities of daily life using the height and velocity of the subject extracted from the depth information; finally, the position of the subject is used to confirm the fall. In the experiments, the proposed system achieved an average accuracy of 94.81%, with a sensitivity of 100% and a specificity of 93.33%. / Partly sponsored by the Center for Graduate Studies. This work is funded under the project titled “Biomechanics computational modeling using depth maps for improvement on gait analysis”. Universiti Tun Hussein Onn Malaysia provided lab components, and GPPS (Project Vot No. U462) sponsored the work.
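The height/velocity/position pipeline described in the abstract can be sketched as a three-stage check. This is a hypothetical illustration only: the thresholds, the frame format, and the `detect_fall` helper are assumptions, not the authors' actual values or implementation.

```python
# Hypothetical sketch of a three-stage fall check (height drop, downward
# velocity, floor-level confirmation). All thresholds are illustrative
# assumptions, not the values used in the paper.

def detect_fall(frames, height_drop=0.45, velocity_thresh=1.2, floor_height=0.35):
    """Classify a fall from per-frame (timestamp_s, head_height_m) samples.

    A fall is flagged when the tracked height drops sharply (height check),
    the drop happens fast (velocity check), and the subject ends up near
    the floor plane (position confirmation).
    """
    for (t0, h0), (t1, h1) in zip(frames, frames[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        velocity = (h0 - h1) / dt           # downward speed in m/s
        dropped = (h0 - h1) >= height_drop  # large height change between frames
        on_floor = h1 <= floor_height       # subject lying near the floor
        if dropped and velocity >= velocity_thresh and on_floor:
            return True
    return False

# Simulated depth-derived head heights: standing (~1.7 m), then a rapid drop.
fall_track = [(0.0, 1.70), (0.2, 1.65), (0.4, 0.90), (0.6, 0.25)]
walk_track = [(0.0, 1.70), (0.2, 1.68), (0.4, 1.69), (0.6, 1.67)]
print(detect_fall(fall_track))  # True
print(detect_fall(walk_track))  # False
```

Requiring all three cues together is what keeps the false-alarm rate down relative to single-sensor threshold systems: sitting down quickly passes the velocity check but fails the floor confirmation.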
2

Combining dense short range sensors and sparse long range sensors for mapping

Lin, Ismael January 2018 (has links)
Mapping is one of the main components of autonomous robots: the construction of a model of their environment from the information gathered by different sensors over time. These maps have different attributes depending on the type of sensor used for the reconstruction. In this thesis we focus on RGBD cameras and LiDARs. The data acquired with cameras is dense, but the range is short, which makes the construction of large-scale, consistent maps more challenging. LiDARs are the exact opposite: they give sparse data but can measure long ranges accurately, and therefore support large-scale mapping better. The thesis presents a method that uses both types of sensors, with the purpose of combining their strengths and reducing their weaknesses. The system is evaluated in an indoor environment with an autonomous robot. The result is a map that is robust in large environments and has dense information about the surroundings.
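The complementarity described above can be illustrated with a toy per-cell fusion rule: keep the dense camera depth wherever the camera is valid and in range, and fall back to sparse LiDAR returns beyond it. The range limit, the cell layout, and the `fuse_depth` helper are invented for illustration and are not the thesis's method.

```python
# Toy sketch of camera/LiDAR complementarity: dense short-range camera depth
# where available, sparse long-range LiDAR as fallback. Ranges and readings
# are invented; the thesis's actual fusion is more involved.

CAMERA_MAX_RANGE = 4.0  # metres; typical RGBD limit (assumption)

def fuse_depth(camera_depth, lidar_depth):
    """Per-cell fusion: camera where valid and in range, else LiDAR, else None."""
    fused = []
    for cam, lid in zip(camera_depth, lidar_depth):
        if cam is not None and cam <= CAMERA_MAX_RANGE:
            fused.append(cam)   # dense short-range measurement wins
        else:
            fused.append(lid)   # sparse long-range fallback (may be None)
    return fused

# One row of a depth grid: None marks cells the sensor could not measure.
camera = [1.2, 2.5, None, None, 3.9]
lidar  = [None, 2.6, 8.4, None, 4.0]
print(fuse_depth(camera, lidar))  # [1.2, 2.5, 8.4, None, 3.9]
```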
3

Recognizing and Detecting Errors in Exercises using Kinect Skeleton Data

Pidaparthy, Hemanth 28 May 2015 (has links)
No description available.
4

HUMAN ACTIVITY TRACKING AND RECOGNITION USING KINECT SENSOR

Lun, Roanna January 2017 (has links)
No description available.
5

Sistema computacional de medidas de colorações humanas para exame médico de sudorese / Human coloring measures computer system for medical sweat test

Rodrigues, Lucas Cerqueira, 1988- 27 August 2018 (has links)
Advisor: Marco Antonio Garcia de Carvalho / Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Tecnologia / Abstract: In medical research, the sweat test is used to highlight the regions of the body where the patient sweats, which helps the doctor identify possible lesions of the sympathetic nervous system. Studies on this test point to the lack of an automatic process for identifying these body regions. In this project, the Kinect® device was used to help address this problem. Created by Microsoft®, the Kinect® can scan 3D objects and provides a library for systems development. This work builds a computer system offering a semi-automatic solution for analyzing digital images from sweat tests. The system classifies the regions of the body where the patient sweats, through a 3D scan using the Kinect®, and generates a consolidated report so that the doctor can make a diagnosis quickly, easily and accurately. The project began in 2013 in the IMAGELab laboratory at FT/UNICAMP in Limeira/SP, with the support of a team from the USP Clinical Hospital in Ribeirão Preto/SP that studies the iodine-starch sweat test. The contribution of the work is the construction of the application, which uses the K-Means image segmentation algorithm to segment regions on the surface of the patient, together with the development of the system built around the Kinect®. The application was validated through experiments on real patients. / Mestrado / Tecnologia e Inovação / Mestre em Tecnologia
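The K-Means step mentioned in the abstract can be sketched on one-dimensional pixel intensities: stained (sweating) regions read dark in an iodine-starch image, dry skin reads bright. The cluster count, the sample intensities, and the `kmeans_1d` helper are illustrative assumptions, not details taken from the dissertation.

```python
# Minimal 1D K-Means sketch separating "stained" (dark) from "dry" (bright)
# pixels in a sweat-test image. Intensities and k=2 are assumptions.

def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar intensities into k groups; returns (centroids, labels)."""
    centroids = [min(values), max(values)][:k]  # spread the initial centroids
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: each centroid moves to its cluster mean.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Grayscale intensities: iodine-starch stained skin ~30, dry skin ~200.
pixels = [28, 35, 30, 210, 198, 205, 33, 201]
centroids, labels = kmeans_1d(pixels)
print(labels)  # [0, 0, 0, 1, 1, 1, 0, 1] — dark pixels in cluster 0
```

Real sweat-test images would cluster in color space per pixel, but the assignment/update loop is the same.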
6

Depth based Sensor Fusion in Object Detection and Tracking

Sikdar, Ankita 01 June 2018 (has links)
No description available.
7

Analyse quantifiée de l'asymétrie de la marche par application de Poincaré / Quantified analysis of gait asymmetry using the Poincaré map

Brignol, Arnaud 08 1900 (has links)
Gait plays an important role in daily life. The process appears easy and natural for healthy people, but various diseases (neurological, muscular, orthopedic...) can disturb the gait cycle to such an extent that walking becomes tedious or even impossible. This project applies the Poincaré plot to assess the gait asymmetry of a patient from a depth map acquired with a Kinect sensor. To validate the approach, 17 healthy subjects walked on a treadmill under different conditions: normal walking, and walking with a 5 cm thick sole under one foot. Poincaré descriptors are applied so as to assess the variability between a single step and the corresponding complete gait cycle. The results show that the variability thus obtained significantly discriminates normal walking from walking with a sole. This method, both simple to implement and precise enough to detect gait asymmetry, seems promising as an aid to clinical diagnosis.
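The Poincaré descriptors referred to above are conventionally the SD1/SD2 statistics of a lag-one scatter plot. The sketch below computes them over made-up step durations; the thesis applies the plot to depth-map-derived signals, so treat the series and the `poincare_descriptors` helper as assumptions.

```python
import math

# Standard Poincaré-plot descriptors (SD1/SD2) over a series of gait
# intervals. The sample step durations are invented for illustration.

def poincare_descriptors(series):
    """Return (SD1, SD2): short- and long-term variability of successive pairs."""
    x = series[:-1]  # interval n
    y = series[1:]   # interval n+1
    def std(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((e - m) ** 2 for e in v) / len(v))
    sd1 = std([b - a for a, b in zip(x, y)]) / math.sqrt(2)  # across identity line
    sd2 = std([b + a for a, b in zip(x, y)]) / math.sqrt(2)  # along identity line
    return sd1, sd2

symmetric  = [1.00, 1.01, 0.99, 1.00, 1.01, 0.99]  # normal gait (step times, s)
asymmetric = [1.00, 1.20, 1.00, 1.21, 0.99, 1.20]  # 5 cm sole under one foot
sd1_sym, _ = poincare_descriptors(symmetric)
sd1_asym, _ = poincare_descriptors(asymmetric)
print(sd1_asym > sd1_sym)  # True: asymmetry inflates step-to-step variability
```

An alternating long/short step pattern lands far from the identity line of the plot, which is exactly what SD1 measures.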
8

3D real time object recognition

Amplianitis, Konstantinos 01 March 2017 (has links)
Object recognition is a natural process of the human brain, performed in the visual cortex, that relies on a binocular depth-perception system to render a three-dimensional representation of the objects in a scene. Computer and software systems simulate the perception of three-dimensional environments with the aid of sensors that capture real-time images. Such images serve as input data for the analysis and development of algorithms, an essential ingredient in simulating the complexity of human vision, so as to achieve scene interpretation for object recognition similar to the way the human brain perceives it. The rapid pace of technological advancement in hardware and software continuously brings machine-based object recognition nearer to the human vision prototype. The key in this field is the development of algorithms that achieve robust scene interpretation. Considerable and significant effort has been carried out over the years in 2D object recognition, as opposed to 3D. It is within this context, and the scope of this dissertation, to contribute to the enhancement of 3D object recognition: a better interpretation and understanding of reality and of the relationships between objects in a scene. Through the use of low-cost commodity sensors such as the Microsoft Kinect, RGB and depth data of a scene are retrieved and manipulated in order to generate human-like visual perception data. The goal is to show how RGB and depth information can be used to develop a new class of 3D object recognition algorithms, analogous to the perception processed by the human brain.
9

基於 RGBD 影音串流之肢體表情語言表現評估 / Estimation and Evaluation of Body Language Using RGBD Data

吳怡潔, Wu, Yi Chieh Unknown Date (has links)
In this thesis, we capture body movements, facial expressions, and voice data of subjects in a presentation scenario using the RGBD-capable Kinect sensor. The acquired videos were assessed by a group of reviewers, who indicated their preference for or aversion to the presentation style in each period. We denote the two classes of ruling as Period of Like (POL) and Period of Dislike (POD), respectively. We then employ three types of image features provided by the Kinect SDK, namely animation units (AU), skeletal joints, and 3D face vertices, to analyze the consistency of the evaluation results, as well as the ability to classify unseen footage based on the training data supplied by 35 evaluators. Finally, we develop a prototype program that helps users identify the strengths and weaknesses of their presentations so that they can improve their skills accordingly.
10

Real-time Head Motion Tracking for Brain Positron Emission Tomography using Microsoft Kinect V2

Tsakiraki, Eleni January 2016 (has links)
The scope of this work was to evaluate the potential of the latest version of the Microsoft Kinect sensor (Kinect v2) as an external tracking device for head motion during brain imaging with Positron Emission Tomography (PET). Head movements are a serious degradation factor in acquired PET images. Although there are algorithms that implement motion correction using known motion data, the lack of effective and reliable motion tracking hardware has prevented their widespread adoption; the development of effective external tracking instrumentation is therefore a necessity. Kinect was tested both for the Siemens High-Resolution Research Tomograph (HRRT) and for the Siemens ECAT HR PET system. The face Application Programming Interface (API) 'HD face', released by Microsoft in June 2015, was modified and used in the Matlab environment. Multiple experimental sessions examined the head-tracking accuracy of Kinect in both translational and rotational movements of the head. The results were analyzed statistically using one-sample t-tests with the significance level set to 5%. It was found that Kinect v2 can track the head with a mean spatial accuracy of µ0 < 1 mm (SD = 0.8 mm) in the y-direction of the tomograph's camera, µ0 < 3 mm (SD = 1.5 mm) in the z-direction of the tomograph's camera, and µ0 < 1° (SD < 1°) for all angles. However, further validation needs to take place, and modifications are needed before Kinect can be used when acquiring PET data with the HRRT system: the small size of the HRRT's gantry (just over 30 cm in diameter) makes Kinect's tracking unstable when the whole head is inside the gantry. On the other hand, Kinect could be used to track the motion of the head inside the gantry of the HR system.
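The one-sample t-test used in the evaluation above can be sketched directly from its definition. The error values below are simulated, not the thesis's measurements, and `one_sample_t` is an illustrative helper rather than the author's code.

```python
import math

# Sketch of a one-sample t-test: does the mean tracking error differ from a
# hypothesized bound mu0? Error values are simulated, not the thesis data.

def one_sample_t(sample, mu0):
    """Return the t statistic for H0: population mean == mu0."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n)              # t = (x̄ - µ0) / (s/√n)

# Simulated y-axis head-position errors in mm from repeated Kinect readings.
errors_mm = [0.4, 0.7, 0.5, 0.9, 0.6, 0.8, 0.5, 0.7]
t_stat = one_sample_t(errors_mm, mu0=1.0)
print(t_stat < 0)  # mean error below the 1 mm bound gives a negative t
```

In practice the statistic would be compared against the t-distribution with n-1 degrees of freedom at the 5% level, as the thesis describes.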
