About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
41

Desenvolvimento de um aplicativo do Kinect para fins de intervenção com pacientes com hanseníase / Development of a Kinect application for intervention purposes with leprosy patients

Evelin Cristina Cadrieskt Ribeiro Mello 16 October 2015 (has links)
Interactive systems are increasingly present among the technological resources people use, and for interaction to occur these devices must be adapted to users' real needs. Ensuring the quality of interaction requires a focus on the usability principles developed by Jakob Nielsen (1994), improving accessibility, flexibility and efficiency of use so that the technological resource becomes an agent of change in this relationship. Objective: to develop an interactive panel using Kinect technology that provides information on self-care and the prevention of leprosy-related disabilities for patients and health professionals. Methodology: based on the consensual design model, which proposes a solution to the design problem and is divided into four phases: (1) informational design; (2) conceptual design; (3) preliminary design; and (4) detailed design. Results: a prototype was produced containing images, text and videos with information about leprosy. It is composed of material collected from the guidelines published by the Ministry of Health on leprosy care, plus a video demonstrating how the resource is accessed. All functions are operated through movements of the upper limbs: the person stands in front of the panel at a distance of 80 cm and selects the desired content with one hand, which becomes a "virtual hand" moved on the screen to select the instructional material. The functional and non-functional requirements specify legible, sharp images and text options designed for comprehension and access by the general population. Sixteen videos were developed that teach how to perform exercises to prevent disabilities and possible deformities, thereby encouraging self-care. Conclusion: educational material on leprosy that uses new technologies is scarce and little explored by rehabilitation professionals working with the disease. Investing in actions that make people better informed about their disease and more confident about their treatment can contribute to autonomy in part of their care, and new technologies can be an important ally in this process.
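As a rough illustration of the interaction just described — a tracked hand acting as the "virtual hand" that selects content on the panel — the sketch below maps a 3D hand position onto 2D screen coordinates and hit-tests it against menu tiles. It is not the author's implementation: the joint positions, panel resolution, reach normalisation and tile layout are all assumptions, and a real application would use the Kinect SDK's body-tracking and hand-pointer facilities.

```python
# Rough sketch (assumptions throughout): map a tracked hand position to a 2D
# cursor on the instructional panel and hit-test it against menu tiles.
# A real application would read joints from the Kinect SDK's body stream.
from dataclasses import dataclass

SCREEN_W, SCREEN_H = 1920, 1080            # assumed panel resolution

@dataclass
class Tile:
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def hand_to_cursor(hand_xyz, shoulder_xyz, reach_m=0.45):
    """Map the hand position, taken relative to the shoulder, onto the screen.
    `reach_m` is the assumed comfortable arm reach that spans the whole panel."""
    dx = hand_xyz[0] - shoulder_xyz[0]     # left/right offset in metres
    dy = hand_xyz[1] - shoulder_xyz[1]     # up/down offset in metres
    u = min(max((dx / reach_m + 1) / 2, 0.0), 1.0)    # normalise to [0, 1]
    v = min(max((-dy / reach_m + 1) / 2, 0.0), 1.0)   # screen y grows downward
    return u * SCREEN_W, v * SCREEN_H

tiles = [Tile("self-care video", 100, 200, 500, 300),
         Tile("exercise guide", 700, 200, 500, 300)]

cursor = hand_to_cursor(hand_xyz=(0.1, 0.35, 1.9), shoulder_xyz=(0.0, 0.3, 2.0))
print(cursor, [t.name for t in tiles if t.contains(*cursor)])
```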
42

Automatic assessment of growing-finishing pigs' weight through depth image analysis / Obtenção automática da massa de suínos em crescimento e terminação por meio da análise de imagens em profundidade

Isabella Cardoso Ferreira da Silva Condotta 02 February 2017 (has links)
A method of continuously monitoring weight would aid producers by ensuring that all pigs are gaining weight, and it would increase the precision of marketing pigs, thus saving money. Electronically monitoring weight without moving the pigs to a scale would also eliminate a source of stress. The development of methods for monitoring the physical condition of animals from a distance therefore appears necessary for obtaining higher-quality data. In pig production, weighing plays an important role in controlling the factors that affect herd performance and is an important part of production monitoring. This research aimed to extract pig weight data from depth images. First, five Kinect® depth sensors were validated to understand their accuracy, and equations were generated to correct the dimensional data (length, area and volume) provided by these sensors for any distance between the sensor and the animals. Depth images and weights of finishing pigs (gilts and barrows) of three commercial lines (Landrace-, Duroc- and Yorkshire-based) were acquired, and the images were analyzed with MATLAB (2016a). The pigs in the images were segmented by depth differences, and their volumes were calculated and then adjusted using the correction equations developed. Pig dimensions were also acquired to update existing data. Curves of weight versus corrected volume and of corrected dimensions versus weight were fitted. Equations for predicting weight from volume were fitted for gilts and barrows and for each of the three commercial lines. A reduced equation for all the data, ignoring differences between sexes and genetic lines, was also fitted and compared with the individual equations using Efroymson's algorithm. There was no significant difference between the reduced equation and the individual equations for barrows and gilts (p < 0.05), nor between the global equation and the individual equations for each of the three sire lines (p < 0.05). The global equation predicts weight from the depth sensor with an R² of 0.9905. The results of this study therefore show that a depth sensor is a reasonable approach to continuously monitoring weight.
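The processing chain described in this abstract — segment the pig from an overhead depth image, integrate a volume, correct it, and feed it to a fitted weight equation — can be sketched as follows. This is only an illustration of the idea: the threshold, pixel-area factor and regression coefficients are invented placeholders, not the correction equations or fitted models reported in the thesis.

```python
# Illustrative sketch (not the thesis' fitted equations): segment the pig from an
# overhead depth image by depth thresholding, integrate its volume above the
# floor, and predict body mass with a placeholder linear regression.
import numpy as np

def pig_volume_m3(depth_m, floor_depth_m, min_height_m=0.25, px_area_m2=1.0e-5):
    """Sum of (floor depth - pixel depth) over pig pixels. `px_area_m2` is the
    assumed floor-plane area seen by one pixel; the thesis instead derives a
    distance-dependent correction for the sensor's dimensional data."""
    height = floor_depth_m - depth_m              # height above the floor, metres
    pig_mask = height > min_height_m              # pixels standing out from the floor
    return float(np.sum(height[pig_mask]) * px_area_m2), pig_mask

def predict_mass_kg(volume_m3, a=1050.0, b=2.0):
    """Hypothetical model mass = a * volume + b; the study fits sex-, line-specific
    and global equations and reports an R^2 of 0.9905 for the global one."""
    return a * volume_m3 + b

# Synthetic example: sensor 2.5 m above the floor, one 0.45 m tall 'pig' blob.
depth = np.full((424, 512), 2.5)
depth[160:260, 150:400] = 2.05
volume, mask = pig_volume_m3(depth, floor_depth_m=2.5)
print(round(volume, 4), "m^3 ->", round(predict_mass_kg(volume), 1), "kg")
```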
43

First Response to Emergency Situation in a Smart Environment using a Mobile Robot

Lazzaro, Gloria January 2015 (has links)
In recent years the growing number of elderly people has become one of the major social challenges for most developed countries. More than one third of elderly people fall at least once a year and are often unable to get up again unaided, especially if they live alone. Smart homes can provide efficient and cost-effective solutions, using technology to sense the environment and help detect the occurrence of a possibly dangerous situation. Robotic assistance is one of the most promising technologies for recognizing a fallen person and helping him or her in case of danger. This dissertation presents two methods, first to detect and then to recognize the presence or absence of a human being on the ground. The first method is based on Kinect depth images, thresholding and blob analysis for detecting human presence. The second is a GLCM feature-based method, evaluated with two different classifiers, a Support Vector Machine (SVM) and an Artificial Neural Network (ANN), for distinguishing humans from non-humans. Results show that the SVM and ANN classify the presence of a person with accuracies of 76.5% and 85.6%, respectively. This shows that these methods can potentially be used to recognize the presence or absence of a fallen person lying on the floor.
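A minimal sketch of the second route (GLCM texture features classified by an SVM) is given below, assuming the scikit-image and scikit-learn libraries; the distances, angles, property set and synthetic training data are illustrative choices, not the configuration used in the dissertation.

```python
# Minimal sketch of the GLCM + SVM route, assuming scikit-image and scikit-learn;
# distances, angles, properties and the synthetic training blobs are illustrative,
# not the configuration evaluated in the dissertation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

def glcm_features(depth_patch_u8):
    """Texture features of a candidate blob cropped from a Kinect depth image."""
    glcm = graycomatrix(depth_patch_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

# Train a human / non-human classifier on labelled blobs (random stand-ins here).
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = np.tile([1, 0], 20)                  # 1 = fallen person, 0 = other object
X = np.stack([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print("predicted:", clf.predict(X[:5]))
```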
44

Determining the Quality of Human Movement using Kinect Data

Thati, Satish Kumar, Mareedu, Venkata Praneeth January 2017 (has links)
Health is one of the most important elements of every individual's life. Despite much advancement in science, the quality of healthcare has often fallen short, and this appears to be especially true in physiotherapy: the analysis of human joints and bodies and the provision of remedies for pains or injuries that affect the physiology of the body. To give patients top-quality analysis and treatment, either the number of doctors must increase or there must be an alternative that can stand in for a doctor. Our Master's thesis is aimed at developing a prototype that can help deliver high-standard healthcare to millions. Methods: The Microsoft Kinect SDK 2.0 is used to develop the prototype. The study shows that Kinect can be used both as a marker-based and as a markerless system for tracking human motion. The angles formed by the motion of five joints, namely the shoulder, elbow, hip, knee and ankle, were calculated. The device contains infrared, depth and colour sensors. Depth data are used to identify the parts of the human body from pixel intensity information, and the located parts are mapped onto the RGB colour frame. The images resulting from the Kinect skeleton mode were treated as the markerless system's output and used to calculate the angles of the same joints. In this project, data generated by the movement tracking algorithm for the Posture Side and Deep Squat Side movements were collected and stored for further evaluation. Results: Based on the data collected, our system automatically evaluates the quality of the movement performed by the user. The system detected problems in static posture and in the deep squat, based on a physiotherapist's feedback on the system.
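The joint-angle computation at the heart of this kind of Kinect-based assessment can be sketched as the angle between the two limb-segment vectors meeting at a joint. The example below is a generic sketch with made-up coordinates, not the thesis' implementation on Kinect SDK 2.0 data.

```python
# Generic sketch of the joint-angle computation: the angle at a joint (here the
# knee) is the angle between the two limb-segment vectors that meet at it.
# Coordinates are made up; a real system would read them from Kinect SDK 2.0.
import numpy as np

def joint_angle_deg(proximal, joint, distal):
    """Angle in degrees at `joint` between the segments joint->proximal and
    joint->distal, each given as a 3D camera-space position in metres."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

# Example: hip, knee and ankle positions during a squat (illustrative values).
hip, knee, ankle = (0.0, 0.9, 2.0), (0.05, 0.5, 1.95), (0.05, 0.1, 2.05)
print(f"knee flexion angle: {joint_angle_deg(hip, knee, ankle):.1f} deg")
```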
45

Development of a Multimodal Human-computer Interface for the Control of a Mobile Robot

Jacques, Maxime 07 June 2012 (has links)
The recent advent of consumer-grade Brain-Computer Interfaces (BCIs) provides a new, revolutionary and accessible way to control computers. BCIs translate cognitive electroencephalography (EEG) signals into computer or robotic commands using specially built headsets. Capable of enhancing traditional interfaces that require interaction with a keyboard, mouse or touchscreen, BCI systems present tremendous opportunities to benefit various fields; movement-restricted users in particular can benefit from these interfaces. In this thesis, we present a new way to interface a consumer-grade BCI solution to a mobile robot. A Red-Green-Blue-Depth (RGBD) camera is used to enhance the navigation of the robot with cognitive thoughts as commands. We introduce an interface presenting three different methods of robot control: (1) a fully manual mode, where a cognitive signal is interpreted as a command; (2) a control-flow manual mode, which reduces the likelihood of false-positive commands; and (3) an automatic mode assisted by a remote RGBD camera. We study the application of this work by navigating the mobile robot on a planar surface using the different control methods while measuring the accuracy and usability of the system. Finally, we assess the newly designed interface's role in the design of future generations of BCI solutions.
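One plausible reading of the "control-flow manual mode" is a gating layer that forwards a cognitive command only after it has been detected repeatedly within a short window, which suppresses isolated false positives. The sketch below is an assumption about how such a gate might look; the window length, hit count and command names are not taken from the thesis.

```python
# Hypothetical sketch of a confirmation gate: a raw BCI detection is forwarded to
# the robot only if the same command is seen several times within a short window,
# suppressing isolated false positives. Window length, hit count and command
# names are assumptions, not taken from the thesis.
import time
from collections import deque

class CommandGate:
    def __init__(self, window_s=1.5, required_hits=3):
        self.window_s = window_s
        self.required_hits = required_hits
        self.events = deque()                     # (timestamp, command) pairs

    def push(self, command, now=None):
        """Record a raw detection; return the command once it is confirmed."""
        now = time.monotonic() if now is None else now
        self.events.append((now, command))
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()                 # drop stale detections
        if sum(1 for _, c in self.events if c == command) >= self.required_hits:
            self.events.clear()                   # reset after firing
            return command
        return None

gate = CommandGate()
for t, cmd in [(0.0, "forward"), (0.4, "forward"), (0.7, "left"), (0.9, "forward")]:
    fired = gate.push(cmd, now=t)
    if fired:
        print(f"t={t:.1f}s -> send '{fired}' to the robot")
```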
46

Coordinated Landing and Mapping with Aerial and Ground Vehicle Teams

Ma, Yan 17 September 2012 (has links)
Micro Unmanned Aerial Vehicle (UAV) and Unmanned Ground Vehicle (UGV) teams present tremendous opportunities for expanding the range of operations of these vehicles. Effective coordination can exploit the strengths of both while mitigating each other's weaknesses. In particular, a micro UAV typically has limited flight time because of its weak payload capacity. To take advantage of the mobility and sensor coverage of a micro UAV in long-range, long-duration surveillance missions, a UGV can act as a mobile station for recharging or battery swaps, and the ability to perform autonomous docking is a prerequisite for such operations. This work presents an approach to coordinating autonomous docking between a quadrotor UAV and a skid-steered UGV. A joint controller is designed to eliminate the relative position error between the vehicles. The controller is validated in simulation, and successful landings are achieved both indoors and in outdoor settings with standard sensors and real disturbances. Another goal of this work is to improve the autonomy of UAV-UGV teams in positioning-denied environments, a very common scenario in many robotics applications. In such environments, Simultaneous Localization and Mapping (SLAM) capability is the foundation for all autonomous operations: a successful SLAM algorithm generates maps for path planning and object recognition while providing localization information for position tracking. This work proposes a SLAM algorithm that is capable of generating a high-fidelity surface model of the surroundings while accurately estimating the camera pose in real time. The algorithm improves on a clear deficiency of its predecessor by performing dense reconstruction without a strict volume limitation, enabling practical deployment on robotic systems.
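As a toy illustration of driving the relative position error between the vehicles to zero during docking, the sketch below applies a proportional velocity command to the UAV and a gentle descent once it is roughly above the UGV. The gains, thresholds and first-order kinematics are assumptions for illustration; the thesis designs and validates a joint controller acting on both vehicles.

```python
# Toy sketch: a proportional velocity command drives the UAV toward the UGV's
# landing pad and descends once roughly above it. Gains, thresholds and the
# simplified kinematics are assumptions, not the thesis' joint controller.
import numpy as np

def docking_step(uav_pos, ugv_pos, kp=0.8, descend_rate=0.2, dt=0.05):
    """One control step; positions are (x, y, z) in metres, z up."""
    error = np.asarray(ugv_pos, float) - np.asarray(uav_pos, float)
    vel_cmd = kp * error                          # proportional term
    if np.linalg.norm(error[:2]) < 0.15:          # nearly above the pad
        vel_cmd[2] = -descend_rate                # gentle constant descent
    else:
        vel_cmd[2] = 0.0                          # hold altitude while converging
    return np.asarray(uav_pos, float) + vel_cmd * dt

uav, ugv = np.array([1.0, -0.5, 1.5]), np.array([0.0, 0.0, 0.0])
while uav[2] > 0.02:                              # run until simulated touchdown
    uav = docking_step(uav, ugv)
print("touchdown position:", np.round(uav, 2))
```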
47

Human detection and action recognition using depth information by Kinect

Xia, Lu, active 21st century 10 July 2012 (has links)
Traditional computer vision algorithms depend on information captured by visible-light cameras, but this data source has inherent limitations: it is sensitive to illumination changes, occlusions and background clutter. Range sensors provide 3D structural information about the scene and are robust to changes in colour and illumination. In this thesis, we present a series of approaches that use the depth information provided by Kinect to address human detection and action recognition. Given the depth information, the basic problem we consider is detecting humans in the scene. We propose a model-based approach comprising a 2D head contour detector and a 3D head surface detector, together with a segmentation scheme that separates the person from the surroundings based on the detection point and extracts the subject's whole body. We also explore a tracking algorithm based on our detection results. The methods are tested on a dataset we collected and yield superior results compared with existing algorithms. Building on the detection results, we further study the recognition of actions. We present a novel approach to human action recognition using histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolution of these visual words is modelled with discrete hidden Markov models (HMMs). In addition, owing to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset, which is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals from varied views. Our method runs in real time and achieves superior results on this challenging dataset. We also tested our algorithm on the MSR Action3D dataset, where it outperforms the existing algorithm in most cases.
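The posture descriptor at the core of this approach can be sketched as a histogram over the spherical coordinates of the joints around a body-centred origin. The snippet below is a simplified illustration with assumed bin counts and reference frame; the full method additionally reprojects the histograms with LDA, clusters them into posture visual words and models their temporal evolution with HMMs.

```python
# Simplified sketch of the HOJ3D idea: express the joints in a body-centred frame
# (origin at the hip centre) and histogram their spherical angles. Bin counts and
# the reference frame are assumptions; the full method additionally reprojects
# the histograms with LDA, clusters them into posture visual words and models
# their temporal evolution with discrete HMMs.
import numpy as np

def hoj3d_descriptor(joints_xyz, hip_center, n_azimuth=12, n_elevation=6):
    """joints_xyz: (N, 3) joint positions; returns a flattened 2D histogram over
    (azimuth, elevation) of the joints around the hip centre."""
    rel = np.asarray(joints_xyz, float) - np.asarray(hip_center, float)
    azimuth = np.arctan2(rel[:, 1], rel[:, 0])                    # [-pi, pi]
    elevation = np.arctan2(rel[:, 2], np.linalg.norm(rel[:, :2], axis=1))
    hist, _, _ = np.histogram2d(azimuth, elevation,
                                bins=[n_azimuth, n_elevation],
                                range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist.ravel() / max(len(rel), 1)                        # normalised

# One frame of 20 stand-in joints (random values in place of a Kinect skeleton).
rng = np.random.default_rng(1)
frame = rng.normal(scale=0.4, size=(20, 3)) + np.array([0.0, 0.0, 2.0])
print(hoj3d_descriptor(frame, hip_center=(0.0, 0.0, 2.0)).shape)  # (72,)
```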
48

Joint color-depth restoration with kinect depth camera and its applications to image-based rendering and hand gesture recognition

Wang, Chong, 王翀 January 2014 (has links)
Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
49

Indoor 3D Mapping using Kinect / Kartering av inomhusmiljöer med Kinect

Bengtsson, Morgan January 2014 (has links)
In recent years several depth cameras have emerged on the consumer market, creating many interesting possibilities for both professional and recreational use. One example of such a camera is the Microsoft Kinect sensor, originally used with the Microsoft Xbox 360 game console. In this master's thesis a system is presented that uses this device to create a 3D reconstruction of an indoor environment that is as accurate as possible. The major novelty of the presented system is the data structure, based on signed distance fields and voxel octrees, used to represent the observed environment.
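The signed-distance-field representation mentioned above can be illustrated with a much-simplified sketch: each voxel stores a truncated signed distance to the observed surface, fused over measurements as a running weighted average. The flat per-ray grid below stands in for the thesis' voxel octree, and the depth values are invented.

```python
# Much-simplified sketch of the signed-distance-field representation: each voxel
# stores a truncated signed distance to the observed surface, fused over depth
# measurements as a running weighted average. A flat per-ray grid stands in for
# the thesis' voxel octree, and the depth values below are invented.
import numpy as np

class TsdfColumn:
    """Voxels along one camera ray, spaced `voxel_m` apart."""
    def __init__(self, n_voxels=64, voxel_m=0.05, trunc_m=0.15):
        self.depths = np.arange(n_voxels) * voxel_m   # distance of each voxel
        self.trunc_m = trunc_m
        self.tsdf = np.ones(n_voxels)                 # +1 = well in front of surface
        self.weight = np.zeros(n_voxels)

    def integrate(self, measured_depth_m):
        """Fuse one measurement: positive before the surface, negative behind it,
        truncated to [-1, 1]."""
        sdf = np.clip((measured_depth_m - self.depths) / self.trunc_m, -1.0, 1.0)
        new_w = self.weight + 1.0
        self.tsdf = (self.tsdf * self.weight + sdf) / new_w
        self.weight = new_w

    def surface_index(self):
        """Approximate surface location: first positive-to-negative sign change."""
        crossings = np.where(np.diff(np.sign(self.tsdf)) < 0)[0]
        return int(crossings[0]) if len(crossings) else None

col = TsdfColumn()
for noisy_depth in (1.62, 1.58, 1.60):                # three noisy observations
    col.integrate(noisy_depth)
idx = col.surface_index()
print("fused surface near", round(col.depths[idx], 2), "m")
```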
