  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Bezdotykové řízení stacionárního manipulátoru / Touchless Control of a Stationary Manipulator

Slaný, Vlastimil January 2014 (has links)
This diploma thesis describes the design and implementation of touchless control for a stationary manipulator. Part of the thesis is the creation of a custom library for controlling the stationary manipulator.
12

Využití platformy Kinect pro marketing / Using the Kinect Platform for Marketing

Weigl, Libor January 2013 (has links)
No description available.
13

Telepresence using Kinect and an animated robotic face : An experimental study regarding the sufficiency of using a subset of the CANDIDE-3 model and the Microsoft Kinect Face Tracking device for capturing and animating the most typical facial expressions / Telepresence med hjälp av Kinect och ett animerat robotiserat ansikte : En experimentell studie om hur väl en delmängd av CANDIDE-3-modellen och Microsoft Kinects ansiktsigenkänning kan fånga och animera de mest typiska ansiktsuttrycken

Linder, Johannes, Gudmandsen, Magnus January 2013 (has links)
This Bachelor's Thesis in Computer Science investigates the use of the parameterised facial animation model named CANDIDE-3 (J. Ahlberg, 2001) in telepresence communication with relatively cheap hardware. An experimental study was conducted to evaluate how well an implementation using the Microsoft Kinect Face Tracking device could capture and animate the 6 classical emotional states: joy, sadness, surprise, anger, fear and disgust. A total of 80 test candidates took part in a survey where they were to try to classify the emotional states of images of photographed and animated faces. The animated faces were created using the prototype system built for the purpose of the survey and rendered onto the robotic Furhat face (Al Moubayed, S., Skantze, G., Beskow, J., Stefanov, K., & Gustafson, J., 2012). Results showed that a person's emotional state is preserved very well through the animation technique used, and for some basic emotions, like joy or sadness, the animation could even amplify the emotional state for the viewer. However, the 6 Action Units captured from the Kinect device were not enough to sufficiently distinguish between even some of the most basic emotional states (e.g. disgust, anger).
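As background to the animation technique named above: CANDIDE-3 is deformed by adding weighted per-animation-unit vertex displacements to a neutral mesh. The following Python sketch shows only that deformation step; the displacement basis, the weight values and the mesh data are random stand-ins, not the thesis's actual model or the Kinect SDK's output.

    import numpy as np

    def animate_mesh(neutral, au_displacements, alpha):
        # CANDIDE-style deformation: deformed = neutral + A @ alpha, where the
        # columns of A are per-animation-unit vertex displacements and alpha
        # holds the per-frame weights reported by a face tracker.
        return neutral + au_displacements @ alpha

    n_vertices, n_aus = 113, 6                 # CANDIDE-3 has 113 vertices; 6 tracked AUs
    rng = np.random.default_rng(0)
    neutral = rng.normal(size=(3 * n_vertices,))               # stand-in neutral mesh
    A = rng.normal(scale=0.05, size=(3 * n_vertices, n_aus))   # stand-in displacement basis
    alpha = np.array([0.1, 0.7, 0.0, -0.2, 0.0, 0.3])          # hypothetical tracker weights
    deformed = animate_mesh(neutral, A, alpha)
    print(deformed.shape)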
14

Design and Teleoperative Control of Humanoid Robot Upper Body for Task-driven Assistance

Stevens, Michael Alexander 28 May 2013 (has links)
Both civilian and defense industries rely heavily on robotics, which continues to gain a more prominent role. For example, defense strategies in the Middle East have relied upon robotic drones and teleoperated assistant robots for mission-oriented tasks. These operations have been crucial in saving the lives of soldiers and giving us the edge in mitigating disasters. Future assistive robots will interact directly with humans and will reside in normal human environments. As technology continues to advance, the focus will shift toward eliminating direct human control and replacing it with higher-level autonomy. Further, advances in electronics and electromechanical components will reduce cost and make assistive robotics accessible to the masses. This thesis focuses on robotic teleoperation technology and the future high-level control of assistant robots. A dexterous 16-degree-of-freedom hand with bend sensors for precise joint positioning was designed, modeled, fabricated and characterized. The design features a unique motor actuation mechanism that was 3D printed to reduce cost and increase modularity. The upper body was designed to be biomimetic, with dimensions similar to those of a typical six-foot-tall male. The upper body of the humanoid consists of a 4-degree-of-freedom shoulder and upper arm with direct feedback at each joint. A theoretical nonlinear switching controller was designed to control these 4 degrees of freedom. The entire system is teleoperated with an Xbox Kinect that tracks the skeletal points of a user and maps these 3D points to the joints of the humanoid upper body. This allows direct user control of a robotic assistive upper body with nothing more than a human demonstrating the desired movements. / Master of Science
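To illustrate the kind of mapping described above (tracked skeletal points driving the joints of the upper body), the sketch below computes a single joint angle from three 3-D skeleton points. The joint positions are invented, and the thesis's actual control pipeline is not reproduced here.

    import numpy as np

    def joint_angle(a, b, c):
        # Angle (radians) at joint b formed by 3-D points a-b-c,
        # e.g. shoulder-elbow-wrist from a tracked skeleton.
        u = np.asarray(a) - np.asarray(b)
        v = np.asarray(c) - np.asarray(b)
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    # Hypothetical skeleton joint positions (metres, camera frame)
    shoulder, elbow, wrist = (0.0, 0.4, 2.0), (0.05, 0.15, 2.0), (0.3, 0.15, 1.9)
    elbow_flexion = np.degrees(joint_angle(shoulder, elbow, wrist))
    print(f"elbow flexion command: {elbow_flexion:.1f} deg")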
15

Investigations of stereo setup for Kinect

Manuylova, Ekaterina January 2012 (has links)
The main purpose of this work is to investigate the behavior of the recently released Microsoft Kinect sensor, whose capabilities go beyond those of ordinary cameras. Normally, two cameras are required to create a 3D reconstruction of a scene. The Kinect device, thanks to its infrared projector and sensor, allows the same type of reconstruction to be created using only one device. However, the depth images generated by the Kinect's infrared laser projector and monochrome sensor can contain undefined values. Therefore, in addition to other investigations, this project presents an idea for improving the quality of the depth images. The main aim of this work, however, is to perform a reconstruction of the scene based on the color images from a pair of Kinects, which is compared with the results generated using depth information from a single Kinect. The report also describes how to verify that all the performed calculations are correct. All the algorithms used in the project, as well as the achieved results, are described and discussed in separate chapters of this report.
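The abstract notes that Kinect depth images can contain undefined values and that the project includes an idea for improving their quality, but the specific method is not given. The sketch below shows one common, simple repair as an illustration only: filling zero-valued pixels with the median of their valid neighbours.

    import numpy as np

    def fill_depth_holes(depth, window=2):
        # Replace undefined (zero) depth pixels with the median of the valid
        # pixels in a (2*window+1)^2 neighbourhood, where any exist.
        filled = depth.copy()
        rows, cols = np.nonzero(depth == 0)
        for r, c in zip(rows, cols):
            patch = depth[max(r - window, 0):r + window + 1,
                          max(c - window, 0):c + window + 1]
            valid = patch[patch > 0]
            if valid.size:
                filled[r, c] = np.median(valid)
        return filled

    # Toy 16-bit depth map with a single hole at the centre
    depth = np.full((5, 5), 1200, dtype=np.uint16)
    depth[2, 2] = 0
    print(fill_depth_holes(depth)[2, 2])   # -> 1200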
16

Key body pose detection and movement assessment of fitness performances

Fernandez de Dios, Pablo January 2015 (has links)
Motion segmentation plays an important role in human motion analysis. Understanding the intrinsic features of human activities represents a challenge for modern science. Current solutions usually involve computationally demanding processing and achieve the best results using expensive, intrusive motion capture devices. In this thesis, research has been carried out to develop a series of methods for affordable and effective human motion assessment in the context of stand-up physical exercises. The objective of the research was to address the need for an autonomous system that could be deployed in nursing homes or elderly people's houses, as well as in the rehabilitation of high-profile sports performers. Firstly, it has to be designed so that instructions on physical exercises, especially in the case of elderly people, can be delivered in an understandable way. Secondly, it has to deal with the problem that some individuals may find it difficult to keep up with the programme due to physical impediments. They may also be discouraged because the activities are not stimulating or the instructions are hard to follow. In this thesis, a series of methods for automatic assessment production, as a combination of worded feedback and motion visualisation, is presented. The methods comprise two major steps. First, a series of key body poses is identified using a model built by a multi-class classifier from a set of frame-wise features extracted from the motion data. Second, motion alignment (or synchronisation) with a reference performance (the tutor) is established in order to produce a second assessment model. A numerical assessment, followed by textual feedback, is delivered to the user along with a 3D skeletal animation to enrich the assessment experience. This animation is produced after the demonstration of the expert is transformed to the current level of performance of the user, in order to help encourage them to engage with the programme. The key body pose identification stage follows a two-step approach: first, the principal components of the input motion data are calculated in order to reduce the dimensionality of the input. Then, candidate key body poses are inferred using multi-class, supervised machine learning techniques from a set of training samples. Finally, cluster analysis is used to refine the result. Key body pose identification is guaranteed to be invariant to the repetitiveness and symmetry of the performance. Results show the effectiveness of the proposed approach by comparing it against Dynamic Time Warping and Hierarchical Aligned Cluster Analysis. The synchronisation sub-system takes advantage of the cyclic nature of the stretches that are part of the stand-up exercises under study in order to remove out-of-sequence key body poses (i.e., false positives). Two approaches are considered for performing cycle analysis: a sequential, trivial algorithm and a proposed Genetic Algorithm, with and without prior knowledge of cyclic sequence patterns. These two approaches are compared, and the Genetic Algorithm with prior knowledge shows a lower rate of false positives, but also a higher false negative rate. The GAs are also evaluated with randomly generated periodic string sequences. The automatic assessment follows a similar approach to that of key body pose identification. A multi-class, multi-target machine learning classifier is trained with features extracted from previous motion alignment.
The inferred numerical assessment levels (one per identified key body pose and involved body joint) are translated into human-understandable language via a highly customisable, context-free grammar. Finally, visual feedback is produced in the form of a synchronised skeletal animation of both the user's performance and the tutor's. If the user's performance is well below standard, an affine offset transformation of the skeletal motion data series to an in-between performance is performed, in order to avoid discouraging the user while still providing a reference for improvement. At the end of this thesis, the limitations of the methods in real circumstances are explored. Issues such as gimbal lock in the angular motion data, the limited accuracy of the motion capture system, and the growth of the training set are discussed. Finally, some conclusions are drawn and future work is discussed.
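As an illustration of the two-step key body pose identification described above (dimensionality reduction with PCA followed by a supervised multi-class classifier over frame-wise features), here is a minimal scikit-learn sketch. The classifier choice, feature dimensions and random data are assumptions, and the cluster-analysis refinement step is omitted.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Frame-wise features (e.g. joint angles) labelled with a key-pose class,
    # 0 meaning "not a key pose". Random data stands in for real mocap frames.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 60))          # 500 frames x 60 features
    y_train = rng.integers(0, 4, size=500)        # 4 classes: none + 3 key poses

    model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
    model.fit(X_train, y_train)

    X_new = rng.normal(size=(100, 60))            # frames of a new performance
    candidates = model.predict(X_new)             # per-frame key-pose candidates
    print(np.bincount(candidates))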
17

Showing the Point: Understanding and Representing Deixis over Surfaces

February 2013 (has links)
Deictic gestures, which often manifest as pointing, are an important part of interpersonal communication over shared artifacts on surfaces, such as a map on a table. However, in computer-supported distributed settings, deictic gestures can be difficult to see and understand. This problem can be solved through visualizing hands and arms above distributed surfaces, but current solutions are computationally and programmatically expensive, rely on a limited understanding of how gestures are executed and used, and remain largely unevaluated with regards to their effectiveness. This dissertation describes a solution to these problems in four parts: 1. Qualitative observational studies, both laboratory-based and in the wild, that lead to a greater understanding of how gestures are made over surfaces and what parts of a gesture are important to represent. In particular, these observations identified the height of a gesture as a characteristic not well-supported in distributed groupware. 2. A description of the design space available for representing gestures and candidate designs for showing the height of a gesture in distributed groupware. 3. Experimental evaluations of embodiments that include the representation of gesture height. 4. A toolkit for facilitating the capture and representation of gestures in distributed groupware. This work is the first to describe how deictic gestures are made over surfaces and how to visualize these gestures in distributed settings. The KinectArms Toolkit is the first toolkit to allow developers to add rich arm and hand representations to groupware without undue cost or development effort. This work is important because it provides researchers, designers, and developers with new tools for understanding and supporting communication in distributed settings.
18

Humanoid Robot Behavior Emulation and Representation

Zheng, Yu-An 12 September 2012 (has links)
The objective of this thesis is to utilize body-sensing technology to develop a more intuitive and convenient way to control robots. The idea is to build a body-sensing control system based on the Kinect framework. Through Kinect, users from different age groups can achieve the desired purposes through motion demonstration without complicated programming. The system can accurately calculate angle changes from users' gestures in a motion and identify key postures, which can be composed into an emulated motion similar to the demonstrated one. In other words, by analyzing these key postures, the demonstrated behaviors can be represented internally. Therefore, the system, consisting of a kinematics computational module and a representation algorithm, provides not only behavior emulation but also behavior representation. Through the representation algorithm, the system extracts the features of combined behaviors. In addition, with a modular programming methodology, different behaviors can be reorganized to generate new behaviors based on the set of key poses represented by the extracted features. The application implemented in this system runs within the OpenNI and NITE environment. OpenNI is used to retrieve the information that the Kinect captures. NITE is used to track the user skeleton. The system is demonstrated by a performance of "Tai-Ji-Advancer", available at http://www.youtube.com/watch?v=cSYS49JKVAA.
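The abstract states that key postures are identified from the angle changes in a demonstrated motion, without specifying the criterion. One plausible reading is to pick frames where the demonstrator briefly holds still; the Python sketch below implements that reading only as an assumption, with an invented velocity threshold and toy data.

    import numpy as np

    def key_posture_frames(joint_angles, vel_threshold=0.05):
        # Pick frames where the total joint-angle change between consecutive
        # frames drops below a threshold, i.e. the demonstrator holds still.
        # joint_angles: (T, J) array of joint angles (radians) over T frames.
        velocity = np.abs(np.diff(joint_angles, axis=0)).sum(axis=1)
        return np.nonzero(velocity < vel_threshold)[0] + 1

    # Toy motion: a single joint sweeping, pausing, then sweeping back
    angles = np.concatenate([np.linspace(0, 1, 20),
                             np.full(10, 1.0),
                             np.linspace(1, 0, 20)]).reshape(-1, 1)
    print(key_posture_frames(angles))   # frames inside the pause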
19

Design and Realization of the Gesture-Interaction System Based on Kinect

Xu, Jie January 2014 (has links)
In the past 20 years humans have mostly used a mouse to interact with computers. However, with the rapidly growing use of computers, a need for alternative means of interaction has emerged. With the advent of Kinect, a brand-new way of human-computer interaction has been introduced. It allows the use of gestures - the most natural body language - to communicate with computers, helping us get rid of traditional constraints and providing an intuitive method for executing operations. This thesis presents how to design and implement a program that helps people interact with computers, without the traditional mouse, with the support of a Kinect device (an XNA Game framework with the Microsoft Kinect SDK v1.7). For dynamic gesture recognition, the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW) are considered. The choice of DTW is motivated by experimental analysis. A dynamic-gesture-recognition program is developed, based on DTW, to help computers recognize gestures customized by users. The experiments also show that DTW can achieve rather good performance. For further development, the XNA Game 4.0 framework, which integrates Kinect body tracking with the DTW gesture recognition, is introduced. Finally, a functional test is conducted on the interaction system. In addition to summarizing the results, the thesis also discusses what can be improved in the future.
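For readers unfamiliar with the technique named above, DTW aligns two sequences that run at different speeds and returns an accumulated distance. The sketch below is a minimal Python illustration of that idea, not code from the thesis; the feature vectors and sequence lengths are invented for the example.

    import numpy as np

    def dtw_distance(a, b):
        # Dynamic Time Warping distance between two sequences of feature
        # vectors, e.g. per-frame skeleton coordinates of two gestures.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # A stored gesture template and a slower live performance of the same path
    template = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
    live = np.linspace(0.0, 1.0, 35).reshape(-1, 1)
    print(dtw_distance(template, live))
    # Recognition would compare the live gesture against every stored template
    # and report the one with the smallest DTW distance.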
20

Bezdotykové řízení polohy koncového efektoru manipulátoru / Touchless Position Control of a Manipulator End Effector

Kolaja, Josef January 2015 (has links)
The main aim of this diploma thesis is to design and implement position control of the end effector of a robotic arm with five degrees of freedom. The end effector reproduces the position of the operator's hand in real time. A Kinect sensor is used for tracking the position of the operator. Selected problems from the fields of human body tracking, robotic arms, and inverse kinematics are discussed in the first part of the thesis. The second part contains a detailed description of the designed solution, which consists of tracking the operator's palm, solving the inverse kinematics problem, and developing the control software.
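The thesis's own five-degree-of-freedom inverse kinematics solution is not reproduced in the abstract. As a simpler stand-in for the idea of turning a tracked palm position into joint angles, the sketch below solves the classic analytic inverse kinematics of a planar two-link arm; the link lengths and target position are invented.

    import numpy as np

    def two_link_ik(x, y, l1, l2):
        # Analytic inverse kinematics for a planar 2-link arm: returns
        # (shoulder, elbow) angles that place the end effector at (x, y).
        d2 = x * x + y * y
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if abs(cos_elbow) > 1:
            raise ValueError("target out of reach")
        elbow = np.arccos(cos_elbow)
        shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow),
                                                 l1 + l2 * np.cos(elbow))
        return shoulder, elbow

    # Drive the arm toward a tracked palm position (metres, in the arm plane)
    palm_x, palm_y = 0.25, 0.30
    print([np.degrees(a) for a in two_link_ik(palm_x, palm_y, l1=0.20, l2=0.20)])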
