11

Cost-Efficient Video Interactions for Virtual Training Environment

Gasparyan, Arsen 28 June 2007 (has links)
No description available.
12

Three-Dimensional Hand Tracking and Surface-Geometry Measurement for a Robot-Vision System

Liu, Chris Yu-Liang 17 January 2009 (has links)
Tracking of human motion and object identification and recognition are important in many applications, including motion capture for human-machine interaction systems. This research is part of a global project to enable a service robot to recognize new objects and perform different object-related tasks based on task guidance and demonstration provided by a general user. It consists of the calibration and testing of two vision systems that form part of a robot-vision system. First, real-time tracking of a human hand is achieved using images acquired from three calibrated, synchronized cameras. Hand pose is determined from the positions of physical markers and input to the robot system in real time. Second, a multi-line laser-camera range sensor is designed, calibrated, and mounted on a robot end-effector to provide three-dimensional (3D) geometry information about objects in the robot environment. The laser-camera sensor includes two cameras to provide stereo vision. For the 3D hand tracking, a novel score-based hand-tracking scheme is presented, employing dynamic multi-threshold marker detection, a stereo camera-pair utilization scheme, and marker matching and labeling using epipolar geometry and hand-pose axis analysis, to enable real-time hand tracking under occlusion and non-uniform lighting. For surface-geometry measurement using the multi-line laser range sensor, two approaches to two-dimensional (2D) to 3D coordinate mapping are analyzed: Bézier surface fitting and neural networks. The neural-network approach proved the more viable of the two, and worth further exploration, owing to its lower 3D reconstruction error and its consistency across different regions of the object space.
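The stereo marker-matching and triangulation steps the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis code: it assumes an already-rectified stereo pair (so the epipolar constraint reduces to matching image rows), and the focal length and baseline are placeholder values.

```python
def match_markers(left, right, row_tol=2.0):
    """Match 2D marker detections (x, y) between rectified stereo views.

    In a rectified pair, corresponding points share the same image row,
    so the epipolar constraint reduces to |y_left - y_right| ~ 0.
    """
    matches = []
    used = set()
    for i, (xl, yl) in enumerate(left):
        best, best_dy = None, row_tol
        for j, (xr, yr) in enumerate(right):
            if j in used:
                continue
            dy = abs(yl - yr)
            if dy < best_dy:
                best, best_dy = j, dy
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches

def triangulate(xl, xr, y, f=800.0, baseline=0.12):
    """Depth from disparity for a rectified pair: Z = f * B / (xl - xr)."""
    d = xl - xr
    Z = f * baseline / d
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z
```

With three cameras, as in the thesis, the same row-matching idea would be applied per camera pair, with a scoring step to pick the most reliable pair per marker.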
14

Near touch interactions: understanding grab and release actions.

Balali Moghaddam, Aras 17 August 2012 (has links)
In this work, I present empirically validated techniques to realize gesture and touch interaction using a novel near-touch tracking system. This study focuses on identifying the intended center of action for grab and release gestures close to an interactive surface. Results of this experiment inform a linear model that can approximate the intended location of grab and release actions with an accuracy of R^2 = 0.95 for horizontal position and R^2 = 0.84 for vertical position. I also present an approach for distinguishing which hand was used to perform the interaction. Together, the empirical model and the near-touch tracking system provide new opportunities for natural and intuitive hand interactions with computing surfaces. / Graduate
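The kind of linear model reported above can be illustrated with a plain ordinary-least-squares fit and an R^2 score. The data below are synthetic stand-ins, not the study's measurements:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    return a, b

def r_squared(xs, ys, a, b):
    """Coefficient of determination for the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

In the study's setting, `xs` would be a sensed quantity (e.g. tracked hand position) and `ys` the intended center of action, with separate fits for the horizontal and vertical axes.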
15

3d Hand Tracking In Video Sequences

Tokatli, Aykut 01 September 2005 (has links) (PDF)
The use of hand gestures provides an attractive alternative to cumbersome interface devices such as the keyboard, mouse, and joystick. Hand tracking has great potential as a tool for better human-computer interaction by enabling communication in a more natural and articulate way. This has motivated a very active research area concerned with computer-vision-based analysis and interpretation of hand gestures and hand tracking. In this study, a real-time hand-tracking system is developed. It is mainly image-based hand tracking built on 2D image information; coloured markers are used to separate and identify the finger parts. To obtain 3D tracking, a stereo-vision approach is used, in which the third dimension is recovered from depth information. To visualize the results in 3D, a 3D hand model is developed, with Java 3D as the 3D environment. Tracking was tested on two different cameras: a cheap USB web camera and a Sony FCB-IX47AP camera connected to a Matrox Meteor frame grabber in a standard Intel Pentium based personal computer. The code was written in Borland C++ Builder 6.0, using the Intel Image Processing and Open Source Computer Vision (OpenCV) libraries. For both camera types, tracking was found to be robust and efficient, achieving hand tracking at ~8 fps. Although the current progress is encouraging, further theoretical as well as computational advances are needed for the highly complex task of hand tracking.
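The coloured-marker idea can be sketched as a per-channel colour threshold followed by a centroid, which is roughly what 2D marker localisation involves. This toy sketch is an assumption about the approach, not the thesis implementation (which works on live camera frames via OpenCV):

```python
def find_marker(frame, target, tol=30):
    """Locate one coloured marker in a toy RGB frame.

    frame: 2D list of (r, g, b) pixel tuples.
    Returns the centroid (x, y) of pixels whose colour is within `tol`
    per channel of `target`, or None if no pixel matches.
    """
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if (abs(r - target[0]) <= tol and
                    abs(g - target[1]) <= tol and
                    abs(b - target[2]) <= tol):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

Running one such detector per marker colour, frame by frame, yields the 2D trajectories that the stereo stage then lifts to 3D.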
16

From Human to Robot Grasping

Romero, Javier January 2011 (has links)
Imagine that a robot fetched this thesis for you from a book shelf. How do you think the robot would have been programmed? One possibility is that experienced engineers had written low-level descriptions of all imaginable tasks, including grasping a small book from this particular shelf. A second option would be that the robot tried to learn how to grasp books from your shelf autonomously, resulting in hours of trial-and-error and several books on the floor. In this thesis, we argue in favor of a third approach, where you teach the robot how to grasp books from your shelf through grasping by demonstration. It is based on the idea of robots learning grasping actions by observing humans performing them. This imposes minimum requirements on the human teacher: no programming knowledge and, in this thesis, no need for special sensory devices. It also maximizes the amount of sources from which the robot can learn: any video footage showing a task performed by a human could potentially be used in the learning process. And hopefully it reduces the amount of books that end up on the floor.
This document explores the challenges involved in the creation of such a system. First, the robot should be able to understand what the teacher is doing with their hands. This means it needs to estimate the pose of the teacher's hands by visually observing them, in the absence of markers or any other input devices which could interfere with the demonstration. Second, the robot should translate the human representation acquired in terms of hand poses to its own embodiment. Since the kinematics of the robot are potentially very different from the human one, defining a similarity measure applicable to very different bodies becomes a challenge. Third, the execution of the grasp should be continuously monitored to react to inaccuracies in the robot perception or changes in the grasping scenario. While visual data can help correct the reaching movement to the object, tactile data enables accurate adaptation of the grasp itself, thereby adjusting the robot's internal model of the scene to reality. Finally, acquiring compact models of human grasping actions can help in both perceiving human demonstrations more accurately and executing them in a more human-like manner. Moreover, modeling human grasps can provide us with insights about what makes an artificial hand design anthropomorphic, assisting the design of new robotic manipulators and hand prostheses. All these modules try to solve particular subproblems of a grasping-by-demonstration system. We hope the research on these subproblems performed in this thesis will both bring us closer to our dream of a learning robot and contribute to the multiple research fields from which these subproblems come. / QC 20111125
17

Tracking and modelling motion for biomechanical analysis

Aristidou, Andreas January 2010 (has links)
This thesis focuses on the problem of determining appropriate skeletal configurations for which a virtual animated character moves to desired positions as smoothly, rapidly, and accurately as possible. During the last decades, several methods and techniques, sophisticated or heuristic, have been presented to produce smooth and natural solutions to the Inverse Kinematics (IK) problem. However, many of the currently available methods suffer from high computational cost and produce unrealistic poses. In this study, a novel heuristic method, called Forward And Backward Reaching Inverse Kinematics (FABRIK), is proposed, which returns visually natural poses in real time, comparable with those of highly sophisticated approaches. It supports constraints for most of the known joint types and can be extended to problems with multiple end effectors, multiple targets, and closed loops. FABRIK was compared against the most popular IK approaches and evaluated in terms of its robustness and performance limitations. This thesis also includes a robust methodology for marker prediction under multiple-marker occlusion for extended time periods, in order to drive real-time centre-of-rotation (CoR) estimation. Inferred information from neighbouring markers is utilised, assuming that the inter-marker distances remain constant over time. This is the first time that information about missing markers which remain partially visible to a single camera is exploited. Experiments demonstrate that the proposed methodology can effectively track the occluded markers with high accuracy, even if the occlusion persists for extended periods of time, recovering good estimates of the true joint positions in real time. In addition, the predicted joint positions are further improved by employing FABRIK to relocate them and ensure fixed bone lengths over time.
Our methodology is tested against some of the most popular methods for marker prediction, and the results confirm that our approach outperforms them in estimating both marker and CoR positions. Finally, an efficient model for real-time hand tracking and reconstruction is presented that requires a minimum number of available markers, one on each finger. The proposed hand model is highly constrained with joint rotational and orientational constraints, restricting the finger and palm movements to an appropriate feasible set. FABRIK is then incorporated to estimate the remaining joint positions and to fit them to the hand model. Physiological constraints, such as inertia, abduction, and flexion, are also incorporated to correct the final hand posture. A mesh-deformation algorithm is then applied to visualise the movements of the underlying hand skeleton for comparison with the true hand poses. The mathematical framework used for describing and implementing the techniques discussed within this thesis is Conformal Geometric Algebra (CGA).
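The core of FABRIK is compact enough to sketch. The following is a minimal unconstrained 2D version, under the usual formulation of the algorithm; the thesis adds joint constraints, multiple end effectors, and closed loops, none of which are modelled here:

```python
import math

def fabrik(joints, target, tol=1e-3, max_iter=50):
    """Minimal FABRIK solver for a 2D joint chain (list of (x, y)).

    The root stays fixed and bone lengths are preserved on every pass.
    """
    joints = [list(p) for p in joints]
    lengths = [math.dist(joints[i], joints[i + 1])
               for i in range(len(joints) - 1)]
    root = list(joints[0])
    if math.dist(root, target) > sum(lengths):
        # Target unreachable: just stretch the chain toward it.
        for i in range(len(joints) - 1):
            r = math.dist(joints[i], target)
            k = lengths[i] / r
            joints[i + 1] = [(1 - k) * joints[i][0] + k * target[0],
                             (1 - k) * joints[i][1] + k * target[1]]
        return [tuple(p) for p in joints]
    for _ in range(max_iter):
        # Forward pass: pin the end effector to the target, work back to root.
        joints[-1] = list(target)
        for i in range(len(joints) - 2, -1, -1):
            r = math.dist(joints[i + 1], joints[i])
            k = lengths[i] / r
            joints[i] = [(1 - k) * joints[i + 1][0] + k * joints[i][0],
                         (1 - k) * joints[i + 1][1] + k * joints[i][1]]
        # Backward pass: re-anchor the root, work out to the end effector.
        joints[0] = list(root)
        for i in range(len(joints) - 1):
            r = math.dist(joints[i], joints[i + 1])
            k = lengths[i] / r
            joints[i + 1] = [(1 - k) * joints[i][0] + k * joints[i + 1][0],
                             (1 - k) * joints[i][1] + k * joints[i + 1][1]]
        if math.dist(joints[-1], target) < tol:
            break
    return [tuple(p) for p in joints]
```

Because each pass only ever places a joint along the line to its neighbour at the stored bone length, the solver avoids matrix inversions entirely, which is what makes the method fast and the resulting poses free of bone stretching.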
18

Rozpoznávání objektů a gest v obraze / Recognition of Objects and Gestures in Image

Johanová, Daniela January 2015 (has links)
This thesis is focused on gesture recognition in video. The main purpose was to create an algorithm and an application that can recognize selected gestures in video obtained through a standard webcamera, with the intention of controlling an application program such as a video player. The approach used to achieve this goal exploits methods of feature extraction, tracking, and machine learning.
19

Využití gest v uživatelských rozhraních / Gestures in User Interfaces

Bednář, Luboš January 2012 (has links)
This master's thesis deals with the use of gestures in user interfaces. The goal is to create a library for real-time hand tracking and gesture recognition. The Flock of Features algorithm was chosen for hand tracking, and gestures are classified using the DTW (dynamic time warping) algorithm. The thesis also covers the design and implementation of a system that uses this library. In testing, the library was used to control various applications.
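The DTW classification step can be sketched as below. The feature representation (scalar sequences) and the nearest-template decision rule are illustrative assumptions, not the library's actual interface:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two scalar sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # d[i][j] = cost of best alignment of a[:i] with b[:j].
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def classify(query, templates):
    """templates: dict label -> template sequence; returns nearest label."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))
```

DTW's appeal for gesture recognition is exactly what this sketch shows: the query may be performed faster or slower than the template, and the warping path absorbs that temporal variation.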
20

Reality Check : A review of design principles within emergent XR artefacts

Svedberg, Jonnie Juhani January 2020 (has links)
With the advent of novel digital interfaces such as augmented, mixed, and virtual reality, the way we interact with digital artefacts is changing at a nearly reckless pace. Adoption within enterprise applications is surging, with mass adoption among consumers soon to follow. This paper aims to reiterate a key question sometimes hidden within these rapid developments: have the practices used to develop these artefacts been properly tested and evaluated as the best possible ones? To answer this, we explore and evaluate how well existing best practices adhere to empirical evidence, and also experiment with potential avenues of alternative design methodologies. The conclusions reached are used to design a prototype/proof of concept showcasing how an interface/interaction made with the new considerations in mind can differ from those made with contemporary design principles. Upon evaluation of this experimental prototype, which used the user's hands as physical, tactile feedback for interactions, respondents were overall positive about this method of interaction, despite some discomfort from the limitations imposed by this specific technical approach. This strongly suggests that XR artefacts are often designed around such technical limitations rather than around true best practice. We therefore strongly encourage further testing and experimentation as time goes on, since emergent technologies might lack these limitations and therefore enable richer, better interaction methods and experiences within XR.
