91

Determining the Quality of Human Movement using Kinect Data

Thati, Satish Kumar, Mareedu, Venkata Praneeth January 2017 (has links)
Health is one of the most important elements of every individual’s life. Despite major advances in science, the quality of healthcare often falls short of what is needed, and this appears especially true in physiotherapy. Physiotherapy analyses human joints and movement and provides remedies for pain or injuries affecting the body. To give patients high-quality analysis and treatment, either the number of doctors must increase or an alternative to the doctor must be found. This Master’s thesis aims to develop a prototype that can help deliver high-standard healthcare at scale.  Methods: The prototype is developed with the Microsoft Kinect SDK 2.0. The study shows that the Kinect can be used for both marker-based and markerless tracking of human motion. The angles formed at five joints, namely the shoulder, elbow, hip, knee and ankle, were calculated from the tracked motion. The device contains infrared, depth and colour sensors. Depth data is used to identify the parts of the human body from pixel intensity information, and the located parts are mapped onto the RGB colour frame. Images from the Kinect skeleton mode were treated as the output of the markerless system and used to calculate the same joint angles. In this project, the data generated by the movement-tracking algorithm for the Posture Side and Deep Squat Side movements were collected and stored for further evaluation.  Results: Based on the collected data, the system automatically evaluates the quality of the movement performed by the user; it detected problems in static posture and the deep squat, as confirmed by a physiotherapist’s feedback on the system.
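The abstract does not spell out its angle computation, but a minimal sketch of the joint-angle calculation it describes, assuming 3-D joint positions from the Kinect skeleton stream (the joint names and coordinates below are illustrative), could look like this:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b formed by the segments b->a and b->c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    u, v = a - b, c - b
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: elbow angle from shoulder, elbow and wrist positions (camera-space metres, illustrative).
shoulder = (0.10, 0.45, 2.00)
elbow    = (0.15, 0.20, 2.05)
wrist    = (0.35, 0.10, 2.00)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")
```

The same function applies unchanged to the hip, knee and ankle by passing the corresponding adjacent joints.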
92

Estimating Position and Velocity of Traffic Participants Using Non-Causal Offline Algorithms

Johansson, Casper January 2019 (has links)
In this thesis, several non-causal offline algorithms are developed and evaluated for a vision system used to track pedestrians and vehicles in traffic. The aim was to investigate whether the performance gain from non-causal offline algorithms alone is enough to evaluate the performance of the vision system. In recent years, vision systems have become some of the most important sensors for modern vehicles’ active safety systems. Active safety systems are becoming increasingly important, and for them to work, good detection and tracking of objects in the vicinity of the vehicle is needed; the vision system therefore needs to be properly evaluated. The problem is that modern evaluation techniques are limited to a few object scenarios, so a more versatile evaluation technique is desired for the vision system. The focus of this thesis is to research non-causal offline techniques that increase tracking performance without increasing the number of sensors. The Unscented Kalman Filter is used for state estimation, and an unscented Rauch-Tung-Striebel smoother is used to propagate information backwards in time. Different motion models, such as constant velocity and coordinated turn, are evaluated. Further assumptions and techniques are also evaluated, such as tracking vehicles with a fixed width and estimating the topography and using it as a measurement. The evaluation shows that errors in velocity and the uncertainty of all states are significantly reduced by the unscented Rauch-Tung-Striebel smoother. For the evaluated scenarios it can be concluded that the choice of motion model depends on the scenario and the motion of the tracked vehicle, but the models perform roughly the same. The results further show that assuming a fixed vehicle width does not work, while measurements based on non-causal topography estimation can significantly reduce the position error, although further studies are recommended to verify this.
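The thesis uses the unscented filter and smoother; as a simplified illustration of the forward-filter/backward-smoother structure it relies on, here is a linear constant-velocity Kalman filter with a Rauch-Tung-Striebel backward pass (the noise matrices, sample time and measurements are illustrative assumptions):

```python
import numpy as np

dt = 0.1                                   # sample time [s]
F = np.array([[1, dt], [0, 1]])            # constant-velocity motion model
H = np.array([[1.0, 0.0]])                 # position-only measurement
Q = np.diag([1e-3, 1e-2])                  # process noise (illustrative)
R = np.array([[0.25]])                     # measurement noise (illustrative)

def kalman_forward(zs, x0, P0):
    """Forward pass: store filtered and predicted states for the smoother."""
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = x0, P0
    for z in zs:
        xp, Pp = F @ x, F @ P @ F.T + Q                      # predict
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)             # update
        P = (np.eye(2) - K @ H) @ Pp
        xs_f.append(x); Ps_f.append(P); xs_p.append(xp); Ps_p.append(Pp)
    return xs_f, Ps_f, xs_p, Ps_p

def rts_smooth(xs_f, Ps_f, xs_p, Ps_p):
    """Backward Rauch-Tung-Striebel pass: propagate information back in time."""
    n = len(xs_f)
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs_f[-1], Ps_f[-1]
    for k in range(n - 2, -1, -1):
        G = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])       # smoother gain
        xs_s[k] = xs_f[k] + G @ (xs_s[k + 1] - xs_p[k + 1])
        Ps_s[k] = Ps_f[k] + G @ (Ps_s[k + 1] - Ps_p[k + 1]) @ G.T
    return xs_s, Ps_s

# Usage on a short run of noisy position measurements.
zs = [0.0, 0.11, 0.19, 0.32, 0.41]
smoothed_states, smoothed_covs = rts_smooth(*kalman_forward(zs, np.zeros(2), np.eye(2)))
```

The unscented variants used in the thesis replace the linear predict/update steps with sigma-point propagation, but the forward/backward structure is the same.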
93

L’hypertexte et la lecture à l’écran : approches expérimentale et herméneutique / Hypertext and reading on screen: experimental and hermeneutic approaches

Koszowska-Nowakowska, Paulina 10 July 2013 (has links)
Our research aims to put our ways of reading on screen to the test, in order to open a new line of reflection on hypertextual reading and its computer-based processing. Reading on a computer screen means grasping a very complex textual and visual object. We chose to study the perception, structure and construction of non-linear hypertextual reading with an oculometric tool, eye tracking. In this exploration we move from human visual perception to the construction of meaning. Starting from our initial concerns about the relationship between intertextuality and hypertext, we seek to understand how the reader of a hypertext constructs their reading: do they find linearity in this fragmentary reading? This thesis stands at the crossroads of several scientific disciplines, such as semiotics and oculometry, while remaining anchored in Information and Communication Sciences; this is why our eye-tracking measurements require several phases of data processing. The work concerns the gaze, but also human behaviour, which is why we attempt to describe both the perceptual and the cognitive processes observed. Our research also aims to show that digital text is at the origin of a change in the relationships between author, text and reader, in which the notions of context and intertextuality take on a new dimension. This work continues research carried out in the field of hypertext (Jean Clément, George Landow, Olivier Ertzscheid, Luc Dall’Armellina, Christian Vandendorpe, Jean-Pierre Balpe, Serge Bouchardon, Raja Fenniche, etc.), but its originality lies in the experimental part carried out with an eye-tracking device.
94

Realtime computer interaction via eye tracking

Dubey, Premnath January 2004 (has links)
Through eye tracking technology, scientists have explored the eye’s diverse aspects and capabilities. Many potential applications benefit from eye tracking, and each benefits from advances in computer technology, which improve the quality and decrease the cost of eye-tracking systems. This thesis presents a computer vision-based eye tracking system for human-computer interaction. The eye tracking system allows the user to indicate a region of interest in a large data space and to magnify that area, without using traditional pointer devices. Presented is an iris tracking algorithm adapted from Camshift, an algorithm originally designed for face or hand tracking. Although the iris is much smaller and highly dynamic, the modified Camshift algorithm efficiently tracks the iris in real time. Also presented is a method to map the iris centroid from video coordinates to screen coordinates, and two novel calibration techniques: four-point and one-point calibration. Results show that the accuracy of the proposed one-point calibration technique exceeds the accuracy obtained from calibrating with four points. The innovation behind the one-point calibration comes from using observed eye-scanning behaviour to constrain the calibration process. Lastly, the thesis proposes a non-linear visualisation as an eye-tracking application, along with an implementation.
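The thesis adapts Camshift to the iris; a minimal sketch of that idea using OpenCV’s CamShift is shown below. The inverted-intensity probability map, the hard-coded initial window and the webcam source are illustrative assumptions, not the thesis’s actual back-projection or initialisation:

```python
import cv2

cap = cv2.VideoCapture(0)                       # webcam; any eye-region video works
ok, frame = cap.read()

# Initial iris window (x, y, w, h) -- hard-coded here; the thesis locates the iris automatically.
track_window = (300, 220, 40, 40)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The iris is darker than the sclera and skin, so use blurred inverted intensity
    # as a crude probability map for CamShift (a stand-in for the adapted back-projection).
    prob = cv2.GaussianBlur(255 - gray, (7, 7), 0)
    rot_rect, track_window = cv2.CamShift(prob, track_window, term_crit)
    cx, cy = rot_rect[0]                        # iris centroid in video coordinates
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.circle(frame, (int(cx), int(cy)), 3, (0, 0, 255), -1)
    cv2.imshow("iris", frame)
    if cv2.waitKey(1) & 0xFF == 27:             # Esc to quit
        break
    ok, frame = cap.read()

cap.release()
cv2.destroyAllWindows()
```

The centroid (cx, cy) is what a calibration step, such as the one-point calibration described above, would map to screen coordinates.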
95

Maintenance of behaviour when reinforcement becomes delayed

Costa, Daniel January 2009 (has links)
Doctor of Philosophy (PhD) / Despite an abundance of evidence demonstrating that the temporal relationship between events is a key factor in an organism learning an association between those events, a general theoretical account of temporal contiguity has remained elusive. A particular question that has received little attention is whether behaviour established with strong contiguity can be maintained when contiguity is weakened. The primary aims of this thesis were to examine the mechanisms underlying both the effects of contiguity on learning in rats and humans and the maintenance effect described above. The experiments reported in this thesis demonstrated that rats’ lever-pressing for food/sucrose acquired with immediate reinforcement persisted when a trace/delay that would have prevented acquisition was subsequently introduced, provided the lever was a valid signal for reinforcement. In classical conditioning with a 10-second trace, rats performed magazine-entry during lever-insertion (goal-tracking) instead of lever-pressing (sign-tracking); with zero-trace, rats both sign- and goal-tracked if lever-insertion time was 10 seconds, while goal-tracking dominated with 5-second lever-insertion time. Furthermore, while it was found that context-US associations may interfere with CS-US learning, context conditioning did not contribute to the retardation of sign-tracking in trace conditioning. Overall, these results are consistent with the theory that a localisable manipulandum that signals an appetitive outcome with strong contiguity acquires hedonic value, and that such hedonic value drives lever-pressing behaviour that is resistant to changes in the conditions of reinforcement. Human performance in a conditioned suppression task was inversely related to trace interval, but this apparent contiguity effect was at least partially mediated by the number of distractors during the trace interval, as predicted by Revusky’s concurrent interference theory. Furthermore, some transfer of conditioned suppression was observed when the trace was subsequently lengthened. Despite the different explanations proposed to account for rat and human performance in these experiments, the results suggest that the effects of contiguity on learning may be driven by similar underlying mechanisms across species.
96

Vehicle tracking using scale invariant features

Wang, Jue, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
Object tracking is an active research topic in computer vision and has application in several areas, such as event detection and robotics. Vehicle tracking is used in Intelligent Transport Systems (ITS) and surveillance systems, and its reliability is critical to the overall performance of these systems. Feature-based methods, which represent distinctive content in visual frames, are one approach to vehicle tracking. Existing feature-based tracking systems can only track vehicles under ideal conditions. They have difficulties when used under a variety of conditions, for example during both the day and night, because they are highly dependent on stable local features that can be tracked over a long time period. These local features are easily lost because of their local nature and because of image noise caused by factors such as headlight reflections and sun glare. This thesis presents a new approach that addresses the reliability issues mentioned above by tracking whole feature groups composed of feature points extracted with the Scale Invariant Feature Transform (SIFT) algorithm. A feature group includes several features that share a similar property over a time period and can be tracked to the next frame by tracking the individual feature points inside it. It is lost only when all of its features are lost in the next frame. We create these feature groups by clustering individual feature points using distance, velocity and acceleration information between two consecutive frames. These feature groups are then hierarchically clustered by their inter-group distance, velocity and acceleration information. Experimental results show that the proposed vehicle tracking system can track vehicles with an average accuracy of over 95%, even when the vehicles have complex motions in noisy scenes. It generally works well even in difficult environments, such as rainy days, windy days, and at night. We were surprised to find that our tracking system also locates and tracks motorbikes and pedestrians. This could open up wider opportunities, and further investigation and experiments are required to confirm the tracking performance for these objects. Further work is also required to track more complex motions, such as rotation and articulated objects with different motions on different parts.
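The thesis groups SIFT points that are close together and move together; a simplified sketch of that grouping step is given below, assuming OpenCV and scikit-learn, and using DBSCAN on position plus frame-to-frame displacement as a stand-in for the thesis’s hierarchical clustering (the velocity weight and clustering parameters are illustrative assumptions):

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN   # stand-in for the thesis's hierarchical clustering

sift = cv2.SIFT_create()
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def feature_groups(frame_prev, frame_curr, eps=40.0):
    """Match SIFT features between two frames and group them by position and velocity."""
    kp1, des1 = sift.detectAndCompute(frame_prev, None)
    kp2, des2 = sift.detectAndCompute(frame_curr, None)
    if des1 is None or des2 is None:
        return np.empty((0, 4)), np.empty(0, int)
    matches = bf.match(des1, des2)
    if not matches:
        return np.empty((0, 4)), np.empty(0, int)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    vel = p2 - p1                                   # per-feature displacement (velocity proxy)
    # Features that are close together and move together likely belong to one vehicle.
    feats = np.hstack([p2, 5.0 * vel])              # velocity weight of 5.0 is illustrative
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(feats)
    return np.hstack([p2, vel]), labels
```

Each returned label identifies one candidate feature group; a group survives into the next frame as long as at least one of its member points can still be matched.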
97

A Single-Camera Gaze Tracker using Controlled Infrared Illumination

Wallenberg, Marcus January 2009 (has links)
Gaze tracking is the estimation of the point in space a person is “looking at”. This is widely used in both diagnostic and interactive applications, such as visual attention studies and human-computer interaction. The most common commercial solutions used to track gaze today use a combination of infrared illumination and one or more cameras. These commercial solutions are reliable and accurate, but often expensive. The aim of this thesis is to construct a simple single-camera gaze tracker from off-the-shelf components. The method used for gaze tracking is based on infrared illumination and a schematic model of the human eye. Based on images of the reflections of specific light sources in the surfaces of the eye, the user’s gaze point is estimated. Evaluation is performed on the software and hardware components separately, and on the system as a whole. Accuracy is measured as spatial and angular deviation, and the result is an average accuracy of approximately one degree on synthetic data and 0.24 to 1.5 degrees on real images at a range of 600 mm.
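The thesis estimates gaze from a schematic eye model; as a simpler illustration of the same mapping problem (from measured eye features to a screen point), a regression-based pupil-glint calibration is sketched below. The polynomial form, the calibration vectors and the target positions are illustrative assumptions, not the thesis’s model:

```python
import numpy as np

def design(v):
    """Second-order polynomial terms of pupil-glint vectors v with rows (vx, vy)."""
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def fit_mapping(pg_vectors, screen_points):
    """Least-squares fit from pupil-glint vectors to screen coordinates."""
    A = design(np.asarray(pg_vectors, float))
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points, float), rcond=None)
    return coeffs                                   # shape (6, 2): one column per screen axis

def gaze_point(pg_vector, coeffs):
    """Map a single pupil-glint vector to an estimated screen point."""
    return design(np.asarray([pg_vector], float)) @ coeffs

# Illustrative calibration: the user fixates known screen targets while vectors are recorded.
vectors = [(-8, -5), (0, -5), (8, -5), (-8, 5), (0, 5), (8, 5)]          # pixels (assumed)
targets = [(160, 120), (320, 120), (480, 120), (160, 360), (320, 360), (480, 360)]
C = fit_mapping(vectors, targets)
print(gaze_point((2, 0), C))
```

A model-based tracker such as the one in the thesis replaces this regression with geometry derived from the eye model, which is what allows calibration from as little as one point.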
98

A study of terrestrial radio determination applications and technology : final report, contract no. DOT/TSC-1274

January 1978 (has links)
prepared by John E. Ward, Mark E. Connelly, Avram K. Tetewsky. / Final report / Bibliography: p. 188-193. / "July 31, 1978." -- "September, 1978."--Cover. "Submitted to: Transportation Systems Center, Department of Transportation, Kendall Square, Cambridge, MA 02142." / DOT-TSC-1274 M.I.T. Project. 84492
99

Multiple Object Tracking with Occlusion Handling

Safri, Murtaza 16 February 2010 (has links)
Object tracking is an important problem with wide-ranging applications. The purpose is to detect object contours and track their motion in a video. The issues of concern are mapping objects correctly between two frames and tracking through occlusion. This thesis discusses a novel framework for object tracking inspired by image registration and segmentation models. Occlusion of objects is also detected and handled appropriately in this framework. The main idea of our tracking framework is to reconstruct the sequence of images in the video. The process involves deforming all the objects in a given image frame, called the initial frame. Regularization terms govern the deformation of the objects’ shapes; we use elastic and viscous-fluid models as regularizers. The reconstructed frame is formed by combining the deformed objects with respect to the depth ordering. The correct reconstruction is selected by the parameters that minimize the difference between the reconstruction and the consecutive frame, called the target frame. These parameters provide the required tracking information, such as the contours of the objects in the target frame, including the occluded regions. The regularization term restricts the deformation of the object shape in the occluded region and thus gives an estimate of the object shape in this region. The other idea is to use a segmentation model as a measure in place of the frame-difference measure. This is separate from the image segmentation procedure, since we use the segmentation model in a tracking framework to capture object deformation. Numerical examples demonstrate tracking in simple and complex scenes, along with the occlusion-handling capability of our model. The segmentation measure is shown to be more robust with regard to accumulation of tracking error.
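The framework selects deformation parameters that minimize the difference between the reconstructed and target frames; a heavily simplified sketch of that reconstruction cost is given below, using per-object translations in place of the elastic/viscous-fluid deformation and omitting the regularization and segmentation terms (the data structures are illustrative assumptions):

```python
import numpy as np

def composite(background, objects, params, depth_order):
    """Paste each object's patch at its translated location, far objects first, near objects last.

    objects[idx] is assumed to be (patch, mask, (x0, y0)): a grayscale patch, a boolean
    mask of the object inside that patch, and the patch's top-left corner in the frame.
    params[idx] is the candidate translation (dx, dy) for that object.
    """
    recon = background.copy()
    h, w = recon.shape
    for idx in depth_order:
        patch, mask, (x0, y0) = objects[idx]
        dx, dy = params[idx]
        x, y = int(round(x0 + dx)), int(round(y0 + dy))
        ph, pw = patch.shape
        if 0 <= x and 0 <= y and x + pw <= w and y + ph <= h:
            region = recon[y:y + ph, x:x + pw]
            recon[y:y + ph, x:x + pw] = np.where(mask, patch, region)
    return recon

def reconstruction_cost(target, background, objects, params, depth_order):
    """Sum-of-squared-differences between the reconstructed frame and the target frame."""
    diff = composite(background, objects, params, depth_order).astype(float) - target
    return float(np.sum(diff ** 2))
```

In the thesis the translation is replaced by a full deformation field, a regularization term penalises implausible shape changes (which is what recovers the occluded part of a contour), and the plain frame difference can be swapped for the segmentation-based measure.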
