1 |
Cancellation of the vestibulo-ocular reflex during horizontal combined eye-head tracking - Huebner, William Paul January 1991 (has links)
No description available.
|
2 |
Test Immersion in DomeTheater using Tracking device - Liang, Liu January 2011 (has links)
Head tracking is an important way to interact with virtual objects in a virtual world. The viewer can move or rotate his head to observe the 3D scene from different viewpoints. Normally head tracking is used in a CAVE or just on a flat screen. A dome theater has a half-sphere screen onto which multiple projectors together project the whole scene. The dome screen can give the viewer a very strong feeling of immersion when head tracking is used inside it, which is why we want to implement head tracking in the dome theater. The half-sphere dome screen is so large that multiple projectors are needed to project the whole scene onto it, and a cluster system is used to keep all the projectors working together smoothly. The display system of the dome theater has no built-in support for head tracking. This thesis introduces a method for head tracking in a dome theater. The main problem is how to add head tracking to the dome theater's display system. A frame buffer object (FBO) is used as the solution. The viewer's viewing frustum is created when rendering the 3D scene into the frame buffer object, so that the rendering depends on the viewer's head position. The FBO texture is then mapped onto a 3D sphere that simulates the dome in the virtual world. Since the viewing frustum is always created from the viewer's head position, the FBO textures on the 3D sphere always represent the 3D scene rendered from the viewer's head position. The projectors then project this textured 3D sphere onto the dome screen; that is the core of how head tracking is implemented in the dome theater. This thesis focuses on rendering the 3D scene onto the dome screen depending on the viewer's head position; controlling the tracking device itself is out of scope. VR Juggler (VRJ) is used as the framework in this project, and the viewer's position and the cluster setup are specified in the configuration file.
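As a minimal sketch of the projective-texturing idea described above (not the thesis code), the following numpy snippet shows how a point on the simulated dome sphere could be mapped through a head-dependent frustum to a texture coordinate on the off-screen FBO image; the field of view, near and far planes, and function names are assumptions.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Right-handed view matrix for a camera at `eye` looking at `target`."""
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye; f = f / np.linalg.norm(f)        # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)     # right
    u = np.cross(s, f)                                 # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    t = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = t / aspect
    m[1, 1] = t
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def dome_uv(point_on_dome, head_pos, view_center):
    """Texture coordinate on the FBO image for one point of the dome sphere."""
    view = look_at(head_pos, view_center)
    proj = perspective(90.0, 1.0, 0.1, 100.0)          # assumed FBO frustum
    p = proj @ view @ np.append(point_on_dome, 1.0)
    ndc = p[:3] / p[3]                                 # clip space -> NDC
    return (ndc[:2] + 1.0) / 2.0                       # NDC -> [0, 1] UV

# Example: head 0.2 m off-centre, looking toward the dome zenith.
print(dome_uv(np.array([0.0, 1.0, 0.0]), np.array([0.2, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0])))
```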
|
3 |
Comparison of Brain Strain Magnitudes Calculated Using Head Tracking Impact Parameters and Body Tracking Impact Parameters Obtained from 2D Video - Larsen, Kayla 03 May 2022 (has links)
Relying on signs and symptoms of head injury outcomes has been shown to be unreliable in capturing the vulnerabilities associated with brain trauma (Karton & Hoshizaki, 2018). To accommodate the subjectivity of self-reported symptoms, data collection using sensor monitoring and video analysis combined with event reconstruction is used to objectively measure trauma exposure (Tator, 2013; Scorza & Cole, 2019; Hoshizaki et al., 2014). Athletes are instrumented with wireless sensors designed to measure head kinematics during play. However, these systems have not been widely adopted as they are expensive, face challenges with angular acceleration measures, and often require video confirmation to remove false positives. Video analysis of head impacts, in conjunction with physical event reconstruction and finite element (FE) modeling, is also used to calculate tissue-level strain. This data collection method requires specialized equipment and expertise. Effective management of head trauma in sport requires an objective, accessible, and quantifiable tool that addresses the limitations associated with current measurement systems. The purpose of this research was to determine if a simplified version of video analysis and event reconstruction using impact characteristics (velocity, location, mass, and compliance) obtained from body tracking could yield similar measures of brain strain magnitude to the standard head tracking method. Thirty-six ice hockey impacts that varied in terms of competition level, event type, and maximum principal strain (MPS) were chosen for analysis. 2D videos of previously completed head reconstructions were reanalyzed and each event was reconstructed again in the laboratory using impact parameters obtained from body tracking. MPS values were calculated using finite element (FE) modeling and compared to the MPS values from events that were reconstructed using impact parameters obtained from head tracking. The relationship between head and body tracking MPS data and the level of agreement between MPS categories were also assessed. Overall, a significant difference was observed between MPS magnitudes obtained using impact parameters from body and head tracking data from 2D video. When analyzed by event type, only shoulder and glass events demonstrated significant differences in MPS magnitudes. A strong linear relationship was observed between the two data collection methods and a moderate level of agreement between MPS categories was found, demonstrating that impact characteristics obtained from body tracking and 2D video can be used to measure brain tissue strain.
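A minimal sketch of the kind of statistical comparison described above (paired MPS magnitudes, their linear relationship, and agreement between strain categories), using synthetic data rather than the thesis measurements; the strain cut-offs and noise level are invented for illustration.

```python
# Illustrative only: synthetic MPS values, not the thesis data.
import numpy as np
from scipy.stats import ttest_rel, pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
mps_head = rng.uniform(0.10, 0.45, size=36)            # head-tracking reconstructions
mps_body = mps_head + rng.normal(0.0, 0.03, size=36)   # body-tracking reconstructions

t, p = ttest_rel(mps_head, mps_body)                   # paired difference in magnitude
r, _ = pearsonr(mps_head, mps_body)                    # strength of linear relationship

# Agreement between strain categories (assumed low/medium/high cut-offs).
bins = [0.0, 0.20, 0.35, 1.0]
kappa = cohen_kappa_score(np.digitize(mps_head, bins), np.digitize(mps_body, bins))

print(f"paired t: t={t:.2f}, p={p:.3f}; Pearson r={r:.2f}; kappa={kappa:.2f}")
```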
|
4 |
Assessing Negative Side Effects in Virtual Environments - McGee, Michael K. 11 February 1998 (has links)
Virtual environment (VE) systems have been touted as exciting new technologies with many varied applications. Today VEs are used in telerobotics, training, simulation, medicine, architecture, and entertainment. The future use of VEs seems limited only by the creativity of their designers. However, as with any developing technology, some difficulties need to be overcome. Certain users of VEs experience negative side effects from being immersed into the graphically rendered virtual worlds. Some side effects that have been observed include: disorientation, headaches, and difficulties with vision. These negative side effects threaten the safety and effectiveness of VE systems.
Negative side effects have been found to develop in a variety of environments. The research focus on VE side effects thus far has been on the symptoms and not the causes. The main goal of this research is fourfold: 1) to compare a new measure for side effects with established ones; 2) to begin analyzing the causes of side effects with an analysis of head-tracking; 3) to examine any adaptation that may occur within a session and between days; and 4) to examine possible predictors for users who may experience side effects.
An experiment was conducted using two different VEs with either head-tracking on or head-tracking off over four days. A questionnaire, a balance test, a vision test, and magnitude estimations of side effects were used to assess the incidence and severity of sickness experienced in the VEs. Other assessments, including a mental rotation test, perceptual style, and a questionnaire on pre-existing susceptibility to motion sickness were administered. All factors were analyzed to determine what their relationships were with the incidence and severity of negative side effects that result from immersion into the VEs.
Results showed that head-tracking induces more negative side effects than no head-tracking. The maze task environment induces more negative side effects than the office task environment. Adaptation did not occur from day to day throughout the four testing sessions. The incidence and severity of negative side effects increased at a constant rate throughout the 30 minute immersive VE sessions, but did not show any significant changes from day to day. No evidence was found for a predictor that would foretell who might be susceptible to motion sickness in VEs. / Master of Science
|
5 |
Latency and Distortion compensation in Augmented Environments using Electromagnetic trackers - Himberg, Henry 17 December 2010 (has links)
Augmented reality (AR) systems are often used to superimpose virtual objects or information on a scene to improve situational awareness. Delays in the display system or inaccurate registration of objects destroy the sense of immersion a user experiences when using AR systems. AC electromagnetic trackers are ideal for these applications when combined with head orientation prediction to compensate for display system delays. Unfortunately, these trackers do not perform well in environments that contain conductive or ferrous materials, because of magnetic field distortion, unless expensive calibration techniques are used. In our work we focus on both the prediction and distortion compensation aspects of this application, developing a “small footprint” predictive filter for display lag compensation and a simplified calibration system for AC magnetic trackers. In the first phase of our study we presented a novel method of tracking angular head velocity from quaternion orientation using an Extended Kalman Filter in both single-model (DQEKF) and multiple-model (MMDQ) implementations. In the second phase of our work we have developed a new method of mapping the magnetic field generated by the tracker without high-precision measurement equipment. This method uses simple fixtures with multiple sensors in a rigid geometry to collect magnetic field data in the tracking volume. We have developed a new algorithm to process the collected data and generate a map of the magnetic field distortion that can be used to compensate distorted measurement data.
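A minimal sketch of the prediction idea described above, not the DQEKF or MMDQ filters themselves: angular velocity is estimated from two successive orientation quaternions by finite differences and the orientation is extrapolated over an assumed display latency. The sampling interval and latency values are assumptions.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def angular_velocity(q_prev, q_curr, dt):
    """Body angular velocity (rad/s) that rotates q_prev into q_curr over dt."""
    dq = quat_mul(quat_conj(q_prev), q_curr)
    dq = dq / np.linalg.norm(dq)
    angle = 2.0 * np.arctan2(np.linalg.norm(dq[1:]), dq[0])
    axis = dq[1:] / (np.linalg.norm(dq[1:]) + 1e-12)
    return axis * angle / dt

def predict(q_curr, omega, latency):
    """Extrapolate orientation by rotating at `omega` for `latency` seconds."""
    angle = np.linalg.norm(omega) * latency
    axis = omega / (np.linalg.norm(omega) + 1e-12)
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    return quat_mul(q_curr, dq)

# Example: head yawing at ~60 deg/s, compensating an assumed 40 ms display lag.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(0.00525), 0.0, np.sin(0.00525), 0.0])  # ~0.6 deg yaw in 10 ms
omega = angular_velocity(q0, q1, dt=0.01)
print(predict(q1, omega, latency=0.04))
```

The printed quaternion corresponds to roughly three degrees of yaw: the measured 0.6 degrees plus about 2.4 degrees of predicted motion over the assumed 40 ms of latency.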
|
6 |
TerraVis: A Stereoscopic Viewer for Interactive Seismic Data Visualization - Stoecker, Justin W 27 April 2011 (has links)
Accurate earthquake prediction is a difficult, unsolved problem that is central to the ambitions of many geoscientists. Understanding why earthquakes occur requires a profound understanding of many interrelated processes; our planet functions as a massive, complex system. Scientific visualization can be applied to such problems to improve understanding and reveal relationships between data. There are several challenges inherent to visualizing seismic data: working with large, high-resolution 3D and 4D data sets in a myriad of formats, integrating and rendering multiple models in the same space, and the need for real-time interactivity and intuitive interfaces. This work describes a product of the collaboration between computer science and geophysics. TerraVis is a real-time system that incorporates advanced visualization techniques for seismic data. The software can process and efficiently render digital elevation models, earthquake catalogs, fault slip distributions, moment tensor solutions, and scalar fields in the same space. In addition, the software takes advantage of stereoscopic viewing and head tracking for immersion and improved depth perception. During reconstruction efforts after the devastating 2010 earthquake in Haiti, TerraVis was demonstrated as a tool for assessing the risk of future earthquakes.
|
7 |
Robust Dynamic Orientation Sensing Using Accelerometers: Model-based Methods for Head Tracking in AR - Keir, Matthew Stuart January 2008 (has links)
Augmented reality (AR) systems that use head mounted displays to overlay synthetic imagery on the user's view of the real world require accurate viewpoint tracking for quality applications. However, achieving accurate registration is one of the most significant unsolved problems within AR systems, particularly during dynamic motions in unprepared environments. As a result, registration error is a major issue hindering the more widespread growth of AR applications.
The main objective for this thesis was to improve dynamic orientation tracking of the head using low-cost inertial sensors. The approach taken within this thesis was to extend the excellent static orientation sensing abilities of accelerometers to a dynamic case by utilising a model of head motion.
Head motion is modelled by an inverted pendulum, initially for one degree of rotational freedom, but later this is extended to a more general two dimensional case by including a translational freedom of the centre of rotation. However, the inverted pendulum model consists of an unstable coupled set of differential equations which cannot be solved by conventional solution approaches.
A unique method is developed which consists of a highly accurate approximate analytical solution to the full nonlinear tangential ODE. The major advantage of the analytical solution is that it allows a separation of the unstable transient part of the solution from the stable solution. The analytical solution is written directly in terms of the unknown initial conditions. Optimal initial conditions are found that remove the unstable transient part completely by utilising the independent radial ODE, leaving only the required orientation.
The methods are validated experimentally with data collected using accelerometers and a physical inverted pendulum apparatus. A range of tests were performed demonstrating the stability of the methods and solution over time and the robust performance to increasing signal frequency, over the range expected for head motion.
The key advantage of this accelerometer model-based method is that the orientation remains registered to the gravitational vector, providing a drift-free solution that outperforms existing state-of-the-art gyroscope-based methods. This proof of concept uses low-cost accelerometer sensors to show significant potential for improving head tracking in dynamic AR environments, such as outdoors.
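For context, the static case that the thesis extends can be sketched in a few lines: when there is no linear acceleration, a tri-axial accelerometer measures only the gravity reaction, and pitch and roll follow directly from the measured vector. The axis convention and function below are illustrative assumptions, not the thesis's dynamic pendulum-model method.

```python
import numpy as np

def static_tilt(accel):
    """Pitch and roll (radians) from a gravity-only accelerometer reading.

    Assumes the sensor is static, so `accel` reflects only gravity in the
    body frame, with axes (x forward, y left, z up). Illustrative convention.
    """
    ax, ay, az = accel / np.linalg.norm(accel)
    pitch = np.arcsin(-ax)            # rotation about the y axis
    roll = np.arctan2(ay, az)         # rotation about the x axis
    return pitch, roll

# Example: a reading consistent with ~20 degrees of forward pitch and no roll.
reading = 9.81 * np.array([-np.sin(np.radians(20.0)), 0.0, np.cos(np.radians(20.0))])
print(np.degrees(static_tilt(reading)))   # approximately [20., 0.]
```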
|
8 |
Semi-Automating Forestry Machines : Motion Planning, System Integration, and Human-Machine Interaction / Delautomatisering av skogsmaskiner : Rörelseplanering, systemintegration och människa-maskin-interaktion - Westerberg, Simon January 2014 (has links)
The process of forest harvesting is highly mechanized in most industrialized countries, with felling and processing of trees performed by technologically advanced forestry machines. However, the maneuvering of the vehicles through the forest as well as the control of the on-board hydraulic boom crane is currently performed through continuous manual operation. This complicates the introduction of further incremental productivity improvements to the machines, as the operator becomes a bottleneck in the process. A suggested solution strategy is to enhance production capacity by increasing the level of automation. At the same time, the working environment for the operator can be improved by a reduced workload, provided that the human-machine interaction is adapted to the new automated functionality. The objectives of this thesis are 1) to describe and analyze the current logging process and to locate areas of improvement that can be implemented in current machines, and 2) to investigate future methods and concepts that possibly require changes in work methods as well as in the machine design and technology. The thesis describes the development and integration of several algorithmic methods and the implementation of corresponding software solutions, adapted to the forestry machine context. Following data recording and analysis of the current work tasks of machine operators, trajectory planning and execution for a specific category of forwarder crane motions have been identified as an important first step for short-term automation. Using the method of path-constrained trajectory planning, automated crane motions were demonstrated to potentially provide a substantial improvement over motions performed by experienced human operators. An extension of this method was developed to automate some selected motions even for existing sensorless machines. Evaluation suggests that this method is feasible for reasonable deviations in initial conditions. Another important aspect of partial automation is the human-machine interaction. For this specific application, a simple and intuitive interaction method for accessing automated crane motions was suggested, based on head tracking of the operator. A preliminary interaction model derived from user experiments yielded promising results for forming the basis of a target selection method, particularly when combined with a traded control strategy. Further, a modular software platform was implemented, integrating several important components into a framework for designing and testing future interaction concepts. Specifically, this system was used to investigate concepts of teleoperation and virtual environment feedback. Results from user tests show that visual information provided by a virtual environment can be advantageous compared to traditional video feedback with regard to both objective and subjective evaluation criteria.
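As a hedged illustration of path-constrained trajectory planning in its simplest form (time-parameterising a fixed geometric path under speed and acceleration limits), the sketch below generates a trapezoidal speed profile along a path parameter; the limits and path length are invented for the example and this is not the planner developed in the thesis.

```python
import numpy as np

def trapezoidal_profile(s_total, v_max, a_max, dt=0.01):
    """Time-parameterise a fixed path of length `s_total` with a trapezoidal
    speed profile limited by `v_max` and `a_max` (illustrative limits)."""
    t_acc = v_max / a_max
    s_acc = 0.5 * a_max * t_acc**2
    if 2 * s_acc > s_total:                 # triangular profile: never reaches v_max
        t_acc = np.sqrt(s_total / a_max)
        v_max = a_max * t_acc
        s_acc = 0.5 * s_total
    t_cruise = (s_total - 2 * s_acc) / v_max
    t_end = 2 * t_acc + t_cruise
    t = np.arange(0.0, t_end + dt, dt)
    s = np.where(t < t_acc, 0.5 * a_max * t**2,
        np.where(t < t_acc + t_cruise, s_acc + v_max * (t - t_acc),
                 s_total - 0.5 * a_max * (t_end - t)**2))
    return t, np.clip(s, 0.0, s_total)

# Example: a 2.4 m boom-tip path with assumed limits of 1.5 m/s and 2.0 m/s^2.
t, s = trapezoidal_profile(2.4, v_max=1.5, a_max=2.0)
print(f"duration {t[-1]:.2f} s, final s = {s[-1]:.2f} m")
```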
|
9 |
Camera based motion estimation and recognition for human-computer interaction - Hannuksela, J. (Jari) 09 December 2008 (has links)
Abstract
Communicating with mobile devices has become an unavoidable part of our daily life. Unfortunately, the current user interface designs are mostly taken directly from desktop computers. This has resulted in devices that are sometimes hard to use. Since more processing power and new sensing technologies are already available, there is a possibility to develop systems to communicate through different modalities. This thesis proposes some novel computer vision approaches, including head tracking, object motion analysis and device ego-motion estimation, to allow efficient interaction with mobile devices.
For head tracking, two new methods have been developed. The first method detects a face region and facial features by employing skin detection, morphology, and a geometrical face model. The second method, designed especially for mobile use, detects the face and eyes using local texture features. In both cases, Kalman filtering is applied to estimate the 3-D pose of the head. Experiments indicate that the methods introduced can be applied on platforms with limited computational resources.
A novel object tracking method is also presented. The idea is to combine Kalman filtering and EM algorithms to track an object, such as a finger, using motion features. This technique is also applicable when some conventional methods such as colour segmentation and background subtraction cannot be used. In addition, a new feature-based camera ego-motion estimation framework is proposed. The method introduced exploits gradient measures for feature selection and feature displacement uncertainty analysis. Experiments with a fixed-point implementation testify to the effectiveness of the approach on a camera-equipped mobile phone.
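A minimal sketch of the filtering step such trackers build on, not the thesis's combined Kalman/EM tracker: a generic constant-velocity Kalman filter for a 2-D image point, with made-up noise parameters and frame rate.

```python
import numpy as np

# Generic constant-velocity Kalman filter for a 2-D image point (x, y, vx, vy).
# Noise levels and time step are made-up values for illustration.
dt = 1.0 / 30.0                                   # assumed frame interval
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)         # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)         # only position is observed
Q = 1e-2 * np.eye(4)                              # process noise
R = 4.0 * np.eye(2)                               # measurement noise (pixels^2)

x = np.array([100.0, 80.0, 0.0, 0.0])             # initial state
P = 100.0 * np.eye(4)                             # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle given a pixel measurement z = (u, v)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                            # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([102.0, 81.0]), np.array([104.5, 82.2]), np.array([107.0, 83.1])]:
    x, P = kalman_step(x, P, z)
print("estimated position and velocity:", np.round(x, 2))
```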
The feasibility of the methods developed is demonstrated in three new mobile interface solutions. One of them estimates the ego-motion of the device with respect to the user's face and utilises that information for browsing large documents or bitmaps on small displays. The second solution is to use device or finger motion to recognize simple gestures. In addition to these applications, a novel interactive system to build document panorama images is presented.
The motion estimation and recognition techniques presented in this thesis have clear potential to become practical means of interacting with mobile devices. In fact, cameras in future mobile devices may, most of the time, be used as sensors for intuitive user interfaces rather than for digital photography.
|
10 |
Spatialisation of Binaural Audio With Head Tracking in First-Person Computer Games - Bergsten, Patrik, Kihan, Mikita January 2023 (has links)
Audio in today's first-person computer games plays a vital role in informing players about their surroundings as well as general gameplay elements. Awareness of the direction, distance, and spatial placement of audio sources can be crucial for players in various contexts. Spatialising audio through stereo panning can pose challenges to players when it comes to accurately localising sound sources in front of, behind, below, or above the player. Binaural audio is another technique for spatialising audio: it simulates how sound arriving from a specific direction interacts with the head and ears before reaching the eardrums, and the resulting signal is then rendered through headphones. While binaural audio attempts to alleviate front-back confusion, its cues are lost when the listener moves their head unless some form of head tracking is incorporated. Hence, this study's research question is: "How is localisation of audio sources in first-person computer games (while wearing headphones) helped by spatialising the soundscape in relation to head movement utilising head tracking technology?" To answer the research question, a prototype game was developed following design science guidelines. The objective was to localise, quickly and accurately, 14 invisible targets emitting sound in a virtual three-storey house. During one half of the test, spatialisation with head movements was inactive, and during the other half it was activated, in order to compare the testers' ability to localise with and without the tool. Which half had spatialisation activated was randomised for each test. The research strategy consisted of two types of experiments, a blind experiment with 20 participants and an open experiment with five testers, conducted to measure and evaluate the participants' head movement and their performance in terms of accuracy and time when localising the targets by shooting them. For data collection this study used a mixed-methods approach that included questionnaires with closed questions and semi-structured interviews. Data about the testers' performance was automatically logged during the test. The results from the first, blind experiment showed little head movement and no significant impact of the spatialisation on localisation performance. Consequently, an open follow-up experiment was performed to discover whether the blind experiment design had affected the results. The results demonstrated a higher degree of head movement but corroborated the first test in showing no substantial effect on the testers' accuracy or time when localising the targets. In summary, no positive or negative impact could be found on the localisation of audio sources in first-person computer games (while wearing headphones) when spatialising the soundscape in relation to head movements by utilising head tracking technology. Additionally, some participants found the tool unfit for the genre that the prototype resembled and suggested that spatialisation of audio with head tracking could be better suited to other genres. This could serve as material for future research on the use of head tracking for spatialisation of audio in computer games.
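A minimal sketch of the head-tracking step described above (compensating the world-frame source direction by the tracked head yaw before computing binaural cues), using a simplified Woodworth-style interaural time difference and a crude level difference; real renderers use measured HRTFs, and all constants and names here are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0      # m/s
HEAD_RADIUS = 0.0875        # m, assumed average head radius

def head_relative_azimuth(source_az_deg, head_yaw_deg):
    """Azimuth of the source in the listener's head frame, in degrees.

    With head tracking, the world-frame source azimuth is compensated by the
    tracked head yaw before binaural cues are computed; without tracking the
    cues would stay glued to the head as it turns.
    """
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def simple_binaural_cues(azimuth_deg):
    """Very rough interaural time and level differences for one azimuth."""
    az = np.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))   # Woodworth model
    ild_db = 6.0 * np.sin(az)                                  # crude, assumed shape
    return itd, ild_db

# Example: a source 30 degrees to the right; the player then turns 30 degrees right.
for yaw in (0.0, 30.0):
    itd, ild = simple_binaural_cues(head_relative_azimuth(30.0, yaw))
    print(f"head yaw {yaw:5.1f} deg -> ITD {itd*1e6:7.1f} us, ILD {ild:5.2f} dB")
```

Once the head faces the source, both cues collapse toward zero, which is the re-anchoring of the soundscape to the world that head tracking provides.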
|