About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Evaluating Cultural Learning in Virtual Environments

Champion, E. M. Unknown Date (has links)
No description available.
2

Task Performance with Space-time Cube Visualizations: Differences Between HoloLens and Desktop Users

Michael Saenz (5930819) 16 January 2019 (has links)
The researcher’s intent in this study was to understand users’ performance, in terms of time, error, and workload, under different display conditions while manipulating a space-time cube visualization. A convergent mixed-methods design was applied so the researcher could better understand the research problems. In the study, time, error, and perceived workload were measured to determine whether a display condition had a positive or negative influence on users’ ability to perform a task. The qualitative data explored the differences in users’ experiences with the HoloLens and the desktop display.
3

Surface fitting for the modeling of plant leaves

Loch, B. Unknown Date (has links)
No description available.
6

Effective User Guidance through Augmented Reality Interfaces: Advances and Applications

Daniel S Andersen (8755488) 24 April 2020 (has links)
Computer visualization can effectively deliver instructions to a user whose task requires understanding of a real-world scene. Consider the example of surgical telementoring, where a general surgeon performs an emergency surgery under the guidance of a remote mentor. The mentor guidance includes annotations of the operating field, which conventionally are displayed to the surgeon on a nearby monitor. However, this conventional visualization of mentor guidance requires the surgeon to look back and forth between the monitor and the operating field, which can lead to cognitive load, delays, or even medical errors. Another example is 3D acquisition of a real-world scene, where an operator must acquire multiple images of the scene from specific viewpoints to ensure appropriate scene coverage and thus achieve quality 3D reconstruction. The conventional approach is for the operator to plan the acquisition locations using conventional visualization tools, and then to try to execute the plan from memory, or with the help of a static map. Such approaches lead to incomplete coverage during acquisition, resulting in an inaccurate reconstruction of the 3D scene which can only be addressed at the high and sometimes prohibitive cost of repeating acquisition.

Augmented reality (AR) promises to overcome the limitations of conventional out-of-context visualization of real-world scenes by delivering visual guidance directly into the user's field of view, guidance that remains in-context throughout the completion of the task. In this thesis, we propose and validate several AR visual interfaces that provide effective visual guidance for task completion in the context of surgical telementoring and 3D scene acquisition.

A first AR interface provides a mentee surgeon with visual guidance from a remote mentor using a simulated transparent display. A computer tablet suspended above the patient captures the operating field with its on-board video camera; the live video is sent to the mentor, who annotates it, and the annotations are sent back to the mentee, where they are displayed on the tablet, integrating the mentor-created annotations directly into the mentee's view of the operating field. We show through user studies that surgical task performance improves when using the AR surgical telementoring interface compared to the conventional visualization of the annotated operating field on a nearby monitor.

A second AR surgical telementoring interface provides the mentee surgeon with visual guidance through an AR head-mounted display (AR HMD). We validate this approach in user studies with medical professionals in the context of practice cricothyrotomy and lower-limb fasciotomy procedures, and show improved performance over conventional surgical guidance. A comparison between our simulated transparent display and our AR HMD surgical telementoring interfaces reveals that the HMD has the advantages of reduced workspace encumbrance and of correct depth perception of annotations, whereas the transparent display has the advantages of reduced surgeon head and neck encumbrance and of annotation visualization quality.

A third AR interface provides operator guidance for effective image-based modeling and rendering of real-world scenes. During the modeling phase, the AR interface builds and dynamically updates a map of the scene that is displayed to the user through an AR HMD, which leads to the efficient acquisition of a five-degree-of-freedom image-based model of large, complex indoor environments. During rendering, the interface guides the user towards the highest-density parts of the image-based model, which result in the highest output image quality. We show through a study that first-time users of our interface can acquire a quality image-based model of a 13 m × 10 m indoor environment in 7 minutes.

A fourth AR interface provides operator guidance for effective capture of a 3D scene in the context of photogrammetric reconstruction. The interface relies on an AR HMD with a tracked hand-held camera rig to construct a sufficient set of six-degrees-of-freedom camera acquisition poses and then to steer the user to align the camera with the prescribed poses quickly and accurately. We show through a study that first-time users of our interface are significantly more likely to achieve complete 3D reconstructions compared to conventional freehand acquisition. We then investigated the design space of AR HMD interfaces for mid-air pose alignment with an added ergonomics concern, which resulted in five candidate interfaces that sample this design space. A user study identified the aspects of AR interface design that influence ergonomics during extended use, informing AR HMD interface design for the important task of mid-air pose alignment.
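At its core, steering a user toward a prescribed six-degrees-of-freedom pose requires continuously measuring how far the current camera pose is from the target. A minimal sketch of such a pose-error computation — not the thesis's actual implementation, and with illustrative tolerance values — might look like this:

```python
import numpy as np

def pose_error(pos_a, quat_a, pos_b, quat_b):
    """Positional (metres) and angular (radians) error between two 6-DoF poses.

    Quaternions are unit (w, x, y, z); the angular error is the rotation angle
    of the relative orientation, 2*arccos(|<qa, qb>|), where the absolute value
    handles the quaternion double-cover sign ambiguity.
    """
    positional = float(np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b)))
    dot = abs(float(np.dot(quat_a, quat_b)))
    angular = 2.0 * float(np.arccos(np.clip(dot, -1.0, 1.0)))
    return positional, angular

# Identical orientations, 5 cm apart: a guidance interface could accept the
# alignment once both errors fall below tolerances (e.g. 2 cm and 5 degrees).
identity = np.array([1.0, 0.0, 0.0, 0.0])
dist, angle = pose_error([0, 0, 0], identity, [0.05, 0, 0], identity)
print(dist, angle)  # 0.05 0.0
```

An interface would typically render these two error terms separately (e.g. a positional arrow and a rotational hint), since users correct translation and rotation with different motions.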
7

Interactive 3-D Modeling in Virtual Reality

Darius L. Bigbee (5930549) 15 May 2019 (has links)
Many applications have been developed for Virtual Reality (VR) during the new wave of VR technology. These new technologies make it possible to create 3D meshes in a virtual environment in real time. However, the usability of VR as a modeling tool is still a new area of research. This study created a VR 3D modeling tool that provides users with the means to interactively generate and edit 3D meshes in real time and teaches them how to create 3D models. The study had two groups of participants: one group used Autodesk Maya, and the other used the VR modeling tool. All participants were from Purdue University, and all data were collected at the Polytechnic Institute. Both groups were given the task of creating a teacup, and the time taken to complete it was recorded. The VR tool was evaluated with the System Usability Scale (SUS). The participants provided feedback and rated how difficult the application was to use. By the SUS, the application did not meet the industry-standard average score of 68; however, further analysis of users' responses revealed many areas in which to improve the application. Recommendations for future research include implementing multi-selection, an undo and redo feature, and improvements to how the user interacts with the 3D meshes.
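The SUS evaluation mentioned above follows a standard scoring scheme: each of the ten 1–5 Likert responses is converted to a 0–4 contribution (odd items score response − 1, even items 5 − response) and the sum is scaled to a 0–100 score, with 68 the commonly cited benchmark average. A minimal sketch of that scoring, independent of this study's actual data:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positive statements) contribute (response - 1);
    even-numbered items (negative statements) contribute (5 - response).
    The summed 0-40 total is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A neutral response to every item (all 3s) yields a score of 50,
# well below the benchmark average of 68 referenced in the study.
print(sus_score([3] * 10))  # 50.0
```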
8

A USER-SPECIFIC APPROACH TO DEVELOP AN ADAPTIVE VR EXERGAME FOR INDIVIDUALS WITH SCI

Shanmugam Muruga Palaniappan (6858902) 15 August 2019 (has links)
Patients with Spinal Cord Injury (SCI) have limited time with supervised therapy in rehabilitation hospitals. This makes it imperative for them to continue regular therapy at home so they can maximize motor recovery, especially for performing Activities of Daily Living (ADL). However, physical therapy can be tedious and frustrating, leading to a lack of motivation. A novel upper-extremity movement measurement tool was developed using a commercial VR system to rapidly and objectively measure an individual's range of motion, velocity of movement on a per-gesture basis, and frequency of movements in three-dimensional space. Further, an exergame with varied and customizable gameplay parameters was developed. Through analysis of participant interaction with the exergame, we identified gameplay parameters that can be adjusted to affect the player's perceived and physiological effort. We observed that VR has a significant motivational effect on the range of motion of the upper limbs in individuals with tetraplegia. The motion data and kernel density estimation were used to determine areas of comfort. Moreover, the system allowed calculation of joint torques through inverse kinematics and dynamics, serving as an analysis tool to gauge muscular effort. The system can provide an improved rehabilitation experience for persons with tetraplegia in home settings while allowing oversight by clinical therapists through analysis of mixed-reality videos, or it could be used as a supplement or alternative to conventional therapy.
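The "areas of comfort" analysis described above rests on kernel density estimation over tracked motion samples: regions where the hand dwells often are dense, fringe reaches are sparse. A minimal sketch of that idea using SciPy's `gaussian_kde` — the sample data, coordinates, and thresholding approach are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated 3D hand-position samples (in metres), standing in for
# tracked upper-limb motion data from the VR system.
rng = np.random.default_rng(0)
points = rng.normal(loc=[0.0, 1.2, 0.4], scale=0.15, size=(500, 3))

# gaussian_kde expects data with shape (n_dims, n_samples).
kde = gaussian_kde(points.T)

# Density at the centre of the movement cloud vs. at its fringe: thresholding
# this density could delimit a "comfort zone" for placing gameplay targets.
centre = kde([[0.0], [1.2], [0.4]])[0]
fringe = kde([[0.6], [1.8], [1.0]])[0]
print(centre > fringe)
```

Targets placed just outside the estimated comfort zone would then encourage range-of-motion exercise without demanding movements the player cannot perform.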
9

A COMPARISON OF 3D SHAPE RECOGNITION IN COMPUTER AIDED DESIGN BETWEEN VIRTUAL REALITY AND CONVENTIONAL TWO DIMENSIONAL DISPLAYS

Syed Faaiz Hussain (8797649) 05 May 2020 (has links)
With the recent development of Virtual Reality technology, researchers are looking into changing the way Virtual Reality is used in our daily lives in order to increase our productivity. One such application is the mapping of 3D spatial graphics in Computer Aided Design (CAD) engineering, where practitioners have historically worked on 3D models in a two-dimensional environment. Researchers in computer graphics have proposed Virtual Reality as a more effective medium for CAD packages. This thesis carries out a user study to test whether 3D VR environments are more effective than two-dimensional displays, such as computer screens, at relaying information to users, by examining how users navigate and interact with complex CAD objects in the two environments. The two environments use stereoscopic and monoscopic vision, respectively, in order to compare the efficiency with which volunteers are able to notice subtle differences in objects. The motivation for this study stems from the fact that CAD in VR is a largely underdeveloped topic, and the results of such a study could form a baseline and advocate for further research and development in this domain. The research question being addressed is: "Does CAD in a three-dimensional (stereoscopic) Virtual Reality environment allow for better understanding of the shapes of complex assemblies compared to CAD on two-dimensional (monoscopic) computer screens?" The findings of this study suggest that, beyond the display technique alone, the kind of movements the objects undergo also contributes to the way users perceive objects in 3D versus 2D spaces, and they uncover a set of directions recommended for similar studies in the future.
10

The God-like Interaction Framework: tools and techniques for communicating in mixed-space collaboration

Stafford, Aaron January 2008 (has links)
This dissertation presents the god-like interaction framework, consisting of tools and techniques for remote communication of situational and navigational information. The framework aims to facilitate intuitive and effective communication between a group of experts and remote field workers in the context of military, fire-fighting, and search-and-rescue operations.
