161 |
Augmentation In Visual Reality (AVR). Zhang, Yunjun, 01 January 2007 (has links)
Human eyes, as the organs for sensing light and processing visual information, enable us to see the real world. Though invaluable, they give us no way to "edit" the received visual stream or to "switch" to a different channel. The invention of motion pictures and computer technologies in the last century lets us add an extra layer of modification between the real world and our eyes. We consider two major approaches to such modification: offline augmentation and online augmentation. The movie industry has pushed offline augmentation to an extreme; audiences can experience visual surprises they have never seen in real life, even though producing the special visual effects may take months or years. Online augmentation, on the other hand, requires that modifications be performed in real time. This dissertation addresses problems in both offline and online augmentation.

The first offline problem addressed here is the generation of plausible video sequences after removing relatively large objects from the original videos. To maintain temporal coherence among the frames, a motion layer segmentation method is applied. From this, a set of synthesized layers is generated by applying motion compensation and a region completion algorithm. Finally, a plausibly realistic new video, in which the selected object is removed, is rendered from the synthesized layers and the motion parameters.

The second problem we address is the construction of a blue screen key for video synthesis or blending in Mixed Reality (MR) applications. As a well-researched area, blue screen keying extracts a range of colors, typically in the blue spectrum, from a captured video sequence to enable the compositing of multiple image sources. Under ideal conditions with uniform lighting and background color, a high-quality key can be generated by commercial products, even in real time. However, a Mixed Reality application typically involves a head-mounted display (HMD) with poor camera quality, which requires the keying algorithm to be robust in the presence of noise. We present a three-stage keying algorithm that reduces noise in the key output. First, a standard blue screen keying algorithm is applied to the input to obtain a noisy key; second, image gradient information and the corresponding region are compared with the result of the first stage to remove noise in the blue screen area; and finally, a matting approach is applied to the boundary of the key to improve its quality.

Another offline problem we address in this dissertation is the acquisition of the correct transformations between the different coordinate frames in a Mixed Reality (MR) application. An MR system typically includes at least one tracking system, so the 3D coordinate frames to consider include those of the cameras, the tracker, the tracking system, and the world. Accurately deriving the transformation between the head-mounted display camera and the affixed 6-DOF tracker is critical for mixed reality applications. This transformation brings the HMD cameras into the tracking coordinate frame, which in turn overlaps with a virtual coordinate frame to create a plausible mixed visual experience. We apply a non-linear optimization method that recovers the camera-tracker transformation by minimizing image reprojection error.

For online applications, we address the problem of extending the luminance range in mixed reality environments. We achieve this by introducing Enhanced Dynamic Range Video, a technique based on differing brightness settings for each eye of a video see-through head-mounted display (HMD). We first construct a Video-Driven Time-Stamped Ball Cloud (VDTSBC), which serves as a guideline and a means to store temporal color information for stereo image registration. With the assistance of the VDTSBC, we register each pair of stereo images, taking into account confounding issues of occlusion occurring within one eye but not the other. Finally, we apply luminance enhancement to the registered image pairs to generate an Enhanced Dynamic Range Video.
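The three-stage keying pipeline in the abstract above can be sketched roughly as follows. This is a hypothetical NumPy reconstruction, not the dissertation's implementation: the chroma-ratio threshold, the gradient test for flat backdrop regions, and the neighborhood averaging used as a stand-in for boundary matting are all illustrative assumptions.

```python
import numpy as np

def three_stage_key(img, blue_ratio=1.3, grad_thresh=0.1):
    """Rough sketch of a noise-robust blue screen key.

    img: float RGB image in [0, 1], shape (H, W, 3).
    Returns a matte in [0, 1]: 1 = blue backdrop (to be replaced),
    0 = foreground. All thresholds are illustrative assumptions.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Stage 1: naive chroma key -- a pixel is backdrop if blue dominates.
    key = (b > blue_ratio * np.maximum(r, g)).astype(float)
    # Stage 2: use image-gradient information to clean the backdrop
    # region: bright, low-gradient pixels are assumed to be flat blue
    # screen, so any speckle there is forced back to "backdrop".
    gy, gx = np.gradient(b)
    flat = np.hypot(gx, gy) < grad_thresh
    key[flat & (b >= np.median(b))] = 1.0
    # Stage 3: soften the key boundary (a crude stand-in for matting)
    # by averaging each pixel with its 4-neighborhood.
    pad = np.pad(key, 1, mode="edge")
    key = (pad[1:-1, 1:-1] + pad[:-2, 1:-1] + pad[2:, 1:-1]
           + pad[1:-1, :-2] + pad[1:-1, 2:]) / 5.0
    return key
```

A real system would replace stage 3 with a proper matting solver on the key boundary, but the structure (coarse key, gradient-guided cleanup, boundary refinement) mirrors the three stages described.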
|
162 |
Developing an Augmented Reality Visual Clutter Score Through Establishing the Applicability of Image Analysis Measures of Clutter and the Analysis of Augmented Reality User Interface Properties. Flittner, Jonathan Garth, 05 September 2023 (has links)
Augmented reality (AR) is seeing rapid expansion into several domains due to the proliferation of more accessible and powerful hardware. While augmented reality user interfaces (AR UIs) allow the presentation of information atop the real world, this extra visual data potentially comes at the cost of increasing the visual clutter of the user's field of view, which can increase visual search time and error rates and have an overall negative effect on performance. Visual clutter has been studied for existing display technologies, but there are no established measures of visual clutter for AR UIs, which precludes the study of the effects of clutter on performance in AR UIs. The first objective of this research is to determine the applicability of the extant image analysis measures of feature congestion, edge density, and sub-band entropy for measuring visual clutter in the head-worn optical see-through AR space, and to establish a relationship between image analysis measures of clutter and visual search time. These image analysis measures are chosen specifically because they can be applied to the complex and naturalistic scenes commonly experienced while using an optical see-through AR UI. The second objective is to examine the effects of AR UIs comprised of multiple apparent depths on user performance through the metric of visual search time. The third objective is to determine the effects of other AR UI properties, such as target clutter, target eccentricity, target apparent depth, and target total distance, on performance as measured through visual search time. These results will then be used to develop a visual clutter score, which will rate different AR UIs against each other.
Image analysis measures of clutter (feature congestion, edge density, and sub-band entropy) were correlated to visual search time both when taken for the overall AR UI and when taken for the target object a participant was searching for. In the case of an AR UI comprised of both projected and AR parts, image analysis measures were not correlated to visual search time for the constituent AR UI parts (projected or AR) but were still correlated to the overall AR UI clutter. Target eccentricity also had an effect on visual search time, while target apparent depth and target total distance from center did not. Target type and AR object percentage also had an effect on visual search time. These results were synthesized via multiple regression into a general model known as the "AR UI Visual Clutter Score Algorithm". This model can be used to compare different AR UIs to each other in order to identify the AR UI that is projected to have lower target visual search times. / Doctor of Philosophy / Augmented reality is a novel but growing technology. The ability to project visual information into the real world comes with many benefits, but at the cost of increasing visual clutter. Visual clutter in existing displays has been shown to negatively affect visual search time, error rates, and general performance, but there are no established measures of visual clutter for augmented reality displays, so it is unknown whether visual clutter will have the same effects. The first objective of this research is to establish measures of visual clutter for augmented reality displays. The second objective is to better understand the unique properties of augmented reality displays and how they may affect ease of use.
Measures of visual clutter were correlated to visual search time both when they were taken for the whole augmented reality user interface and when they were taken for the target object within it that a participant was searching for. It was also found that visual search time increased as targets got farther from the center of the field of view, while the depth of a target from the user and the total distance of a target from the user had no effect. Study 1 also showed that target type and AR object percentage had an effect on visual search time. Combining these results gives a model that can be used to compare different augmented reality user interfaces to each other.
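The "visual clutter score" above is a multiple regression over the measured predictors. A minimal sketch of that recipe in NumPy, assuming ordinary least squares: the predictor names come from the abstract, but the data, coefficients, and noise model below are synthetic stand-ins, not the dissertation's fitted model.

```python
import numpy as np

# Hypothetical illustration of the regression recipe only: predictors are
# named after the abstract's measures; the data are synthetic.
rng = np.random.default_rng(0)
n = 500
feature_congestion = rng.uniform(0.0, 1.0, n)
edge_density = rng.uniform(0.0, 1.0, n)
subband_entropy = rng.uniform(0.0, 1.0, n)
eccentricity = rng.uniform(0.0, 30.0, n)   # target angle from center, degrees

# Assume a linear ground truth plus noise to generate "search times".
search_time = (1.0 + 2.0 * feature_congestion + 1.5 * edge_density
               + 0.8 * subband_entropy + 0.05 * eccentricity
               + rng.normal(0.0, 0.05, n))

# Ordinary least squares: stack an intercept column and solve.
X = np.column_stack([np.ones(n), feature_congestion, edge_density,
                     subband_entropy, eccentricity])
coef, *_ = np.linalg.lstsq(X, search_time, rcond=None)

def clutter_score(fc, ed, se, ecc):
    """Predicted visual search time for a candidate AR UI (lower is better)."""
    return float(coef @ np.array([1.0, fc, ed, se, ecc]))
```

Once fitted on real trial data, such a model lets two candidate AR UIs be ranked by their predicted search times, which is how the abstract describes the score being used.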
|
163 |
Egocentric Depth Perception in Optical See-Through Augmented Reality. Jones, James Adam, 11 August 2007 (has links)
Augmented Reality (AR) is a method of mixing computer-generated graphics with real-world environments. In AR, observers retain the ability to see their physical surroundings while additional (augmented) information is depicted as simulated graphical objects matched to the real-world view. In the following experiments, optical see-through head-mounted displays (HMDs) were used to present observers with both Augmented and Virtual Reality environments. Observers were presented with varied real, virtual, and combined stimuli with and without the addition of motion parallax. The apparent locations of the stimuli were then measured using quantitative methods of egocentric depth judgment. The data collected from these experiments were then used to determine how observers perceived egocentric depth with respect to both real-world and virtual objects.
|
164 |
Layered Space: Toward an Architecture of Superimposition. Sambuco, Adam J., 24 September 2018 (has links)
No description available.
|
165 |
Precueing Manual Tasks in Augmented and Virtual Reality. Liu, Jen-Shuo, January 2024
Work on Virtual Reality (VR) and Augmented Reality (AR) task interaction and visualization paradigms has typically focused on providing information about the current task step (a cue) immediately before or during its performance. For sequential tasks that involve multiple steps, providing information about the next step (a precue) might also benefit the user. Some research has shown the advantages of simultaneously providing a cue and a precue in path-following tasks. We explore the use of precues in VR and AR for both path-following and object-manipulation tasks involving rotation. We address the effectiveness of different numbers and kinds of precues for different tasks. To achieve this, we conducted a series of user studies:
First, we investigate whether it would be possible to improve efficiency by precueing information about multiple upcoming steps before completing the current step in a planar path-following task. To accomplish this, we developed a VR user study comparing task completion time and subjective metrics for different levels and styles of precueing. Our task-guidance visualizations vary the precueing level (number of steps precued in advance) and style (whether the path to a target is communicated through a line to the target, and whether the place of a target is communicated through graphics at the target). Participants in our study performed best when given two to three precues for visualizations using lines to show the path to targets. However, performance degraded when four precues were used. On the other hand, participants performed best with only one precue for visualizations without lines, showing only the places of targets, and performance degraded when a second precue was given. In addition, participants performed better using visualizations with lines than ones without lines.
Second, we extend the idea of precueing information about multiple steps to a more complex task, whose subtasks involve moving to and picking up a physical object, moving that object to a designated place in the same plane while rotating it to a specific angle in the plane, and depositing it. We conducted two user studies to examine how people accomplish this task while wearing an AR headset, guided by different visualizations that cue and precue movement and rotation. Participants performed best when given movement information for two successive subtasks (one movement precue) and rotation information for a single subtask (no rotation precue). In addition, participants performed best when the visualization of how much to rotate was split across the manipulated object and its destination.
Third, we investigate whether and how much precued rotation information might improve user performance in AR. We consider two unimanual tasks: one requires a participant to make sequential rotations of a single physical object in a plane, and the other requires the participant to move their hand between multiple such objects to rotate them in the plane in sequence. We conducted a user study to explore these two tasks using circular arrows to communicate rotation. In the single-object task, we examined the impact of number of precues and visualization style on participant performance. Results show that precues could improve performance and that arrows with highlighted heads and tails, with each rotation destination aligned with the next origin, yielded the shortest completion time on average. In the multiple-object task, we explored whether rotation precues can be helpful in conjunction with movement precues. Here, using a rotation cue without rotation precues in conjunction with a movement cue and movement precues performed the best, implying that rotation precues were not helpful when movement was also required.
Fourth, we address sequential tasks involving 3DoF rotations and 3DoF translations in headset AR. In each step, a participant picks up a physical object, rotates it in 3D while translating it in 3D, and deposits it in a target 6DoF pose. We designed and compared two types of visualizations for cueing and precueing steps in such a task: Action-based visualizations show the actions needed to carry out a step and goal-based visualizations show the desired end state of a step. We conducted a user study to evaluate these visualizations and their efficacy for precueing. Participants performed better with goal-based visualizations than with action-based visualizations, and most effectively with goal-based visualizations aligned with the Euler axis. However, only a few of our participants benefited from precues, possibly because of the cognitive load of 3D rotations.
In summary, we showed that using precueing can improve the speed at which participants perform different types of tasks. In our VR path-following task, participants were able to benefit from two to three precues using lines to show the path to targets. In our object-manipulation task with 2DoF movement and 1DoF rotation, participants performed best when given movement information for two successive subtasks and rotation information for a single subtask. Further, in our later study focusing on rotation, we found that participants were able to use rotation precues in our single-object task, while in the multiple-object task, rotation precues were not beneficial to participants. Finally, in a study on a sequential 6DoF task, participants performed better with goal-based visualizations than with action-based visualizations.
|
166 |
Exploring the Benefits of the Integration of XR and BIM for Retrofitting Projects. Sermarini, John, 01 January 2024 (has links) (PDF)
Rapidly changing population dynamics and increased energy needs have reduced demand for building renovation in favor of more wasteful complete demolition and reconstruction. This dissertation aims to enhance the accessibility and ease of use of challenging retrofitting methodologies in order to mitigate the adverse effects of urbanization, increasing resource use, and aging building stock within the United States. Retrofitting is a process focused on upgrading a component or feature of a structure that was not initially constructed or manufactured, and it is often done to modernize, restore, or repurpose a structure. These renovations are difficult and costly to plan and implement, which frequently leads them to be eschewed in favor of complete reconstruction. This research proposes a solution: integrating Extended Reality (XR) technology and Building Information Modeling (BIM) data into the retrofitting workflow. Individually and together, these technologies have been applied to construction work with great success, although their use has previously been predominantly confined to new construction. We present this concept applied to three retrofitting subprocesses: design, implementation training, and model building. For each component, a human-subject study evaluates the system's effectiveness in improving the efficiency and accessibility of this technology in this new context. We found that when applied to design review, technological limitations of existing XR systems may limit their ability to distinguish themselves from conventional means, but future system designs should place greater emphasis on eye movement, depending on environmental factors. In implementation training, these systems can effectively improve the identification of relevant building components while reducing physical and cognitive demands. Investigation into augmenting human-robot collaboration is still ongoing, but early results indicate great potential for improving control and ease of use when performing the tasks later needed to create building models for guiding retrofitting projects. This dissertation provides a foundation for XR-BIM technology applied to retrofitting and, with it, a positive outlook and recommendations for related future work.
|
167 |
The complex strategy : epistemology at the edge of chaos. Hartley, Adrian, January 1999 (has links)
No description available.
|
168 |
Virtual environments for science education : a schools-based development. Crosier, Joanna, January 2000 (has links)
No description available.
|
169 |
THE OCULUS RIFT'S EFFECTS ON IMMERSION SURROUNDING MORAL CHOICE : A study of modern VR technology and its effects on a user's spatial immersion in a virtual environment / OCULUS RIFTS PÅVERKAN PÅ IMMERSIONEN KRING MORALISKA VAL : En studie kring modern VR-teknologi och dess påverkan på en användares spatiala immersion i en virtuell miljö. Pereswetoff-Morath, Alexander, January 2014 (has links)
This report examines VR and the effects the VR technology Oculus Rift may or may not have on the different kinds of immersion possible in virtual environments, or games. The report is based on the premise that modern games have evolved into more story-based adventures with better graphics, often with moral choice as gameplay, and on theories regarding new mediums and the dangers of not fully understanding them. It was done in cooperation with a research team at Högskolan i Skövde focused on moral dilemmas, using a virtual environment to test this combined effort. The game engine Unity is used to create a realistic environment and, together with the Oculus Rift, to test what kinds of effects the VR technology has on users. Twenty test participants shared their experiences, and the majority, independent of gaming experience, reported a positive effect.
|
170 |
Cosmology, Extraterrestrial Life, and the Development and Character of Western European Thought in the Seventeenth and Eighteenth Centuries. Simpson, Emily, 08 1900 (has links)
Cosmology, as an all-encompassing theoretical construction of universal reality, serves as one of the best indicators for a variety of philosophical, scientific, and cultural values. Within any cosmological system, the question of extraterrestrial life is an important element. Mere existence or nonexistence, however, only exposes a small portion of the ideological significance behind the contemplation of life outside of earth. The manners by which both believers and disbelievers justify their opinions and the ways they characterize other worlds and their inhabitants show much more about the particular ideas behind such decisions and the general climate of thought surrounding those who consider the topic. By exploring both physical and abstract structures of the universe, and specifically concepts on the plurality of worlds and extraterrestrial life, Western European thought in the seventeenth and eighteenth centuries reveals itself not as an era of pure advancement and modernization, but as a time of both tradition and change.
|