261 |
The Real Blurred Lines: On Liminality in Horror and the Threatened Boundary Between the Real and the Imagined
West, Brandon Charles, 21 June 2017
The horror genre is obsessed with being treated as fact rather than fiction. From movies that plaster their title screens with "Based on actual events" to urban legends that happened to a friend of a friend, the horror genre thrives on being treated as fact even when it is more often fiction. Yet horror does more than claim verisimilitude. Whereas some stories are content to pass as reality, other stories question whether a boundary between fiction and reality even exists. They give us monsters that become real when their names are spoken (Tales from the Darkside) and generally undermine the boundaries we take for granted. Wes Craven's New Nightmare, for instance, shows a malevolent being forcibly blending the characters' reality with the fiction they themselves created. But why are scary stories concerned with seeming real and undermining our notions of reality? To answer this, I draw on various horror films and philosophical and psychological notions of the self and reality. Ultimately, I argue, horror is a didactic genre obsessed with showing us reality as it is, not as we wish it to be. Horror confronts us not only with our mortality (as in slasher films) but also with the truth that fiction and reality are not the easily divided categories we often take them to be. / Master of Arts
|
262 |
Enhanced Avatar Control Using Neural Networks
Amin, H.; Earnshaw, Rae A., January 1999
This paper presents a method for generating realistic avatar movements using a limited number of sensors. An inverse kinematics algorithm, SHAKF, is used to configure an articulated skeletal model, and a neural network is employed to predict the movement of joints not bearing sensors. The results show that the neural network gives a very close approximation to the actual rotation of the joints. This allows a substantial reduction in the number of sensors needed to configure an articulated human skeletal model.
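The prediction step can be illustrated with a deliberately simplified sketch: a linear least-squares model stands in for the paper's neural network (whose architecture is not given here) and learns to estimate an unsensed joint's rotation from the rotations of sensed joints. The joint names, the linear relationship, and all data below are synthetic assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: rotations (degrees) of two sensed joints
# (shoulder, wrist) and the unsensed joint (elbow) to be predicted.
shoulder = rng.uniform(0, 90, size=200)
wrist = rng.uniform(0, 60, size=200)
# Assume the elbow angle is roughly a combination of the two, plus noise.
elbow = 0.7 * shoulder + 0.4 * wrist + rng.normal(0, 1.0, size=200)

# Fit a linear predictor (least squares) -- a stand-in for the network.
X = np.column_stack([shoulder, wrist, np.ones_like(shoulder)])
coeffs, *_ = np.linalg.lstsq(X, elbow, rcond=None)

# Predict the elbow rotation for a new pose from sensor readings alone.
pred = float(np.array([45.0, 30.0, 1.0]) @ coeffs)
print(round(pred, 1))  # close to 0.7*45 + 0.4*30 = 43.5
```

In the actual system a trained network would replace the linear fit, but the interface is the same: sensed joint rotations in, unsensed joint rotations out.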
|
263 |
The effects of motion on performance, presence, and sickness in a virtual environment
Lanham, D. Susan, 01 April 2000
No description available.
|
264 |
Recovery from virtual environment exposure: assessment methods, expected time-course of symptoms and potential readaptation mechanisms
Champney, Roberto K., 01 April 2003
No description available.
|
265 |
Design and Perception of Diverse Virtual Avatars in Immersive Environments
Do, Tiffany D., 01 January 2024
Virtual humans and avatars have become integral components of immersive technologies, shaping various aspects of virtual worlds. This dissertation investigates the nuanced interactions between user demographics and virtual avatars, exploring their impact on perception, embodiment, and persuasive communication. The first study examines the effectiveness of different speech fidelity levels in virtual humans, revealing gender-dependent perceptions of trustworthiness. Building on this, the dissertation introduces the Virtual Avatar Library for Inclusion and Diversity (VALID), offering a comprehensive resource for advancing racial diversity and inclusion in virtual environments. Through rigorous validation studies, we shed light on the perception of avatar characteristics worldwide, emphasizing the importance of accurately representing diverse demographics. The dissertation also examines the influence of matching avatar demographics to user demographics on the sense of embodiment in virtual reality. We found significant effects of matched ethnicity and gender on the sense of embodiment, revealing the importance of providing diverse representations for users in applications and experiments. Furthermore, we explored the intricate interactions between user demographics and avatar matching effects, revealing how some demographics may be disproportionately affected by unmatched avatars. Through a diverse array of experiments and validation studies, this dissertation provides valuable insights into the design, perception, and implications of virtual representations in immersive technologies and paves the way for more inclusive and effective virtual environments.
|
266 |
Shared reality in romantic relationships promotes meaning in life by reducing uncertainty
Enestrom, M. Catalina, January 2023
No description available.
|
267 |
Supporting Multi-User Interaction in Co-Located and Remote Augmented Reality by Improving Reference Performance and Decreasing Physical Interference
Oda, Ohan, January 2016
One of the most fundamental components of our daily lives is social interaction, ranging from simple activities, such as purchasing a donut in a bakery on the way to work, to complex ones, such as instructing a remote colleague how to repair a broken automobile. While we interact with others, various challenges may arise, such as miscommunication or physical interference. In a bakery, a clerk may misunderstand the donut at which a customer was pointing due to the uncertainty of their finger direction. In a repair task, a technician may remove the wrong bolt and accidentally hit another user while replacing broken parts due to unclear instructions and lack of attention while communicating with a remote advisor.
This dissertation explores techniques for supporting multi-user 3D interaction in augmented reality in a way that addresses these challenges. Augmented Reality (AR) refers to interactively overlaying geometrically registered virtual media on the real world. In particular, we address how an AR system can use overlaid graphics to assist users in referencing local objects accurately and remote objects efficiently, and prevent co-located users from physically interfering with each other. My thesis is that our techniques can provide more accurate referencing for co-located and efficient referencing for remote users and lessen interference among users.
First, we present and evaluate an AR referencing technique for shared environments that is designed to improve the accuracy with which one user (the indicator) can point out a real physical object to another user (the recipient). Our technique is intended for use in otherwise unmodeled environments in which objects in the environment, and the hand of the indicator, are interactively observed by a depth camera, and both users wear tracked see-through displays. This technique allows the indicator to bring a copy of a portion of the physical environment closer and indicate a selection in the copy. At the same time, the recipient gets to see the indicator's live interaction represented virtually in another copy that is brought closer to the recipient, and is also shown the mapping between their copy and the actual portion of the physical environment. A formal user study confirms that our technique performs significantly more accurately than comparison techniques in situations in which the participating users have sufficiently different views of the scene.
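The "copy" mechanism above can be sketched as a simple invertible transform: the system shows a scaled, translated copy of a world region near the user, and a selection made in the copy maps back through the inverse transform to the original object. The scale factor, offset, and points below are invented for illustration; the actual system operates on live depth-camera data.

```python
import numpy as np

# Hypothetical world-to-copy transform: uniform scale then translation.
scale = 0.5                           # copy shown at half size
offset = np.array([0.2, 1.0, -0.3])   # copy floated near the recipient

def world_to_copy(p):
    """Map a world-space point into the hand-held copy."""
    return scale * p + offset

def copy_to_world(q):
    """Map a selection in the copy back to the physical environment."""
    return (q - offset) / scale

# A point selected in the copy maps back to the real object's location.
world_point = np.array([1.5, 0.8, 2.0])
selected_in_copy = world_to_copy(world_point)
recovered = copy_to_world(selected_in_copy)
print(np.allclose(recovered, world_point))  # True
```

The recipient's second copy would apply the same kind of mapping in reverse, which is what lets the system visualize the correspondence between copy and physical environment.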
Second, we extend the idea of using a copy (virtual replica) of a physical object to help a remote expert assist a local user in performing a task in the local user's environment. We develop an approach that uses Virtual Reality (VR) or AR for the remote expert, and AR for the local user. It allows the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. The expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains. We compared our approach with another 3D approach that also uses virtual replicas, in which the remote expert identifies corresponding pairs of points to align on a pair of objects, and with a 2D approach in which the expert uses a 2D tablet-based drawing system similar to sketching systems developed in prior work by others on remote assistance. The study shows the 3D demonstration approach to be faster than the others.
Third, we present an interference avoidance technique (Redirected Motion) intended to lessen the chance of physical interference among users with tracked hand-held displays, while minimizing their awareness that the technique is being applied. This interaction technique warps virtual space by shifting the virtual location of a user's hand-held display. We conducted a formal user study to evaluate Redirected Motion against other approaches that either modify what a user sees or hears, or restrict the interaction capabilities users have. Our study was performed using a game we developed, in which two players moved their hand-held displays rapidly in the space around a shared gameboard. Our analysis showed that Redirected Motion effectively and imperceptibly kept players further apart physically than the other techniques.
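The core of a redirection technique like this is an offset applied to the virtual location of a user's display that grows as two users approach each other, subtly steering them apart. The sketch below is not the dissertation's algorithm; the activation distance, maximum shift, and linear falloff are assumed parameters chosen for illustration.

```python
import numpy as np

# Hypothetical parameters: redirection starts when users come within
# 1.0 m of each other and grows linearly as they get closer.
ACTIVATION_DIST = 1.0   # metres
MAX_SHIFT = 0.15        # maximum virtual offset, metres

def redirected_offset(my_pos, other_pos):
    """Virtual offset applied to this user's hand-held display,
    shifting displayed content away from the other user."""
    away = my_pos - other_pos
    dist = float(np.linalg.norm(away))
    if dist >= ACTIVATION_DIST or dist == 0.0:
        return np.zeros(3)               # far apart: no redirection
    strength = MAX_SHIFT * (1.0 - dist / ACTIVATION_DIST)
    return strength * (away / dist)      # push away from the other user

# The closer the users, the larger the (still subtle) offset.
near = redirected_offset(np.array([0.3, 0.0, 0.0]), np.zeros(3))
far = redirected_offset(np.array([0.9, 0.0, 0.0]), np.zeros(3))
print(np.linalg.norm(near) > np.linalg.norm(far))  # True
```

Keeping the maximum shift small and the falloff smooth is what allows the warp to stay below users' perceptual threshold, which is the property the study evaluated.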
These interaction techniques were implemented using an extensible programming framework we developed for supporting a broad range of multi-user immersive AR applications. This framework, Goblin XNA, integrates a 3D scene graph with support for 6DOF tracking, rigid body physics simulation, networking, shaders, particle systems, and 2D user interface primitives.
In summary, we showed that our referencing approaches can enhance multi-user AR by improving accuracy for co-located users and increasing efficiency for remote users. In addition, we demonstrated that our interference-avoidance approach can lessen the chance of unwanted physical interference between co-located users, without their being aware of its use.
|
268 |
Réalité évoquée, des rêves aux simulations : un cadre conceptuel de la réalité au regard de la présence / Evoked Reality, from Dreams to Simulations: A Conceptual Framework of Reality Referring to Presence
Pillai, Jayesh S., 20 June 2013
In this research, we introduce the concept of "Evoked Reality" in an attempt to bring together various ideas on presence and reality within a common framework. The concept we propose and illustrate is an 'illusion of reality' (Evoked Reality) that evokes a 'sense of presence' (Evoked Presence) in our minds. We clearly define and differentiate between a Media-Evoked and a Self-Evoked Reality. This distinction allows us to introduce a Three-Pole Reality Model that challenges the classical Two-Pole Reality Model. We also present a graphical model called the Reality-Presence Map, which helps us locate and analyse every cognitive experience relating to presence and reality. We further explore the qualia and subjectivity of our experiences of Evoked Reality. Two experiments were conducted, one in the area of Media-Evoked Reality and one in Self-Evoked Reality; they supported our hypotheses and suggested directions for further empirical study. Finally, we illustrate the implications of the concept and discuss its prospective applications, especially in research on presence. In addition, we suggest that presence research be extended beyond the domain of virtual reality and communication media and examined from the broader perspective of cognitive science. We believe that the concept of Evoked Reality and the proposed model can have significant applications in the study of presence and in exploring possibilities beyond virtual reality.
|
269 |
Real Time Cross Platform Collaboration Between Virtual Reality & Mixed Reality
January 2017
Virtual Reality (hereafter VR) and Mixed Reality (hereafter MR) have opened a new line of applications and possibilities. Amidst this vast space of potential applications, little research has been done to provide real-time collaboration between users of VR and MR. This thesis develops and tests a real-time collaboration system between VR and MR. The system works much like a Google document, where two or more users can see what the others are doing (writing, modifying, viewing, and so on); in the same way, the system developed in this study enables users in VR and MR to collaborate in real time.
The study of this cross-platform collaboration system considers a scenario in which multiple users, on different devices, are connected to a multiplayer network and guided to perform various tasks concurrently.
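A common way to get Google-document-style concurrency across heterogeneous VR and MR clients is per-object versioned state with last-writer-wins merging. The message shape, field names, and object IDs below are assumptions for illustration; the thesis does not specify its wire protocol here.

```python
import json

# Shared scene state on each client: object id -> versioned transform.
scene = {}

def apply_update(message):
    """Apply a broadcast update; a higher version number wins, so
    out-of-order or duplicate network messages are handled safely."""
    update = json.loads(message)
    current = scene.get(update["id"])
    if current is None or update["version"] > current["version"]:
        scene[update["id"]] = {"version": update["version"],
                               "transform": update["transform"]}

# Updates may arrive out of order from VR and MR clients alike.
apply_update(json.dumps({"id": "chair_leg", "version": 2,
                         "transform": [0.5, 0.0, 1.2]}))
apply_update(json.dumps({"id": "chair_leg", "version": 1,
                         "transform": [0.0, 0.0, 0.0]}))
print(scene["chair_leg"]["version"])  # 2
```

Each client would broadcast such messages over the multiplayer network and apply incoming ones with the same rule, so every participant converges on the same scene state regardless of device.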
Usability testing was conducted to evaluate participant perceptions of the system. Users assembled a chair in alternating turns, then completed a survey and gave a recorded interview. Results collected from the participants showed positive feedback toward using VR and MR for collaboration. However, several limitations of the current generation of devices hinder mass adoption; devices with better performance will lead to wider adoption. / Masters Thesis, Computer Science, 2017
|
270 |
Effective User Guidance through Augmented Reality Interfaces: Advances and Applications
Andersen, Daniel S., 24 April 2020
Computer visualization can effectively deliver instructions to a user whose task requires understanding of a real-world scene. Consider the example of surgical telementoring, where a general surgeon performs an emergency surgery under the guidance of a remote mentor. The mentor guidance includes annotations of the operating field, which conventionally are displayed to the surgeon on a nearby monitor. However, this conventional visualization of mentor guidance requires the surgeon to look back and forth between the monitor and the operating field, which can lead to cognitive load, delays, or even medical errors. Another example is 3D acquisition of a real-world scene, where an operator must acquire multiple images of the scene from specific viewpoints to ensure appropriate scene coverage and thus achieve quality 3D reconstruction. The conventional approach is for the operator to plan the acquisition locations using conventional visualization tools, and then to try to execute the plan from memory, or with the help of a static map. Such approaches lead to incomplete coverage during acquisition, resulting in an inaccurate reconstruction of the 3D scene which can only be addressed at the high and sometimes prohibitive cost of repeating acquisition.

Augmented reality (AR) promises to overcome the limitations of conventional out-of-context visualization of real-world scenes by delivering visual guidance directly into the user's field of view, guidance that remains in-context throughout the completion of the task. In this thesis, we propose and validate several AR visual interfaces that provide effective visual guidance for task completion in the context of surgical telementoring and 3D scene acquisition.

A first AR interface provides a mentee surgeon with visual guidance from a remote mentor using a simulated transparent display. A computer tablet suspended above the patient captures the operating field with its on-board video camera, the live video is sent to the mentor who annotates it, and the annotations are sent back to the mentee where they are displayed on the tablet, integrating the mentor-created annotations directly into the mentee's view of the operating field. We show through user studies that surgical task performance improves when using the AR surgical telementoring interface compared to when using the conventional visualization of the annotated operating field on a nearby monitor.

A second AR surgical telementoring interface provides the mentee surgeon with visual guidance through an AR head-mounted display (AR HMD). We validate this approach in user studies with medical professionals in the context of practice cricothyrotomy and lower-limb fasciotomy procedures, and show improved performance over conventional surgical guidance. A comparison between our simulated transparent display and our AR HMD surgical telementoring interfaces reveals that the HMD has the advantages of reduced workspace encumbrance and of correct depth perception of annotations, whereas the transparent display has the advantages of reduced surgeon head and neck encumbrance and of annotation visualization quality.

A third AR interface provides operator guidance for effective image-based modeling and rendering of real-world scenes. During the modeling phase, the AR interface builds and dynamically updates a map of the scene that is displayed to the user through an AR HMD, which leads to the efficient acquisition of a five-degree-of-freedom image-based model of large, complex indoor environments. During rendering, the interface guides the user towards the highest-density parts of the image-based model, which result in the highest output image quality. We show through a study that first-time users of our interface can acquire a quality image-based model of a 13 m × 10 m indoor environment in 7 minutes.

A fourth AR interface provides operator guidance for effective capture of a 3D scene in the context of photogrammetric reconstruction. The interface relies on an AR HMD with a tracked hand-held camera rig to construct a sufficient set of six-degrees-of-freedom camera acquisition poses and then to steer the user to align the camera with the prescribed poses quickly and accurately. We show through a study that first-time users of our interface are significantly more likely to achieve complete 3D reconstructions compared to conventional freehand acquisition. We then investigated the design space of AR HMD interfaces for mid-air pose alignment with an added ergonomics concern, which resulted in five candidate interfaces that sample this design space. A user study identified the aspects of the AR interface design that influence the ergonomics during extended use, informing AR HMD interface design for the important task of mid-air pose alignment.
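Steering a user onto a prescribed six-degrees-of-freedom pose boils down to repeatedly checking whether the tracked camera is within a position and orientation tolerance of the target. The tolerances and quaternion convention below are assumptions for illustration, not the thesis's actual thresholds.

```python
import numpy as np

# Hypothetical tolerances for "aligned with the prescribed pose".
POS_TOL = 0.05       # metres
ANG_TOL_DEG = 5.0    # degrees

def quat_angle_deg(q1, q2):
    """Smallest rotation angle between two unit quaternions (w, x, y, z)."""
    dot = abs(float(np.dot(q1, q2)))
    return float(np.degrees(2.0 * np.arccos(min(1.0, dot))))

def is_aligned(pos, quat, target_pos, target_quat):
    """True when the camera is within both tolerances of the target pose."""
    return (float(np.linalg.norm(pos - target_pos)) <= POS_TOL
            and quat_angle_deg(quat, target_quat) <= ANG_TOL_DEG)

target_p = np.array([1.0, 1.5, 0.2])
target_q = np.array([1.0, 0.0, 0.0, 0.0])  # identity orientation

print(is_aligned(target_p + 0.01, target_q, target_p, target_q))  # True
print(is_aligned(target_p + 0.2, target_q, target_p, target_q))   # False
```

An AR interface would run this check each frame, rendering guidance (arrows, ghost frusta, and the like) until the check passes and the photograph can be captured.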
|