1

Emotionally expressive avatars for collaborative virtual environments

Fabri, Marc January 2006 (has links)
When humans communicate with each other face-to-face, they frequently use their bodies to complement, contradict, substitute, or regulate what is being said. These non-verbal signals are important for understanding each other, particularly in respect of expressing changing moods and emotional states. In modern communication technologies such as telephone, email or instant messaging, these indicators are typically lost and communication is limited to the exchange of verbal messages, with little scope for expressing emotions. This thesis explores Collaborative Virtual Environments (CVEs) as an alternative communication technology potentially allowing interlocutors to express themselves emotionally in an efficient and effective way. CVE users are represented by three-dimensional, animated embodiments, referred to as "avatars", capable of showing emotional expressions. The avatar acts as an interaction device, providing information that would otherwise be difficult to mediate. Potential applications for such CVE systems are all areas where people cannot come together physically, but wish to discuss or collaborate on certain matters, for example in distance learning, home working, or simply to chat with friends and colleagues. Further, CVEs could be used in the therapeutic intervention of phobias and help address social impairments such as autism. To investigate how emotions can efficiently and effectively be visualised in a CVE, an animated virtual head was designed to express, in a readily recognisable manner, the six universal emotions: happiness, sadness, anger, fear, surprise and disgust. A controlled experiment was then conducted to investigate the virtual head model. Effectiveness was demonstrated through good recognition rates for most emotions, and efficiency was established since a reduced animation feature set was found to be sufficient to build core distinctive facial expressions. A set of exemplar facial expressions and guidelines for their use was developed.
A second controlled experiment was then conducted to investigate the effect such an emotionally expressive, animated avatar has on users of a prototype CVE, the Virtual Messenger (VM). The hypothesis was that introducing emotions into CVE interaction can be beneficial on many levels, namely the users' subjective experience, their involvement, and how they perceive and interact with each other. The design considerations for VM are outlined, and a newly developed methodological framework for evaluation is presented. The findings suggest that emotional expressiveness in avatars increases involvement in the interaction between CVE users, as well as their sense of being together, or copresence. This has a positive effect on their subjective experience. Further, empathy was identified as a key component for creating a more enjoyable experience and greater harmony between CVE users. The caveat is that emotionally expressive avatars may not be useful in all contexts or all types of CVEs as they may distract users from the task they are aiming to complete. Finally, a set of tentative design guidelines for emotionally expressive avatars in CVEs is derived from the work undertaken, covering the appearance and expressive abilities of avatars. These are aimed at CVE researchers and avatar designers.
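The abstract's "reduced animation feature set" idea can be sketched as a weighted blend of morph targets: each feature is a set of vertex displacements, and an expression is a weighted sum applied to the neutral face. The feature names, weights and the toy 4-vertex mesh below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

# Toy neutral face mesh: 4 vertices, 3D coordinates (all at the origin).
NEUTRAL = np.zeros((4, 3))

# Hypothetical reduced feature set: each feature is a morph target,
# i.e. per-vertex displacement offsets added to the neutral mesh.
FEATURES = {
    "brow_raise":      np.array([[0, 0.2, 0], [0, 0.2, 0], [0, 0, 0],    [0, 0, 0]], float),
    "mouth_corner_up": np.array([[0, 0, 0],   [0, 0, 0],   [0, 0.1, 0],  [0, 0.1, 0]], float),
    "jaw_drop":        np.array([[0, 0, 0],   [0, 0, 0],   [0, -0.3, 0], [0, -0.3, 0]], float),
}

# Illustrative weightings (not the thesis's) for two of the six
# universal emotions; each weight in [0, 1] scales one feature.
EXPRESSIONS = {
    "happiness": {"mouth_corner_up": 1.0, "brow_raise": 0.3},
    "surprise":  {"brow_raise": 1.0, "jaw_drop": 0.8},
}

def apply_expression(name: str) -> np.ndarray:
    """Blend the neutral mesh with weighted feature displacements."""
    mesh = NEUTRAL.copy()
    for feature, weight in EXPRESSIONS[name].items():
        mesh += weight * FEATURES[feature]
    return mesh
```

Because expressions are linear combinations over a small feature basis, adding a new emotion only requires a new weight table, which is one way a reduced feature set keeps the animation model compact.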
2

Validating the authentic: seeing and knowing Titanic Belfast using augmented reality

Jackson, Helen January 2015 (has links)
This PhD with practice is an investigation into how mobile media, in their adoption of augmented reality (AR) visual methods, situate the practice of vision and system of envisioning in a locative-based experience. Using a model of transference between past and present as the basis for the design and practice of an AR-based locative media project, the thesis is an investigation into how the shifting intensities of flows between the real and the virtual create a system of signification, and how this system sustains or subverts a mode of experience. In doing so it aims to answer two research questions:
  • How do we read the AR image through this mode of location-based mobile augmented reality technology?
  • How does this reading of the AR image act upon the user to inform an embodied and phenomenological engagement with place?
In its aim to determine the affordances and constraints of the AR image in those situations where what is seen via the AR technologies contributes to the aesthetics and politics of a place-making experience, the practice of this research situates its knowledge-base in the locative space of the Titanic Quarter in Belfast. Leveraging the potential of the birthplace of Titanic as the locus of an intervention to make visible the symbolic value ascribed to a particular geographical space, the project is also a counter to the recent urban redevelopment of this site, which is criticised within this research for failing to address how space operates within a cultural imagination. In order to intervene in the cultural distance that has, it is argued, been created in the spatial imagination when experiencing this site, the practice of the technology deploys photographic archives as the digital informational layer to form part of the representational rhetoric connecting the present to the past. As such, the politics and aesthetics of the photograph operate through very deliberate strategies to inform the interpretive methodology in this thesis.
The photograph, it is argued, can logistically and consciously engage aspects of vision through how it operates to order and demarcate both internal and external temporal dimensions. As a practice for vision, the photograph is thus understood to both create a visual temporal element of what is signified to endure, and imbue a quality of looking that is durational. While the written component of the thesis provides a knowledge-based method for understanding the visual system deployed by the technology, the practice component operates as a material visual practice on which to apply a reflexive visual cultures analysis of the visual system created. As a broad framework for new knowledge, this research identifies that positioning vision and what is made visible at the core practice of augmented realities prioritises the actual and the present, rather than the imagined and the absent, and makes stable spatial and temporal practices through the application of stable spatial and temporal referents. Providing new knowledge about how Titanic Belfast becomes known through this new narrative logic, this thesis provides evidence that the reading and subsequent meanings generated by the locative project are dependent on how the technology creates perceived tensions between authentication and validation. Engaging the user in a practice of seeing where the materiality of the urban space operates to validate what the photograph of Titanic already authenticates, is understood to illuminate the relationships between the past and the present, and enable a practice of Titanic Belfast that operates within the poetics of lived space.
3

User tracking methods for augmented reality applications in cultural heritage

Bostanci, Gazi Erkan January 2014 (has links)
Augmented Reality provides an entertaining means for displaying 3D reconstructions of ancient buildings in situ for cultural heritage. Finding the pose, position and orientation, of the user is crucial for such applications since this information will be used to define the viewpoint for rendering the models. Images acquired from a camera can be used as the background for such augmentations. To make the most of this available information, these images can also be utilized to find a pose estimate. This thesis presents contributions for vision-based methods for estimating the pose of the user in both indoor and outdoor environments. First, an evaluation of different feature detectors is presented, making use of spatial statistics to analyse the distribution of the features across the image, a property that is shown to affect the accuracy of the homography calculated from these features. An analysis of various filtering methods used for tracking was performed and an implementation of a SLAM system is presented. Because this implementation suffered from insufficient tracking accuracy due to linearity problems, an alternative, keyframe-based tracking algorithm is presented. Continuing with vision-based approaches, the Kinect sensor was also used to find the pose of a user for in situ augmentations making use of the natural features in the environment. Skeleton-tracking was also found to be beneficial for such applications. The thesis then investigates combining the vision-based estimates with measurements from other sensors, GPS and DVIU, in order to improve the tracking accuracy in outdoor environments. The idea of using multiple models was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter. Finally, several AR applications are presented that make use of these methods.
The first is for in situ augmentation displaying historical columns and augmenting users, the second is a virtual visit to an ancient building, and the third is a game which can also be played inside the augmentation of the building in the second application.
4

Navigation in desktop virtual environments

Sayers, Heather January 2004 (has links)
No description available.
5

Software architectures for collaborative virtual environments

Wilson, Shane January 2004 (has links)
No description available.
6

Illumination for mixed reality of complex-to-model scenes

Jacobs, Katrien January 2006 (has links)
No description available.
7

Discovering mixed realities, inventing design criteria for an action based mixed reality

Thomsen, Mette Ramsgard January 2004 (has links)
No description available.
8

Eye movement controlled synthetic depth of field blurring in stereographic displays of virtual environments

Brooker, Julian P. January 2003 (has links)
No description available.
9

Design and evaluation of a virtual / augmented reality system with kinaesthetic feedback

Bashir, Abdouslam M. January 2005 (has links)
No description available.
10

Extracting and visualising scenes from within recordings of CVEs

Drozd, Adam January 2004 (has links)
No description available.
