61. Adaptive network traffic management for multi-user virtual environments. Oliver, Iain Angus. January 2011.
Multi-User Virtual Environments (MUVEs) are a new class of Internet application with a significant user base. This thesis adds to our understanding of how MUVE network traffic fits into the mix of Internet traffic, and how this relates to the application's needs. MUVEs differ from established Internet traffic types in what they require from the network. They differ from traditional data traffic in that they have soft real-time constraints, from game traffic in that their bandwidth requirements are higher, and from audio and video streaming traffic in that their data streams can be decomposed into elements that require different qualities of service. This work shows how real-time, adaptive, measurement-based congestion control can be applied to MUVE streams so that they become more responsive to changes in network conditions than other real-time traffic and existing MUVE clients. It is shown that a combination of adaptive congestion control and differential Quality of Service (QoS) can increase the range of conditions under which MUVEs both obtain sufficient bandwidth and remain fair to Transmission Control Protocol (TCP) traffic. The design, implementation and evaluation of an adaptive traffic management system are described. The system has been implemented in a modified client, which allows the MUVE to be made TCP-fair without changing the server.
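As a rough illustration of the "TCP fair" target (an assumption for illustration only, not the algorithm developed in the thesis, which this abstract does not detail), the sketch below computes a TCP-friendly sending rate from a measured packet size, round-trip time and loss event rate using the TFRC throughput equation of RFC 5348; an adaptive client could cap its stream at this rate.

```python
import math

def tcp_friendly_rate(packet_size, rtt, loss_rate, b=1):
    """TFRC (RFC 5348) throughput equation: the average rate, in bytes/s,
    a conforming TCP flow would achieve with the same packet size, RTT and
    loss event rate. A TCP-fair real-time stream sends at or below it."""
    if loss_rate <= 0:
        return float("inf")   # no observed loss: fairness does not bind the rate
    t_rto = 4 * rtt           # simple retransmission-timeout estimate used by TFRC
    denom = (rtt * math.sqrt(2 * b * loss_rate / 3)
             + t_rto * 3 * math.sqrt(3 * b * loss_rate / 8)
             * loss_rate * (1 + 32 * loss_rate ** 2))
    return packet_size / denom

# Hypothetical measurements: 1200-byte packets, 80 ms RTT, 1% loss events.
print(f"{tcp_friendly_rate(1200, 0.080, 0.01) / 1000:.0f} kB/s fair rate")
```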
62. Supporting mobile mixed-reality experiences. Flintham, Martin. January 2009.
Mobile mixed-reality experiences mix physical and digital spaces, enabling participants to simultaneously inhabit a shared environment online and on the streets. These experiences take the form of games, educational applications and new forms of performance and art, and engender new opportunities for interaction, collaboration and play. As mobile mixed-reality experiences move out of the laboratory and into more public settings, they raise new challenges concerning how to support them in the wild. This thesis argues that mobile mixed-reality experiences in which artists retain creative control over the content and operation of each experience, particularly those deployed as theatrical performances, require dedicated content-authoring and reactive orchestration tools and paradigms in order to be operated successfully and robustly in public settings. These requirements are examined in detail, drawing on the experience of supporting four publicly toured mobile mixed-reality experiences: Can You See Me Now?, Uncle Roy All Around You, I Like Frank in Adelaide and Savannah. These have provided a platform to develop, refine and evaluate new solutions to these challenges in practice while presenting the experiences to many thousands of participants over a four-year period. The thesis presents two significant supporting frameworks. First, the ColourMaps system enables designers to author location-based content by directly colouring over maps, providing a simple, familiar and yet highly flexible approach to matching location triggers to complex physical spaces. It supports multiple and specialised content layers, and the ability to configure and manage other aspects of an experience, including filtering inaccurate position data and underpinning the orchestration tools. Second, the Orchestration framework supports the day-to-day operation of public experiences, providing dedicated control-room tools for monitoring that reveal the content landscape and historical events, intervention and improvisation techniques for steering and shaping each participant's experience as it unfolds both physically and virtually, and processes to manage a constant flow of participants.
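A hedged sketch of the ColourMaps idea described above (the colour mapping, function names and bounds are illustrative assumptions, not the thesis's code): a participant's GPS fix is projected onto the coloured map image, and the pixel colour at that point selects the content trigger.

```python
from PIL import Image

# Hypothetical colour-to-content mapping authored by colouring over the map.
TRIGGERS = {
    (255, 0, 0): "play_audio_clue",
    (0, 255, 0): "show_online_player",
    (0, 0, 255): "send_sms_message",
}

def lookup_trigger(colour_map, lat, lon, bounds):
    """Project a GPS fix onto the coloured map image and return the
    content trigger painted at that point, if any.
    bounds = (min_lat, min_lon, max_lat, max_lon) covered by the image."""
    min_lat, min_lon, max_lat, max_lon = bounds
    width, height = colour_map.size
    x = int((lon - min_lon) / (max_lon - min_lon) * (width - 1))
    y = int((max_lat - lat) / (max_lat - min_lat) * (height - 1))  # image y runs downwards
    pixel = colour_map.convert("RGB").getpixel((x, y))
    return TRIGGERS.get(pixel)

# Usage (illustrative coordinates and file name):
# colour_map = Image.open("streets_content_layer.png")
# print(lookup_trigger(colour_map, 52.9536, -1.1505, (52.94, -1.17, 52.96, -1.14)))
```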
63. Collaborative narrative generation in persistent virtual environments. Madden, Neil. January 2009.
This thesis describes a multi-agent approach to generating narrative based on the activities of participants in large-scale persistent virtual environments, such as massively multiplayer online role-playing games (MMORPGs). These environments provide diverse interactive experiences for large numbers of simultaneous participants. Involving such participants in an overarching narrative experience has presented challenges due to the difficulty of incorporating the individual actions of so many participants into a single coherent storyline. Various approaches have been adopted in an attempt to solve this problem, such as guiding players to follow pre-designed storylines, giving them goals to achieve that advance the storyline, or having developers (‘dungeon masters’) adapt the narrative to the real-time actions of players. However, these solutions can be inflexible, can fail to take player interaction into account, or do so only at the collective level, for groups of players. This thesis describes a different approach, in which embodied witness-narrator agents observe participants' actions in a persistent virtual environment and generate narrative from reports of those actions. The generated narrative may be published to external audiences, e.g. via community websites, Internet chatrooms or SMS text messages, or fed back into the environment in real time to embellish and enhance the ongoing experience with new narrative elements derived from participants' own achievements. The design and implementation of this framework is described in detail and compared to related work. Results of evaluating the framework, both technically and through a live study, are presented and discussed.
64. Immersion and interaction : creating virtual 3D worlds for stage performances. Polydorou, Doros. January 2011.
This thesis formulates an approach towards the creation of a gesture-activated and body-movement-controlled real-time virtual 3D world in a dance performance context. It investigates immersion and navigation techniques and methodologies derived from modern video games, and proposes how they can be used to further involve a performer in a virtual space while simultaneously offering a stimulating visual spectacle for an audience. The argument presented develops through practice-based methodology and artistic production strategies in interdisciplinary and collaborative contexts. Two choreographic performance/installations are used as case studies to demonstrate the proposed methodologies in practice. First, the interactive dance work Suna No Onna, created in collaboration with Birringer/Danjoux and the Dap Lab, investigates the use of interactive pre-rendered animations in a real-time setting by incorporating wearable sensors into the performance. Second, the potential offered by the sensor technology and real-time rendering engines led to the "creation scene", a key scene in the choreographic installation UKIYO (Moveable Worlds). The thesis investigates the design, creation and interaction qualities of virtual 3D spaces by exploring the potential offered by a shared space between an intelligent space and a dancer in a hybrid world. The methodology applied takes as its theoretical base the phenomenological approach of Merleau-Ponty and Mark Hansen's mixed reality paradigm, proposing the concept of the "space schema", a system which replicates and embeds proprioception, perception and motility into the fabric of the space, offering a world which "lives", functions and interacts with the performer. The outcome of the research is the generation of an interactive, non-linear, randomized 3D virtual space that collaborates with a technologically embedded performer in creating a 3D world which evolves and transforms, driven by the performer's intention and agency. This research contributes to the field of interactive performance art by making the methodology, the instruments and the code used transparent, in non-technical terminology, making it accessible both to team members with less technological expertise and to artists aspiring to engage with interactive 3D media, and promoting further experimentation and conceptual discussion.
65. Emergent narrative : towards a narrative theory of virtual reality. Louchart, S. January 2007.
Recent improvements and developments in Intelligent Agents (IA), Artificial Intelligence (AI) and 3D visualisation, coupled with an increasing desire to integrate interactivity within virtual spaces, raise concerns regarding the articulation of narratives in such environments.
66. User-oriented markerless augmented reality framework based on 3D reconstruction and loop closure detection. Gao, Yuqing. January 2017.
An augmented reality (AR) system needs to track the user's view to perform accurate augmentation registration. This research proposes a conceptual markerless, natural-feature-based AR framework whose process is divided into two stages: an offline database training session for the application developers, and an online AR tracking and display session for the end users. In the offline session, two types of 3D reconstruction application, RGBD-SLAM and Structure from Motion (SfM), are integrated into the development framework for building the reference template of a target environment. The performance and applicable conditions of these two methods are presented in the thesis, so that application developers can choose which method best suits their needs. A general development user interface is provided to the developer for interaction, including a simple GUI tool for configuring the augmentations. The proposal also applies a bag-of-words strategy to enable rapid "loop-closure detection" in the online session, efficiently querying the trained database with the user's current view to locate the user pose. The rendering and display of the augmentation is currently implemented within an OpenGL window, an aspect of the work that warrants further detailed investigation and development.
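The bag-of-words query step can be illustrated with a generic sketch (library choices, function names and the vocabulary construction are assumptions, not the framework's actual implementation): local feature descriptors from the live frame are quantised against a trained visual-word vocabulary, and the resulting histogram is matched against the histograms of the reference keyframes to locate the user's view.

```python
import numpy as np
from sklearn.cluster import KMeans  # visual-word vocabulary, trained offline

def bow_histogram(descriptors, vocabulary):
    """Quantise local descriptors (e.g. ORB/SIFT) into visual words and
    return a normalised bag-of-words histogram."""
    words = vocabulary.predict(np.asarray(descriptors, dtype=np.float32))
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    hist = hist.astype(np.float32)
    return hist / (hist.sum() + 1e-9)

def query_keyframe(frame_descriptors, keyframe_hists, vocabulary):
    """Return the index and score of the trained keyframe whose histogram
    is most similar (cosine similarity) to the live user-view frame."""
    q = bow_histogram(frame_descriptors, vocabulary)
    scores = [float(q @ h / (np.linalg.norm(q) * np.linalg.norm(h) + 1e-9))
              for h in keyframe_hists]
    best = int(np.argmax(scores))
    return best, scores[best]
```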
67. Does virtual haptic dissection improve student learning? : a multi-year comparative study. Erolin, Caroline. January 2016.
The past decade has seen the release of numerous software packages aimed at enhancing anatomical education. However, there has been little research by the manufacturers of these products into the benefit, or otherwise, of these packages for student learning. In addition, while many of the existing software packages include interactive three-dimensional models, none of them truly offer virtual dissection, i.e. cutting through anatomical layers with a haptic (tactile) interface. This study investigated the haptic ‘dissection’ of a three-dimensional digital model of the hand and wrist in anatomy education at both undergraduate (UG) and postgraduate (PG) levels. The model was used as a teaching and revision aid both before and after dissection of a real cadaver. A haptic-enabled version of the model, allowing real-time cutting, was compared with a non-haptic version using a keyboard-and-mouse ‘point and click’ interface. Both versions were tested on students of gross anatomy in relation to test results and student experience. The model was based upon Computerised Tomography (CT) and photographic slice data from the Visible Human Project female data set. It was segmented and reconstructed using Amira® 5.2.2. From there, each structure was exported as a separate STL file and imported into Geomagic FreeForm® Modelling™. Once imported, the individual structures each required varying degrees of re-modelling where detail had been lost during the segmentation process; some smaller structures, such as the nerves, veins and arteries, were modelled freehand. The final model could be dissected using FreeForm® Modelling™, the same software in which it was created. Using FreeForm® Modelling™ as a prototype VR dissector, each anatomical structure could be selected and virtually ‘dissected’ with the PHANTOM® Desktop™ haptic tool. Three methods of interacting with the model were identified: 1) using a cutting tool to cut through the selected layer; 2) using a selection paintball to first select and then delete the layer; and 3) using planes to cut the selected structure in standard anatomical views. The study ran over five successive years and was split into three discrete phases. Phase one compared the results of PG students across control, non-haptic and haptic groups. Phase two compared the results of UG students between control and haptic groups. Phase three compared the results of UG students across control, non-haptic and haptic groups. Due to small group sizes and largely non-normal distributions, the results were analysed using Mann-Whitney U tests. Results for all phases indicate that use of the model, through both haptic and non-haptic interfaces, produced some significantly improved test results, with the non-haptic version performing as well as or better than the haptic version. This is likely due to cognitive load being adversely affected by the addition of the haptic device. Some students reported that the haptic device was not intuitive and took time to get used to, if they got used to it at all. No student used either version of the model for more than five hours, and over 40% used it for less than one hour. It is possible that with increased exposure to the haptic device students would find it easier to use and thus beneficial. The findings of this study indicate that when used only for a short period of time (less than five hours), the haptic device may impede rather than enhance learning.
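The statistical comparison described above can be reproduced in outline with SciPy's Mann-Whitney U test; the scores below are made-up placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Placeholder anatomy test scores (out of 20) for two independent groups;
# the study compared control, non-haptic and haptic groups in this way.
control_group = [11, 9, 14, 10, 12, 8, 13]
haptic_group = [13, 12, 15, 11, 14, 10, 16]

# Two-sided test, suited to small samples with non-normal distributions.
u_stat, p_value = mannwhitneyu(control_group, haptic_group, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```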
68. Supporting spatial learning in virtual environments. Sykes, Jonathan Robert. January 2003.
This thesis explores the acquisition of spatial knowledge as a means to support wayfinding in virtual environments. Specifically, the thesis presents an investigation into the potential benefits one might gain through the application of a variety of tools, each of which has been designed to support one of the three stages of cognitive map development: landmark-based representation, route-based representation, and survey-based representation (Siegel & White, 1975). Each tool has been evaluated with respect to improvements in wayfinding and its support for environmental learning. Measures were taken of each tool used in isolation, and also of the tools used together as a complete toolset. The between-subjects evaluation involved 101 participants, randomly assigned to one of five conditions. Each participant was asked to navigate a virtual environment to locate three specific items. To evaluate wayfinding, participants were asked to perform the same task on six occasions within the same session. After discovering all items, a measure indicating route efficiency was recorded. On completing all six trials, participants were asked to produce a map of the virtual environment. It was hypothesised that the presence of tools would improve the acquisition of spatial knowledge, and thus route efficiency and map production. Comparing the 'no-tool' and 'all-tool' conditions, a 2x6 repeated measures ANOVA found that providing the tools concurrently gave a statistically significant improvement in the efficiency of the route taken (F(1,38)=4.63, p<0.05). However, when the tools were evaluated in isolation, no significant improvement in route efficiency was found. Nor was any significant difference identified when comparing the quality of maps produced by participants across conditions. The thesis concludes by arguing that the application of the complete toolset benefits wayfinding, although it is noted that the evidence does not support the hypothesis that this is caused by improved spatial learning.
69. Towards the development and understanding of collaborative mixed-reality learning spaces. Alzahrani, Ahmed. January 2017.
The current era of advanced display technologies, such as head-mounted displays, smart glasses and handheld devices, has supported the use of mixed reality and augmented reality concepts in smart educational classrooms. These advanced technologies have enabled enhanced collaboration and interactive communication between distance learners and local learners. While presence, immersion and engagement are key factors in both the real and virtual worlds, they play particularly important roles in improving students' collaborative learning performance during learning activities. However, few empirical studies have considered how using such interfaces may affect learning outcomes and whether students truly feel fully immersed and engaged in such environments. Furthermore, the lack of support and of a conceptual architecture for collaborative mixed and augmented reality group learning activities remains a shortcoming for distance learning and teaching, and a significant challenge for researchers. This study demonstrates a conceptual framework that supports group distance learning and teaching collaboration around learning activities using mixed, augmented and virtual reality technologies. The study also explores learning effectiveness based on the following factors: students' presence, engagement and immersion in smart environments. To evaluate these factors, we utilise several existing frameworks that have been applied to our mixed reality platform, MiRTLE+, to help us examine the learning outcomes and teaching experience gained from using these environments. The study was divided into two experimental phases, and 40 samples were examined to assess and compare the affordances of mixed reality interfaces within various collaborative learning scenarios, using a card game activity (Uno) as the learning task. By comparing the real collaborative learning activities with the two-dimensional web-based activity, we found that novices had slightly better learning performance than the individuals using the web-based activity, and that both phases mimicked reality. Novices and experts also felt significantly more present and immersed in the MiRTLE+ learning scenarios (due to the impact of the augmented reality interfaces) than in the web-based scenario.
70. The design, implementation and evaluation of a desktop virtual reality for teaching numeracy concepts via virtual manipulatives. Daghestani, L. January 2013.
Virtual reality offers new possibilities and new challenges for teaching and learning. For students of elementary mathematics, it has been suggested that virtual reality offers new ways of representing numeracy concepts in the form of virtual reality manipulatives. The main goal of this thesis is to investigate the effectiveness of using desktop virtual reality as a cognitive tool to enhance elementary school children's conceptual understanding of numeracy concepts, specifically addition and subtraction. The research investigated the technical and educational aspects of virtual reality manipulatives for children beginning to learn numeracy by implementing a prototype mathematical virtual learning environment (MAVLE) application and exploring its educational effectiveness. The research provides three main contributions. First, it proposes a design framework for a virtual reality model for cognitive learning; this framework provides an initial structure that can be further refined or revised to generate a robust design model for virtual reality learning environments. Second, it describes the prototyping and implementation of a practical virtual reality manipulatives application, MAVLE, for facilitating the teaching and learning of numeracy concepts (integer addition and subtraction). Third, it examines students' conceptual understanding and achievement, the relationships among their navigational behaviours in the desktop virtual reality, and the impact of these on students' learning experiences. The successful development of the virtual reality manipulatives provides further confirmation of the high potential of virtual reality technology for instructional use. In short, the outcomes of this work demonstrate the feasibility and appropriateness of using virtual reality manipulatives in classrooms to support students' conceptual understanding of numeracy concepts. Virtual reality manipulatives may be among the most appropriate mathematics tools for the next generation. In conclusion, this research proposes a feasible virtual reality model for cognitive learning that can be used to guide the design of other virtual reality learning environments.