1 |
Performance Assessment of Networked Immersive Media in Mobile Health Applications with Emphasis on Latency
Adebayo, Emmanuel, January 2021
Cloud VR/AR/MR (Virtual Reality, Augmented Reality, and Mixed Reality) services represent a high-level architecture that combines large-scale computing resources in a data-center-style setup to render VR/AR/MR services, delivered to end users over very high-bandwidth, ultra-low-latency, high-throughput 5G (5th Generation) mobile networks. VR refers to a three-dimensional, computer-generated virtual environment that people can explore and interact with in real time. AR amplifies human perception of the real world by overlaying computer-generated graphics or interactive data on a real-world image for an enhanced experience.

According to the Virtual Reality Society's account of the history of VR, it started with the 360-degree murals of the nineteenth century [18]. Historically, a live application of AR was demonstrated when Myron Krueger combined video cameras and a projector in an interactive environment in 1974. In 1998, AR was put on live display with the casting of a virtual yellow line marker during an NFL game. Personal and commercial use of VR/AR became possible with Google's 2014 release of a DIY (Do It Yourself) headset called Google Cardboard, which used a smartphone for the VR experience. Also in 2014, Samsung introduced the Gear VR, which officially started the competition among VR devices, and Facebook acquired Oculus VR with the major aim of dominating the high-end spectrum of VR headsets [18]. Wider adoption of AR was further enhanced by the introduction of Apple's ARKit (Augmented Reality Kit), a development framework for AR applications on iPhones and iPads [18].

The first application of VR devices in the health industry came in 1994, driven by health workers' need to visualize complex medical data during surgery and surgical planning. Since then, commercial production of VR devices and the availability of advanced networks and faster broadband have increased the adoption of VR services in the healthcare industry, especially in surgical planning and during surgery itself [16]. Overall, the wide availability of VR/AR terminals, displays, controllers, development kits, advanced networks, and robust bandwidth has made VR and AR services valuable and important technologies in digital entertainment, information, games, health, the military, and so on. However, these solutions and services require an advanced processing platform, which in most cases is not cost-efficient in single-use scenarios.

The devices, hardware, and software required to process and present immersive experiences are often expensive and dedicated to the current application itself. Technological improvements in realism and immersion mean an increased cost of ownership, which often weighs on the cost-benefit consideration and has slowed the adoption of VR services [14][15]. This has led to the development of cloud VR services, a form of data-center-based system that provides VR services to end users from the cloud anywhere in the world over fast and stable transport networks. The VR content is stored in the cloud; the audio-visual output is coded and compressed using suitable encoding technology and then transmitted to the terminals.
Industry-wide acceptance of cloud VR services and technology has made access available on a pay-per-use basis, and with it access to the high processing capability on offer, which is used to present a more immersive, imaginative, and interactive experience to end users [11][12]. However, cloud VR services face a major challenge in the form of network latency, introduced anywhere from cloud rendering down to the display terminal itself. This latency is most often driven by other performance indicators such as network bandwidth, coding technology, RTT (Round Trip Time), and so on [19]. This is the major problem this thesis set out to investigate.

The research methodology was a combination of empirical and experimental methods using a quantitative approach, as it entails generating data in quantitative form for quantitative analysis. The research questions are:

Research Question 1 (RQ1): What are the latency-related performance indicators of networked immersive media in mobile health applications?

Research Question 2 (RQ2): What are the suitable network structures to achieve an efficient low-latency VR health application?

The results of the analysis at the end of the simulation show that bandwidth, frame rate, and resolution are crucial performance indicators for achieving the optimal latency required for a hitch-free cloud VR user experience, while the importance of other indicators such as the coding standard cannot be overemphasized. A combination of edge and cloud architecture also proved more efficient and effective for achieving low-latency cloud VR application functionality.

In conclusion, the answer to research question one is that the latency-related performance indicators of networked immersive media in mobile health applications are bandwidth, frame rate, resolution, and coding technology. For research question two, suitable network structures include an edge network, a cloud network, and a combination of the two; to achieve an optimally low-latency network for a cloud VR mobile health application in education, a combined edge and cloud network architecture is recommended.
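To illustrate how these indicators interact, the sketch below estimates the per-frame transmission component of cloud VR latency from resolution, coding efficiency, and link bandwidth. It is a back-of-the-envelope model with invented example values, not a calculation from the thesis.

```python
# Rough latency-budget sketch (illustrative assumptions, not thesis data):
# how resolution, frame rate, coding efficiency, and bandwidth combine.

def frame_transmission_latency_ms(width, height, bits_per_pixel,
                                  compression_ratio, bandwidth_mbps):
    """Time to push one encoded video frame through the network link."""
    raw_bits = width * height * bits_per_pixel           # uncompressed frame size
    encoded_bits = raw_bits / compression_ratio          # after video coding
    return encoded_bits / (bandwidth_mbps * 1e6) * 1e3   # seconds -> milliseconds

# Assumed values: 4K frame, 24 bpp, 100:1 coding ratio, 100 Mbps link, 72 FPS.
latency = frame_transmission_latency_ms(3840, 2160, 24, 100, 100)
budget = 1000 / 72  # per-frame time budget at 72 FPS, ~13.9 ms
print(f"transmission: {latency:.1f} ms of a {budget:.1f} ms frame budget")
```

Even with a generous 100:1 coding ratio, transmitting a 4K frame over a 100 Mbps link (about 19.9 ms) already exceeds the 72 FPS frame budget before rendering or RTT are counted, which is consistent with the thesis's finding that bandwidth, frame rate, and resolution dominate, and that moving rendering toward the edge helps.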
|
2 |
Immersive Media Environments for Special Education: Developing Agency in Communication for Youth with Autism
January 2013
abstract: This dissertation describes the development of a state-of-the-art immersive media environment and its potential to motivate high school youth with autism to vocally express themselves. Due to the limited availability of media environments in public education settings, studies on the use of such systems in special education contexts are rare. A study called Sea of Signs utilized the Situated Multimodal Art Learning Lab (SMALLab) to present a custom-designed conversational scenario for pairs of youth with autism. Heuristics for building the scenario were developed through a 4-year design-based research approach that fosters social interaction, communication, and self-expression through embodied design. Sea of Signs implemented these heuristics in an immersive experience, supported by spatial and audio-visual feedback that helped clarify and reinforce students' vocal expressions within a partner-based conversational framework. A multiple-baseline design across participants was used to determine the extent to which individuals exhibited observable change as a result of the activity in SMALLab. Teacher interviews were conducted prior to the experimental phase to identify each student's pattern of social interaction, communication, and problem-solving strategies in the classroom. Ethnographic methods and video coding were used throughout the experimental phase to assess whether there were changes in (a) speech duration per session and per turn, (b) turn-taking patterns, and (c) teacher prompting per session. In addition, teacher interviews were conducted daily after every SMALLab session to further triangulate the nature of behaviors observed in each session. Final teacher interviews were conducted after the experimental phase to collect data on possible transfer of behavioral improvements into students' classroom lives beyond SMALLab. Results from this study suggest that the activity successfully increased independently generated speech in some students, while increasing a focus on seeking out social partners in others. Furthermore, the activity indicated a number of future directions in research on the nature of voice and discourse, rooted in the use of aesthetics and phenomenology, to augment, extend, and encourage developments in directed communication skills for youth with autism. / Dissertation/Thesis / Ph.D. Media Arts and Sciences 2013
|
3 |
Evaluation de la qualité de vidéos panoramiques synthétisées / Quality Evaluation for Stitched Panoramic Videos
Nabil Mahrous Yacoub, Sandra, 27 November 2018
High quality panoramic videos for immersive VR content are commonly created using a rig with multiple cameras covering a target scene. Unfortunately, this setup introduces both spatial and temporal artifacts due to the difference in optical centers as well as the imperfect synchronization. Traditional image quality metrics cannot be used to assess the quality of such videos, due to their inability to capture geometric distortions. In this thesis, we propose methods for the objective assessment of panoramic videos based on optical flow and visual salience. We validate this metric with a human-centered study that combines human error annotation and eye-tracking. An important challenge in measuring quality for panoramic videos is the lack of ground truth. We have investigated the use of the original videos as a reference for the output panorama. We note that this approach is not directly applicable, because each pixel in the final panorama can have one to N sources corresponding to N input videos with overlapping regions. We show that this problem can be solved by calculating the standard deviation of displacements of all source pixels from the displacement of the panorama as a measure of distortion.
This makes it possible to compare the difference in motion between two given frames in the original videos and the motion in the final panorama. Salience maps based on human perception are used to weight the distortion map for more accurate filtering. This method was validated with a human-centered study using an empirical experiment. The experiment was designed to investigate whether humans and the evaluation metric detect and measure the same errors, and to explore which errors are more salient to humans when watching a panoramic video. The methods described have been tested and validated, and they provide interesting findings regarding human perception for quality metrics. They also open the way to new methods for optimizing video stitching guided by those quality metrics.
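A minimal sketch of the distortion measure as described above: for each panorama pixel covered by several source videos, compare each source's optical-flow displacement against the panorama's displacement, take the standard deviation of the source displacements about the panorama's motion, and weight the result by a visual-salience map. The array layout and the final pooling to a single score are assumptions for illustration; the thesis's exact formulation may differ.

```python
import numpy as np

def salience_weighted_distortion(source_flows, panorama_flow, salience):
    """Salience-weighted stitching distortion for one panorama frame.

    source_flows : (N, H, W, 2) optical-flow displacements of the N source
                   videos warped into panorama coordinates; NaN where a
                   source does not cover a pixel.
    panorama_flow: (H, W, 2) optical-flow displacement of the panorama.
    salience     : (H, W) visual-salience map in [0, 1].
    """
    # Deviation of each source's motion from the panorama's motion.
    deviations = source_flows - panorama_flow[None, ...]       # (N, H, W, 2)
    sq_dist = np.sum(deviations ** 2, axis=-1)                 # (N, H, W)
    # Standard deviation of source displacements about the panorama motion,
    # ignoring sources that do not cover the pixel.
    distortion_map = np.sqrt(np.nanmean(sq_dist, axis=0))      # (H, W)
    # Emphasize errors where viewers are likely to look.
    weighted = np.nan_to_num(distortion_map) * salience
    return weighted.mean()  # assumed pooling to a per-frame score
```

High values flag pixels where the source videos disagree about motion (parallax between optical centers or imperfect synchronization), and the salience weighting keeps the score focused on errors a viewer is likely to notice.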
|
4 |
Aurora - a study on the guided meditation using immersive media
Dantas Silva, Juliana, January 2019
Meditation is a practice that promotes improvements in physical and mental health, according to previous studies. Its proven benefits, such as relaxation and stress reduction, have attracted people's interest in taking up the practice. However, practice demands discipline, time, and dedication, and despite the diversity of techniques available, beginners may find it difficult to concentrate during the learning phase of meditation. Technological advances have enabled the emergence of devices that offer guided meditation to users. In that sense, designers have engaged in creating products that serve as tools to enrich users' personal experience, and Virtual Reality (VR) is one of the tools adopted for this purpose. The use of VR to stimulate meditative practice has been a topic of research in the field of technological media; however, that research concentrates on the practice known as mindfulness and on 3D graphics design. Therefore, this study explored the possibility of designing immersive technologies for the practice of guided meditation. More specifically, it investigated the effects of using 360-degree Virtual Reality videos to support the Relaxation Response meditation exercise developed by Dr. Herbert Benson and the Contemplative Inquiry presented by Robert Butera. Furthermore, the study aimed to explore whether technology improves the meditation experience. Adopting the theoretical approaches of Positive Technology and Research through Design, the Aurora and Pandora prototypes were designed to explore these themes. The results indicate that guided meditation presented through immersive videos can provoke desirable emotional responses in people who practice meditation, such as calm and relaxation. However, undesirable physical effects, such as visual discomfort, stress, and irritation, were observed in some participants. Even so, the experiments showed that it is possible to improve the experience of meditative practices.
|
5 |
Implementation and Analysis of Co-Located Virtual Reality for Scientific Data Visualization
Jordan M. McGraw (8803076), 07 May 2020
Advancements in virtual reality (VR) technologies have led to overwhelming critique and acclaim in recent years. Academic researchers have already begun to take advantage of these immersive technologies across all manner of settings. Using immersive technologies, educators are able to more easily interpret complex information with students and colleagues. Despite the advantages these technologies bring, some drawbacks still remain. One particular drawback is the difficulty of engaging in immersive environments with others in a shared physical space (i.e., with a shared virtual environment). A common strategy for improving collaborative data exploration has been to use technological substitutions to make distant users feel they are collaborating in the same space. This research, however, is focused on how virtual reality can be used to build upon real-world interactions which take place in the same physical space (i.e., collaborative, co-located, multi-user virtual reality).

In this study we address two primary dimensions of collaborative data visualization and analysis as follows: [1] we detail the implementation of a novel co-located VR hardware and software system, [2] we conduct a formal user experience study of the novel system using the NASA Task Load Index (Hart, 1986) and introduce the Modified User Experience Inventory, a new user study inventory based upon the Unified User Experience Inventory (Tcha-Tokey, Christmann, Loup-Escande, & Richir, 2016), to empirically observe the dependent measures of Workload, Presence, Engagement, Consequence, and Immersion. A total of 77 participants volunteered to join a demonstration of this technology at Purdue University. In groups ranging from two to four, participants shared a co-located virtual environment built to visualize point cloud measurements of exploded supernovae. This study is not experimental but observational. We found moderately high levels of user experience and moderate levels of workload demand in our results. We describe the implementation of the software platform and present user reactions to the technology that was created. These are described in detail within this manuscript.
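As context for the Workload measure, here is a minimal sketch of the standard weighted NASA-TLX scoring procedure (Hart, 1986) that the study draws on. The subscale ratings and pairwise-comparison weights below are invented for illustration, and the authors may have used the unweighted (raw TLX) variant.

```python
# Hedged sketch of NASA-TLX scoring (standard procedure; example values invented).

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings, weights):
    """Overall workload: weighted mean of six 0-100 subscale ratings.

    Weights come from 15 pairwise comparisons, so they sum to 15."""
    assert set(ratings) == set(SUBSCALES) and sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# One illustrative participant.
ratings = {"mental": 55, "physical": 20, "temporal": 40,
           "performance": 25, "effort": 50, "frustration": 30}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}
print(f"overall workload: {nasa_tlx(ratings, weights):.1f} / 100")  # ~41.3
```

A score near 40 on the 0-100 scale would read as the kind of moderate workload demand the study reports.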
|