81

Faktoren zur Akzeptanz von Virtual Reality Anwendungen / Factors for the acceptance of virtual reality applications

von Eitzen, Ingo Martin January 2024 (has links) (PDF)
Immersive technologies, such as augmented and virtual reality, can either improve or endanger existing business models. However, their beneficial potential can only unfold if users accept the technologies and ultimately use them. This thesis describes what acceptance is and which influencing variables (factors) are particularly relevant for the acceptance of virtual reality. Subsequently, a novel, holistic acceptance model for virtual reality was designed based on the discussed literature and tested in three studies. In the first study, 129 subjects were asked to try out either a training scenario or a mini-game in augmented or virtual reality (2x2 design). In both applications, bottles had to be removed from a virtual assembly line. The study investigated immersion, usefulness, perceived pleasure (hedonism), and satisfaction. The results revealed that immersion differs between augmented and virtual reality and that perceived pleasure and usefulness are significant predictors of satisfaction. In the second study, 62 persons participated. They were asked to complete the training scenario again, this time enriched with auditory content and animated figures and featuring slightly better graphics quality. The data were compared with the virtual reality scenarios from the first study to examine the impact of presence on hedonism. Although no relevant difference was found between the groups, presence was shown to significantly predict hedonism. A total of 35 subjects took part in the third study, which examined the virtual representation of oneself (embodiment) in virtual reality and its influence on hedonism. The subjects were asked to go through the training scenario again, this time using the controller of the head-mounted display for input; in the first study, gesture control had been used instead. The analysis of this manipulation revealed no effect on embodiment. However, embodiment was a significant predictor of hedonism. Following the studies, the model was assessed with the data from the virtual reality groups of the first study and was largely confirmed. Finally, the findings are placed in the context of the literature, possible causes for the results are discussed, and further research needs are identified.
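As a purely illustrative aside (not part of the thesis), the kind of predictor analysis summarized above, in which usefulness and hedonism are tested as predictors of satisfaction, can be sketched with ordinary least squares. All ratings below are hypothetical and the variable names are assumptions.

```python
import numpy as np

# Hypothetical Likert-scale ratings (1-7) from a small sample; purely illustrative.
usefulness   = np.array([5.2, 4.1, 6.0, 3.5, 5.8, 4.9, 6.3, 2.8])
hedonism     = np.array([6.1, 3.9, 5.5, 4.2, 6.4, 5.0, 6.8, 3.1])
satisfaction = np.array([5.8, 4.0, 5.9, 3.8, 6.2, 5.1, 6.5, 3.0])

# Design matrix with an intercept column; ordinary least squares via numpy.
X = np.column_stack([np.ones_like(usefulness), usefulness, hedonism])
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
predicted = X @ coef

# R^2 as a rough indication of how much variance the two predictors explain.
ss_res = np.sum((satisfaction - predicted) ** 2)
ss_tot = np.sum((satisfaction - satisfaction.mean()) ** 2)
print(f"intercept={coef[0]:.2f}, b_usefulness={coef[1]:.2f}, b_hedonism={coef[2]:.2f}")
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```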
82

Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

Xiong, Yiyan 01 January 2014 (has links)
3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be able to be controlled by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements on contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds, and the resulting human model looks like the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system, in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
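For readers unfamiliar with ShortStraw, the corner-finding idea the dissertation builds on can be sketched as follows. This is a minimal ShortStraw-style sketch, not the IStraw variant or the dissertation's code; the window size and threshold factor are assumptions.

```python
import math

def shortstraw_corners(points, window=3, threshold_factor=0.95):
    """Find corner indices in an evenly resampled 2D contour using straw lengths.

    A 'straw' at point i is the straight-line distance between the points
    `window` steps before and after i; local minima below a threshold mark corners.
    Illustrative ShortStraw-style sketch, not the IStraw variant from the thesis.
    """
    n = len(points)
    if n < 2 * window + 1:
        return []

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    straws = [dist(points[i - window], points[i + window])
              for i in range(window, n - window)]
    threshold = threshold_factor * sorted(straws)[len(straws) // 2]  # median-based cut-off

    corners = []
    for j in range(1, len(straws) - 1):
        is_local_min = straws[j] <= straws[j - 1] and straws[j] <= straws[j + 1]
        if is_local_min and straws[j] < threshold:
            corners.append(j + window)  # map back to an index in the original point list
    return corners

# Example (hypothetical): corner_indices = shortstraw_corners(resampled_contour_points)
```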
83

The use of mixed reality in simulations

Byström, Jesper January 2022 (has links)
Simulators utilizing virtual reality have a problem with the visibility of controls: when using a head-mounted display, a user is blind to the controls being used, meaning that the user needs to become accustomed to the controls before being able to use the simulator properly. Oryx Simulations has acknowledged this issue and has been experimenting with whether mixed reality could be used to solve it. This study investigates two techniques as a solution, depth occlusion and stencil masking, and compares them to the commonly used chroma key functionality, which could theoretically achieve a seamless blend between virtual and real objects. The results show a promising outcome for depth occlusion specifically, which achieved the highest total score, the best visibility, and the lowest amount of leakage among the categories tested. This report presents and reflects upon those results and concludes by discussing opportunities for further investigation into depth occlusion.
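To make the two compared techniques concrete, a simplified CPU-side sketch of depth occlusion and chroma keying is shown below. It is not the thesis's implementation (real MR pipelines do this per pixel on the GPU), and the array shapes, key colour, and tolerance are assumptions.

```python
import numpy as np

def composite_depth_occlusion(camera_rgb, camera_depth, virtual_rgb, virtual_depth):
    """Per-pixel blend: show the real camera pixel wherever the real surface
    is closer to the viewer than the rendered virtual surface.
    camera_rgb/virtual_rgb are (H, W, 3); depths are (H, W) in the same units."""
    real_in_front = camera_depth < virtual_depth              # boolean mask (H, W)
    return np.where(real_in_front[..., None], camera_rgb, virtual_rgb)

def composite_chroma_key(camera_rgb, virtual_rgb, key=(0, 255, 0), tol=40):
    """Show the virtual pixel wherever the camera sees the key colour
    (e.g. a green-painted cabin), and the real pixel everywhere else."""
    diff = np.abs(camera_rgb.astype(int) - np.array(key)).sum(axis=-1)
    is_key = diff < tol                                        # boolean mask (H, W)
    return np.where(is_key[..., None], virtual_rgb, camera_rgb)
```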
84

Embodied Data Exploration in Immersive Environments: Application in Geophysical Data Analysis

Sardana, Disha 05 June 2023 (has links)
Immersive analytics is an emerging field of data exploration and analysis in immersive environments. It is an active research area that explores human-centric approaches to data exploration and analysis based on the spatial arrangement and visualization of data elements in immersive 3D environments. The availability of immersive extended reality systems has increased tremendously in recent years, but such systems are still not as widely used as conventional 2D displays. In this dissertation, we described an immersive analysis system for spatiotemporal data, performed several user studies to measure user performance in the developed system, and laid out design guidelines for an immersive analytics environment. In our first study, we compared the performance of users on specific visual analytics tasks in an immersive environment and on a conventional 2D display. The approach was realized based on the coordinated multiple-views paradigm. We also designed an embodied interaction for the exploration of spatial time series data. The findings from the first user study showed that the developed system is more efficient in a real immersive environment than when used on a conventional 2D display. One of the important challenges we encountered while designing an immersive analytics environment was finding the optimal placement and identification of the various visual elements. In our second study, we explored the iterative design of the placement of visual elements and interaction with them based on frames of reference. Our iterative designs explored the impact of the visualization scale for three frames of reference and used the collected user feedback to compare the advantages and limitations of these three frames of reference. In our third study, we described an experiment that quantitatively and qualitatively investigated the use of sonification, i.e., conveying information through nonspeech audio, in an immersive environment that utilized empirical datasets obtained from a multi-dimensional geophysical system. We discovered that using event-based sonification in addition to the visual channel was extremely effective in identifying patterns and relationships in large, complex datasets. Our findings also imply that the inclusion of audio in an immersive analytics system may increase users’ level of confidence when performing analytics tasks like pattern recognition. We outlined the sound design principles for an immersive analytics environment using real-world geospace science datasets and assessed the benefits and drawbacks of using sonification in an immersive analytics setting. / Doctor of Philosophy / When it comes to exploring data, visualization is the norm. We make line charts, scatter plots, bar graphs, or heat maps to look for patterns in data using traditional desktop-based approaches. However, humans are biologically optimized to observe the world in three dimensions. This research is motivated by the idea that representing data in immersive 3D environments can provide a new perspective that may lead to the discovery of previously undetected data patterns. Experiencing the data in three dimensions, engaging multiple senses like sound and sight, and leveraging human embodiment, interaction capabilities, and sense of presence may lead to a unique understanding of the data that is not feasible using traditional visual analytics.
In this research, we first compared the data analysis process in a mixed reality system, where real and virtual worlds co-exist, versus doing the same analytical tasks in a desktop-based environment. In our second study, we studied where different charts and data visualizations should be placed based on the scale of the environment, such as table-top versus room-sized. We studied the strengths and limitations of different scales based on the visual and interaction design of the developed system. In our third study, we used a real-world space science dataset to test the liabilities and advantages of using the immersive approach. We also used audio and explored what kinds of audio work for which analytical tasks and laid out design guidelines based on audio. Through this research, we studied how to do data analytics in emerging mixed reality environments and presented results and design guidelines for future developers, designers, and researchers in this field.
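As a rough illustration of event-based sonification (conveying data through non-speech audio), the toy sketch below maps each data value to a short sine tone whose pitch rises with the value. It is not the dissertation's sound design; all parameter choices and the example values are assumptions.

```python
import numpy as np

def sonify_events(values, v_min, v_max, sample_rate=44100,
                  f_low=220.0, f_high=880.0, tone_s=0.15):
    """Map each data value to a short sine tone whose pitch rises with the value.

    A toy example of event-based sonification: one tone per event, with the
    frequency interpolated between f_low and f_high. Returns a mono float array
    that could be written out with an audio library of choice.
    """
    tones = []
    t = np.linspace(0.0, tone_s, int(sample_rate * tone_s), endpoint=False)
    for v in values:
        frac = np.clip((v - v_min) / (v_max - v_min), 0.0, 1.0)
        freq = f_low + frac * (f_high - f_low)
        envelope = np.hanning(t.size)            # taper each tone to avoid clicks
        tones.append(0.5 * envelope * np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

# Example: sonify a hypothetical geophysical reading series (assumed numbers).
audio = sonify_events([10.2, 11.5, 30.8, 12.0, 45.1], v_min=0.0, v_max=50.0)
```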
85

Looks Good To Me (LGTM): Authentication for Augmented Reality

Gaebel, Ethan Daniel 27 June 2016 (has links)
Augmented reality is poised to become the next dominant computing paradigm over the course of the next decade. With the three-dimensional graphics and interactive interfaces that augmented reality promises, it will rival the very best science fiction novels. Users will want to have shared experiences in these rich augmented reality scenarios, but they will surely want to restrict who can see their content. It is currently unclear how users of such devices will authenticate one another. Traditional authentication protocols reliant on centralized authorities fall short when different systems with different authorities try to communicate, and extra infrastructure means extra resource expenditure. Augmented reality content sharing will usually occur in face-to-face scenarios, where it will be advantageous for both performance and usability reasons to keep communications and authentication localized. Looks Good To Me (LGTM) is an authentication protocol for augmented reality headsets that leverages the unique hardware and context provided by augmented reality headsets to solve an old problem in a more usable and more secure way. LGTM works over point-to-point wireless communications, so users can authenticate one another in any circumstance, and is designed with usability at its core, requiring users to perform only two actions: one to initiate and one to confirm. LGTM allows users to intuitively authenticate one another using, seemingly, only each other's faces. Under the hood, LGTM uses a combination of facial recognition and wireless localization to ensure secure and extremely simple authentication. / Master of Science
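Conceptually, the acceptance decision described in the abstract combines two checks: the face seen through the headset must match the face claimed over the wireless link, and the wireless-localized position of the claiming device must agree with where that face appears. The sketch below illustrates only this idea; it is not the LGTM protocol implementation, and the data fields and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PeerClaim:
    """What a peer might send over the point-to-point link (illustrative fields only)."""
    face_embedding: list    # embedding of the peer's face, exchanged during the handshake
    device_position: tuple  # (x, y, z) estimated via wireless localization

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def looks_good_to_me(seen_face_embedding, seen_face_position, claim: PeerClaim,
                     face_threshold=0.8, position_tolerance_m=0.5):
    """Accept the peer only if the face I see matches the face they claim AND the
    wireless-localized device position agrees with where that face appears.
    Conceptual sketch of the pairing check described in the abstract, not the
    actual LGTM implementation; thresholds are assumptions."""
    face_ok = cosine_similarity(seen_face_embedding, claim.face_embedding) >= face_threshold
    offset = sum((s - c) ** 2 for s, c in zip(seen_face_position, claim.device_position)) ** 0.5
    position_ok = offset <= position_tolerance_m
    return face_ok and position_ok
```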
86

Real-time performance comparison of environments created using traditional geometry rendering versus Unreal Nanite technology in virtual reality

Tianshu Li Sr. (17596065) 26 April 2024 (has links)
This study examines the use of Nanite in Unreal Engine 5.3 in a VR environment and evaluates its impact on scene performance and image quality. Through experimental studies, it was found that Nanite significantly reduced the number of triangles and draw calls for complex scenes. However, Nanite may have caused FPS drops and excessive GPU load, limiting its application areas. Additionally, disabling forward shading, a precondition for using Nanite, reduces performance even though it has positive impacts on graphical quality. The results show that Nanite may have potential in VR environments but requires further optimization to improve its performance. Future research should focus on new optimization methods and expand the use of Nanite in different fields as hardware technology improves.
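As a small aside on how such rendering comparisons are typically reported, the sketch below turns a capture of per-frame times into average FPS and 1% low FPS. It is illustrative only and not the instrumentation used in the study; the sample numbers are assumptions.

```python
import statistics

def summarize_frame_times(frame_times_ms):
    """Summarize per-frame times (milliseconds) into the metrics commonly
    reported in rendering comparisons: average FPS and 1% low FPS."""
    avg_ms = statistics.mean(frame_times_ms)
    worst_1pct = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99):]
    return {
        "avg_fps": 1000.0 / avg_ms,
        "one_percent_low_fps": 1000.0 / statistics.mean(worst_1pct),
    }

# Hypothetical captures: Nanite-enabled vs. traditional geometry (assumed numbers).
print(summarize_frame_times([11.2, 11.5, 12.0, 11.8, 25.4]))
print(summarize_frame_times([13.1, 13.0, 13.4, 13.2, 13.6]))
```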
87

From E-Learning to M-Learning – the use of Mixed Reality Games as a New Educational Paradigm

Fotouhi-Ghazvini, Faranak, Earnshaw, Rae A., Moeini, A., Robison, David J., Excell, Peter S. January 2011 (has links)
This paper analyses different definitions of mobile learning which have been proposed by various researchers. The most distinctive features of mobile learning are extracted to propose a new definition for Mobile Educational Mixed Reality Games (MEMRG). A questionnaire and a quantifying scale are designed to assist game developers in designing MEMRG. A new psycho-pedagogical approach to teaching is proposed for MEMRG. This methodology is based on the theme of "conversation" between different actors of the learning community, with the objective of building the architectural framework for MEMRG.
88

THE FUTURE OF EMOTIONAL SUPPORT: CAN VIRTUAL REALITY (VR) REPLACE EMOTIONAL SUPPORT ANIMALS (ESA)? - A COMPARATIVE ANALYSIS

Abhaya Kirtivasan (19193554) 23 July 2024 (has links)
This paper examines whether Virtual Reality (VR) can be a good alternative to Emotional Support Animals (ESAs) for emotional support. With new advancements, VR is becoming more popular in mental health treatments. This study looks at how VR can address issues like allergies, housing restrictions, and the need for constant care that comes with having ESAs. By reviewing various studies, the paper compares the benefits of VR and ESAs for emotional and psychological support. Key findings show that VR can help reduce stress, create feelings of love and belonging, and reduce loneliness, just like a physical ESA. VR is also accessible, flexible, and cost-effective, making it a great option for those who cannot have traditional ESAs. However, the study notes some limitations of VR, such as the absence of physical touch and technical challenges. It highlights the need for long-term studies and diverse samples to confirm VR's long-term benefits. This research shows that VR could be a new, scalable, and inclusive way to support mental health.
89

Immersion in Georges Seurat’s Painting “La Grande Jatte” with VR: A Study for Art Appreciation

Siddhant Bal (19182175) 20 July 2024 (has links)
The study aims to provide insight into improving the understanding and, in tandem, the appreciation of a traditional art piece, "A Sunday Afternoon on the Island of La Grande Jatte" by Georges Seurat, using virtual reality.
90

Evaluation of Hand Collision in Mixed Reality

Tegelind, Adrian January 2024 (has links)
Background. With the growing prospects of extended realities (XR), new use cases and experiences are constantly being developed. Especially with the introduction of mixed reality (MR), which allows for a more seamless blend of the physical and digital space, there are great opportunities in many fields such as education and training, where dangerous procedures can be practiced safely. However, to make these experiences as effective and educational as possible, they need to be realistic. Objectives. One important aspect of creating realistic experiences is believable collision between the user's physical hand and the digital objects. This study specifically takes aim at this aspect, trying to find how performance and user experience (UX) are affected by the addition of collision around the user's hands in an MR environment. To help answer these questions, a set of objectives was formulated: finding and implementing a hand collision method, designing and performing the user study, and finally finding and utilizing appropriate methods for analyzing the collected data. Methods. To get a better understanding of the UX and performance of using hand collision, a user study was created in which the participants had to complete a series of tasks, with and without collision around their hands, answering a questionnaire about their experience after each task. Once collected, the data were analyzed with the help of the SUS scoring system and statistical tests. Results. The study had 12 participants. The conditions with and without hand collision received average SUS scores of 62.5 and 69.2, respectively. The results show that the method using no collision performed better in terms of time to complete the task; however, hand collision performed better in terms of the number of grabs used. No statistically significant difference was detected between having or not having hand collision in terms of intuitiveness and realism. However, participants were observed to intuitively use the hand collision to their advantage. Conclusions. In conclusion, the participants did not perform better with hand collision, but they did indicate some level of increased intuition and realism. The negative aspects of the hand collision are believed to be attributable to the method used to implement it, and potential exists in the area for further improvements and research.
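The SUS scores quoted above (62.5 and 69.2) come from the standard System Usability Scale scoring rule; a small sketch of that calculation, applied to one hypothetical questionnaire, is shown below.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 ratings.

    Odd-numbered items are positively worded (contribution: rating - 1),
    even-numbered items negatively worded (contribution: 5 - rating);
    the summed contributions are scaled by 2.5. Standard SUS scoring,
    shown only to illustrate how averages like 62.5 and 69.2 arise.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten ratings on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based, so even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical single participant's questionnaire:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # -> 77.5
```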
