431

XR Development with the Relay and Responder Pattern

Elvezio, Carmine January 2021 (has links)
Augmented Reality (AR) and Virtual Reality (VR) provide powerful, natural, and robust ways to interact with digital content across a number of different domains. AR and VR, collectively known as Extended Reality (XR), can facilitate the execution of surgical procedures, aid in maintenance and repair of mechanical equipment, provide novel visualization paradigms for data analysis, and even empower new ways to experience video games. These experiences are built on rich, complex real-time interactive systems (RISs) that require the integration of numerous components supporting everything from rendering of virtual content to tracking of objects and people in the real world. There are decades of research on the development of robust RISs, utilizing different software engineering modalities, which facilitate the creation of these systems. While in the past, developers would frequently write all of the components and the “logical glue” themselves (often built with graphics suites such as OpenGL and DirectX), with the rise of popular 3D game creation engines, such as Unity and Unreal, new development modalities have begun to emerge. While the underlying game engines provide a significantly easier pipeline to integrate different subsystems of AR/VR applications, there are a number of development questions that arise when considering how interaction, visualization, rendering, and application logic should interact, as developers are often left to create the “logical glue” on their own, leading to software components with low reusability. As the needs of users of these systems increase and become more complex, and as the software and hardware technology improves and becomes more sophisticated, the underlying subsystems must also evolve to help meet these needs. In this work, I present a new software design pattern, the Relay & Responder (R&R) pattern, which attempts to address the concerns found with many traditional object-oriented approaches in XR systems.
The R&R pattern simplifies the design of these systems by separating logical components from the communication infrastructure that connects them, while minimizing coupling and facilitating the creation of logical hierarchies that can improve XR application design and module reuse. Additionally, I explore how this pattern can, across a number of different research development efforts, aid in the creation of powerful and rich XR RISs. I first present related work in XR system design and introduce the R&R pattern. Then I discuss how XR development can be eased by utilizing modular building blocks and present the Mercury Messaging framework, which implements the R&R pattern. Next, I delve into three new XR systems that explore complex XR RIS designs (including user study management modules) using the pattern and framework. I then address the creation of multi-user, networked XR RISs using R&R and Mercury. Finally, I end with a discussion of additional considerations, advantages, and limitations of the pattern and framework, as well as prospective future work that will help improve both.
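The abstract describes the pattern's core idea — responders that hold application logic, and relays that form a routing hierarchy connecting them without direct coupling. A minimal sketch of that idea follows; all class and method names here are illustrative assumptions, not the actual Mercury Messaging API.

```python
# Minimal sketch of a Relay & Responder style message graph.
# Names are illustrative; they are not taken from Mercury Messaging.

class Message:
    def __init__(self, kind, payload=None):
        self.kind = kind
        self.payload = payload

class Responder:
    """Holds application logic; reacts to messages it receives."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def on_message(self, msg):
        self.received.append(msg)

class Relay:
    """Pure routing node: forwards messages onward, contains no logic."""
    def __init__(self):
        self.children = []   # relays and/or responders

    def connect(self, node):
        self.children.append(node)

    def emit(self, msg):
        for node in self.children:
            if isinstance(node, Relay):
                node.emit(msg)          # propagate down the hierarchy
            else:
                node.on_message(msg)    # deliver to logic

# Build a small hierarchy: root relay -> UI relay -> two responders.
root, ui = Relay(), Relay()
hud = Responder("hud")
audio = Responder("audio")
root.connect(ui)
ui.connect(hud)
ui.connect(audio)

root.emit(Message("tracking-updated", {"pose": (0, 0, 0)}))
print(hud.received[0].kind)   # -> tracking-updated
```

Because the responders never reference each other or the relays that feed them, any subtree of the hierarchy can be reused in another application unchanged, which is the coupling-reduction argument the abstract makes.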
432

A comparison between remote and physically co-located, Plane and AR tag, as well as 2D and 3D supervision in a collaborative AR-environment

Svahn, Niclas, Bergstedt, Philip January 2021 (has links)
As Covid-19 has been a long-lasting worldwide pandemic, more companies wish to find a solution in collaborative Augmented Reality (AR). That makes AR a growing technology that allows users to observe a virtual object in the real world in real-time. The virtual object can interact with real-world objects to fully augment the user’s reality. This paper's first aim is to evaluate whether a remote or a physically co-located AR space is most efficient. The second aim concerns whether AR planes or AR tags will increase efficiency in the virtual environment. The third is to evaluate whether having a supervisor on a desktop with a mouse, keyboard, and screen, or holding a phone connected to the same AR space, is most efficient. The experiment measures efficiency by collecting quantifiable data from the application while pairs of subjects complete the task of building a pyramid with cubes. Three paired t-tests were performed, one for each of the test conditions: co-located was tested against remote, AR tag against AR plane, and 2D against 3D. The null hypothesis for these three tests is that there is no difference. A survey was conducted to collect qualitative data to determine which configuration was preferred. It was shown that co-located, 2D supervising, and AR planes were perceived as the best configuration. The results of the paired t-tests show that the difference between co-located and remote is significant at the 99% confidence level, while the two other tests show no significant difference, even at the 95% level.
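The paired t-test analysis described above can be sketched with only the standard library; the completion times below are invented example data, not the study's measurements.

```python
# Sketch of a paired t-test (H0: mean difference = 0), stdlib only.
import math
import statistics

def paired_t(sample_a, sample_b):
    """Return the t statistic for two paired samples."""
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical task-completion times (seconds) for the same pairs of
# participants under the co-located and remote conditions.
co_located = [41, 38, 45, 40, 37, 43]
remote     = [55, 49, 58, 52, 47, 57]

t = paired_t(co_located, remote)
# |t| is then compared against the critical value for n-1 degrees of
# freedom at the chosen confidence level (99% in the study).
print(round(t, 2))   # -> -18.5
```

In practice one would use a library routine such as SciPy's `ttest_rel`, which also returns the p-value directly; the manual version above just makes the arithmetic behind the study's three tests explicit.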
433

Support component reusability by integrating augmented reality and product lifecycle management

Quesada Díaz, Raquel January 2016 (has links)
In an ever-changing market that expands continuously and where innovation cycles become shorter, there is an important increase in the renewal frequency of electrical and electronic equipment (EEE) and vehicles. This makes the manufacture of EEE and vehicles a fast-growing source of waste in terms of used products. The immense amount of information generated by all these technological products currently on the market must be managed throughout the whole life cycle of the products. The problem is to provide information about a technological product’s reusability in the recycling process, given the colossal complexity of many products and their operational lifespan. This includes instructions about the components' qualification as elements in a new product. Technologies such as augmented reality (AR) combined with product lifecycle management (PLM) systems can form the platform for an information system that delivers the necessary information and support for the decommissioning process of EEE and vehicles at the end of their life cycle. The present project describes a framework for integration between AR and PLM with the purpose of recycling a technological product at the end of its life cycle. The proposed method of integration could be considered both an innovation and a possible improvement compared with the current approach. It is believed that a method addressing the integration of AR and PLM could provide secure, efficient management of stored data related to various products and their properties relevant to the recycling process at the end of life of the product. The result of this approach is an AR-PLM system architecture that assists the circular economy’s recycling process through visual information superimposed on the physical technological equipment.
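The kind of AR-PLM link the thesis proposes can be reduced to a simple lookup: the AR client recognizes a component and queries its lifecycle data to decide whether the part can be reused. The record fields, the reuse rule, and the sample data below are assumptions for illustration, not taken from any actual PLM schema.

```python
# Toy sketch of an AR client querying PLM lifecycle data to produce the
# overlay text for a recognized component. All fields are assumed.
from dataclasses import dataclass

@dataclass
class PlmRecord:
    component_id: str
    material: str
    operating_hours: int
    rated_hours: int

    def reusable(self) -> bool:
        # Toy rule: reusable while under 80% of rated service life.
        return self.operating_hours < 0.8 * self.rated_hours

# Stand-in for a PLM database keyed by component id.
plm_db = {
    "fan-07": PlmRecord("fan-07", "aluminium", 3_000, 10_000),
    "psu-12": PlmRecord("psu-12", "mixed", 9_500, 10_000),
}

def ar_overlay_text(component_id: str) -> str:
    """Text an AR view would superimpose on the recognized component."""
    rec = plm_db[component_id]
    verdict = "REUSE" if rec.reusable() else "RECYCLE"
    return f"{rec.component_id}: {verdict} ({rec.material})"

print(ar_overlay_text("fan-07"))   # -> fan-07: REUSE (aluminium)
print(ar_overlay_text("psu-12"))   # -> psu-12: RECYCLE (mixed)
```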
434

Supporting the Design and Authoring of Pervasive Smart Environments

Tianyi Wang (12232550) 19 April 2022 (has links)
<p>The accelerated development of mobile computational platforms and artificial intelligence (AI) has led to an increase in interconnected products with sensors that are creating smart environments. The smart environment expands interactive spaces from limited digital screens, such as desktops and phones, to a much broader category that includes everyday objects, smart things, surrounding contexts, robots, and humans. The improvement of personal computing devices, including smartphones, watches, and AR glasses, further broadens the communication bandwidth between us and the ambient intelligence of the surrounding environment. Additionally, in these smart environments, people pursue personalization and are motivated to design and build their own smart environments and author customized functions.</p> <p> </p> <p>My work in this thesis focuses on investigating workflows and methods to support end-users in creating personalized interactive experiences within smart environments. In particular, I designed the authoring systems by inspecting different interaction modalities, namely direct input, spatial movement, in-situ activity, and embodied interactions between users and everyday objects, smart things, robots, and virtual mid-air content. To this end, we explored 1) the software tools, hardware modules, and machines that support users in augmenting non-smart environments with digital interfaces and functions, and 2) the intelligence and context-awareness powered by the smart environments that deliver automated assistance during living and entertainment experiences. 
In this thesis, I mainly studied the following authoring workflows and systems: 1) customizing interactive interfaces on ordinary objects by surface painted circuits, 2) constructing a spatially aware environment for service robots with IoT modules, 3) authoring robot and IoT applications that can be driven by body actions and daily activities and 4) creating interactive and responsive augmented reality applications and games that can be played through natural input modalities.</p> <p> </p> <p>Takeaways from the main body of the research indicate that the authoring systems greatly lower the barrier for end-users to understand, control, and modify the smart environments. We conclude that seamless, fluent, and intuitive authoring experiences are crucial for building harmonious human-AI symbiotic living environments.</p>
435

Collaborative learning via mobile language gaming and augmented reality: affordances and limitations of technologies

Perry, Bernadette 05 April 2022 (has links)
This research explores collaborative second language (L2) learning in gamified environments, and specifically examines affordances and limitations of mobile gamified language systems and augmented reality (AR) in supporting collaborative L2 learning. To that end, this design-based research entailed the development and evaluation of two L2 AR gamified collaborative learning tools, Explorez and VdeUVic. At different locations on campus, players interact with characters that give them quests, including clues or options to further the storyline. The gameplay interactions were designed to take place either in the form of written text or audio and video recordings, encouraging students to practice both oral and written language competencies. Three cohorts of FL2 university students playtested both gamified systems, and 58 students chose to participate in the study. The evaluation of the AR language tools was implemented by means of mixed-method case studies, collecting data of both a qualitative and quantitative nature through pre- and post-play questionnaires, interviews, and video recordings of student gameplay interactions for analysis. This research examined the learners’ perceptions of their learning experience and the ways in which students collaborated to complete the tasks. Additionally, the adaptation of Volet et al.’s (2009) collaborative learning framework permitted the examination of the learners’ content processing and social regulation during gameplay. The findings suggested the potential of AR gamified environments to facilitate high levels of interaction and collaboration. The analysis showed distinct patterns of collaborative learning across groups and sessions. Additionally, the findings identified patterns in the emergence of learners’ high-level co-regulation, as well as factors that helped students sustain high-level co-regulation during gameplay. / Graduate
436

Upplevelseköpet : En användbarhetsstudie av mobila Augmented Reality applikationer inom möbelhandeln / The experience purchase : A usability study of mobile Augmented Reality applications in the furniture trade

Bjurström Ellström, Mariette January 2020 (has links)
New technological innovations and solutions create new buying patterns and new opportunities for companies to showcase and communicate their products. Research shows that digital and interactive product experiences in retail enhance the consumer experience, as technology influences the perception of products and the willingness to buy. Augmented Reality (AR) improves the quality of the user experience by digitizing experiences and products in new, dynamic, and innovative ways that include the user's real environment. The mobile AR applications IKEA Place and Ethan Allen inHome are designed to visualize furniture and home decor in the user's own home environment. In order to understand and explore how users perceive the interface and usability of the AR applications, a usability study was performed, linking the results to accepted design principles in information architecture with a focus on usability and user experience. Results from the tests, interviews, and survey show that the content, placement, and amount of information play an important role in user experience and usability. The findings also indicate that without a clear user context, the application loses its purpose, however well-designed the interface is. In order to be usable, the applications must create some kind of added value that is perceivable to the user. The overall study has resulted in new empirical material regarding user research and has opened up a broader understanding of how users experience and accept mobile AR applications within the trade.
437

Att träna elever i historical thinking genom museibesök

Agronius, Rebecca, Börjesson Sundquist, Dennis January 2020 (has links)
The primary purpose of this research overview is to look into previous research on how teachers use physical or virtual visits to museums in order to stimulate learners’ historical thinking skills. In this paper we use Peter Seixas and Tom Morton’s (2013) definition of the framework of historical thinking. The paper is based on the research question: What does previous research say about how history teachers can stimulate learners' historical thinking in a physical or virtual museum environment? In order to find the research for this overview, several databases were used, such as ERC, ERIC, Swepub, and the search engine Libsearch. To meet the requirements, several inclusion and exclusion criteria were applied in the search for relevant articles. The researchers agree that there is potential to stimulate learners’ historical thinking skills during physical or virtual museum visits, and they also call for preparation and cooperation between schools and museums. The results of this paper may be used by teachers to assess whether a trip to a museum or exploring a virtual reality museum is worth all the relevant preparations.
438

Remote Assistance for Repair Tasks Using Augmented Reality

Sun, Lu 15 September 2020 (has links)
In the past three decades, using Augmented Reality (AR) in repair tasks has received a growing amount of attention from researchers, because AR provides users with a more immersive experience than traditional methods, e.g., instructional booklets, audio, and video content. However, traditional methods are mostly used today, because there are several key challenges to using AR in repair tasks. These challenges include device limitations, object pose tracking, human-computer interaction, and authoring. Fortunately, the research community is investigating these challenges actively. The vision of this thesis is to move AR technology towards being widely used in this field. Under this vision, I propose an AR platform for repair tasks and address the challenges of device limitations and authoring. The platform contains a new authoring approach that tracks the real components on the expert’s side to monitor her or his operations. The proposed approach gives experts a novel authoring tool to specify 6DoF movements of a component and apply geometrical and physical constraints in real-time. To address the challenge of device limitations, I present a hybrid remote rendering framework for applications on mobile devices. In my remote rendering approach, I adopt a client-server model, where the server is responsible for rendering high-fidelity models, encoding the rendering results, and sending them to the client, while the client renders low-fidelity models and overlays the high-fidelity frames received from the server on its rendering results. With this configuration, we are able to minimize bandwidth requirements and interaction latency, since only key models are rendered in high-fidelity mode. I perform a quantitative analysis of the effectiveness of my proposed remote rendering method. Moreover, I conduct a user study on the subjective and objective effects of the remote rendering method on the user experience. 
The results show that key model fidelity has a significant influence on the objective task difficulty, while interaction latency plays an important role in the subjective task difficulty. The results of the user study show how my method can benefit users while minimizing resource requirements. By conducting a user study of the AR remote assistance platform, I show that the proposed AR platform outperforms traditional instructional videos and sketching. Through questionnaires provided at the end of the experiment, I found that the proposed AR platform receives higher recommendations than sketching, and, compared to traditional instructional videos, it stands out in terms of instruction clarity, preference, recommendation, and confidence of task completion. Moreover, as to the overall user experience, the proposed method has an advantage over the video method.
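The hybrid client-server split described in this abstract (server renders only the high-fidelity "key" models; client renders the rest locally and composites the server frame on top) can be sketched as follows. The scene and frame representations are simplified assumptions for illustration; the actual framework works on encoded video frames, not dictionaries.

```python
# Sketch of the hybrid remote-rendering split: only the "key" models cost
# server bandwidth; everything else is rendered locally at low fidelity.

def server_render(scene, key_models):
    """Server side: render only key models at high fidelity."""
    return {name: "hi-fi" for name in scene if name in key_models}

def client_render(scene, key_models):
    """Client side: render non-key models at low fidelity."""
    return {name: "lo-fi" for name in scene if name not in key_models}

def composite(client_frame, server_frame):
    """Overlay the server's high-fidelity layer on the client frame."""
    frame = dict(client_frame)
    frame.update(server_frame)   # hi-fi results win for key models
    return frame

scene = ["engine-block", "bolt", "table", "background"]
key_models = {"engine-block"}    # only this model costs bandwidth

final = composite(client_render(scene, key_models),
                  server_render(scene, key_models))
print(final["engine-block"], final["bolt"])   # -> hi-fi lo-fi
```

The design trade-off the thesis measures falls directly out of this split: the fewer models in `key_models`, the lower the bandwidth and latency, but the more of the scene the user sees at reduced fidelity.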
439

Mixed Reality Book

Ruiz, Aleksandr January 2018 (has links)
This report covers the methodology, research, and design process of my Thesis Project I: The Mixed Reality Book. The project is a proof-of-concept system that adds contextual periphery effects to regular paper books, using Spatial Augmented Reality. The intention is to enhance reading experiences within public libraries – amongst children and students. In this brief study we investigate how Projected Periphery can create, improve, and augment reading by manipulating the physical book, and the area around it, using projections. Throughout the study, I conduct design engagements, rapid prototyping, and workshops with the intention of identifying meaningful interactions. Two primary contexts of use are identified and analysed with an emphasis on developing usable design conventions and laying the foundation for a Mixed Reality Book system. The result is a working prototype, analysis of the research and challenges, and an exploration of how this technology could be shaped further and deployed.
440

Influencing everyday decision-making by having digital systems introducing subtle cues - an initial viability study

Dalvig, Sara Kristina, Wasslöv, Lisa January 2020 (has links)
I denna studie undersökte vi hypotesen att visuella subtila ledtrådar kan användas i augmented reality-glasögon för att påverka människors beslutsfattande. Det var baserat på den konceptuella visionen om att AR-glasögon en dag kan vara en del av vårt dagliga liv, och då kan påverka hur beslut fattas. För att undersöka detta genomfördes en litteraturstudie och ett valbaserat videoexperiment. 39 deltagare i olika åldrar deltog i experimentet. Deltagarna delades slumpmässigt upp i tre grupper med olika förutsättningar; en kontrollgrupp, en grupp som exponerades för subtila “priming”-villkor och en grupp som exponerades för subtila “point-of-decision”-villkor. Resultaten påvisade samband mellan subtil priming och beslutsfattande, men inga samband kunde styrkas mellan subtila “point-of-decision”-villkor och beslutsfattande. / In this study we investigated the hypothesis that visual subtle cues can be used in Augmented Reality glasses to affect human decision making. This was based on the conceptual vision that AR-glasses might one day be a part of our daily life, and could then affect the way decisions are made. In order to investigate, a literature study and a video-based choice experiment was conducted. 39 participants of varied ages took part in the experiment. Participants were randomly assigned to a control condition, subtle priming cues condition, or a subtle point-of-decision cues condition. The subtle priming cue was in the form of action words and smiley faces, and the subtle point-of-decision cues were in the form of flashing lights. The results showed evidence of a relation between subtle priming cues and the choices made by the participants, but no evidence of such relations were found between subtle point-of-decision cues and the choices made.
