About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
681

A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

Günther, Ulrik 27 November 2020
Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data being created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of these data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into, or embedded in, full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh data and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefits in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets by tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.

Table of contents:
Abstract; Foreword and Acknowledgements; Overview and Contributions
Part I - Introduction: 1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction
Part II - VR and AR for Systems Biology: 5 scenery - VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions
Part III - Case Studies: 11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview - Integrating scenery into ImageJ2 & Fiji
Part IV - Conclusion: 15 Conclusions and Outlook
Backmatter & Appendices: A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung
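For readers unfamiliar with volumetric bioimaging data, the sketch below (plain Python/NumPy on synthetic data, not scenery's JVM API) illustrates the simplest first-look visualisation of the kind of 4D stack the abstract describes: a maximum intensity projection of one timepoint.

```python
# Minimal sketch, not scenery's API: a maximum intensity projection (MIP),
# often the first quick look at one timepoint of a (t, z, y, x) fluorescence stack.
import numpy as np

def max_intensity_projection(stack_4d: np.ndarray, timepoint: int, axis: int = 0) -> np.ndarray:
    """Collapse one spatial axis of a single timepoint by taking the maximum along it."""
    volume = stack_4d[timepoint]           # (z, y, x)
    return volume.max(axis=axis)           # axis=0 collapses z, yielding a (y, x) image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stack = rng.poisson(5, size=(10, 32, 128, 128)).astype(np.float32)  # synthetic data only
    mip = max_intensity_projection(stack, timepoint=3)
    print(mip.shape)  # (128, 128)
```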
682

Designing an Augmented Reality Based Navigation Interface for Large Indoor Spaces

Curtsson, Fanny January 2021
Navigating from one place to another is something we as humans do on an everyday basis, and modern technology has made it easier than ever by providing navigation tools in our mobile devices. In indoor spaces, augmented reality (AR) based navigation interfaces have shown a lot of potential, as they have been shown to increase efficiency and overall usability. However, there is a lack of research investigating how these types of interfaces should be designed to create a good user experience. This study aimed to provide more insight into this by exploring the usability of a mobile AR interface for indoor navigation through the Rapid Iterative Testing and Evaluation (RITE) method. In total, six participants tested the interface in three rounds of user testing and iteration, with two participants taking part in each round. The results showed that the usability increased with each iteration. The findings also reaffirmed the importance of minimizing the amount of information presented in the AR interface, for example by presenting information before the user enters the AR view, as well as the value of adding support for occlusion. Moreover, confusion caused by how the virtual objects aligned with the real physical space showed the importance of testing on-site.
683

A study on the use of ARKit to extract and geo-reference floorplans / En studie på användingen av ARKit för att extrahera och georeferera planlösningar

Larsson, Niklas, Runesson, Hampus January 2021
Indoor positioning systems (IPS) have seen an increase in demand because of the need to locate users in environments where Global Navigation Satellite Systems (GNSS) lack accuracy. The current way of implementing an IPS is often tedious and time-consuming. However, with the improvements in position estimation and object detection on phones, a lightweight and low-cost solution could become the standard for the implementation phase of an IPS. Apple recently included a Light Detection And Ranging (LiDAR) sensor in their phones, greatly improving the phones' depth measurements and depth understanding. This allows for a more accurate virtual representation of an environment. This thesis studies the accuracy of ARKit's reconstructed world and how different environments impact that accuracy. The thesis also investigates the use of reference points as a tool to map the reconstructed environment to a geo-referenced map, such as Google Maps and OpenStreetMap. The results show that ARKit can create virtual representations with centimetre-level accuracy for small to medium-sized environments. For larger or vertical environments, such as corridors or staircases, ARKit's SLAM algorithm no longer recognizes previously visited areas, causing both duplicated virtual environments and large drift errors. With the use of multiple reference points, we showed that ARKit can and should be considered a viable tool for scanning and mapping small-scale environments to geo-referenced floorplans.
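The abstract does not spell out how the reconstructed coordinates are mapped onto a geo-referenced map, but a common way to use matched reference points for this is to fit a 2D similarity transform (scale, rotation, translation) by least squares. The sketch below is only an illustration of that idea with made-up coordinates; it is not the thesis's implementation.

```python
# Illustrative sketch (assumed approach, not the thesis's code): fit a 2D similarity
# transform from ARKit scan coordinates to map coordinates using matched reference
# points, via the least-squares (Umeyama) method.
import numpy as np

def fit_similarity_2d(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 2) matched points. Returns scale s, rotation R (2x2), translation t."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)                 # cross-covariance of centred points
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))               # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - s * R @ src_mean
    return s, R, t

if __name__ == "__main__":
    scan = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])    # metres, ARKit frame
    world = np.array([[10.0, 5.0], [10.0, 7.0], [9.0, 7.0], [9.0, 5.0]]) # map frame (made up)
    s, R, t = fit_similarity_2d(scan, world)
    # Map every scanned floor-plan vertex into map coordinates:
    print(np.round((s * (R @ scan.T)).T + t, 3))
```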
684

End-to-end Speech Separation with Neural Networks

Luo, Yi January 2021
Speech separation has long been an active research topic in the signal processing community, given its importance in a wide range of applications such as hearable devices and telecommunication systems. It not only serves as a fundamental problem for higher-level speech processing tasks such as automatic speech recognition, natural language understanding, and smart personal assistants, but also plays an important role in smart earphones and augmented and virtual reality devices. With the recent progress in deep neural networks, separation performance has been significantly advanced by various new problem definitions and model architectures. The most widely used approach in past years performs separation in the time-frequency domain, where a spectrogram or a time-frequency representation is first calculated from the mixture signal and multiple time-frequency masks are then estimated for the target sources. The masks are applied to the mixture's time-frequency representation to extract the target representations, and operations such as the inverse short-time Fourier transform are then used to convert them back to waveforms. However, such frequency-domain methods may have difficulties in modeling the phase spectrogram, as the conventional time-frequency masks often only consider the magnitude spectrogram. Moreover, the training objectives for the frequency-domain methods are typically also defined in the frequency domain, which may not be in line with widely used time-domain evaluation metrics such as signal-to-noise ratio and signal-to-distortion ratio. The problem formulation of time-domain, end-to-end speech separation naturally arises to tackle these disadvantages of frequency-domain systems. End-to-end speech separation networks take the mixture waveform as input and directly estimate the waveforms of the target sources. Following the general pipeline of conventional frequency-domain systems, which contains a waveform encoder, a separator, and a waveform decoder, time-domain systems can be designed in a similar way while significantly improving separation performance. In this dissertation, I focus on multiple aspects of the general problem formulation of end-to-end separation networks, including system design, model architectures, and training objectives. I start with a single-channel pipeline, which we refer to as the time-domain audio separation network (TasNet), to validate the advantage of end-to-end separation compared with conventional time-frequency-domain pipelines. I then move to the multi-channel scenario and introduce the filter-and-sum network (FaSNet) for both fixed-geometry and ad-hoc-geometry microphone arrays. Next, I introduce methods for lightweight network architecture design that allow the models to maintain separation performance while using as little as 2.5% of the model size and 17.6% of the model complexity. After that, I look into the training objective functions for end-to-end speech separation and describe two training objectives, for separating varying numbers of sources and for improving robustness in reverberant environments, respectively. Finally, I take a step back, revisit several problem formulations in the end-to-end separation pipeline, and raise further questions in this framework to be analyzed and investigated in future work.
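As a minimal illustration of the conventional frequency-domain pipeline described above (STFT, magnitude masking with the mixture's phase, inverse STFT), the sketch below separates two synthetic tones with oracle ideal-ratio masks and scores the result with scale-invariant SNR. It is not TasNet or FaSNet, only the baseline signal flow the dissertation departs from.

```python
# Sketch of the frequency-domain masking pipeline (not the dissertation's models):
# STFT -> per-source magnitude mask -> masked mixture -> inverse STFT.
# Oracle ideal-ratio masks on synthetic tones are used just to show the signal flow.
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs * 2) / fs
src1 = np.sin(2 * np.pi * 440 * t)                  # stand-ins for two sources
src2 = np.sin(2 * np.pi * 1000 * t)
mix = src1 + src2

_, _, MIX = stft(mix, fs=fs, nperseg=512)
_, _, S1 = stft(src1, fs=fs, nperseg=512)
_, _, S2 = stft(src2, fs=fs, nperseg=512)

eps = 1e-8
mask1 = np.abs(S1) / (np.abs(S1) + np.abs(S2) + eps)   # ideal ratio mask (magnitude only;
mask2 = 1.0 - mask1                                    # the phase comes from the mixture)

_, est1 = istft(mask1 * MIX, fs=fs, nperseg=512)
_, est2 = istft(mask2 * MIX, fs=fs, nperseg=512)

def si_snr(est, ref):
    """Scale-invariant SNR in dB, a common time-domain separation metric."""
    ref, est = ref - ref.mean(), est - est.mean()
    proj = np.dot(est, ref) / np.dot(ref, ref) * ref
    return 10 * np.log10(np.dot(proj, proj) / np.dot(est - proj, est - proj))

n = len(src1)
print(round(si_snr(est1[:n], src1), 1), round(si_snr(est2[:n], src2), 1))
```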
685

Design of instructions for a remanufacturing operation using AR

Hervás Gutiérrez, María, Sáez García, Elisa January 2021
The concept of sustainability has gained visibility in recent years. Both society and companies are increasing their interest in each of its social, environmental, and economic dimensions. This interest is one of the reasons why the Circular Economy is gaining ground. One of the goals of this model of production and consumption is waste reduction through the creation of a closed-loop chain, in which remanufacturing has a crucial role. Despite its benefits, remanufacturing increases the complexity of the task, limiting access to this kind of job due to the high level of knowledge required. This is why Augmented Reality is presented in this thesis as a method to assist operators by guiding them and providing real-time feedback interactively. The main goal is to increase the efficiency, effectiveness, and accessibility of the task. At the same time, the project aims to contribute, to a greater or lesser extent, to all dimensions of sustainability. To meet the objectives mentioned above, and following the Design Science Research Methodology (DSRM), an artifact is created. The case study is an assembly operation reproduced on the IRMA demonstrator in the ASSAR Innovation Arena in Skövde (Sweden). A set of Augmented Reality instructions has been designed to guide the operator through the assembly task, first with Microsoft's Dynamics 365 Guides software and afterwards with Unity. The two tools are compared, and an attempt is made to justify the implementation of AR specifically in the remanufacturing assembly task. The results seem to point to a reduction of errors in the operation. Finally, conclusions are drawn based on previous studies and on the analysis of the design and implementation of the set of instructions. / Note: additional digital material (e.g. video, image, or audio files) or models/artifacts belonging to the thesis will be sent to the archive.
686

Augmented Reality Supported Learning Process for Operators

Joseph Christian, Haranya, Mani, Sofia January 2022
Intro: Conventional training methods are intended to teach in the most human way possible, but is this the most effective approach, and can another method improve the process or eliminate non-value-adding activities? This master thesis examines the difference between traditional training and AR-supported training, studying different aspects of training to identify the advantages and disadvantages of each method. The research questions were the following: RQ1: Is the learning process improved with or without AR-supported technology? RQ2: What are the benefits of AR use in industry? RQ3: Will the investment in AR implementation in the learning process pay off?
Method: To answer the research questions, a case study was conducted. It involved working with two companies: an external company, providing a theoretical answer based on experience, and an experimental study at the internal case company. In parallel to the case study, a literature review was carried out to answer the research questions.
Theory and Literature Review: Literature for the frame of reference was gathered to support the thesis and give perspective on the investigated area. The literature review sought answers to the research questions, and the search was systematically structured to focus on gathering data to answer them.
Analysis: The findings from this research show that AR-supported training is beneficial and has the potential to eliminate non-value-adding activities. AR has many capabilities, and research shows that AR in general has a positive impact.
Conclusion: AR-supported training has both advantages and disadvantages, but the potential for improvement is high. It is difficult to conclude whether it is economical to invest in AR at the technology's current stage, since the outcome can depend on several factors and parameters. However, AR does contribute to more effective learning and further benefits within the applied area. It has been shown to reduce the error rate and thus may increase the quality of the product.
687

Interacting with Hand Gestures in Augmented Reality: A Typing Study

Moberg, William, Pettersson, Joachim January 2017
Smartphones are used today to accomplish a variety of different tasks, but they have some issues that might be solved with new technology. Augmented Reality (AR) is a developing technology that in the future could be used in our daily lives to solve some of the problems that smartphones have. Before people adopt this new augmented technology, it is important to have an intuitive method to interact with it. Hand gesturing has always been a vital part of human interaction, and using hand gestures to interact with devices has the potential to be a more natural and familiar method than traditional methods such as keyboards, controllers, and computer mice. The aim of this thesis is to explore whether hand gesture recognition in an Augmented Reality head-mounted display can provide the same interaction possibilities as a smartphone touchscreen. This was done by implementing an application in Unity that mimics the interface of a smartphone but uses hand gestures as input in AR. The Leap Motion Controller was used to perform hand gesture recognition. To test how practical hand gestures are as an interaction method, text typing was chosen as the measurement task, as it is used in many applications on smartphones; the results can thus be better generalized to real-world usage. Five different keyboards were designed and tested in a pilot study. A controlled experiment was then conducted, in which 12 participants tried two hand-gesture keyboards and a touchscreen keyboard, to compare hand gestures with touchscreen interaction. In the experiment, participants wrote words using the keyboards while their completion time and accuracy were recorded. After using each keyboard, participants completed a questionnaire to measure its usability. The results consist of an implementation of five different keyboards and the data collected from the experiment: completion time, accuracy, and usability derived from the questionnaire responses. Statistical tests were used to determine statistical significance between the keyboards, and the results are presented in graphs and tables. They show that typing with pinch gestures in augmented reality is a slow and tiresome way of typing and negatively affects the users' completion time and accuracy relative to using a touchscreen. The lower completion time and higher usability of the touchscreen keyboard could be determined with statistical significance. Prediction and auto-completion might help with fatigue, as fewer key presses are needed to form a word. The research concludes that hand gestures are a reasonable input technique for certain tasks that a smartphone performs, such as scrolling through a website or opening an email. However, tasks that involve typing long sentences, e.g. composing an email, are arduous using pinch gestures. When it comes to typing, the authors advise developers to employ a continuous gesture-typing approach such as Swype for Android and iOS.
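The completion-time and accuracy measures reported in typing studies like this one are commonly computed as words per minute (one "word" = five characters) and a minimum-string-distance error rate. The sketch below shows those two calculations on a made-up trial; it is not the thesis's analysis code.

```python
# Illustrative sketch (hypothetical data, not the thesis's logs): entry speed in
# words per minute and an MSD (Levenshtein) based error rate for one typing trial.
def levenshtein(a: str, b: str) -> int:
    """Minimum string distance between presented and transcribed text."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard convention: one 'word' equals five characters."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def error_rate(presented: str, transcribed: str) -> float:
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

# Hypothetical trial from one participant on a pinch-gesture keyboard:
print(words_per_minute("augmented reality", 41.0))          # ≈ 4.98 wpm
print(error_rate("augmented reality", "augmentd realty"))   # ≈ 0.12
```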
688

IKEA flytt[AR] in

Kornblad, Moa January 2018
The expansion of digitalization, together with people's ever-higher expectations of technical development, quality, and efficiency, puts greater demands on companies to create satisfying user experiences for products and services. The application IKEA Place was developed to meet consumers on their terms and, through a satisfying user experience, influence their purchase decisions. To find out whether and how the user experience of IKEA Place affects users' intentions to buy products from IKEA, one survey and two focus group discussions were conducted. User tests, in which all nine participants interacted with the application in an appropriate way, were carried out as part of the focus-group method. For an application to be accepted and used, several different qualities need to satisfy the user and help him or her reach the goal of the usage. The survey, the focus group discussions, and the user tests indicate that the user experience of IKEA Place is unsatisfying and that it affects the users' intentions to buy to an almost negligible extent.
689

BUDI: Building Urban Designs Interactively - a spatial-based visualization and collaboration platform for urban design

Sun, Xi 03 September 2020
BUDI (Building Urban Designs Interactively) is an integrated 3D visualization and remote collaboration platform for complex urban design tasks. Users with different backgrounds can remotely engage in the entire design cycle, improving the quality of the end result. In this paper, I consider the trade-offs encountered when trying to make spatial-based collaboration seamless. Specifically, I detail the multi-dimensional data visualization and interaction the platform provides, and outline how users can interact with and analyze various aspects of urban design. In BUDI, the display and interactive environment was designed to seamlessly expand beyond a traditional two-dimensional surface into a fully immersive three-dimensional space. Clients on various devices connect with servers for different functionalities tailored for different user groups. A demonstration with a local urban planning use-case shows the costs and benefits of BUDI as a spatial-based collaborative platform. A performance evaluation with remote collaboration shows how the platform can meet the requirements for real-time and seamless collaboration.
690

Effect of Augmented Reality on Anxiety in Prelicensure Nursing Students

Ball, Sarah 01 January 2018
Prelicensure nursing students experience high anxiety as they enter the clinical setting, which can have a negative impact on learning, care performance, and critical thinking. Nursing faculty are faced with the challenges of limited time for clinical experiences, meeting the needs of learners who are technologically astute, and engaging students in the clinical environment to meet learning outcomes. The purpose of this pretest-posttest quasi-experimental study, guided by discovery learning theory, was to determine the effect of an augmented reality (AR) 360 photosphere on prelicensure nursing students' level of anxiety as they entered a new clinical environment, compared with the anxiety levels of prelicensure nursing students who did not experience the AR 360 photosphere orientation. Forty-seven students completed Spielberger's State-Trait Anxiety Inventory, with 17 completing a faculty-led orientation and 30 using the AR 360 photosphere orientation method. An independent t-test revealed no difference between the two methods of orientation in prelicensure nursing students' anxiety levels in the immediate first clinical experience. Though no statistical difference was evident, the technology platform of the AR 360 photosphere allowed for autonomous orientation without having to overcome clinical environment variances. The findings of the study contribute to positive social change by indicating that the AR 360 photosphere has value as a consistent and efficient method of clinical orientation as students encounter new environments and new evidence-based care that requires orientation.
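The comparison described above is a standard independent-samples t-test on the two groups' anxiety scores. The sketch below shows how such a test might be run; the scores are made up and do not reproduce the study's data.

```python
# Sketch of the reported comparison: an independent-samples t-test on STAI
# state-anxiety scores for the two orientation groups. Scores are synthetic;
# only the group sizes (17 faculty-led, 30 AR) come from the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
faculty_led = rng.normal(loc=44, scale=9, size=17)     # hypothetical STAI scores (20-80 scale)
ar_photosphere = rng.normal(loc=42, scale=9, size=30)

t_stat, p_value = stats.ttest_ind(faculty_led, ar_photosphere, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a non-significant p would mirror the 'no difference' finding
```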
