  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

HD4AR: High-Precision Mobile Augmented Reality Using Image-Based Localization

Miranda, Paul Nicholas 05 June 2012 (has links)
Construction projects require large amounts of cyber-information, such as 3D models, in order to achieve success. Unfortunately, this information is typically difficult for construction field personnel to access and use on-site, due to the highly mobile nature of the job and hazardous work environments. Field personnel rely on carrying around large stacks of construction drawings, diagrams, and specifications, or traveling to a trailer to look up information electronically, reducing potential project efficiency. This thesis details my work on Hybrid 4-Dimensional Augmented Reality, known as HD4AR, a mobile augmented reality system for construction projects that provides high-precision visualization of semantically-rich 3D cyber-information over real-world imagery. The thesis examines the challenges related to augmenting reality on a construction site, describes how HD4AR overcomes these challenges, and empirically evaluates the capabilities of HD4AR. / Master of Science
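As a rough illustration of the overlay step such a system performs, the sketch below projects a 3D point into an image using a pinhole camera model, given a camera pose of the kind an image-based localizer would estimate. This is not HD4AR's implementation; the focal length and principal point are made-up placeholder values.

```python
def project_point(point_w, rotation, translation, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates with a pinhole camera.

    rotation is a 3x3 matrix (list of rows), translation a 3-vector:
    together, the camera pose an image-based localizer would estimate.
    """
    # Transform the world point into camera coordinates: X_c = R * X_w + t
    xc = [sum(rotation[i][j] * point_w[j] for j in range(3)) + translation[i]
          for i in range(3)]
    if xc[2] <= 0:
        return None  # point is behind the camera, nothing to draw
    # Perspective divide, then apply the camera intrinsics
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return (u, v)

# Identity pose: camera at the origin looking down +Z
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 0]
# A point 2 m straight ahead lands on the principal point
print(project_point([0, 0, 2.0], R, t, fx=800, fy=800, cx=320, cy=240))
# → (320.0, 240.0)
```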
82

Immersive Space to Think: Immersive Analytics for Sensemaking with Non-Quantitative Datasets

Lisle, Lorance Richard 09 February 2023 (has links)
Analysts often work with large, complex non-quantitative datasets in order to better understand the concepts, themes, and other forms of insight contained within them. As defined by Pirolli and Card, this act of sensemaking is cognitively difficult and is performed iteratively and repetitively through various stages of understanding. Immersive analytics aims to assist this process by putting users in virtual environments that allow them to sift through and explore data in three-dimensional interactive settings. Most previous research, however, has focused on quantitative data, where users interact with mostly numerical representations of data. We designed Immersive Space to Think, an immersive analytics approach that assists users in performing sensemaking with non-quantitative datasets, affording analysts the ability to manipulate data artifacts, annotate them, search through them, and present their findings. We performed several studies to understand and refine our approach and how it affects users' sensemaking strategies. An exploratory virtual reality study found that users place documents in 2.5-dimensional structures, where we saw semicircular, environmental, and planar layouts. The environmental layout, in particular, used features of the environment as scaffolding for users' sensemaking process. In a study comparing levels of mixed reality as defined by Milgram and Kishino's Reality-Virtuality Continuum, we found that an augmented virtuality solution best fits users' preferences while still supporting external tools. Lastly, we explored how users deal with varying amounts of space and three-dimensional user interaction techniques in a study comparing small virtual monitors, large virtual monitors, and a seated implementation of Immersive Space to Think.
Our participants found IST best supported the task of sensemaking, with evidence that users leveraged spatial memory and utilized depth to denote additional meaning in the immersive condition. Overall, Immersive Space to Think affords an effective three-dimensional sensemaking space using 3D user interaction techniques that leverage embodied cognition and spatial memory, which aids the user's understanding. / Doctor of Philosophy / Humans are constantly trying to make sense of the world around them. Whether they are a detective trying to understand what happened at a crime scene or a shopper trying to find the best office chair, people consume vast quantities of data to assist them with their choices. This process can be difficult, and people often return to various pieces of data repeatedly to remember why they made the choice they decided upon. With the advent of cheap virtual reality products, researchers have pursued the technology as a way for people to better understand large sets of data. However, most mixed reality applications addressing this problem focus on numerical data, whereas much of the data people process is multimedia or text-based in nature. We designed and developed a mixed reality approach for analyzing this type of data called Immersive Space to Think. Our approach allows users to look at and move documents around in a virtual environment, take notes on or highlight those documents, search them, and create reports that summarize what they have learned. We also performed several studies to investigate and evolve our design. First, we ran a study in virtual reality to understand how users interact with documents using Immersive Space to Think. We found users arranging documents around themselves in a semicircular or flat-plane pattern, or using cues in the virtual environment as a way to organize the document set.
Furthermore, we performed a study to understand user preferences regarding augmented and virtual reality. We found that a mix of the two, also known as augmented virtuality, would best support user preferences and abilities. Lastly, we ran two comparative studies to understand how three-dimensional space and interaction affect user strategies. In a small user study, we looked at how a single student wrote essays using a desktop computer with a single display as well as Immersive Space to Think, and found that they wrote essays with a better understanding of the source data with Immersive Space to Think than with the desktop setup. We then conducted a larger study comparing a small virtual monitor simulating a traditional desktop screen, a large virtual monitor simulating a monitor eight times the size of traditional desktop monitors, and Immersive Space to Think. We found that participants engaged with documents more in Immersive Space to Think and used the space to denote the importance of documents. Overall, Immersive Space to Think provides a compelling environment that assists users in understanding sets of documents.
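The semicircular layouts observed in the exploratory study could be sketched as spacing documents at equal angles on an arc around the user. This is an illustrative reconstruction, not code from the thesis; the radius and height values are arbitrary placeholders.

```python
import math

def semicircle_layout(num_docs, radius=1.5, height=1.2):
    """Place documents on a semicircular arc centred on the user.

    Returns (x, y, z, yaw) per document: a position in metres and the
    yaw angle (radians) that turns the document to face the user.
    """
    poses = []
    for i in range(num_docs):
        # Spread documents over 180 degrees, from the user's right to left
        angle = math.pi * i / max(num_docs - 1, 1)
        x = radius * math.cos(angle)
        z = radius * math.sin(angle)
        yaw = math.atan2(-x, -z)  # face back toward the origin (the user)
        poses.append((x, height, z, yaw))
    return poses

for pose in semicircle_layout(5):
    print(tuple(round(v, 2) for v in pose))
```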
83

General-Purpose Task Guidance from Natural Language in Augmented Reality using Vision-Language Models

Stover, Daniel James 12 June 2024 (has links)
Augmented reality task guidance systems provide assistance for procedural tasks, which require a sequence of physical actions, by rendering virtual guidance visuals within the real-world environment. An example of such a task would be securing two wood parts together, for which the system could display guidance visuals directing the user to pick up a drill and drive each screw. Current AR task guidance systems are limited in that they require AR system experts for use, require CAD models of real-world objects, or only function for limited types of tasks or environments. We propose a general-purpose AR task guidance approach and proof-of-concept system that generates guidance for tasks defined by natural language. Our approach allows an operator to take pictures of relevant objects and write task instructions for an end user, which the system uses to determine where to place guidance visuals. An end user can then receive and follow guidance even if objects change location or environment. Guidance includes reusable visuals that display generic actions, such as our system's 3D hand animations. Our approach utilizes current vision-language machine learning models for text and image semantic understanding and object localization. We built a proof-of-concept system using our approach and tested its accuracy and usability in a user study. We found that all operators were able to generate clear guidance for tasks in an office room, and end users were able to follow the guidance visuals to complete the expected action 85.7% of the time without prior knowledge of their tasks. Participants rated the system as easy to use for generating the guidance visuals they expected. / Master of Science / Augmented Reality (AR) task guidance systems provide assistance for tasks by placing virtual guidance visuals on top of the real world through displays.
An example of such a task would be securing two wood parts together, for which the system could display guidance visuals directing the user to pick up a drill and drive each screw. Current AR task guidance systems are limited in that they require AR system experts for use, require detailed models of real-world objects, or only function for limited types of tasks or environments. We propose a new task guidance approach and built a system to generate guidance for tasks defined by written instructions. Our approach allows an operator to take pictures of relevant objects and write task instructions for an end user, which the system uses to determine where to place digital visuals. An end user can then receive and follow guidance even if objects change location or environment. Guidance includes visuals that display generic actions, such as our system's 3D hand animations that mimic human hand actions. Our approach utilizes AI models for text and image understanding and object detection. We built a proof-of-concept system using our approach and tested its accuracy and usability in a user study. We found that all operators were able to generate clear guidance for tasks in an office room, and end users were able to follow the guidance visuals to complete the expected action 85.7% of the time without prior knowledge of the tasks. Participants rated that our system made it easy to write instructions and take pictures to create guidance visuals.
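As a loose illustration of the matching step such a system needs, the sketch below picks the operator photo whose embedding is closest to an instruction's embedding by cosine similarity. The toy three-dimensional vectors stand in for real vision-language model embeddings and are invented for this example, not taken from the thesis.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(instruction_vec, photo_vecs):
    """Index of the operator photo whose embedding is most similar to
    the instruction embedding -- a stand-in for VLM-based grounding."""
    scores = [cosine(instruction_vec, v) for v in photo_vecs]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy embeddings standing in for text/image encoder outputs
instruction = [0.9, 0.1, 0.0]          # "pick up the drill"
photos = [
    [0.1, 0.9, 0.0],                   # photo of the screws
    [0.8, 0.2, 0.1],                   # photo of the drill
    [0.0, 0.1, 0.9],                   # photo of the wood parts
]
print(best_match(instruction, photos))  # → 1 (the drill photo)
```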
84

Augmented Reality i utomhusmiljöer : En jämförelse mellan ARKit och ARCore / Augmented reality in outdoor environments : A comparison between ARKit and ARCore

Thulin, Felix January 2018 (has links)
Purpose – The purpose of this thesis is to understand how the two frameworks ARKit and ARCore work in outdoor environments, in terms of possibilities and restrictions. The thesis intends to answer the following research questions: Which possibilities and restrictions do ARKit and ARCore have for use in outdoor environments? What is the lowest illuminance needed for ARKit to work? Method – The study uses a literature study to answer the first research question. For the second research question, a combination of literature study and experimental study is used, in which a hypothesis and prediction are formulated. Results – The results show that there are more restrictions than possibilities for using ARKit and ARCore outdoors. Many factors can affect the frameworks' ability to read the surroundings and place virtual objects; dynamic elements, such as weather and illumination, need to be kept in mind. The experimental study showed that the minimum illuminance needed for an ARKit-based application to place an object in an environment was 10.275 lx. For comparison, this corresponds to the sun being at an elevation of −5° below the horizon or higher. This means, for example, that on the 1st of January there are only 7 h 30 min of the day with enough daylight for ARKit to read the surroundings. This does not account for other troublesome factors, such as snow covering the ground during wintertime. Implications – The study contributes to the exploration of an area that is relatively unexplored, since these frameworks have barely been out for a year. The results can give insight to developers and companies that envision using AR technology outdoors. The study covers the main areas AR, ARKit, and ARCore.
Limitations – The experimental study was performed only with ARKit, because the only test device available ran iOS. Keywords – Augmented Reality, AR, Apple, Google, ARKit, ARCore, Lux, Experiment, Literature study.
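The daylight figure above can be sanity-checked with the standard sunrise/hour-angle equation. The sketch below is an illustration, not the thesis's method: the latitude (about 57.8° N, southern Sweden) and the January 1st solar declination (about −23°) are assumed values, and the result lands near the roughly 7.5 usable hours the thesis reports.

```python
import math

def hours_above_elevation(latitude_deg, declination_deg, min_elev_deg):
    """Hours per day the sun spends above a given elevation angle,
    from the standard sunrise/hour-angle equation."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg)
    h = math.radians(min_elev_deg)
    cos_omega = (math.sin(h) - math.sin(phi) * math.sin(delta)) / (
        math.cos(phi) * math.cos(delta))
    if cos_omega >= 1:
        return 0.0               # sun never gets that high
    if cos_omega <= -1:
        return 24.0              # sun never drops below the threshold
    omega = math.degrees(math.acos(cos_omega))
    return 2 * omega / 15.0      # 15 degrees of hour angle per hour

# January 1st (declination ~ -23 degrees) at ~57.8 N: hours with the sun
# above -5 degrees, the thesis's usable-light threshold
print(round(hours_above_elevation(57.8, -23.0, -5.0), 1))
```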
85

Creating augmented reality authoring tools informed by designer workflow and goals

Coleman, Maribeth Gandy 27 September 2012 (has links)
In a 20-year period, AR has gone from being viewed as a heavyweight technology to a new medium for a variety of applications. As a result, there has been an increasing need for tools to support AR design and development that fully address the needs of non-technologists. From my AR research, I learned that three critical components for these authoring tools are support for an established content pipeline, rapid prototyping, and user experience testing. The history of media teaches us that AR also shares underlying technologies with a variety of more mature media, such as film, VR, and the web, with existing workflows and tools. Therefore, we created an AR authoring tool that supported these three critical components, and whose design was informed by established approaches in these related domains, allowing developers with a range of technical expertise to explore the AR medium. In this dissertation I present four main contributions. The first was an exploration of the AR design space focused on close collaboration with designers; this work resulted in guidelines for AR authoring tools and informed the development of the Designer's Augmented Reality Toolkit (DART). Second, these guidelines were validated via internal and external projects. Third, a qualitative study of long-term DART use provided insight into the successes and failures of DART, as well as additional understanding of AR authoring needs. Lastly, I trace two main threads to highlight the impact of this work: the development of the AR Second Life system and the creation of the Argon AR web browser.
86

A multiscale framework for mixed reality walking tours

Barba, Evan 17 January 2013 (has links)
Mixed Reality experiences, which blend physical and virtual objects, have become commonplace on handheld computing devices. One common application of these technologies is their use in cultural heritage "walking tours." These tours provide information about the surrounding environment in a variety of contexts, to suit the needs and interests of different groups of participants. Using the familiar "campus tour" as a canonical example, this dissertation investigates the technical and cognitive processes involved in transferring such a tour from its physical and analog form into Mixed Reality. Using the concept of spatial scale borrowed from cognitive geography, this work identifies creating and maintaining continuity across different scales of spatial experience as being of paramount importance to successful Mixed Reality walking tours. The concepts of scale transitions, coordination of representations across scales, and scale-matching are shown to be essential to maintaining the continuity of experience. Specific techniques that embody these concepts are also discussed and demonstrated in a number of Mixed Reality examples, including a successful deployment of a Mixed Reality tour of the Georgia Tech campus. The potential for a "Language of Mixed Reality" based on the concepts outlined in this work is also discussed, and a general framework, called the Mixed Reality Scale Framework, is shown to meet all the necessary criteria for a cognitive theory of Human-Centered Computing in the context of Mixed Reality.
87

Der Einsatz von Augmented Reality in der Fußgängernavigation : Konzeption und prototypische Implementierung eines smartphonebasierten Fußgängernavigationssystems / The use of augmented reality for pedestrian navigation : design and prototypical implementation of a smartphone-based pedestrian navigation system

Kluge, Mario January 2012 (has links)
Pedestrian traffic takes place throughout public space and provides a seamless connection from door to door. Before starting to move, every person faces the questions "Where am I?", "Where is my destination?", and "How do I get there?". Most navigation systems for pedestrians on the market are reduced variants of vehicle systems: they are based on 2D map representations or depict reality as a three-dimensional model. Navigation problems arise when the user is unable to relate the information in an instruction to reality and act on it. One possible reason for this is the visualization of the navigation instruction. People's spatial perception takes place from a particular viewpoint and expresses the position of objects and their relation to each other. The use of Augmented Reality corresponds to the appearance of human perception and is a natural and familiar form of view. In contrast to cartographic visualization, with Augmented Reality the environment is not modeled but depicted realistically and augmented. The goal of this thesis is a navigation method that suits the natural movement and viewpoint of pedestrians. The concept is based on combining reality and virtual reality into an augmented view: since no form of representation is better suited to describe a route than the route itself, reality is augmented with a virtual route. The perspective adjustment of the route display requires sensing the position and orientation of the viewpoint. The data model underlying the navigation remains hidden from the viewer and is visible only in the form of the augmented reality. The prototype developed in this thesis is called RealityView. Its basis is a free and open-source navigation system that was extended modularly for pedestrian navigation. The result is a smartphone-based navigation prototype that combines a two-dimensional screen map in plan view with an augmented reality display in elevation. The evaluation of the prototype confirms the hypothesis that the use of Augmented Reality for pedestrian navigation is possible and is accepted by the user group. In addition, scientists endorsed the conceptual approach and the prototypical implementation of RealityView in expert interviews. The evaluation of an eye-tracking pilot study showed that pedestrians relate navigation instructions to prominent objects in the environment, whose selection is favored by the use of Augmented Reality.
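The perspective adjustment described above can be loosely sketched as turning the user's position and heading into a screen position for the next route point. The bearing formula is the standard great-circle one; the field of view, screen width, and coordinates below are placeholder values, not those of the RealityView prototype.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to a waypoint, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(bearing, heading, fov_deg=60, width_px=640):
    """Horizontal pixel position of a waypoint marker, or None when the
    waypoint lies outside the camera's horizontal field of view."""
    rel = (bearing - heading + 180) % 360 - 180   # signed angle in [-180, 180)
    if abs(rel) > fov_deg / 2:
        return None
    return round(width_px / 2 + rel / (fov_deg / 2) * (width_px / 2))

# Waypoint due east of the user while the camera also faces east (90 deg):
# the marker should land near the centre of the screen
b = bearing_deg(52.39, 13.06, 52.39, 13.07)
print(round(b, 2), screen_x(b, heading=90.0))
```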
88

Using Mobile Augmented Reality and Reasoning Systems in Industrial Maintenance

Asplund, Anton, Hanna, Gabriel January 2018 (has links)
Inspection workers in industry evaluate the state of machines to decide whether a need for service exists. These evaluations are based on assumptions and the individual worker's experience, which can lead to wrong decisions being made, and decisions that lead to unnecessary maintenance affect a company's finances negatively. By using sensors mounted on the machines and a reasoning system to evaluate the data from these sensors, the condition of the machines can be determined. Augmented Reality can then be used to display this condition to the inspection worker, leading to more informed decisions about the need for service. This thesis examines the different technologies needed to make this possible: Augmented Reality, Reasoning Systems, and the Internet of Things. A prototype application was created using these technologies to show what is possible with the mobile devices we all carry every day.
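The reasoning step could be sketched as a minimal threshold-based rule system that turns sensor readings into a machine condition for display in AR. The sensor names and limit values below are invented placeholders, not those of the prototype.

```python
def assess_machine(sensors, limits):
    """Compare sensor readings against per-sensor (warn, alarm) limits
    and derive an overall machine condition for display in AR.

    sensors: e.g. {"temperature": 71.0, "vibration": 1.2}
    limits:  e.g. {"temperature": (60.0, 80.0), ...}
    """
    worst = "OK"
    findings = []
    for name, value in sensors.items():
        warn, alarm = limits[name]
        if value >= alarm:
            findings.append(f"{name}: {value} (ALARM, limit {alarm})")
            worst = "SERVICE REQUIRED"
        elif value >= warn:
            findings.append(f"{name}: {value} (warning, limit {warn})")
            if worst == "OK":
                worst = "MONITOR"
    return worst, findings

limits = {"temperature": (60.0, 80.0), "vibration": (2.0, 4.0)}
state, details = assess_machine({"temperature": 71.0, "vibration": 1.2}, limits)
print(state, details)  # temperature past its warning limit → MONITOR
```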
89

LANGUAGE LEARNING VIA AN ANDROID AUGMENTED REALITY SYSTEM

Beder, Paweł January 2012 (has links)
Augmented Reality (AR) can be described as one of the possible steps between the real world and fully virtual reality. In this mixed reality, virtual objects are overlaid onto the real world, typically by capturing camera images in real time to produce a new, interactive layer over the environment. Mobile Augmented Reality (MAR) is the term used when the equipment through which we achieve AR is small and easy to carry, e.g. a smartphone or a tablet. The concept of using AR to facilitate learning and improve its quality has attracted increasing attention in the academic world in recent years, and one area receiving much of that attention is AR language learning. In this thesis, an experiment on a group of 20 people was conducted to answer the question: "Is a MAR language learning system a viable solution for language learning?" For the purpose of the experiment, an AR Language Learning Tool was designed for Android smartphones. The tool facilitated vocabulary learning by displaying 3D objects along with their spelling and providing audio of their pronunciation. Participants were divided equally into a control group and a test group: the control group learned new vocabulary through classic flashcards, while the test group used the AR Language Learning Tool. Vocabulary Knowledge Scale questionnaires were administered to both groups right after learning and one week later. Statistical analysis of the gathered data with Student's t-test showed a positive improvement in long-term recall in the AR Language Learning Tool group compared with the flashcard group; no difference was found in short-term recall between the groups. Participants also provided feedback about their quality of experience and enthusiasm for new learning methods. Their answers were very positive and indicate that mobile AR is a viable method of learning vocabulary.
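The reported analysis can be illustrated with the two-sample Student's t statistic over hypothetical delayed-recall scores. The data below are invented for illustration and do not reproduce the thesis's results; the pooled-variance form assumes the equal group sizes described in the abstract.

```python
from statistics import mean, variance

def two_sample_t(a, b):
    """Student's two-sample t statistic with pooled variance, as used to
    compare recall scores between two learning conditions."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Hypothetical delayed-recall scores (words retained after one week)
ar_group        = [14, 12, 15, 13, 16, 14, 13, 15, 12, 14]
flashcard_group = [11, 10, 12, 11, 13, 10, 12, 11, 10, 12]

t = two_sample_t(ar_group, flashcard_group)
print(round(t, 2))  # well above ~2.1, the 5% two-tailed critical value at df=18
```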
90

Kombinierter Einsatz von Augmented Reality in virtuellen Umgebungen / Combined use of augmented reality in virtual environments

Stelzer, Ralph, Saske, Bernhard, Steindecker, Erik 28 September 2017 (has links) (PDF)
Virtual Reality (VR) and Augmented Reality (AR) are innovative technologies used in the modern development, manufacture, and use of products. So far the two technologies have not been used together, although in certain cases their combination holds considerable potential for cost savings. VR technology is used primarily in product development to save the cost of physical prototypes, whereas AR technology is used in the assembly and maintenance of complex products: a service technician is supported in his work by work documents projected into his field of view via a display. To assure the quality of work documents for AR systems already during product development, and to give service personnel a head start in training, it makes sense to evaluate these work documents on the virtual prototype of a future product. Combining AR and VR technology in one integrated system is intended to create the prerequisites for this approach. This paper describes the necessary fundamentals and presents the development of a system that makes AR information perceivable on a virtual prototype. Using a selected maintenance scenario, the procedure for creating the virtual prototype and the AR work documents is explained, and design parameters are described. Based on this scenario, the developed system is tested in a user study, and suggestions for further development are derived.
