541. Tangible User Interface for CAVE based on Augmented Reality Technique. Kim, Ji-Sun. 20 January 2006.
This thesis presents a new three-dimensional (3D) user interface system for a Cave Automatic Virtual Environment (CAVE) application, based on Virtual Reality (VR), Augmented Reality (AR), and Tangible User Interfaces (TUIs). We explore fundamental 3D interaction tasks with our user interface for the CAVE system. A user interface (UI) comprises a specific set of components, including input/output devices and interaction techniques. Our approach is based on TUIs built with ARToolKit, currently the most popular toolkit for AR projects. Physical objects (props) are used as input devices instead of tethered electromagnetic trackers, and an off-the-shelf webcam captures the tracking data. A unique pattern marker is attached to each prop, which ARToolKit tracks easily and simply. Our interface system is developed on CAVE infrastructure, a semi-immersive environment. All virtual objects are directly manipulated with props, each of which corresponds to a particular virtual object. To navigate, the user moves the background itself while the virtual objects remain in place, so the user can actually feel the prop's movement through the virtual space. Thus, fundamental 3D interaction tasks such as object selection, object manipulation, and navigation are performed with our interface. To enhance immersion, the user wears stereoscopic glasses with a head tracker, the only tethered device in our work. Because our interface is based on tangible input tools, it supports seamless transitions between one- and two-handed operation. We went through three design phases to achieve better task performance. In the first phase, we conducted a pilot study focusing on whether this approach is applicable to 3D immersive environments. After the pilot study, we redesigned the props and developed ARBox, which serves as the interaction space while the CAVE system is used only as the display space. In this phase, we also developed interaction techniques for the fundamental 3D interaction tasks. Our summative user evaluation was conducted with ARDesk, which was redesigned after our formative user evaluation. The two user studies aimed to gather user feedback and to improve the interaction techniques as well as the design of the interface tools. The results show that our interface can be applied intuitively and naturally to 3D immersive environments, even though some issues remain in our system design. This thesis shows that effective interactions in a CAVE system can be created using AR techniques and tangible objects. / Master of Science
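The interaction loop the abstract describes (webcam frames in, marker poses out, each prop driving one virtual object, with navigation moving the background instead) can be sketched as follows. This is a minimal illustration rather than the thesis's code: `detect_markers` stands in for ARToolKit's marker tracking, and the prop registry and poses are hypothetical.

```python
# Minimal sketch of the prop-driven interaction loop described above (not the
# thesis's code). detect_markers() stands in for ARToolKit's marker tracking,
# and the prop-to-object registry is a hypothetical example.

# Each unique pattern marker corresponds to exactly one virtual object.
PROP_TO_OBJECT = {"marker_01": "teapot", "marker_02": "chair"}

def detect_markers(frame):
    """Stand-in for ARToolKit: return {marker_id: (x, y, z)} poses seen in the frame.
    In this sketch a "frame" is already a dict of detected poses."""
    return frame

def update_scene(object_positions, background_offset, frame, navigating=False):
    """Direct manipulation: objects follow their props.
    Navigation: the background moves while virtual objects stay in place."""
    for marker_id, pose in detect_markers(frame).items():
        if marker_id not in PROP_TO_OBJECT:
            continue
        if navigating:
            # The world moves opposite the prop, so objects appear to stay put.
            background_offset[:] = [-c for c in pose]
        else:
            object_positions[PROP_TO_OBJECT[marker_id]] = pose
    return object_positions, background_offset

# Example: the "teapot" prop was seen at (0.1, 0.0, 0.5) metres in camera space.
positions, bg = update_scene({}, [0, 0, 0], {"marker_01": (0.1, 0.0, 0.5)})
print(positions, bg)   # {'teapot': (0.1, 0.0, 0.5)} [0, 0, 0]
```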
542. VTQuestAR: An Augmented Reality Mobile Software Application for Virginia Tech Campus Visitors. Yao, Zhennan. 07 January 2021.
The main campus of Virginia Polytechnic Institute and State University (Virginia Tech) has more than 120 buildings. Campus visitors face problems recognizing a building, finding a building, obtaining directions from one building to another, and getting information about a building. The exploratory development research described herein resulted in an iPhone / iPad software application (app) named VTQuestAR that provides assistance to campus visitors using Augmented Reality (AR) technology. Machine Learning (ML) technology is used to recognize a sample of 31 campus buildings in real time. The VTQuestAR app enables the user to have a visual interactive experience with those 31 campus buildings by superimposing building information on top of the building image shown through the camera. The app also enables the user to get directions from the current location, or from one building to another, displayed on both a 2D map and an AR map. The user can perform complex searches on 122 campus buildings by building name, description, abbreviation, category, address, and year built. The app also enables the user to take multimedia notes during a campus visit. Our exploratory development research illustrates the feasibility of using AR and ML to provide much more effective assistance to visitors of any organization. / Master of Science / The main campus of Virginia Polytechnic Institute and State University (Virginia Tech) has more than 120 buildings. Campus visitors face problems recognizing a building, finding a building, obtaining directions from one building to another, and getting information about a building. The exploratory development research described herein resulted in an iPhone / iPad software application named VTQuestAR that provides assistance to campus visitors using Augmented Reality (AR) and Machine Learning (ML) technologies. Our research illustrates the feasibility of using AR and ML to provide much more effective assistance to visitors of any organization.
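As an illustration of the multi-field building search the abstract mentions, the sketch below filters a small set of building records by free text, category, and year built. The field names and sample records are assumptions for illustration, not VTQuestAR's actual data model.

```python
# Illustrative multi-field search over campus-building records, in the spirit
# of the search feature described above. Field names and sample data are
# assumptions; the app's real data model is not shown in the abstract.
BUILDINGS = [
    {"name": "Burruss Hall", "abbreviation": "BUR", "category": "Administrative",
     "address": "Drillfield Drive", "year_built": 1936,
     "description": "Administration building on the Drillfield"},
    {"name": "Torgersen Hall", "abbreviation": "TORG", "category": "Academic",
     "address": "Drillfield Drive", "year_built": 2000,
     "description": "Advanced communications and information technology"},
]

def search(buildings, text=None, category=None, year_range=None):
    """Return names of buildings matching a free-text term, a category, and/or a year range."""
    results = []
    for b in buildings:
        haystack = " ".join(str(b[k]) for k in ("name", "abbreviation", "description", "address")).lower()
        if text and text.lower() not in haystack:
            continue
        if category and b["category"] != category:
            continue
        if year_range and not (year_range[0] <= b["year_built"] <= year_range[1]):
            continue
        results.append(b["name"])
    return results

print(search(BUILDINGS, text="drillfield"))        # both sample buildings
print(search(BUILDINGS, category="Academic"))      # ['Torgersen Hall']
print(search(BUILDINGS, year_range=(1900, 1950)))  # ['Burruss Hall']
```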
543. Development of Shared Situation Awareness Guidelines and Metrics as Developmental and Analytical Tools for Augmented and Virtual Reality User Interface Design in Human-Machine Teams. Van Dam, Jared Martindale Mccolskey. 21 August 2023.
As the frontiers and futures of work evolve, humans and machines will begin to share a more cooperative working space where collaboration occurs freely amongst the constituent members. To this end, it is necessary to determine how information should flow amongst team members to allow for the efficient sharing and accurate interpretation of information between humans and machines. Shared situation awareness (SSA), the degree to which individuals can access and interpret information from sources other than themselves, is a useful framework from which to build design guidelines for this information exchange. In this work, we present initial augmented/virtual reality (AR/VR) design principles for shared situation awareness that can help designers both (1) design efficacious interfaces based on these fundamental principles and (2) evaluate the effectiveness of candidate interface designs using measurement tools we created via a scoping literature review. This work achieves these goals with focused studies that (1) show the importance of SSA in augmented reality-supported tasks, (2) describe the design guidelines and measurement tools necessary to support SSA, and (3) validate the guidelines and measurement tools with a targeted user study that employs an SSA-derived AR interface to confirm the guidelines distilled from the literature review. / Doctor of Philosophy / As the way in which humans work and play changes, people and machines will need to work together in shared spaces where team members rely on one another to complete goals. To make this interaction happen in ways that benefit both humans and machines, we will need to figure out the best way for information to flow between team members, whether human or machine. Shared situation awareness (SSA) is a helpful concept that allows us to understand how people can get and understand information from sources other than themselves. In this research, we present some basic ideas for designing augmented reality (AR) tools that help people work together in better ways, using SSA as a guiding framework. These ideas can help designers (1) create AR tools that work well based on these basic principles and (2) test how well different interface designs work using the measurement tools we developed. We completed user studies to (1) show how important SSA is when using AR to help with tasks, (2) explain the design ideas and tools needed to support SSA, and (3) test these ideas and tools with a study that uses an AR tool, based on SSA, to confirm the guidelines we distilled from prior research.
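The dissertation's own metrics are not spelled out in the abstract, but the definition of SSA above suggests one simple way to quantify it: score the proportion of situation-awareness probes that two teammates both answer correctly. The sketch below is a hedged illustration of that idea (a SAGAT-style probe comparison), not the measurement tool developed in this work; the probe names and answers are invented.

```python
# Hedged sketch: a probe-based shared-SA score, computed as the proportion of
# probes that both team members answer correctly. One plausible
# operationalization of the SSA definition above, not this work's metric.
def shared_sa_score(answers_a, answers_b, ground_truth):
    """answers_a, answers_b, and ground_truth all map probe IDs to answers."""
    both_correct = sum(
        1 for probe, truth in ground_truth.items()
        if answers_a.get(probe) == truth and answers_b.get(probe) == truth
    )
    return both_correct / len(ground_truth)

truth   = {"robot_location": "bay 2", "next_task": "inspect weld", "hazard": "none"}
human   = {"robot_location": "bay 2", "next_task": "inspect weld", "hazard": "spill"}
machine = {"robot_location": "bay 2", "next_task": "inspect weld", "hazard": "none"}
print(shared_sa_score(human, machine, truth))   # 2 of 3 probes shared correctly -> ~0.67
```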
544. Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video. Rovelo Ruiz, Gustavo Alberto. 29 July 2015.
Human-Computer Interaction is a multidisciplinary research field that combines, amongst others, Computer Science and Psychology. It studies human-computer interfaces from the point of view of both technology and the user experience.
Researchers in this area now have a great opportunity, largely because the technology required to develop 3D user interfaces for computer applications (e.g., visualization, tracking, or portable devices) is more affordable than it was a few years ago.
Augmented Reality and Omni-Directional Video are two promising examples of this type of interface, where the user is able to interact with the application in the three-dimensional space beyond the 2D screen.
The work described in this thesis focuses on the evaluation of interaction aspects in both types of applications. The main goal is to contribute to the knowledge about this new type of interface in order to improve its design. We evaluate how computer interfaces can convey information to the user in Augmented Reality applications by exploiting human multisensory capabilities. Furthermore, we evaluate how the user can give commands to the system using more than one type of input modality, studying gesture-based interaction with Omni-Directional Video.
We describe the experiments we performed, outline the results for each particular scenario and discuss the general implications of our findings.
Rovelo Ruiz, GA. (2015). Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53916
545. Virtualidad geolocalizada, proyectos de Realidad Aumentada en el espacio público, propuestas experimentales [Geolocated virtuality: Augmented Reality projects in public space, experimental proposals]. Ferrer Hernández, Manuel. 14 March 2016.
This doctoral thesis approaches the field of Public Art and Augmented Reality. To this end, it is essential to understand the changes that have taken place in public space, changes that are linked to the concept of speed of movement and that have been significantly affected in recent decades by the introduction of new technologies into daily life.
Augmented Reality has established itself as an emerging artistic genre, one in which we find subdivisions according to the technologies employed and the ideas or metaphors the artists seek to represent. We understand that, in the last decade of economic and social crisis, activist involvement is an ethical imperative for the artist. Activism has been, and remains, capable of adapting to the emergence and standardization of various communication technologies in order to make critical and artistic use of them. It is able to break the classic barrier between the real and the virtual while offering a positive way of overcoming the conflicts that the system generates in its post-industrial age, such as the atomization and alienation of the citizen, as well as the processes of forming urbanistically isolated ghettos and of gentrification in various districts of today's cities.
In this respect, a catalog of works related to the use of Augmented Reality technologies in artistic activism is proposed. Empirical experimentation and practical production are likewise necessary to address the theoretical parameters proposed in this thesis. For this reason, our work has focused on the implementation of specific artistic projects that allow us to demonstrate this relation between Public Art and Augmented Reality. This relation reveals, at the same time, the new typology of human relations and the alternative topographies of the city that are generated in the hybrid public space, increasing its accessibility to citizens and overcoming the artist-active / audience-passive dichotomy through contextual practice and situationist drift across the different works we have proposed. This practice traces a new democratizing vector of art that, in turn, opens otherwise closed artistic circles to civic participation, transmuting them into a constellation of decentralized, interconnected nodes, a participatory artistic rhizome in which citizens can express themselves creatively to their full extent.
Ferrer Hernández, M. (2016). Virtualidad geolocalizada, proyectos de Realidad Aumentada en el espacio público, propuestas experimentales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61771
546. Human Factors Design and Evaluation of Augmented Reality for Mass Casualty Incident Triage. Nelson, Cassidy Rae. 09 September 2024.
Augmented reality (AR) is an emerging technology with immense potential for enhancing human-to-human interaction tasks, particularly in high-risk environments such as mass casualty incident (MCI) triage. However, developing practical and effective AR tools for this purpose necessitates a meticulous user-centered design (UCD) process, thoughtfully crafted and validated through iterative testing with first responders in increasingly contextually relevant simulations. In academic circles, the perceived complexity and time requirements of such a process might discourage its adoption within the constraints of traditional publishing cycles. This is likely due, in part, to a lack of representative applied UCD examples. This work addresses this challenge by presenting a scholarly UCD framework tailored specifically for MCI triage, which progresses seamlessly from controlled, context-free laboratory settings to virtual patient simulations and finally to realistic patient (actor) scenarios. Moreover, MCIs and triage are underserved areas, likely due to their high intensity and risk. This means developers need to 'get it right' as quickly as possible, and UCD and evaluation alone are not an efficient means of developing for these complex and dangerous work domains. Thus, this research also delves into a cognitive work analysis, offering a comprehensive breakdown of the MCI triage domain and how those findings inform future AR supports. This analysis serves to fortify the foundation for future UCD endeavors in this critical space. Finally, it is imperative to recognize that MCI triage fundamentally involves human-to-human interaction supported by AR technology. Therefore, UCD efforts must encompass a diverse array of study stimuli and participants to ensure that the technology functions as intended across all demographic groups. It is established that racial bias exists in emergency room triage, creating worse outcomes for patients of color. Consequently, this study also investigates the potential impact of racial biases on MCI triage efficacy. This entire body of work has implications for UCD evaluation methodology, the development of future AR support tools, and the potential to catch racially biased negative performance before responders ever hit the field. / Doctor of Philosophy / Augmented reality (AR) is uniquely situated to make work within high-risk environments, like mass casualty incident (MCI) response, safer and more effective. This is because AR augments the user's reality with context-relevant information, for example by providing a temperature gauge for firefighters that is always in their visual field. Development of such AR tools for a sensitive arena like MCIs requires several rigorous steps before those tools can be deployed in the field. It is crucial to engage in a user-centered design (UCD) process in partnership with actual emergency responders so they can help us understand what help they need. We outline that UCD process in Chapter 2. Once we understand what responders say they need help with, we then need to evaluate those pinch points in the broader context of their work. This means that we evaluate how their job process creates the situation where responders need the kind of help they are asking for. Understanding this helps us create solutions that address the responders' needs while minimizing any new problems created by introducing a new tool into the job. What we learned from examining the work domain is described in Chapter 3.
Once we have this firm foundational understanding of responder needs and work and we have designed an AR support tool, we need to evaluate that tool for effectiveness. It is too dangerous to put the AR tool straight into the field, so Chapter 4 explores how we can create simulations of an MCI scenario to study our AR support tool. Finally, after evaluating our AR tool within the scenarios and the scenarios themselves, we evaluate (in Chapter 5) other facets of the job that may be impacting MCI response. In our case, we explore how racial bias may be impacting patient care. It is important to study bias as it has implications for future MCI training and AR tool development. Perhaps future work can explore an AR tool that offsets bias-based performance, or a training that helps catch bias before responders ever get to the real field.
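Because the work examines whether racial bias degrades triage performance, a simple per-group comparison of triage error rates, like the sketch below, illustrates how such bias could be surfaced before responders reach the field. The records are invented for illustration and are not data from this study.

```python
# Hedged sketch: comparing triage error rates by patient race across simulated
# encounters. The records below are invented examples, not study data.
from collections import defaultdict

encounters = [
    {"patient_race": "white", "assigned": "immediate", "correct": "immediate"},
    {"patient_race": "white", "assigned": "delayed",   "correct": "delayed"},
    {"patient_race": "black", "assigned": "delayed",   "correct": "immediate"},  # under-triage
    {"patient_race": "black", "assigned": "immediate", "correct": "immediate"},
]

def error_rate_by_group(records):
    """Fraction of triage assignments that disagree with the correct category, per group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["patient_race"]] += 1
        errors[r["patient_race"]] += r["assigned"] != r["correct"]
    return {group: errors[group] / totals[group] for group in totals}

print(error_rate_by_group(encounters))   # e.g. {'white': 0.0, 'black': 0.5}
```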
547. Augmented Reality Pedestrian Collision Warning: An Ecological Approach to Driver Interface Design and Evaluation. Kim, Hyungil. 17 October 2017.
Augmented reality (AR) has the potential to fundamentally change the way we interact with information. Direct perception of computer generated graphics atop physical reality can afford hands-free access to contextual information on the fly. However, as users must interact with both digital and physical information simultaneously, yesterday's approaches to interface design may not be sufficient to support the new way of interaction. Furthermore, the impacts of this novel technology on user experience and performance are not yet fully understood.
Driving is one of many promising tasks that can benefit from AR, where conformal graphics strategically placed in the real world can accurately guide drivers' attention to critical environmental elements. The ultimate purpose of this study is to reduce pedestrian accidents through the design of driver interfaces that take advantage of AR head-up displays (HUDs). For this purpose, this work aimed to (1) identify information requirements for pedestrian collision warning, (2) design AR driver interfaces, and (3) quantify the effects of AR interfaces on driver performance and experience.
Considering the dynamic nature of human-environment interaction in AR-supported driving, we took an ecological approach for interface design and evaluation, appreciating not only the user but also the environment. The requirement analysis examined environmental constraints imposed on the drivers' behavior, interface design translated those behavior-shaping constraints into perceptual forms of interface elements, and usability evaluations utilized naturalistic driving scenarios and tasks for better ecological validity.
A novel AR driver interface for pedestrian collision warning, the virtual shadow, was proposed, taking advantage of optical see-through HUDs. A series of usability evaluations, both in a driving simulator and on an actual roadway, showed that the virtual shadow interface outperformed current pedestrian collision warning interfaces in guiding driver attention, increasing situation awareness, and improving task performance. Thus, this work has demonstrated the opportunity of incorporating an ecological approach into user interface design and evaluation for AR driving applications. This research provides both basic and practical contributions in human factors and AR by (1) providing empirical evidence furthering knowledge about driver experience and performance in AR, and (2) extending traditional usability engineering methods for automotive AR interface design and evaluation. / Ph. D. / On average, a pedestrian was killed every 2 hours and injured every 8 minutes on U.S. roadways in 2013. The most common driver errors responsible for pedestrian collisions were drivers' lack of situation awareness due to low visibility or unexpected appearance of pedestrians. As a solution to the problem, automakers introduced pedestrian collision warnings, taking advantage of recent advances in sensor technology and pedestrian detection algorithms. Once pedestrians are detected in the vehicle's path, warnings are typically given to the driver through auditory alarms and/or simple visual symbols. However, with current warnings that often lack spatial information, drivers need to further localize and evaluate approaching pedestrians' movement for appropriate decision and reaction. Augmented reality (AR) is one of the most promising solutions to address the limitations of current warning interfaces. By overlaying computer-generated conformal graphics atop physical reality, AR head-up displays (HUDs) can guide drivers' attention to dangerous pedestrians, affording direct perception of spatial information about those pedestrians.
The ultimate purpose of this work is to reduce pedestrian accidents by design of driver interfaces, taking advantage of AR HUDs. For this purpose, we aimed to (1) design a novel driver interface for cross traffic alerts, (2) prototype design ideas for a specific use-case of pedestrian collision warning, and (3) evaluate usability of the new design ideas in consideration of unique aspects of human-environment interaction with AR while driving.
We proposed a novel driver interface for pedestrian collision warning, the virtual shadow, which casts shadows of approaching pedestrians onto the vehicle's path via AR HUDs. Usability evaluations in a driving simulator and on a roadway showed the potential benefits of the proposed idea over existing warnings in driver attention management, situation awareness, and task performance with reduced workload. Thus, this work demonstrated the capabilities of AR HUDs as intuitive and effective interfaces for vehicle drivers.
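The warning logic underlying such interfaces can be illustrated with a simple time-to-collision check. This is a generic sketch of a trigger rule, not the dissertation's implementation; the 3-second threshold and the function names are assumptions.

```python
# Generic time-to-collision (TTC) trigger for a pedestrian collision warning.
# Illustrative sketch only; the 3-second threshold is an assumed value.
import math

def time_to_collision(vehicle_speed_mps, pedestrian_distance_m):
    """Seconds until reaching the pedestrian along the vehicle's path (inf if stopped)."""
    if vehicle_speed_mps <= 0:
        return math.inf
    return pedestrian_distance_m / vehicle_speed_mps

def should_warn(vehicle_speed_mps, pedestrian_distance_m, in_path, ttc_threshold_s=3.0):
    """Warn only for pedestrians in the vehicle's path with TTC below the threshold."""
    return in_path and time_to_collision(vehicle_speed_mps, pedestrian_distance_m) < ttc_threshold_s

# Example: ~13.4 m/s (about 30 mph), pedestrian 35 m ahead in the path -> TTC ~2.6 s, so warn.
print(should_warn(13.4, 35.0, in_path=True))   # True
```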
548. Intelligent Augmented Reality (iAR): Context-aware Inference and Adaptation in AR. Davari-Najafabadi, Shakiba. 12 September 2024.
Augmented Reality (AR) transforms the entire 3D space around the user into a dynamic screen, surpassing the limitations of traditional displays and enabling efficient access to multiple pieces of information simultaneously, all day, every day. Recent developments in AR eyeglasses promise that AR could become the next generation of personal computing devices.
To realize this vision of pervasive AR, the AR interface must address the challenges posed by constant and omnipresent virtual content. As the user's context changes, the virtual content in AR head-worn displays can occasionally become obtrusive, hindering the user's perception and awareness of their surroundings and their interaction with both the virtual and physical worlds. An intelligent interface is needed to adapt the presentation and interaction of AR content.
This dissertation outlines a roadmap towards effective, efficient, and unobtrusive AR through intelligent AR (iAR) systems that automatically learn and adapt the interface to the user's context.
To achieve this goal, we:
(1) Identify multiple AR design principles and guidelines that maintain efficiency while addressing challenges such as occlusion, social interaction, and content placement in AR.
(2) Demonstrate the impact of context on AR effectiveness, validating the advantages of context-awareness and highlighting the complexities of implementing a context-aware approach in pervasive AR, particularly in scenarios involving context-switching.
(3) Propose a design space for XR interfaces; (4) Develop a taxonomy of quantifiable contextual components and a framework for designing iAR interfaces. / Doctor of Philosophy / Augmented Reality (AR) integrates digital information with the real world in real-time, transforming the surrounding physical space into a dynamic, interactive screen. This technology can simultaneously provide hands-free access to unlimited virtual applications and information, facilitating fast and easy access. With recent advancements in AR eyeglasses, AR is anticipated to become the next generation of personal computing, potentially replacing mobile phones and computers. However, to be seamlessly integrated into daily life, AR must overcome challenges such as occluding important real-world objects, distraction, visual clutter, and information overload.
This dissertation presents a roadmap for developing intelligent AR (iAR) systems that automatically adapt to the user's context.
To achieve this goal, we identify the design space for adaptable AR elements and design and test various context-aware AR interfaces.
We identify key AR design principles that ensure efficiency while addressing challenges like occlusion, social interaction, and content placement. We also highlight the impact of context on AR effectiveness and the complexities of implementing a context-aware approach, especially in context-switching scenarios.
Additionally, we informed the design process of iAR interfaces by identifying the contextual components that influence their effectiveness and providing a framework and architecture for utilizing this information for automatic adaptations.
These efforts aim to enhance AR effectiveness and efficiency while ensuring it remains unobtrusive in everyday use.
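As a concrete illustration of the kind of automatic adaptation described above, the sketch below maps a snapshot of user context to AR presentation changes. The contextual fields and the rules are illustrative assumptions, not the taxonomy or framework developed in this dissertation.

```python
# Hedged sketch of a context-aware AR adaptation rule set. The contextual
# fields and the specific rules are illustrative assumptions, not the
# dissertation's framework.
def adapt_interface(context):
    """Map a snapshot of user context to AR presentation adaptations."""
    adaptations = {"content_density": "full", "placement": "world-fixed", "audio": "on"}

    if context.get("in_conversation"):
        # Reduce obtrusiveness during face-to-face social interaction.
        adaptations["content_density"] = "minimal"
        adaptations["placement"] = "peripheral"
    if context.get("walking"):
        # Keep the center of the visual field clear of occluding content.
        adaptations["placement"] = "peripheral"
    if context.get("ambient_noise_db", 0) > 75:
        # Suppress audio cues the user is unlikely to hear anyway.
        adaptations["audio"] = "off"
    return adaptations

print(adapt_interface({"walking": True, "in_conversation": False, "ambient_noise_db": 60}))
# {'content_density': 'full', 'placement': 'peripheral', 'audio': 'on'}
```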
549. Looks Good To Me (LGTM): Authentication for Augmented Reality. Gaebel, Ethan Daniel. 27 June 2016.
Augmented reality is poised to become the next dominant computing paradigm over the course of the next decade. With the three-dimensional graphics and interactive interfaces that augmented reality promises, it will rival the very best science fiction novels. Users will want to have shared experiences in these rich augmented reality scenarios, but they will surely want to restrict who can see their content. It is currently unclear how users of such devices will authenticate one another. Traditional authentication protocols reliant on centralized authorities fall short when different systems with different authorities try to communicate, and extra infrastructure means extra resource expenditure. Augmented reality content sharing will usually occur in face-to-face scenarios, where it will be advantageous for both performance and usability reasons to keep communications and authentication localized. Looks Good To Me (LGTM) is an authentication protocol for augmented reality headsets that leverages the unique hardware and context provided by augmented reality headsets to solve an old problem in a more usable and more secure way. LGTM works over point-to-point wireless communications so users can authenticate one another in any circumstance, and it is designed with usability at its core, requiring users to perform only two actions: one to initiate and one to confirm. LGTM allows users to intuitively authenticate one another using, seemingly, only each other's faces. Under the hood, LGTM uses a combination of facial recognition and wireless localization to ensure secure and extremely simple authentication. / Master of Science
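The combination of facial recognition and wireless localization described above suggests an acceptance check of roughly the following shape. This is a schematic sketch with assumed thresholds and stand-in inputs, not the published LGTM protocol specification.

```python
# Schematic sketch of an LGTM-style check: accept only if the face seen through
# the headset matches the claimed user AND the direction of that face agrees
# with where wireless localization places the other headset. Thresholds and
# inputs are assumptions, not the published protocol.
def angular_difference_deg(bearing_a, bearing_b):
    """Smallest absolute difference between two compass-style bearings in degrees."""
    d = abs(bearing_a - bearing_b) % 360
    return min(d, 360 - d)

def authenticate(face_match_score, face_bearing_deg, wireless_bearing_deg,
                 face_threshold=0.9, bearing_tolerance_deg=10.0):
    """Both the facial-recognition match and the location agreement must pass."""
    face_ok = face_match_score >= face_threshold
    location_ok = angular_difference_deg(face_bearing_deg, wireless_bearing_deg) <= bearing_tolerance_deg
    return face_ok and location_ok

# Face matches strongly and appears within 4 degrees of the localized device.
print(authenticate(0.95, face_bearing_deg=31.0, wireless_bearing_deg=27.0))   # True
```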
550. Playing to Win: Applying Cognitive Theory and Gamification to Augmented Reality for Enhanced Mathematical Outcomes in Underrepresented Student Populations. Brown, TeAirra Monique. 24 September 2018.
National dialogue and scholarly research illustrate the need for engaging science, technology, engineering, and math (STEM) innovations in K-12 environments, most importantly in low-income communities (President's Council of Advisors on Science and Technology, 2012). According to Educating the Engineer of 2020, "current curricular material does not portray STEM in ways that seem likely to excite the interest of students from a variety of ethnic and cultural backgrounds" (Phase, 2005). The National Educational Technology Plan of 2010 holds that one of the most powerful ways to transform and improve K-12 STEM education is to instill a culture of innovation by leveraging cutting-edge technology (Polly et al., 2010). Augmented reality (AR) is an emerging and promising educational intervention that has the potential to engage students and transform their learning of STEM concepts. AR blends the real and virtual worlds by overlaying computer-generated content such as images, animations, and 3D models directly onto the student's view of the real world. Visual representations of STEM concepts using AR produce new educational learning opportunities, for example, allowing students to visualize abstract concepts and make them concrete (Radu, 2014). Although evidence suggests that learning can be enhanced by implementing AR in the classroom, it is important to take into account how students process AR content. Therefore, this research aims to examine the unique benefits and challenges of utilizing AR as a supplemental learning technique to reinforce mathematical concepts while concurrently responding to students' cognitive demands.
To examine and understand how cognitive demands affect students' information processing and creation of new knowledge, Mayer's Cognitive Theory of Multimedia Learning (CTML) is leveraged as a theoretical framework to ground the AR application and supporting research. Also, to enhance students' engagement, gamification was used to incorporate game elements (e.g., rewards and leaderboards) into the AR applications. This research applies gamification and CTML principles to tablet-based gamified learning AR (GLAR) applications as a supplemental tool to address three research objectives: (1) understanding the role of prior knowledge on cognitive performance, (2) examining whether adherence to CTML principles applies to GLAR, and (3) investigating the impact of cognitive style on cognitive performance. Each objective investigates how the inclusion of CTML in gamifying an AR experience influences students' perception of cognitive effects and how GLAR affects or enhances their ability to create new knowledge.
Significant results from objective one suggest that (1) there were no differences between novice and experienced students' cognitive load, and (2) novice students' content-based learning gains can be improved through interaction with GLAR. Objective two found that high adherence to CTML's principles was effective at (1) lowering students' cognitive load and (2) improving GLAR performance. The key findings of objective three are that (1) there was no difference in FID students' cognitive load when voice and coherence were manipulated, and (2) both FID and FD students had content-based learning gains after engagement with GLAR.
The results of this research add to the existing knowledge base for researchers, designers, and practitioners to consider when creating gamified AR applications. Specifically, this research contributes empirical evidence suggesting to what degree CTML is effective as an AR-based supplemental pedagogical tool for underrepresented students in southwest Virginia. Moreover, it offers empirical data on the relationship between underrepresented students' perceived benefits of GLAR and its impact on students' cognitive load. This research further offers recommendations as well as design considerations regarding the applicability of CTML when developing GLAR applications. / PHD / The purpose of this research is to examine the unique benefits and challenges of using augmented reality (AR) to reinforce underrepresented students' math concepts while observing how they process information. Gamification and Mayer's Cognitive Theory of Multimedia Learning (CTML) principles are applied to create tablet-based gamified learning AR (GLAR) applications to address three research objectives: (1) understanding the role of prior knowledge on cognitive performance, (2) examining whether adherence to CTML principles applies to GLAR, and (3) investigating the impact of cognitive style on cognitive performance. Each objective investigates how the inclusion of CTML in gamifying an AR experience influences students' perception of cognitive effects and how GLAR affects or enhances their ability to create new knowledge. This research offers recommendations as well as design considerations regarding the applicability of CTML when developing GLAR applications for underrepresented students in southwest Virginia.
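The game elements named above (rewards and leaderboards) can be sketched as a small scoring routine. The point values and bonus rule below are assumptions for illustration, not the actual design of the GLAR applications.

```python
# Illustrative sketch of the reward/leaderboard game elements mentioned above.
# Point values and the speed-bonus rule are assumptions, not GLAR's design.
def score_answer(correct, seconds_taken, base_points=10, fast_bonus=5, fast_cutoff_s=20):
    """Award points for a correct answer, with a bonus for quick responses."""
    if not correct:
        return 0
    return base_points + (fast_bonus if seconds_taken <= fast_cutoff_s else 0)

def leaderboard(scores):
    """Return (student, total points) pairs sorted from highest to lowest."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

totals = {"student_a": 0, "student_b": 0}
totals["student_a"] += score_answer(correct=True, seconds_taken=12)   # 15 points
totals["student_b"] += score_answer(correct=True, seconds_taken=40)   # 10 points
print(leaderboard(totals))   # [('student_a', 15), ('student_b', 10)]
```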