
Looks Good To Me (LGTM): Authentication for Augmented Reality

Gaebel, Ethan Daniel 27 June 2016 (has links)
Augmented reality is poised to become the dominant computing paradigm over the course of the next decade. With the three-dimensional graphics and interactive interfaces that augmented reality promises, it will rival the very best science fiction novels. Users will want to have shared experiences in these rich augmented reality scenarios, but they will surely want to restrict who can see their content. It is currently unclear how users of such devices will authenticate one another. Traditional authentication protocols that rely on centralized authorities fall short when systems with different authorities try to communicate, and extra infrastructure means extra resource expenditure. Augmented reality content sharing will usually occur in face-to-face scenarios, where it is advantageous, for both performance and usability reasons, to keep communications and authentication localized. Looks Good To Me (LGTM) is an authentication protocol for augmented reality headsets that leverages the unique hardware and context these headsets provide to solve an old problem in a more usable and more secure way. LGTM works over point-to-point wireless communications, so users can authenticate one another in any circumstance, and it is designed with usability at its core, requiring users to perform only two actions: one to initiate and one to confirm. LGTM allows users to authenticate one another intuitively, using seemingly only each other's faces. Under the hood, LGTM combines facial recognition with wireless localization to ensure secure and extremely simple authentication. / Master of Science
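The abstract does not spell out LGTM's internals, but the core check it describes, accepting a pairing only when the face the headset camera sees is spatially consistent with where wireless localization places the peer's device, can be sketched roughly as follows. The function name, coordinate convention, and tolerance value are illustrative assumptions, not details of the actual protocol:

```python
import math

def lgtm_consistency_check(wireless_pos, face_pos, tolerance_m=0.5):
    """Hypothetical core check of an LGTM-style pairing: the peer's
    position estimated by wireless localization must agree with the
    position of the face detected by the headset camera.

    wireless_pos, face_pos: (x, y, z) in the headset's frame, metres.
    """
    return math.dist(wireless_pos, face_pos) <= tolerance_m

# Peer localized 1.8 m ahead by wireless, with a face detected nearby:
assert lgtm_consistency_check((0.0, 0.0, 1.8), (0.1, 0.0, 1.7))
# A face that does not match the wireless fix is rejected:
assert not lgtm_consistency_check((0.0, 0.0, 1.8), (1.5, 0.0, 1.7))
```

Binding the face to the radio signal in this way is what lets the protocol stay local and infrastructure-free, as the abstract emphasizes.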

Playing to Win: Applying Cognitive Theory and Gamification to Augmented Reality for Enhanced Mathematical Outcomes in Underrepresented Student Populations

Brown, TeAirra Monique 24 September 2018 (has links)
National dialogue and scholarly research illustrate the need for engaging science, technology, engineering, and math (STEM) innovations in K-12 environments, most importantly in low-income communities (President's Council of Advisors on Science and Technology, 2012). According to Educating the Engineer of 2020, "current curricular material does not portray STEM in ways that seem likely to excite the interest of students from a variety of ethnic and cultural backgrounds" (Phase, 2005). The National Educational Technology Plan of 2010 states that one of the most powerful ways to transform and improve K-12 STEM education is to instill a culture of innovation by leveraging cutting-edge technology (Polly et al., 2010). Augmented reality (AR) is an emerging and promising educational intervention that has the potential to engage students and transform their learning of STEM concepts. AR blends the real and virtual worlds by overlaying computer-generated content such as images, animations, and 3D models directly onto the student's view of the real world. Visual representations of STEM concepts using AR produce new educational learning opportunities, for example, allowing students to visualize abstract concepts and make them concrete (Radu, 2014). Although evidence suggests that learning can be enhanced by implementing AR in the classroom, it is important to take into account how students process AR content. Therefore, this research aims to examine the unique benefits and challenges of utilizing AR as a supplemental learning technique to reinforce mathematical concepts while concurrently responding to students' cognitive demands. To examine and understand how cognitive demands affect students' information processing and creation of new knowledge, Mayer's Cognitive Theory of Multimedia Learning (CTML) is leveraged as a theoretical framework to ground the AR application and supporting research.
Also, to enhance students' engagement, gamification was used to incorporate game elements (e.g. rewards and leaderboards) into the AR applications. This research applies gamification and CTML principles to tablet-based gamified learning AR (GLAR) applications as a supplemental tool to address three research objectives: (1) understanding the role of prior knowledge on cognitive performance, (2) examining whether adherence to CTML principles applies to GLAR, and (3) investigating the impact of cognitive style on cognitive performance. Each objective investigates how the inclusion of CTML in gamifying an AR experience influences students' perception of cognitive effects and how GLAR affects or enhances their ability to create new knowledge. Significant results from objective one suggest that (1) there were no differences between novice and experienced students' cognitive load, and (2) novice students' content-based learning gains can be improved through interaction with GLAR. Objective two found that high adherence to CTML's principles was effective at (1) lowering students' cognitive load and (2) improving GLAR performance. The key findings of objective three are that (1) there was no difference in field-independent (FID) students' cognitive load when voice and coherence were manipulated, and (2) both FID and field-dependent (FD) students had content-based learning gains after engagement with GLAR. The results of this research add to the existing knowledge base for researchers, designers, and practitioners to consider when creating gamified AR applications. Specifically, this research provides contributions to the field that include empirical evidence on the degree to which CTML is effective as an AR-based supplemental pedagogical tool for underrepresented students in southwest Virginia. Moreover, it offers empirical data on the relationship between underrepresented students' perceived benefits of GLAR and its impact on students' cognitive load.
This research further offers recommendations as well as design considerations regarding the applicability of CTML when developing GLAR applications. / PHD / The purpose of this research is to examine the unique benefits and challenges of using augmented reality (AR) to reinforce underrepresented students' math concepts while observing how they process information. Gamification and Mayer's Cognitive Theory of Multimedia Learning (CTML) principles are applied to create tablet-based gamified learning AR (GLAR) applications to address three research objectives: (1) understanding the role of prior knowledge on cognitive performance, (2) examining whether adherence to CTML principles applies to GLAR, and (3) investigating the impact of cognitive style on cognitive performance. Each objective investigates how the inclusion of CTML in gamifying an AR experience influences students' perception of cognitive effects and how GLAR affects or enhances their ability to create new knowledge. This research offers recommendations as well as design considerations regarding the applicability of CTML when developing GLAR applications for underrepresented students in southwest Virginia.

Informing Design of In-Vehicle Augmented Reality Head-Up Displays and Methods for Assessment

Smith, Martha Irene 23 August 2018 (has links)
Drivers require a steady stream of relevant but focused visual input to make decisions. Most driving information comes from the surrounding environment, so keeping drivers' eyes on the road is paramount. However, important information still comes from in-vehicle displays. With this in mind, there has been renewed recent interest in delivering driving information via head-up display. A head-up display (HUD) can present an image directly onto the windshield of a vehicle, providing a relatively seamless transition between the display image and the road ahead. Most importantly, HUD use keeps drivers' eyes focused in the direction of the road ahead. The transparent display, coupled with a new location, makes it likely that HUDs provide a fundamentally different driving experience and may change the way people drive, in both good and bad ways. Therefore, the objectives of this work were to 1) understand changes in drivers' glance behaviors when using different types of displays, 2) investigate the impact of HUD position on glance behaviors, and 3) examine the impact of HUD graphic type on drivers' behaviors. Specifically, we captured empirical data regarding changes in driving behaviors, glance behaviors, reported workload, and preferences while participants performed a secondary task using in-vehicle displays during driving. We found that participants exhibited different glance behaviors when using different display types, with participants allocating more and longer glances towards a HUD as compared to a traditional head-down display. However, driving behaviors were not largely affected, and participants reported lower workload when using the HUD. HUD location did not cause large changes in glance behaviors, but some driving behaviors were affected. When examining the impact of graphic types on participants, we employed a novel technique for analyzing glance behaviors by dividing the display into three different areas of interest relative to the HUD graphic.
This method allowed us to differentiate between graphic types and to better understand differences found in driving behaviors and participant preferences than could be determined with frequently used glance analysis methods. Graphics that were fixed in place rather than animated generally resulted in less time allocated to looking at the graphics, and these changes were likely because the fixed graphics were simple and easy to understand. Ultimately, glance and driving behaviors were affected at some level by the display type, display location, and graphic type as well as individual differences like gender and age. / Ph. D. / Drivers gather most of the information that they need to drive by looking at the world around them and at displays within the vehicle. However, research has shown that looking down at vehicle displays can be distracting to drivers which could be unsafe. Therefore, automotive manufacturers look for new ways to help decrease driver distraction, and one potential solution to this problem is the introduction of head-up displays (HUDs). By displaying a graphic on a see-through surface, like a windshield, we can add information to the world in front of the driver. This means that drivers no longer have to physically look away from the road to gather information, and they may be able to use peripheral vision to help drive while they look at the display. While the technology is promising, it is important that we fully understand other impacts of this technology on drivers before we widely incorporate it into vehicles. Therefore, the purpose of this work is to understand how HUDs change drivers’ ability to drive and their glance patterns as they gather the visual information needed to drive safely. We examined differences between HUDs and traditional displays found in vehicles. We then gathered data regarding the location of HUDs. Finally, we tested different graphics displayed on the HUD. 
In addition to gathering data about glance and driving behaviors, we also gathered data about drivers’ preferences and experiences with the displays. HUDs may tempt drivers to look away from the road for longer periods of time without negatively affecting their driving behaviors. Different HUD locations did not cause large differences in glance behaviors but did have some impact on driving behaviors. Finally, different graphics resulted in very different glance behaviors without significantly changing driving behaviors. These results suggest that HUDs may capture drivers’ attention and cause drivers to be less observant of other elements around them as they drive. However, because different graphics result in different glance patterns, with careful design we may be able to help drivers keep their eyes on the road while safely gathering necessary information from the vehicle.
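The per-area glance analysis described above can be illustrated with a small sketch: classify each gaze sample into one of three rectangular areas of interest (AOIs) and count dwell samples per area. The AOI names, coordinates, and flat sample format here are illustrative; the dissertation defines its AOIs relative to the HUD graphic itself:

```python
def dwell_counts(samples, aois):
    """Count gaze samples falling in each area of interest (AOI).

    samples: iterable of (t, x, y) gaze points; if sampling is uniform,
             counts are proportional to dwell time.
    aois: dict name -> (xmin, ymin, xmax, ymax); first match wins,
          so earlier AOIs take precedence on shared boundaries.
    """
    counts = {name: 0 for name in aois}
    for _, x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break
    return counts

# Three illustrative AOIs, ordered from the HUD graphic outward:
aois = {"graphic": (0, 0, 10, 10),
        "near":    (10, 0, 20, 10),
        "far":     (20, 0, 40, 10)}
gaze = [(0.0, 5, 5), (0.1, 15, 5), (0.2, 25, 5), (0.3, 6, 4)]
counts = dwell_counts(gaze, aois)   # {'graphic': 2, 'near': 1, 'far': 1}
```

Comparing such per-AOI counts across graphic types is what lets an analysis distinguish glances at the graphic itself from glances at the road regions around it.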

Walk-Centric User Interfaces for Mixed Reality

Santos Lages, Wallace 31 July 2018 (has links)
Walking is a natural part of our lives and is also becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to easily navigate real and virtual environments by walking. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of cognitive and motor requirements so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analysis of the role of walking in different mixed reality applications. We evaluated the difference in performance of users walking vs. manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly being used as a way to make sense of progressively complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both user's experience with controllers and user's inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and without significant game experience. In augmented reality, specifying points in space is an essential step to create content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated different augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between points provides higher accuracy than purely perceptual methods. However, precision may be affected by head pointing tremors. 
To increase the precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position. This technique can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Moving into the future, augmented reality will eventually replace our mobile devices as the main method of accessing information. Nonetheless, to achieve its full potential, augmented reality interfaces must support the fluid way we move in the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, based on a study of the design space and on contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can be used to guide the design of next-generation walking-centered workspaces. / Ph. D. / Until recently, walking with virtual and augmented reality headsets was restricted by issues such as excessive weight, cables, and tracking limitations. As those limits go away, walking is becoming more common, making the user experience closer to the real world. If well explored, walking can also make some tasks easier and more efficient. Unfortunately, walking reduces our mental and motor performance, and its consequences for interface design are not fully understood. In this dissertation, we present studies of the role of walking in three areas: scientific visualization in virtual reality, marking points in augmented reality, and accessing information in augmented reality. We show that although walking reduces our ability to perform those tasks, careful design can reduce its impact in a meaningful way.
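The idea of triangulating a point by physically walking between viewpoints, and of pooling multiple samples into one estimate, can be sketched as a generic least-squares ray intersection. The variable names and NumPy formulation are illustrative, not the dissertation's implementation:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares point closest to a set of pointing rays.

    origins: (n, 3) head positions captured while walking.
    directions: (n, 3) pointing directions at those positions.
    Each ray contributes the constraint that the target lies on it;
    averaging over many samples damps head-pointing tremor.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two viewpoints 2 m apart, both aimed at the point (1, 0, 5):
p = triangulate([(0, 0, 0), (2, 0, 0)],
                [(1, 0, 5), (-1, 0, 5)])
```

Each additional head pose simply adds another term to the same normal equations, which is how a multi-sample variant improves precision over a single two-point triangulation.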

Applications of Close-Range Terrestrial 3D Photogrammetry to Improve Safety in Underground Stone Mines

Bishop, Richard 22 May 2020 (has links)
The underground limestone mining industry is a small but growing segment of the U.S. crushed stone industry. However, its fatality rate has been among the highest in the mining sector in recent years due to ground control issues related to ground collapses. It is therefore important to improve the engineering design, monitoring, and visualization of ground control by utilizing new technologies that can help an underground limestone company maintain a safe and productive operation. Photogrammetry and laser scanning are remote sensing technologies that are useful tools for collecting three-dimensional spatial data with high levels of precision for many types of mining applications. Due to the reality of budget constraints at many underground stone mining operations, this research concentrates on photogrammetry as a more accessible technology for the average operation. Despite the challenging lighting conditions and size of underground limestone mines that have previously hindered photogrammetric surveys in these environments, over 13,000 photographic images were taken over a 3-year period in active mines to compile these models. This research summarizes that work and highlights the many applications of terrestrial close-range photogrammetry, including practical methodologies for implementing the techniques in working operations to better visualize hazards and pragmatic approaches for geotechnical analysis, improved engineering design, and monitoring. / M.S. / The underground limestone mining industry is a small but growing segment of the U.S. crushed stone industry. However, its fatality rate has been among the highest in the mining sector in recent years due to ground control issues related to ground collapses. It is therefore important to improve the engineering design, monitoring, and visualization of ground control by utilizing new technologies that can help maintain safe and productive underground stone operations.
Photogrammetry and laser scanning are remote sensing technologies that are useful tools for collecting three-dimensional spatial data with high levels of precision for many different mining applications. Due to the reality of budget constraints at many mining operations, this research concentrates on photogrammetry as a more accessible technology for the average operation, despite the challenging lighting conditions and expansive size of underground limestone mines that have previously hindered photogrammetric surveys in these environments. This research focuses on the applications of photogrammetry in underground stone mines and practical methodologies for implementing the techniques in working operations to better visualize hazards for improved engineering design and infrastructure management.

Tangible User Interface for CAVE based on Augmented Reality Technique

Kim, Ji-Sun 20 January 2006 (has links)
This thesis presents a new three-dimensional (3D) user interface system for a Cave Automatic Virtual Environment (CAVE) application, based on Virtual Reality (VR), Augmented Reality (AR), and Tangible User Interface (TUI) techniques. We explore fundamental 3D interaction tasks with our user interface for the CAVE system. A user interface (UI) comprises a specific set of components, including input/output devices and interaction techniques. Our approach is based on TUIs using ARToolKit, which is currently the most popular toolkit for use in AR projects. Physical objects (props) are used as input devices instead of tethered electromagnetic trackers. An off-the-shelf webcam is used to capture tracking input data. A unique pattern marker is attached to each prop, which ARToolKit can track easily and simply. Our interface system is developed on CAVE infrastructure, which is a semi-immersive environment. All virtual objects are directly manipulated with props, each of which corresponds to a certain virtual object. To navigate, the user moves the background itself while the virtual objects remain in place, so the user can actually feel the prop's movement through the virtual space. Thus, fundamental 3D interaction tasks such as object selection, object manipulation, and navigation are performed with our interface. For immersion, the user wears stereoscopic glasses with a head tracker; this is the only tethered device in our work. Since our interface is based on tangible input tools, seamless transition between one- and two-handed operation is provided. We went through three design phases to achieve better task performance. In the first phase, we conducted a pilot study focusing on whether this approach is applicable to 3D immersive environments. After the pilot study, we redesigned the props and developed ARBox, which is used as the interaction space while the CAVE system is used only as the display space.
In this phase, we also developed interaction techniques for fundamental 3D interaction tasks. Our summative user evaluation was conducted with ARDesk, which was redesigned after our formative user evaluation. The two user studies aimed to gather user feedback and to improve the interaction techniques as well as the design of the interface tools. The results from our user studies show that our interface can be applied intuitively and naturally to 3D immersive environments, even though some issues remain with our system design. This thesis shows that effective interactions in a CAVE system can be created using AR techniques and tangible objects. / Master of Science
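The prop-to-object coupling described above can be sketched as a minimal update loop: each tracked marker ID drives exactly one virtual object, so moving a physical prop directly manipulates its virtual counterpart. The class, marker IDs, and pose format below are illustrative; the thesis used ARToolKit's tracking output, not this Python stand-in:

```python
class VirtualObject:
    """A scene object whose pose is driven by one physical prop."""
    def __init__(self, name):
        self.name = name
        self.pose = None          # latest 4x4 marker transform, camera frame

# One-to-one mapping from pattern markers to virtual objects.
scene = {"marker_01": VirtualObject("teapot"),
         "marker_02": VirtualObject("lamp")}

def update_scene(detections):
    """detections: dict marker_id -> pose reported by the tracker for
    the current webcam frame. Props not detected in this frame keep
    their last known pose (their objects are simply not updated)."""
    for marker_id, pose in detections.items():
        if marker_id in scene:
            scene[marker_id].pose = pose

# One frame in which only the first prop is visible to the webcam:
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
update_scene({"marker_01": identity})
```

Keeping the mapping one-to-one is what makes the interaction tangible: selection and manipulation collapse into the single act of picking up and moving the right prop.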

VTQuestAR: An Augmented Reality Mobile Software Application for Virginia Tech Campus Visitors

Yao, Zhennan 07 January 2021 (has links)
The main campus of Virginia Polytechnic Institute and State University (Virginia Tech) has more than 120 buildings. Campus visitors face problems recognizing a building, finding a building, obtaining directions from one building to another, and getting information about a building. The exploratory development research described herein resulted in an iPhone/iPad software application (app) named VTQuestAR that assists campus visitors using Augmented Reality (AR) technology. Machine Learning (ML) is used to recognize a sample of 31 campus buildings in real time. The VTQuestAR app enables the user to have a visual interactive experience with those 31 campus buildings by superimposing building information on top of the building picture shown through the camera. The app also enables the user to get directions from the current location or a building to another building, displayed on a 2D map as well as an AR map. The user can perform complex searches on 122 campus buildings by building name, description, abbreviation, category, address, and year built. The app also enables the user to take multimedia notes during a campus visit. Our exploratory development research illustrates the feasibility of using AR and ML to provide much more effective assistance to visitors of any organization. / Master of Science / The main campus of Virginia Polytechnic Institute and State University (Virginia Tech) has more than 120 buildings. Campus visitors face problems recognizing a building, finding a building, obtaining directions from one building to another, and getting information about a building. The exploratory development research described herein resulted in an iPhone/iPad software application named VTQuestAR that assists campus visitors using Augmented Reality (AR) and Machine Learning (ML) technologies.
Our research illustrates the feasibility of using AR and ML in providing much more effective assistance to visitors of any organization.
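A multi-field building search like the one the abstract describes, matching on any combination of name, abbreviation, category, address, or year built, might look roughly like this. The field names and the two sample records are illustrative, not the app's actual data model (the app itself is an iOS application, not Python):

```python
def search_buildings(buildings, **criteria):
    """Filter a building list on any combination of fields.

    String criteria match case-insensitively as substrings;
    non-string values (e.g. year_built) must match exactly.
    """
    def matches(b):
        for field, wanted in criteria.items():
            value = b.get(field)
            if isinstance(wanted, str):
                if not isinstance(value, str) or wanted.lower() not in value.lower():
                    return False
            elif value != wanted:
                return False
        return True
    return [b for b in buildings if matches(b)]

# Two illustrative records:
campus = [
    {"name": "Torgersen Hall", "abbreviation": "TORG",
     "category": "academic", "year_built": 2000},
    {"name": "Burruss Hall", "abbreviation": "BURR",
     "category": "administrative", "year_built": 1936},
]
hits = search_buildings(campus, category="academic", year_built=2000)
# -> the Torgersen Hall record only
```

Combining several criteria with AND semantics, as here, is one natural reading of the "complex searches" the abstract mentions.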

Development of Shared Situation Awareness Guidelines and Metrics as Developmental and Analytical Tools for Augmented and Virtual Reality User Interface Design in Human-Machine Teams

Van Dam, Jared Martindale Mccolskey 21 August 2023 (has links)
As the frontiers and futures of work evolve, humans and machines will begin to share a more cooperative working space where collaboration occurs freely amongst the constituent members. To this end, it is then necessary to determine how information should flow amongst team members to allow for the efficient sharing and accurate interpretation of information between humans and machines. Shared situation awareness (SSA), the degree to which individuals can access and interpret information from sources other than themselves, is a useful framework from which to build design guidelines for the aforementioned information exchange. In this work, we present initial Augmented/virtual reality (AR/VR) design principles for shared situation awareness that can help designers both (1) design efficacious interfaces based on these fundamental principles, and (2) evaluate the effectiveness of candidate interface designs based on measurement tools we created via a scoping literature review. This work achieves these goals with focused studies that 1) show the importance of SSA in augmented reality-supported tasks, 2) describe design guidelines and measurement tools necessary to support SSA, and 3) validate the guidelines and measurement tools with a targeted user study that employs an SSA-derived AR interface to confirm the guidelines distilled from the literature review. / Doctor of Philosophy / As the way in which humans work and play changes, people and machines will need to work together in shared spaces where team members rely on one another to complete goals. To make this interaction happen in ways that benefit both humans and machines, we will need to figure out the best way for information to flow between team members, including both humans and machines. Shared situation awareness (SSA) is a helpful concept that allows us to understand how people can get and understand information from sources other than themselves. 
In this research, we present some basic ideas for designing augmented reality (AR) tools that help people work together in better ways using SSA as a guiding framework. These ideas can help designers (1) create AR tools that work well based on these basic ideas and (2) test how well different interface designs work using specially developed tools we made. We completed user studies to (1) show how important SSA is when using AR to help with tasks, (2) explain the design ideas and tools needed to support SSA, and (3) test these ideas and tools with a study that uses an AR tool, based on SSA, to make sure the guidelines we got from reading other research are correct.

Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video

Rovelo Ruiz, Gustavo Alberto 29 July 2015 (has links)
[EN] Human-Computer Interaction is a multidisciplinary research field that combines, amongst others, Computer Science and Psychology. It studies human-computer interfaces from the point of view of both technology and the user experience. Researchers in this area now have a great opportunity, mostly because the technology required to develop 3D user interfaces for computer applications (e.g. visualization, tracking, or portable devices) is more affordable than it was a few years ago. Augmented Reality and Omni-Directional Video are two promising examples of this type of interface, in which the user is able to interact with the application in the three-dimensional space beyond the 2D screen. The work described in this thesis focuses on the evaluation of interaction aspects in both types of applications. The main goal is to contribute to the knowledge about this new type of interface in order to improve its design. We evaluate how computer interfaces can convey information to the user in Augmented Reality applications by exploiting human multisensory capabilities. Furthermore, we evaluate how the user can give commands to the system using more than one input modality, studying gesture-based interaction with Omni-Directional Video. We describe the experiments we performed, outline the results for each particular scenario, and discuss the general implications of our findings. / Rovelo Ruiz, GA. (2015). Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53916
550

Virtualidad geolocalizada, proyectos de Realidad Aumentada en el espacio público, propuestas experimentales

Ferrer Hernández, Manuel 14 March 2016 (has links)
[EN] This doctoral thesis approaches the intersection of Public Art and Augmented Reality. To this end, it is essential to understand the changes that have taken place in public space, changes linked to the concept of speed of movement, which have been significantly affected in recent decades by the introduction of new technologies into daily life. Augmented Reality has taken shape as an emerging art form, a genre in which subdivisions can be found according to the technologies employed and the ideas or metaphors the artists seek to represent. We contend that in the last decade of economic and social crisis, activist involvement is an ethical imperative for the artist. Activism has been, and remains, capable of adapting to the emergence and standardization of various communication technologies in order to make critical and artistic use of them. It can break the classic barrier between the real and the virtual while offering a positive way of overcoming the conflicts that the system generates in its post-industrial age, such as the atomization and alienation of the citizen, as well as the twin processes of ghetto formation through urban isolation and gentrification in various districts of today's cities. In this respect, the thesis proposes a catalog of works related to the use of Augmented Reality technologies in artistic activism. Empirical experimentation and practical production likewise prove necessary to address the theoretical parameters proposed in this thesis. For this reason, our work has focused on the implementation of specific artistic projects that allow us to demonstrate this relation between Public Art and Augmented Reality.
This relation reveals, at once, the new typology of human relations and the alternative topographies of the city generated in hybrid public space, increasing its accessibility to citizens and overcoming the active-artist / passive-audience dichotomy through contextual practice and situationist drift across the different works we have proposed. This practice establishes a new democratizing vector for art, allowing civic participation in otherwise closed artistic circles, which transmute into a constellation of decentralized, interconnected nodes: a participatory artistic rhizome in which citizens can express themselves creatively in all their dimensions. / Ferrer Hernández, M. (2016). Virtualidad geolocalizada, proyectos de Realidad Aumentada en el espacio público, propuestas experimentales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61771
