111

In an Immersive Space, Raising Awareness to Universalize the Concept of Resistance and Hope

Jazemi, Elaheh 01 January 2024 (has links) (PDF)
Inspired by my Iranian heritage, I give symbolic form to my narrative of the social injustice and suppression prevalent in Iran. Through this thesis, I universalize the concept of resistance and hope for equality by raising awareness and giving a voice to the voiceless, who have made significant sacrifices. In my studio practice, I sought to achieve a visual density that would enhance the immersive experience. I constructed this by layering materials such as tulle, resin, silk, and transparent sheets, creating a disorienting atmosphere that invites viewers to grapple with visual metaphors. This overwhelming ambiance, mirroring the despondent nature of the emotions conveyed in my stories, encourages viewers to align with my thoughts and join me on my journey. As creator and narrator, I stand with the Iranian people, expressing empathy through various mediums: capturing their anguish, echoing their messages metaphorically, and highlighting their values. Elements of Iranian art and the narrator's appearance and voice are symbolically present in all of the works. Through multimedia, solidarity is shown and the voices of the Iranian people are expressed.
112

A Framework for Incorporating Virtual Reality into the Early Stages of the Design Process and Massing Studies

Saghafi Moghaddam, Sara 10 September 2024 (has links)
This dissertation studies the integration of Virtual Reality (VR) into the early stages of the architectural design process, particularly during massing studies. The research proposes a framework identifying the knowledge domains and technologies needed to facilitate this integration. Traditional design tools often restrict architects' ability to fully explore spatial qualities and to contextualize their ideas within the project site, limiting their understanding of spatial relationships, scale, and proportion. By integrating VR technologies into the early design stages, architects can better visualize their proposals within the site context, iterate more rapidly through massing alternatives, and enhance decision-making. The research, based on a literature review, class observations, user studies, immersive case studies, and the Delphi method, examines how VR can support the exploration of design alternatives at a 1:1 scale, enabling real-time feedback and iterative processes. The findings highlight the opportunities and challenges within the design workflow, demonstrating that VR can significantly improve design feedback, broaden the design thinking space, increase user engagement, and enrich spatial understanding. The proposed framework identifies key decision nodes and knowledge domains essential for effective VR integration in architectural practice. Additionally, the study suggests a suitable interface for VR-integrated tools and proposes a communication model between architects and VR developers. / Doctor of Philosophy / The design process consists of different stages, and the decisions made during the early phases, including massing—the study of a project's shape, form, size, and envelope configuration within its site—can significantly impact the project's overall performance and cost throughout its life cycle. As the project evolves, making changes becomes more time-consuming and expensive. This dissertation focuses on how architects can use Virtual Reality (VR) to enhance massing studies in the early design stages. For each architectural project, architects need to examine how it will fit into its location, what impact it will have on its context, and how it will interact with site features such as sunlight, topography, orientation, adjacent buildings, and greenery. Traditionally, tools like computer software, 3D models, sketches, and prototyping help visualize these elements, but they can be limiting, making changes difficult once a plan is set. This research investigates how integrating VR into early design decisions allows architects to "step into" their designs, better explore alternatives, and improve decision-making. By using VR, architects can more effectively visualize their designs within the actual site context, quickly test different massing options, and refine their decision-making process. Based on a literature review, classroom observations, user studies, and immersive case studies, the research proposes a framework that identifies the key knowledge areas, technologies, and themes essential for integrating VR into the design process and understanding spatial relationships.
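Not part of the dissertation: the sketch below illustrates, under stated assumptions, the kind of rapid comparison across massing alternatives that a VR-integrated early-design workflow aims to support, here reduced to simple derived quantities (gross floor area, floor area ratio, overall height). All class names, field names, and figures are invented for illustration.

```python
# Hypothetical sketch of comparing simple massing alternatives by derived
# metrics; in a VR-integrated workflow each alternative would additionally be
# experienced at 1:1 scale within the site model.
from dataclasses import dataclass

@dataclass
class Massing:
    name: str
    footprint_m2: float      # building footprint on the site
    floors: int
    floor_height_m: float = 3.5

    @property
    def gross_floor_area(self) -> float:
        return self.footprint_m2 * self.floors

    def far(self, site_area_m2: float) -> float:
        """Floor area ratio of this alternative relative to the site."""
        return self.gross_floor_area / site_area_m2

site_area = 2_000.0  # assumed site area in square meters
alternatives = [Massing("slab", 600.0, 8), Massing("tower", 350.0, 15)]
for m in alternatives:
    print(f"{m.name}: GFA={m.gross_floor_area:.0f} m2, "
          f"FAR={m.far(site_area):.2f}, height={m.floors * m.floor_height_m:.1f} m")
```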
113

Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera

Lai, Po Kong 26 September 2019 (has links)
In this thesis we explore the concepts and components which can be used as individual building blocks for producing immersive virtual reality (VR) content from a single RGB-D sensor. We identify the properties of immersive VR videos and propose a system composed of a foreground/background separator, a dynamic scene reconstructor, and a shape completer. We initially explore the foreground/background separator component in the context of video summarization. More specifically, we examine how to extract trajectories of moving objects from video sequences captured with a static camera. We then present a new approach for video summarization via minimization of the spatial-temporal projections of the extracted object trajectories. New evaluation criteria are also presented for video summarization. These concepts of foreground/background separation can then be applied towards VR scene creation by extracting relevant objects of interest. We present an approach for the dynamic scene reconstructor component using a single moving RGB-D sensor. By tracking the foreground objects and removing them from the input RGB-D frames, we can feed the background-only data into existing RGB-D SLAM systems. The result is a static 3D background model onto which the foreground frames are then superimposed to produce a coherent scene with dynamic moving foreground objects. We also present a specific method for extracting moving foreground objects from a moving RGB-D camera, along with an evaluation dataset and benchmarks. Lastly, the shape completer component takes a single-view depth map of an object as input and "fills in" the occluded portions to produce a complete 3D shape. We present an approach that utilizes a new data-minimal representation, the additive depth map, which allows traditional 2D convolutional neural networks to accomplish the task. The additive depth map represents the amount of depth required to transform the input into the "back depth map" that would exist if there were a sensor exactly opposite the input. We train and benchmark our approach using existing synthetic datasets and also show that it can perform shape completion on real-world data without fine-tuning. Our experiments show that our data-minimal representation can achieve results comparable to existing state-of-the-art 3D networks while also being able to produce higher-resolution outputs.
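As a rough illustration of the additive-depth-map idea described above, the following sketch assembles a completed point cloud from a front depth map plus an additive depth map. The function names, camera intrinsics, and the constant stand-in "prediction" are assumptions for illustration; the thesis itself predicts the additive map with a 2D convolutional network.

```python
# Minimal sketch (assumed, not the thesis implementation) of completing a
# single-view depth map: the back surface is front depth + additive depth.
import numpy as np

def back_depth_from_additive(front_depth: np.ndarray,
                             additive_depth: np.ndarray) -> np.ndarray:
    """Per-pixel back depth = front depth + additive depth.

    `additive_depth` would normally come from a CNN; pixels with no valid
    depth (encoded as 0) stay invalid.
    """
    valid = front_depth > 0
    back = np.zeros_like(front_depth)
    back[valid] = front_depth[valid] + additive_depth[valid]
    return back

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map to a 3D point cloud with a pinhole model."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy usage: a flat front surface 1 m away with a constant 0.2 m thickness.
front = np.full((4, 4), 1.0)
additive = np.full((4, 4), 0.2)           # stand-in for the CNN prediction
back = back_depth_from_additive(front, additive)
completed = np.vstack([depth_to_points(front, 525, 525, 2, 2),
                       depth_to_points(back, 525, 525, 2, 2)])
print(completed.shape)                     # (32, 3): front + back points
```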
114

Tempêtes. Composition audiovisuelle

Breuleux-Ouellette, Yan 05 1900 (has links)
No description available.
115

Understanding Immersive Environments for Visual Data Analysis

Satkowski, Marc 06 February 2024 (has links)
Augmented Reality enables combining virtual data spaces with real-world environments through visual augmentations, transforming everyday environments into user interfaces of arbitrary type, size, and content. In the past, the development of Augmented Reality was mainly technology-driven, which made head-mounted Mixed Reality devices more common in research, industrial, and personal use cases. However, such devices are always human-centered, making it increasingly important to closely investigate and understand human factors within such applications and environments. Augmented Reality usage can range from simple information displays to dedicated devices for presenting and analyzing information visualizations. The growing availability, volume, and complexity of data have amplified the need to generate insights through such visualizations. These, in turn, can exploit human visual perception as well as Augmented Reality's natural interaction, its ability to display three-dimensional data, and its stereoscopic display. In my thesis, I aim to deepen the understanding of how Augmented Reality applications must be designed to optimally adhere to human factors and ergonomics, especially in the area of visual data analysis. To address this challenge, I ground my thesis on three research questions: (1) How can we design such applications in a human-centered way? (2) What influence does the real-world environment have within such applications? (3) How can AR applications be combined with existing systems and devices? To answer these research questions, I explore different human properties and real-world environments that can affect the augmentations of the same environment. For human factors, I investigate competence in working with visualizations (visualization literacy), the visual perception of visualizations, and physical ergonomics such as head movement. Regarding the environment, I examine two main factors: the influence of the visual background on reading and working with immersive visualizations, and the possibility of using alternative placement areas in Augmented Reality. Lastly, to explore future Augmented Reality systems, I designed and implemented Hybrid User Interfaces and authoring tools for immersive environments. Throughout the different projects, I used empirical, qualitative, and iterative methods in studying and designing immersive visualizations and applications. With that, I contribute to understanding how developers can apply human and environmental parameters when designing and creating future AR applications, especially for visual data analysis.
116

結合互動裝置之實境遊戲創作 / Incorporating Interactive Devices in the Design of Reality Games

蔡雯琪, Tsai, Wen Chi Unknown Date (has links)
With recent advances in digital technology, more and more innovations in entertainment are occurring. In reality games, the installation of simple physical devices is no longer able to satisfy participants. To make these games more interesting and playful, game designers are developing novel content and interaction by incorporating the latest technological elements. In this thesis, we summarize the key concepts in the design of existing reality games and organize these cases into codes, which are later employed to guide the design of our own reality game. In the literature review, we select flow theory as the primary related work and clarify the definition, features, and value of this theory, as well as its connection to our design philosophy. We then plan and implement the 'Revenge of the BEAR' project: a reality game that incorporates digital technology, interactive devices, and sensors. The game aims to provide participants with astonishing experiences distant from their daily life, so that upon completing the missions outlined in the game, players feel they have just engaged in an interesting adventure. Twenty-seven participants were recruited to play the game, and through questionnaires and interviews we obtained constructive feedback that helped us understand its attractions and limitations. The results indicate that meaningful interaction occurs when the story content, the physical objects, and the interaction triggers are cohesively linked, and that interactive devices integrating the digital with the physical bring a considerable degree of immersion and engagement. Although some design details remain to be attended to, we believe that this work provides a new direction for applying interactive technology in the design of future reality games.
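Purely as an illustration of the pattern the thesis describes, in which story content, physical objects, and interaction triggers are linked, the sketch below binds a hypothetical light-sensor reading to a narrative event in a polling game loop. The sensor stub, puzzle text, and threshold are invented for the example and are not from the thesis.

```python
# Hypothetical sketch: a physical sensor reading triggers a story event in a
# reality game, keeping narrative, prop, and trigger coupled in one place.
import random
import time

def read_light_sensor() -> float:
    """Placeholder for a real sensor read (e.g. from a microcontroller)."""
    return random.uniform(0.0, 1.0)

# Each entry links a story beat to the physical trigger that advances it.
PUZZLES = [
    {"story": "The bear's lair falls dark...",          # narrative cue
     "triggered": lambda: read_light_sensor() < 0.1,    # box closed over sensor
     "action": "unlock the next clue drawer"},
]

def game_loop() -> None:
    for puzzle in PUZZLES:
        print(puzzle["story"])
        while not puzzle["triggered"]():   # poll the physical interaction
            time.sleep(0.1)
        print(f"Trigger fired: {puzzle['action']}")

if __name__ == "__main__":
    game_loop()
```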
117

Low latency video streaming solutions based on HTTP/2 / Solutions de transmission vidéo avec faible latence basées sur HTTP/2

Ben Yahia, Mariem 10 May 2019 (has links)
Adaptive video streaming techniques deliver content that is encoded at various quality levels and split into temporal segments. Before downloading a segment, the client runs an adaptation algorithm to determine the quality level that best matches the available network resources; for immersive (360°) video, this adaptation should also account for the head movement of the user so as to maximize the quality of the viewed portion. This adaptation may suffer from errors, which degrade the end user's quality of experience, and an HTTP/1 client must wait for the download of the next segment before it can choose a more suitable quality. In this thesis, we propose to use the HTTP/2 protocol to address this problem. First, we focus on live video streaming and design a strategy that discards video frames when the bandwidth is highly variable, so as to avoid rebuffering events and the accumulation of delay. The client requests each video frame in a dedicated HTTP/2 stream, which allows frame delivery to be controlled by applying HTTP/2 features at the level of that stream. Second, we use the priority and stream-reset features of HTTP/2 to optimize the delivery of immersive videos, proposing a strategy that benefits from the improvement of head-movement prediction over time. The results show that HTTP/2 makes it possible to optimize the use of network resources and to adapt to the latency required by each service.
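A minimal sketch of per-frame delivery control of the kind described above, assuming each video frame is requested on its own HTTP/2 stream: frames that can no longer meet their display deadline under the current bandwidth estimate are dropped by resetting their stream (RST_STREAM). The data structure, drop policy, and the stream-reset placeholder are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of a frame-drop policy for low-latency live streaming over HTTP/2.
from dataclasses import dataclass

@dataclass
class FrameRequest:
    stream_id: int        # HTTP/2 stream carrying this frame
    size_bits: float      # estimated encoded frame size
    deadline_s: float     # wall-clock time by which the frame must be decoded

def frames_to_cancel(pending: list[FrameRequest],
                     now_s: float,
                     bandwidth_bps: float) -> list[int]:
    """Return stream ids of frames that should be reset (dropped).

    A frame is dropped when, given the bandwidth estimate and the frames kept
    ahead of it, it cannot finish downloading before its display deadline.
    Hypothetical policy, for illustration only.
    """
    cancelled, finish_s = [], now_s
    for frame in pending:                  # assumed to be in decode order
        projected = finish_s + frame.size_bits / max(bandwidth_bps, 1.0)
        if projected > frame.deadline_s:
            cancelled.append(frame.stream_id)   # drop: cancel its stream
        else:
            finish_s = projected                # keep: it occupies the link
    return cancelled

# Example: three queued frames, throughput has dropped to 2 Mbit/s.
queue = [FrameRequest(5, 250_000, 10.05),
         FrameRequest(7, 250_000, 10.08),
         FrameRequest(9, 250_000, 10.12)]
late = frames_to_cancel(queue, now_s=9.90, bandwidth_bps=2_000_000)
for sid in late:
    # A real client would send RST_STREAM here; the print is a placeholder
    # rather than a specific library call.
    print(f"would send RST_STREAM on stream {sid}")
```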
118

Experimentações artísticas no ambiente imersivo da Cave / Artistic experimentation in the immersive environment of the CAVE

Modia Junior, Roberto Cabado 24 March 2006 (has links)
The paradigms of artistic creation for the CAVE (CAVE Automatic Virtual Environment) are still being established: within Virtual Reality, it is a recent and still rather costly technology, which hinders its dissemination. Even so, some institutions maintain CAVEs for artistic and cultural research. Among them is the Ars Electronica research center, which, in partnership with transmedia artist Maurice Benayoun, produced in its publicly accessible CAVE the award-winning work World Skin. In order to situate the current state of the art of artistic experimentation in CAVEs, this work is analyzed in detail with respect to its creative and methodological processes. In it, the artist investigates the cognitive reactions of visitors and proposes a new body-space relationship within a virtual world. The artistic potential of the CAVE is great, and artists are interested in exploring it. New research and technological advances point toward broader access to this kind of immersive environment, consolidating it as a prolific artistic medium.
119

Contribuições ao desenvolvimento de um sistema de telepresença por meio da aquisição, transmissão e projeção em ambientes imersivos de vídeos panorâmicos. / Contributions to the development of a telepresence system through the acquisition, transmission, and projection of panoramic videos in immersive environments.

Hu, Osvaldo Ramos Tsan 05 July 2006 (has links)
Telepresence systems have been researched and developed for numerous applications that require the physical presence of people in inaccessible environments, in situations ranging from distance education to highly hazardous places. In this work, research and development concentrate on the conception and implementation of a telepresence system for 360-degree immersive environments, able to perform the acquisition, transmission, and projection of moving images in immersive multi-projection systems such as the CAVERNA Digital®, developed by the Laboratório de Sistemas Integráveis at the Escola Politécnica, Universidade de São Paulo. The main contributions to the development of telepresence systems are a detailed specification of the general architecture of the system and the implementation of methods for calibration, image correction, and 360-degree panorama composition. The prototype that was built comprises: an Acquisition Module, which acquires images from eight cameras (mounted in a camera ring on top of a robot), applies corrections, and assembles a panoramic image; a Composition Module, which stitches the successive panoramic images into a final panorama; and an Exhibition Module, which adjusts and projects the panorama onto the screens of the CAVERNA Digital®. Finally, remarks on the present work and future perspectives are presented.
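For illustration only, the following sketch shows a panorama-composition step using OpenCV's high-level Stitcher. It is not the calibrated eight-camera pipeline described above, and the input file names are placeholders.

```python
# Generic panorama stitching sketch with OpenCV (not the thesis's pipeline).
import cv2

def stitch_ring(paths: list[str]):
    """Stitch overlapping frames from a camera ring into one panorama."""
    frames = [cv2.imread(p) for p in paths]
    if any(f is None for f in frames):
        raise FileNotFoundError("one or more input frames could not be read")
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

if __name__ == "__main__":
    # Placeholder file names standing in for the eight ring cameras.
    pano = stitch_ring([f"cam_{i}.png" for i in range(8)])
    cv2.imwrite("panorama.png", pano)
```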
