21

Dreidimensionale Darstellung und Kombination multimodaler Bilddaten in der Neurochirurgie

Meyer, Tobias, Podlesek, Dino, Kuß, Julia, Uhlemann, Falk, Leimert, Mario, Simank, Marita, Steinmeier, Ralf, Schackert, Gabriele, Morgenstern, Ute, Kirsch, Matthias 11 October 2008 (has links)
In neurosurgery, three-dimensional data from different modalities are registered, segmented and visualised for pre-, intra- and post-operative medical imaging. A combination of the multimodal data sets provides additional information for the analysis of anatomic and functional correlations and for surgical planning. For routine use, it is important to design a software application that is simple and intuitive to use. A neurosurgical operation-planning system is realised in combination with novel 3D interaction and visualisation technologies. The development of additional functions, such as automatic segmentation and elastic registration, enhances the usability of the system to approach further clinical objectives.
22

[pt] ENVIRONRC: INTEGRANDO COLABORAÇÃO E COMUNICAÇÃO MÓVEL A APLICAÇÕES DE ENGENHARIA OFFSHORE EM AMBIENTES DE REALIDADE VIRTUAL / [en] ENVIRONRC: INTEGRATING COLLABORATION AND MOBILE COMMUNICATION TO OFFSHORE ENGINEERING VIRTUAL REALITY APPLICATIONS

BERNARDO FRANKENFELD VILLELA PEDRAS 20 July 2016 (has links)
[en] Offshore Engineering visualization applications are, in most cases, very complex and must display large amounts of data coming from computationally intensive numerical simulations. To help analyze and better visualize the results, 3D visualization can be used in conjunction with a Virtual Reality (VR) environment. The main idea for this work began as we recognized two different demands that engineering applications have when running on VR setups: first, a demand for visualization support in the form of better navigation and better data-analysis capabilities; second, a demand for collaboration, due to the difficulties of coordinating a team when one member is using VR. To meet these demands, we developed a Service-Oriented Architecture (SOA) capable of adding collaboration capabilities to any application. The idea behind our solution is to enable real-time data visualization and manipulation on tablets and smartphones. Such devices can be used to help navigate the virtual world or as a second screen, helping visualize and manipulate large sets of data in the form of tables or graphs. Furthermore, we want to allow collaboration-unaware applications to collaborate with as little reworking of the original application as possible. Another big advantage that mobile devices bring to engineering applications is the capability of accessing the data in remote locations, such as oil platforms or refineries, allowing the field engineer to check the data or even change it on the fly. As our test application, we used ENVIRON, a VR application for visualization of 3D models and simulations developed in collaboration with a team from the Tecgraf Institute of PUC-Rio. We added this solution to ENVIRON and tested it with an experiment and during a review process of Offshore Engineering using VR setups (PowerWall and CAVE).
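The abstract above describes a service-oriented layer that relays state between the VR application and mobile second screens. As an illustrative sketch only (the actual ENVIRON architecture is not detailed here; the class name, topic names, and message shape are assumptions), a minimal in-process publish/subscribe bus capturing the idea might look like:

```python
class CollaborationBus:
    """Hypothetical minimal stand-in for the publish/subscribe relay a SOA
    collaboration layer needs: clients (the VR app, tablets, smartphones)
    subscribe to topics and receive every message any peer publishes."""

    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of the topic; unknown topics are no-ops,
        # so collaboration-unaware applications can ignore traffic entirely.
        for callback in self._subscribers.get(topic, []):
            callback(message)


# Example: a tablet mirrors the VR camera pose as a second screen.
bus = CollaborationBus()
received = []
bus.subscribe("camera/pose", received.append)
bus.publish("camera/pose", {"position": (10.0, 2.0, 5.0)})
```

In a real deployment the callbacks would be replaced by network sends to remote devices, but the decoupling is the same: the VR application only publishes, so it needs almost no reworking to become collaborative.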
23

Understanding and Improving Distal Pointing Interaction

Kopper, Regis Augusto Poli 04 August 2011 (has links)
Distal pointing is the interaction style defined by directly pointing at targets from a distance. It follows a laser pointer metaphor and the position of the cursor is determined by the intersection of a vector extending the pointing device with the display surface. Distal pointing as a basic interaction style poses several challenges for the user, mainly because of the lack of precision humans have when using it. The focus of this thesis is to understand and improve distal pointing, making it a viable interaction metaphor to be used in a wide variety of applications. We achieve this by proposing and validating a predictive model of distal pointing that is inspired by Fitts' law, but which contains some unique features. The difficulty of a distal pointing task is best described by the angular size of the target and the angular distance that the cursor needs to go across to reach the target from the input device perspective. The practical impact of this is that the user's relative position to the target should be taken into account. Based on the model we derived, we proposed a set of design guidelines for high-precision distal pointing techniques. The main guideline from the model is that increasing the target size is much more important than reducing the distance to the target. In order to improve distal pointing, we followed the model guidelines and designed interaction techniques that aim at improving the precision of distal pointing tasks. Absolute and Relative Mapping (ARM) distal pointing increases precision by offering the user a toggle which changes the control/display (CD) ratio such that a large movement of the input device is mapped to a small movement of the cursor. Dynamic Control Display Ratio (DyCoDiR) automatically increases distal pointing precision, as the user needs it. 
DyCoDiR takes into account the user's distance to the interaction area and the speed at which the user moves the input device to dynamically calculate an increased CD ratio, making the action more precise the steadier the user tries to be. We performed an evaluation of ARM and DyCoDiR, comparing them to basic distal pointing in a realistic context. In this experiment, we also provided variations of the techniques that increased the visual perception of targets by zooming into the area around the cursor when precision was needed. Results from the study show that ARM and DyCoDiR are significantly faster and more accurate than basic distal pointing for tasks that require very high precision. We analyzed user navigation strategies and found that the high-precision techniques allow users to remain stationary while performing interactions. However, we also found that individual differences have a strong impact on the decision to walk or not, and that this sometimes matters more than the technique's affordances. We provided a validation of the distal pointing model through an analysis of the expected difficulty of distal pointing tasks in light of each technique tested. We propose selection by progressive refinement, a new design concept for distal pointing selection techniques, whose goal is to offer near-perfect selection accuracy in very cluttered environments. The idea of selection by progressive refinement is to gradually eliminate possible targets from the set of selectable objects until only one object is available for selection. We implemented SQUAD, a progressive-refinement distal pointing technique, and performed a controlled experiment comparing it to basic distal pointing. We found that there is a clear tradeoff between immediate selections, which require high precision, and selections by progressive refinement, which always require low precision.
We validated the model by fitting the distal pointing data and proposed a new model, which has a linear growth in time, for SQUAD selection. / Ph. D.
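The DyCoDiR idea described above, precision that grows with the user's distance and steadiness, can be sketched in a few lines. This is a hedged illustration, not the dissertation's actual formula: the parameter names, constants, and blending function are assumptions.

```python
def dycodir_cd_ratio(distance_m, angular_speed_deg_s,
                     base=1.0, k_dist=0.5, k_speed=0.2, max_ratio=8.0):
    """Control/display ratio that rises when the user stands far from the
    interaction area and moves the input device slowly (a precision phase),
    and falls back toward `base` for fast, coarse motion."""
    # Far users see smaller angular targets; slow motion signals precision.
    precision_boost = (1.0 + k_dist * distance_m) / (1.0 + k_speed * angular_speed_deg_s)
    ratio = base * (1.0 + precision_boost)
    return max(base, min(ratio, max_ratio))

# Cursor motion is then device motion divided by the ratio:
# cursor_delta_deg = device_delta_deg / dycodir_cd_ratio(dist, speed)
```

A ratio above 1 maps a large device movement to a small cursor movement, which is exactly the trade the abstract describes: the steadier the user tries to be, the more precise the action becomes.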
24

Design and Evaluation of Domain-Specific Interaction Techniques in the AEC Domain for Immersive Virtual Environments

Chen, Jian 29 November 2006 (has links)
Immersive virtual environments (VEs) are broadly applicable to situations where a user can directly perceive and interact with three-dimensional (3D) virtual objects. Currently, successful interactive applications of VEs are limited. Some interactive applications in the AEC (architecture / engineering / construction) domain have not yet benefited from applying VEs. A review of prior work suggests that 3D interaction has not reached a level that meets real-world task requirements. Most interaction techniques pay little attention to application contexts, so when designers assemble these techniques into an interactive system, the resulting interfaces are often simplistic and not highly useful. In this work, we describe a domain-specific design (DSD) approach that utilizes pervasive and accurate domain knowledge for interaction design. The purpose of this dissertation is to study the effects of domain knowledge on interaction design. The DSD approach uses a three-level interaction design framework to represent a continuous design space of interaction. The framework has generative power to suggest alternative interaction techniques. We chose the AEC domain as the subject of study, with cloning and object manipulation for massing studies as the two example tasks providing practical and empirical evidence for applying DSD. This dissertation presents several important results of knowledge use in the DSD approach. First, the DSD approach provides a theoretical foundation for designing 3D interaction. Techniques produced using DSD result in more useful real-world applications, at least in the domain of AEC. Second, the three-level interaction design framework forms a continuum of design and expands our understanding of 3D interaction design to a level that addresses real-world use. Third, this research proposes an integrated system design approach that integrates DSD and the usability engineering process.
Fourth, this work produces a large set of empirical results and observations that demonstrate the effectiveness of domain-knowledge use in designing interaction techniques and applications. Finally, we apply domain-specific interaction techniques to real world applications and create a fairly complex application with improved usefulness. / Ph. D.
25

Walk-Centric User Interfaces for Mixed Reality

Santos Lages, Wallace 31 July 2018 (has links)
Walking is a natural part of our lives and is also becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to easily navigate real and virtual environments by walking. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of cognitive and motor requirements so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analysis of the role of walking in different mixed reality applications. We evaluated the difference in performance of users walking vs. manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly being used as a way to make sense of progressively complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both user's experience with controllers and user's inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and without significant game experience. In augmented reality, specifying points in space is an essential step to create content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated different augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between points provides higher accuracy than purely perceptual methods. However, precision may be affected by head pointing tremors. 
To increase the precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position. This technique can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Moving into the future, augmented reality will eventually replace our mobile devices as the main method of accessing information. Nonetheless, to achieve its full potential, augmented reality interfaces must support the fluid way we move in the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, based on a study of the design space and on contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can be used to guide the design of next-generation walking-centered workspaces. / Ph. D. / Until recently, walking with virtual and augmented reality headsets was restricted by issues such as excessive weight, cables, tracking limitations, etc. As those limits go away, walking is becoming more common, making the user experience closer to the real world. If well explored, walking can also make some tasks easier and more efficient. Unfortunately, walking reduces our mental and motor performance, and its consequences for interface design are not fully understood. In this dissertation, we present studies of the role of walking in three areas: scientific visualization in virtual reality, marking points in augmented reality, and accessing information in augmented reality. We show that although walking reduces our ability to perform those tasks, careful design can reduce its impact in a meaningful way.
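The triangulation-by-walking approach described in this abstract amounts to intersecting two pointing rays recorded from different standpoints. A minimal sketch of that geometric core, using the standard midpoint-of-closest-approach method (the dissertation's exact multi-sample estimator is not reproduced here):

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Estimate a 3D point from two rays o_i + t_i * d_i as the midpoint of
    the shortest segment between them. Robust to rays that do not intersect
    exactly, e.g. because of head-pointing tremor."""
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b        # near zero when the rays are almost parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

Note how the baseline matters: a short walk between the two standpoints leaves the rays nearly parallel, `denom` close to zero, and the estimate unstable, which is consistent with the finding that physically walking between points yields higher accuracy than purely perceptual methods.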
26

[en] A 3D INTERACTION TOOL FOR ENGINEERING VIRTUAL ENVIRONMENTS USING MOBILE DEVICES / [pt] UMA FERRAMENTA DE INTERAÇÃO 3D PARA AMBIENTES VIRTUAIS DE ENGENHARIA UTILIZANDO DISPOSITIVOS MÓVEIS

DANIEL PIRES DE SA MEDEIROS 24 June 2014 (has links)
[en] Interaction in engineering virtual environments is characterized by the high level of precision needed for the execution of tasks typical of this kind of environment. Such tasks generally use specific interaction devices with 4 or more degrees of freedom (DOF). Current applications involving 3D interaction use interaction devices for object modelling or for the implementation of navigation, selection and manipulation techniques in a virtual environment. A related problem is the need to control tasks that are naturally non-immersive, such as symbolic input (e.g., text, photos). Another problem is the large learning curve required to handle such non-conventional devices. The addition of sensors and the popularization of smartphones and tablets allowed the use of such devices in virtual engineering environments. These devices, besides their popularity and sensors, differ by the possibility of including additional information and performing naturally non-immersive tasks. This work presents a tablet-based 3D interaction tool, which allows the aggregation of all major 3D interaction tasks, such as navigation, selection, manipulation, system control and symbolic input. To evaluate the proposed tool we used the SimUEP-Ambsim application, a training simulator for oil and gas platforms that has the necessary complexity and allows the use of all the techniques implemented.
27

Collaboration interactive 3D en réalité virtuelle pour supporter des scénarios aérospatiaux / 3D collaborative interaction in virtual reality for aerospace scenarii

Clergeaud, Damien 17 October 2017 (has links)
[en] The aerospace industry is no longer composed of local and individual businesses. Due to the complexity of the products (their size, the number of components, the variety of systems, etc.), the design of an aircraft or a launcher involves a considerable number of engineers with various fields of expertise. Furthermore, aerospace companies often have industrial facilities all over the world. In such a complex setting, it is necessary to build virtual experiments that can be shared between different remote sites. Specific problems then arise, particularly in terms of the perception of other immersed users. We are working with Airbus Group in order to design efficient collaborative interaction methods. These collaborative sessions allow multiple sites to be connected within the same virtual experiment and enable experts from different fields to be immersed simultaneously. For instance, if a problem occurs during the final stages of a launcher assembly, it may be necessary to bring together experts on different sites who were involved in previous steps (initial design, manufacturing processes). In this thesis, we propose various interaction techniques in order to ease collaboration at different moments of an industrial process. We contributed in the context of communication between immersed users, taking notes in the virtual environment and sharing them outside virtual reality, and asymmetric collaboration between a physical meeting room and a virtual environment.
28

Immunology Virtual Reality (VR): Exploring Educational VR Experience Design for Science Learning

Zhang, Lei 14 May 2018 (has links)
The Immunology Virtual Reality (VR) project is an immersive educational virtual reality experience that intends to provide an informal learning experience of specific immunology concepts to college freshmen in the Department of Biological Sciences at Virginia Tech (VT). The project is an interdisciplinary endeavor, a collaboration between people from different domain areas at VT: Creative Technologies, Education, Biological Sciences, and Computer Science. This thesis elaborates on the whole design process of how I created a working prototype of the project demo and shares insights from my design experience. / Master of Fine Arts / Immunology Virtual Reality is an immersive educational virtual reality experience in which a user takes on the role of an immune cell and migrates to fight off pathogen invasions at an infection site in the human body. It explores levels of interactivity and storytelling in educational VR and their impact on learning.
29

Interfaces utilisateur 3D, des terminaux mobiles aux environnements virtuels immersifs

Hachet, Martin 03 December 2010 (has links) (PDF)
Improving the interaction between a user and a 3D environment is a key research challenge for the healthy development of interactive 3D technologies in many areas of society, such as education. In this document, I present 3D user interfaces that we have developed and that contribute to this general quest. The first chapter focuses on 3D interaction for mobile devices. In particular, I present techniques dedicated to interaction based on keys and on gestures on the touchscreens of mobile devices. I then present two multi-degree-of-freedom prototypes based on the use of video streams. In the second chapter, I focus on 3D interaction with touchscreens in general (tabletops, interactive displays). I present Navidget, an example of an interaction technique dedicated to virtual camera control from 2D gestures, and I discuss the challenges of 3D interaction on multi-touch screens. Finally, the third chapter of this document is dedicated to immersive virtual environments, with a special emphasis on musical interfaces. I present the new directions we have explored to improve the interaction between musicians, the audience, sound, and interactive 3D environments. I conclude by discussing the future of 3D user interfaces.
30

REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK

Su, Po-Chang 01 January 2017 (has links)
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical space can enhance immersive experiences for users. To maximize coverage and minimize costs, practical applications often use a small number of RGB-D cameras and sparsely place them around the environment for data capturing. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and the lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower compared with synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to tackle these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, first, the correspondences between different camera views are established by using a spherical calibration object.
We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsics using a rigid transformation, which is optimal only for pinhole cameras, different view transformation functions including rigid transformation, polynomial transformation, and manifold regression are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error so as to fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector is used to identify the viewer's 3D location and render the reflective scene accordingly. The limited field of view obtained from a single camera is overcome by our calibrated RGB-D camera network system, which is scalable to capture an arbitrarily large environment. The rendering is accomplished by raytracing light rays from the viewpoint to the scene reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs.
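Once sphere-center correspondences between two camera views are available, the rigid-transformation case the abstract mentions reduces to the classic Kabsch/Procrustes alignment. A generic sketch of that step (not the dissertation's code, and omitting the polynomial and manifold-regression variants it also tests):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t ≈ dst_i for corresponding 3D
    points given as (N, 3) rows (Kabsch algorithm via SVD)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) sneaking into the rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

In the network setting, pairwise estimates like this typically serve as initial guesses that a global refinement, such as the reformulated bundle adjustment described above, then fine-tunes.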
