  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The influence of the interface on learning with educational software

Holst, Shirley J. January 1999 (has links)
No description available.
2

The Drag Language

Ma, Weixi 01 January 2016 (has links)
This thesis describes the Drag language. Drag is a general-purpose, gradually typed, lexically scoped, and multi-paradigm programming language. The essence of Drag is to build the abstract syntax trees of programs directly and interactively. Our work includes the language specification and a prototype program. The language specification focuses on the syntax, the semantic model, and the type system. The prototype consists of an interactive editor and a compiler that targets several platforms, among which we focus on the LLVM platform in this thesis.
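A minimal sketch of the central idea, building an abstract syntax tree directly, node by node, rather than recovering it by parsing text. The Node class and the node kinds below are hypothetical illustrations and are not taken from the Drag specification.

    # Hypothetical sketch: composing an AST directly instead of parsing source text.
    class Node:
        def __init__(self, kind, children=None):
            self.kind = kind
            self.children = children or []

        def __repr__(self):
            if not self.children:
                return self.kind
            return "(" + self.kind + " " + " ".join(map(repr, self.children)) + ")"

    # An interactive editor would expose constructors like these as editing actions.
    expr = Node("add", [Node("1"), Node("mul", [Node("2"), Node("x")])])
    print(expr)  # (add 1 (mul 2 x))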
3

Physically based mechanical metaphors in architectural space planning

Arvin, Scott Anthony 30 September 2004 (has links)
Physically based space planning is a means for automating the conceptual design process by applying the physics of motion to space plan elements. This methodology provides for a responsive design process, allowing a designer to easily make decisions whose consequences propagate throughout the design. It combines the speed of automated design methods with the flexibility of manual design methods, while adding a highly interactive quality and a sense of collaboration with the design. The primary assumption is that a digital design tool based on a physics paradigm can facilitate the architectural space planning process. The hypotheses are that Newtonian dynamics can be used 1) to define mechanical metaphors to represent the elements in an architectural space plan, 2) to compute architectural space planning solutions, and 3) to interact with architectural space plans. I show that space plan elements can be represented as physical masses, that design objectives can be represented using mechanical metaphors such as springs, repulsion fields, and screw clamps, that a layout solution can be computed by using these elements in a dynamical simulation, and that the user can interact with that solution by applying forces that are also models of the same mechanical objects. I present a prototype software application that successfully implements this approach. A subjective evaluation of this prototype reveals that it demonstrates a feasible process for producing space plans, and that it can potentially improve the design process because of the quality of the manipulation and the enhanced opportunities for design exploration it provides to the designer. I found that an important characteristic of this approach is that representation, computation, and interaction are all defined using the same paradigm. This contrasts with most approaches to automated space planning, where these three characteristics are usually defined in completely different ways. Also emerging from this work is a new cognitive theory of design titled 'dynamical design imagery,' which proposes that the elements in a designer's mental imagery during the act of design are dynamic in nature and act as a dynamical system, rather than as static images that are modified in a piecewise algorithmic manner.
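The mass-and-spring formulation described above lends itself to a compact simulation loop. The following is a minimal sketch under simplifying assumptions: rooms as point masses, adjacency objectives as springs, and a basic overlap-repulsion field. The Room class, constants, and room names are illustrative only, not taken from the prototype.

    # Hypothetical sketch: rooms as point masses, adjacency objectives as
    # springs, and a repulsion field that pushes overlapping rooms apart.
    import math

    class Room:
        def __init__(self, name, x, y, radius):
            self.name, self.x, self.y, self.radius = name, x, y, radius
            self.vx = self.vy = 0.0

    def step(rooms, springs, k=2.0, repel=50.0, damping=0.9, dt=0.05):
        forces = {id(r): [0.0, 0.0] for r in rooms}

        def add(room, fx, fy):
            forces[id(room)][0] += fx
            forces[id(room)][1] += fy

        # Springs pull rooms that should be adjacent toward a rest distance.
        for a, b, rest in springs:
            dx, dy = b.x - a.x, b.y - a.y
            d = math.hypot(dx, dy) or 1e-6
            f = k * (d - rest)
            add(a, f * dx / d, f * dy / d)
            add(b, -f * dx / d, -f * dy / d)

        # The repulsion field pushes apart any two rooms that overlap.
        for i, a in enumerate(rooms):
            for b in rooms[i + 1:]:
                dx, dy = b.x - a.x, b.y - a.y
                d = math.hypot(dx, dy) or 1e-6
                overlap = a.radius + b.radius - d
                if overlap > 0:
                    add(a, -repel * overlap * dx / d, -repel * overlap * dy / d)
                    add(b, repel * overlap * dx / d, repel * overlap * dy / d)

        # Integrate Newtonian motion (unit mass) with damping so the layout settles.
        for r in rooms:
            fx, fy = forces[id(r)]
            r.vx = (r.vx + fx * dt) * damping
            r.vy = (r.vy + fy * dt) * damping
            r.x += r.vx * dt
            r.y += r.vy * dt

    kitchen = Room("kitchen", 0, 0, 3)
    dining = Room("dining", 10, 1, 3)
    living = Room("living", 2, 8, 4)
    rooms = [kitchen, dining, living]
    springs = [(kitchen, dining, 6), (dining, living, 7)]
    for _ in range(500):
        step(rooms, springs)
    print({r.name: (round(r.x, 1), round(r.y, 1)) for r in rooms})

Interactive manipulation fits the same paradigm: a user drag can be applied as just another force on a room's mass, so representation, computation, and interaction all share one model, as the abstract emphasizes.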
4

Direct Manipulation Of Virtual Objects

Nguyen, Long 01 January 2009 (has links)
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities--proprioception, haptics, and audition--and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum--Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
5

Direct Manipulation for Information Visualization

Perin, Charles 17 November 2014 (has links)
There is a tremendous effort from the information visualization (Infovis) community to design novel, more efficient or more specialized desktop visualization techniques. While visual representations and interactions are combined to create these visualizations, less effort is invested in the design of new interaction techniques for Infovis. In this thesis, I focus on interaction for Infovis and explore how to improve existing visualization techniques through efficient yet simple interactions. To become more efficient, interaction techniques should reach beyond standard widgets and Window/Icon/Menu/Pointer (WIMP) user interfaces. In this thesis, I argue that the design of novel interactions for visualization should be based on the direct manipulation paradigm and instrumental interaction, and should take inspiration from advanced interactions investigated in HCI research but not yet well exploited in Infovis. I take examples from multiple projects I have designed to illustrate how opportunistic interactions can empower visualizations, and I explore design implications raised by novel interaction techniques, such as the tradeoff between cognitive congruence and versatility, the problem of engaging interaction, and the benefits of seamless, fluid interaction. Finally, I provide guidelines and perspectives, addressing the grand challenge of building or consolidating the theory of interaction for Infovis.
6

Azim: Direction-Based Service System for Both Indoors and Outdoors

Iwasaki, Yohei, Kawaguchi, Nobuo, Inagaki, Yasuyoshi 03 1900 (has links)
No description available.
7

A GPU-based architecture for supporting 3D interactions

Batagelo, Harlen Costa 27 July 2007 (has links)
Advisor: Wu Shin-Ting / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Abstract: Based on the hypothesis that precise control of the motion of a cursor constitutes one of the elementary techniques for 3D direct manipulation tools, this thesis proposes an architecture for supporting configurable control of the motion of cursors with respect to models deformed on graphics hardware. Integrated with the programmable rendering pipeline, the architecture computes discrete differential geometric attributes of the processed models and encodes these attributes in pixels of off-screen render buffers. We show, through case studies, that these attributes are sufficient to establish a correspondence between the discrete space of the model rendered on the screen and the continuous space of the model submitted to the rendering pipeline. As a result, cursors can be positioned consistently with what the user is actually viewing, providing more accurate interaction. Efficiency and reliability tests are conducted to validate the architecture. A library of functions that encapsulates the architecture is presented, along with examples of 3D direct manipulation tasks implemented with it.
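The core mechanism, rendering per-pixel geometric attributes of the deformed model into off-screen buffers and reading them back under the cursor, can be sketched as below. The buffer names and the CPU-side fill-in standing in for the GPU render pass are assumptions made for illustration; they are not the thesis library's API.

    # Hypothetical sketch: per-pixel geometric attributes of the deformed model
    # are written into off-screen buffers; placing the 3D cursor is then a
    # lookup at the mouse pixel, so the cursor tracks the deformed surface.
    import numpy as np

    H, W = 480, 640
    position_buffer = np.zeros((H, W, 3), dtype=np.float32)  # world-space position per pixel
    normal_buffer = np.zeros((H, W, 3), dtype=np.float32)    # surface normal per pixel
    coverage = np.zeros((H, W), dtype=bool)                  # pixels covered by the model

    def place_cursor(mouse_x, mouse_y):
        """Return (position, normal) of the surface point under the mouse, or None."""
        if not coverage[mouse_y, mouse_x]:
            return None
        return position_buffer[mouse_y, mouse_x], normal_buffer[mouse_y, mouse_x]

    # Toy stand-in for the GPU render pass: a flat patch facing the viewer at z = 2.
    coverage[200:300, 250:400] = True
    position_buffer[200:300, 250:400] = (0.0, 0.0, 2.0)
    normal_buffer[200:300, 250:400] = (0.0, 0.0, 1.0)
    print(place_cursor(300, 250))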
8

Multi-Touch Interfaces for Public Exploration and Navigation in Astronomical Visualizations

Bosson, Jonathan January 2017 (has links)
OpenSpace is an interactive data visualization software system that portrays the entire known universe in a 3D simulation. The current navigation interface requires explanation, which prevents OpenSpace from being displayed effectively in public exhibitions. Research has shown that large tangible touch surfaces with a multi-touch navigation interface are more engaging to users than a mouse and keyboard, and that they enhance the understanding of navigation control, reducing the instruction required to learn the system's user interface. This thesis shows that combining a velocity-based interaction model with a screen-space direct-manipulation formulation produces a user-friendly interface, giving the user precise control of objects and efficient travel between them in the vastness of space. The thesis presents the work of integrating a multi-touch navigation interface with this combined formulation of velocity-based interaction and screen-space direct manipulation into the software framework OpenSpace.
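A minimal sketch of how the two interaction models can be combined. The class, the one-dimensional camera offset, and the friction constant are simplifying assumptions for illustration, not OpenSpace code: while a finger is down the camera tracks the finger directly, and on release the last velocity persists and decays, giving a smooth glide.

    # Hypothetical sketch: screen-space direct manipulation while dragging,
    # velocity-based motion with friction after release.
    class TouchNavigator:
        def __init__(self, friction=4.0):
            self.offset = 0.0      # 1D camera offset, for illustration only
            self.velocity = 0.0
            self.friction = friction

        def drag(self, delta, dt):
            # Direct manipulation: the displacement tracks the finger exactly,
            # and the instantaneous velocity is remembered for the release.
            self.offset += delta
            self.velocity = delta / dt if dt > 0 else 0.0

        def update(self, dt):
            # Velocity-based phase: the motion eases out under friction.
            self.offset += self.velocity * dt
            self.velocity *= max(0.0, 1.0 - self.friction * dt)

    nav = TouchNavigator()
    nav.drag(0.05, 1 / 60)   # the finger moves 5% of the screen in one frame
    for _ in range(30):      # after release, the glide decays over half a second
        nav.update(1 / 60)
    print(round(nav.offset, 3))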
9

Gestures and direct manipulation for immersive virtual reality

Chapoulie, Emmanuelle 30 June 2014 (has links)
Virtual reality is a technology with applications in numerous fields (medical, automotive, etc.). This thesis focuses on fully immersive virtual spaces and studies the effects of the two main types of interface proposed (a 6-degree-of-freedom flystick and a finger-tracking system) on the user experience, specifically for 3D object manipulation. We are interested in parameters such as ease of use, sense of presence, and the speed and precision offered. To do so, we design experiments that evaluate these parameters via tasks with measurable success and that are not specific to any one domain. In a first experiment, we study complex general-purpose tasks combining skills required in everyday manipulation, such as grabbing, releasing, translating, rotating, and balancing objects while walking. We then refine our study by observing the effects of those interfaces on the movements themselves, decomposing them into individual and grouped degrees of freedom. Lastly, we evaluate the applicability of our direct manipulation system in a preliminary study on the use of virtual reality for the treatment of Alzheimer's disease. These studies analyze the properties of these interfaces in order to provide guidelines for choosing the most appropriate interface for future experiments.
