1.
Escultura virtual de objetos compostos com coerência temporal / Temporally coherent sculpture of composite objects. Sampaio, Artur Pereira. January 2017.
SAMPAIO, Artur Pereira. Temporally coherent sculpture of composite objects. 2017. 67 f. Tese (Doutorado em Ciência da Computação) - Universidade Federal do Ceará, Fortaleza, 2017.
We address the problem of sculpting and deforming shapes composed of small, randomly placed objects. The objects may be tightly packed, such as pebbles, pills, seeds, and grains, or sparsely distributed along an overarching shape, such as flocks of birds or schools of fish. Virtual sculpture has rapidly become a standard in the entertainment industry, as evidenced by the extensive use of software such as ZBrush (Pixologic, 2017). Composites, though, are still usually created statically, either by placing each object individually or by sculpting a support surface and procedurally populating the final shape; this approach generalizes poorly to evolving shapes that must preserve the visual continuity of their components. Large amounts of geometric data are generated that must be maintained and processed by both the CPU and the GPU. Whenever the shape is stretched, pressed, or deformed, one has to define how the component objects should turn, displace, or disappear inside the volume, as well as how new instances should become visible on the outside. Relying on a physics system to perform that task in real time is impractical. The system we propose can be built on any uniform mesh-based representation that can be deformed and whose connectivity can be updated by operations such as edge splits, collapses, and flips. We introduce the notion of CompEls: composing elements, distributed aperiodically, used to populate the mesh, which can be updated automatically under deformation. The idea is to sculpt the shape as if it were filled with little objects, without handling the complexity of manipulating volumetric objects. For this purpose, we exploit the properties of a uniform sampling of the surface whose inter-vertex distances greatly exceed the distances between CompEls. Both the surface and the CompEls are immersed in deformation fields, so that updates to the uniform sampling can be used to track the movement of the CompEls, to identify those that should disappear inside the shape, and to find empty areas where further CompEls should be generated. The system uses GPU optimizations to render the individual components efficiently. To our knowledge, no previous sculpting system allows the user to simultaneously see and sculpt agglomerates in such a fast and reliable fashion.
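The CompEl bookkeeping described in the abstract (culling elements that end up inside the volume and spawning new ones in areas left empty by deformation) can be sketched in 2D with a dart-throwing, Poisson-disk-style fill. The function names and the rejection-sampling strategy are illustrative assumptions, not the thesis implementation:

```python
import math
import random

def cull_inside(compels, signed_distance):
    """Drop CompEls that have sunk inside the volume (negative signed distance)."""
    return [p for p in compels if signed_distance(p) >= 0.0]

def fill_empty(compels, radius, bounds, attempts=2000, seed=0):
    """Dart-throwing fill: propose random positions and keep those at least
    `radius` away from every existing CompEl, so freshly exposed empty
    regions get repopulated with an aperiodic distribution."""
    rng = random.Random(seed)
    (x0, y0), (x1, y1) = bounds
    pts = list(compels)
    for _ in range(attempts):
        p = (rng.uniform(x0, x1), rng.uniform(y0, y1))
        if all(math.dist(p, q) >= radius for q in pts):
            pts.append(p)
    return pts
```

A real system would run this incrementally against the deforming surface sampling rather than over a fixed rectangle, but the accept/reject structure is the same.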
2.
earPod: Efficient Hierarchical Eyes-free Menu Selection. Zhao, Shengdong. 30 July 2009.
The research in this dissertation developed and evaluated a new menuing interaction method intended to suit mobile, eyes-free scenarios better than current methods. The earPod prototype was developed and then evaluated in a series of four experiments. In the first two, earPod was compared first against an iPod-like (visual) interface and then against a fuller set of competing techniques spanning dual vs. single modality presentations, audio vs. visual modalities, and radial vs. linear mappings. The third experiment was a longitudinal study designed to understand the learning patterns that occurred with these techniques. The fourth examined performance in a conventional (single-task) desktop setting and in a driving simulator (a dual-task situation in which participants drove while interacting with the mobile device).
The results of these experiments, comparing earPod with an iPod-like visual linear menu technique on fixed-size static menus, indicated that earPod is comparable in both speed and accuracy, so it should be an effective and efficient eyes-free menu selection technique. The comprehensive 3x2 study in Experiment 2 showed that the benefit of earPod was largely due to its radial menu design: while its speed and accuracy were comparable with visual linear menus, it was slower than a visual radial menu. In the multi-task simulated-driving condition of Experiment 4, where concurrent tasks competed for visual attention, the eyes-free earPod interface improved the safety-related driving parameters of following distance and lateral movement in the lane, so auditory feedback appears to mitigate some of the risk of menu selection while driving. Overall, the results indicated not only that earPod menuing should provide safer interaction in dual-task settings, but also that, with sufficient training, audio-only menu selection using innovative techniques such as earPod's can be competitive with visual menuing systems even in desktop settings.
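The radial vs. linear contrast above hinges on mapping a touch position on a circular pad to a menu slot. A minimal sketch of such a radial mapping, assuming item 0 sits at 12 o'clock and items proceed clockwise (the actual earPod layout details are an assumption here):

```python
import math

def radial_item(dx, dy, n_items):
    """Map a touch displacement (dx, dy) from the pad centre to one of
    n_items wedge-shaped menu slots, item 0 centred at 12 o'clock and
    slots proceeding clockwise."""
    # atan2(dx, dy) puts 0 degrees at "up" and grows clockwise.
    angle = math.degrees(math.atan2(dx, dy)) % 360.0
    wedge = 360.0 / n_items
    # Shift by half a wedge so each slot is centred on its cardinal angle.
    return int(((angle + wedge / 2) % 360.0) // wedge)
```

With eight items, touching the top of the pad selects item 0, the right edge item 2, the bottom item 4, and so on; pairing each slot with an audio cue gives the eyes-free selection loop.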
4.
Interactive Techniques Between Collaborative Handheld Devices and Wall Displays. Schulte, Daniel Leon. 12 August 2013.
Handheld device users want to work collaboratively with other handheld device users on large wall-sized displays. However, no software frameworks exist to support this type of collaborative activity. This thesis introduces a collaborative application framework that allows users to collaborate with each other across handheld devices and large wall displays. The framework comprises a data synchronization system and a set of generic interactive techniques that applications can use. The data synchronization system keeps data consistent across multiple handheld devices and wall displays, and the interactive techniques enable users to create data items and to form relationships between them. The framework is evaluated by building two sample applications and by conducting a set of interactive user-study tasks. The recorded data show that the framework is easy to extend and that, with minimal training, the generic interactive techniques are easy to learn and effective.
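The synchronization layer such a framework needs can be illustrated with a toy in-memory hub that mirrors every write to all subscribed displays using last-write-wins ordering. Class and method names are hypothetical, not the thesis framework's API:

```python
import itertools

class SyncHub:
    """Toy last-write-wins store sketching how data items created on one
    handheld could be mirrored to every connected wall display."""

    def __init__(self):
        self._clock = itertools.count(1)   # monotonically increasing timestamps
        self._items = {}                   # key -> (timestamp, value)
        self._subscribers = []             # callables notified on every write

    def subscribe(self, callback):
        """Register a display (or device) to be told about every write."""
        self._subscribers.append(callback)

    def put(self, key, value):
        """Store a value and push it to all subscribers."""
        stamp = next(self._clock)
        self._items[key] = (stamp, value)
        for cb in self._subscribers:
            cb(key, value)

    def get(self, key):
        return self._items[key][1]
```

A production framework would add networking, conflict handling for concurrent edits, and per-application data schemas; the hub only shows the fan-out shape of the problem.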
5.
Visualizing complex data: A use case evaluating an interactive visualization about food purchases. Dragomoiris, Lampros. January 2016.
Complex data are stored daily in databases in an unstructured way. Visualization techniques can present such data in a user-friendly, understandable form. This thesis presents the implementation of an interactive visualization tool called Eco Donuts, part of a set of tools, called Ekopanelen, created to visualize complex food data. Eco Donuts presents time-dependent food data ordered into categories, letting users explore their data over time through simple interactions. The thesis documents an exploratory study of how this visualization tool can be used to enhance the user experience and provide insight into complex data. The tool was implemented and evaluated with ten participants, who were asked to complete nine different tasks. The sessions were recorded with a logging system as well as video. The study shows that the proposed tool can visualize complex information in a user-friendly and presentable way.
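The aggregation behind a donut view of categorized purchases can be sketched as follows; the output tuple layout (category, fraction, sweep in degrees) is an assumption for illustration, not the Eco Donuts implementation:

```python
from collections import defaultdict

def donut_segments(purchases):
    """Turn (category, amount) purchase records into
    (category, fraction, sweep_degrees) segments for a donut chart."""
    totals = defaultdict(float)
    for category, amount in purchases:
        totals[category] += amount
    grand = sum(totals.values())
    # Each category's angular sweep is proportional to its share of spending.
    return [(c, t / grand, 360.0 * t / grand)
            for c, t in sorted(totals.items())]
```

Filtering the input records by month before aggregating would give the time-dependent view the tool exposes through its interactions.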
6.
Live Surface. Armstrong, Christopher J. 24 February 2007.
Live Surface allows users to segment and render complex surfaces from 3D image volumes at interactive (sub-second) rates using a novel Cascading Graph Cut (CGC). Live Surface consists of two phases: (1) preprocessing, which generates a complete 3D watershed hierarchy and then tracks all catchment-basin surfaces; and (2) user interaction, in which, with each mouse movement, the 3D object is selected and rendered in real time. Real-time segmentation is accomplished by cascading through the 3D watershed hierarchy from the top, applying graph cut at each level only to catchment basins bordering the surface segmented at the previous level. CGC allows the entire image volume to be segmented an order of magnitude faster than existing techniques that use graph cut. OpenGL rendering displays and updates the segmented surface at interactive rates. The user selects objects by tagging voxels with either (object) foreground or background seeds, which can be placed on image cross-sections or directly on the 3D rendered surface. Interacting with the rendered surface improves the user's ability to steer the segmentation, augmenting or subtracting from the current selection. Segmentation and rendering combined take about 0.5 seconds, allowing 3D surfaces to be displayed and updated dynamically as each additional seed is deposited. The immediate feedback of Live Surface supports an interaction paradigm for 3D image volumes similar to the Live Wire (Intelligent Scissors) tool used in 2D images.
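The cascading idea, rerunning the expensive cut only on catchment basins that border the coarser level's segmentation, can be sketched over an abstract region hierarchy. Here `classify` stands in for the per-level graph cut and `on_boundary` for the border test; all names are illustrative, not the thesis API:

```python
def cascading_cut(frontier, children, on_boundary, classify):
    """Walk a watershed hierarchy top-down. Regions on the boundary of the
    current segmentation are replaced by their finer sub-regions and re-cut
    at the next level; interior regions keep their coarse label, which is
    what makes the cascade cheap compared with cutting the full volume."""
    selected = set()
    frontier = list(frontier)
    while frontier:
        next_frontier = []
        for region in frontier:
            kids = children.get(region, [])
            if kids and on_boundary(region):
                next_frontier.extend(kids)   # refine boundary regions only
            elif classify(region):
                selected.add(region)         # accept at the coarse level
        frontier = next_frontier
    return selected
```

In the real system the regions are catchment basins with adjacency graphs and the classifier is a min-cut solve, but the level-by-level pruning is the source of the order-of-magnitude speedup claimed above.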