11 |
Focus and Context Methods for Particle-Based Data
Staib, Joachim, 18 February 2019
Particle-based models play a central role in many simulation techniques used for example in thermodynamics, molecular biology, material sciences, or astrophysics. Such simulations are carried out by directly calculating interactions on a set of individual particles over many time steps. Clusters of particles form higher-order structures like drops or waves.
The interactive visual inspection of particle datasets allows researchers to gain in-depth insight, especially during initial exploration tasks. However, their visualization is challenging in many ways. Visualizations are required to convey structures and dynamics on multiple levels, such as per particle or per structure. Structures are typically dense, highly dynamic over time, and thus prone to heavy occlusion. Furthermore, as simulation systems become increasingly powerful, the number of particles per time step grows steadily, reaching data sets of trillions of particles. This enormous amount of data is challenging not only from a computational perspective but also in terms of comprehensibility.
In this work, the idea of Focus+Context is applied to particle visualizations. Focus+Context is based on presenting a selection of the data – the focus – in high detail, while the remaining data – the context – is shown in reduced detail within the same image. This enables efficient and scalable visualizations that retain as much relevant information as possible while remaining comprehensible to a human researcher. Based on a formulation of the most critical challenges, various novel methods for the visualization of static and dynamic 3D and nD particle data are introduced. A new approach that builds on global illumination and extended transparency makes it possible to visualize otherwise occluded structures and to steer visual saliency towards selected elements. To address the time-dependent nature of particle data, Focus+Context is then extended to time: an illustration-inspired visualization supports the researcher in assessing the dynamics of higher-order particle structures. To reveal correlations and structures in higher-dimensional data, a new method based on the idea of depth of field is presented.
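The core focus+context idea of this abstract can be pictured as a per-particle transparency assignment: particles inside a selected focus region keep full opacity while context particles are strongly faded, steering saliency toward the selection. The following is only a minimal sketch of the principle, not the thesis's global-illumination method; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def focus_context_opacity(positions, focus_center, focus_radius,
                          focus_alpha=1.0, context_alpha=0.15):
    """Per-particle opacity: full detail inside the focus region,
    strongly increased transparency for the context."""
    dist = np.linalg.norm(positions - focus_center, axis=1)
    return np.where(dist <= focus_radius, focus_alpha, context_alpha)

# Example: one million random particles, focus on a sphere at the center
pts = np.random.rand(1_000_000, 3) * 100.0
alpha = focus_context_opacity(pts, focus_center=np.array([50.0, 50.0, 50.0]),
                              focus_radius=10.0)
```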
|
12 |
Interactive rendering techniques for focus+context visualization of 3D geovirtual environments
Trapp, Matthias, January 2013
This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments, such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and by high geometric and textural complexity. For these reasons, their interactive 3D rendering represents a major challenge. Their 3D depiction suffers from a number of weaknesses, such as occlusions, cluttered image contents, and underutilization of the available screen space.
To overcome these limitations and thus facilitate the effective communication of geo-information, principles of focus+context visualization can be used in the design of real-time 3D rendering techniques for 3D geovirtual environments. In general, detailed views of a 3D geovirtual environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for the rasterization of computer graphics primitives are used. For this purpose, the design and implementation of appropriate data structures and rendering pipelines are necessary. The contribution of this work comprises the following five real-time rendering methods (a minimal sketch of the underlying focus/context compositing follows the list):
• The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized and fragment-precise clipping approach, which uses a compressible, raster-based data structure. It enables the combination of detailed views in the focus area with the representation of abstracted variants in the context area.
• The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. It enables a flexible combination of different raster layers (e.g., aerial images or videos) using projective texturing for decoupling image and geometry data. Thus, various overlapping and nested 2D surface lenses of different contents can be visualized interactively.
• The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time image synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye lenses and the combination of planar and non-planar projections (a toy sketch of a cylindrical mapping appears at the end of this entry).
• The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on the application of global deformations to the 3D scene geometry, can be used to synthesize interactive panorama maps that combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, increases the usage of the available screen space, and reduces the overload of image contents.
• The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception.
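As referenced above, the seamless per-image combination of focus and context can be pictured as a per-pixel blend of two renderings of the same scene, controlled by a lens mask. A minimal CPU-side sketch follows; the thesis itself targets GPU rasterization pipelines, and all array names and shapes here are illustrative assumptions.

```python
import numpy as np

def composite_focus_context(focus_img, context_img, lens_mask):
    """Blend a detailed focus rendering with an abstracted context
    rendering using a lens mask in [0, 1] (1 = fully in focus).
    Images are HxWx3 float arrays; the mask is HxW."""
    m = lens_mask[..., None]              # broadcast the mask over RGB
    return m * focus_img + (1.0 - m) * context_img

# Example with placeholder 4x4 images: left half focus, right half context
h, w = 4, 4
focus = np.ones((h, w, 3))                # stand-in for the detailed view
context = np.zeros((h, w, 3))             # stand-in for the abstracted view
mask = np.zeros((h, w))
mask[:, : w // 2] = 1.0
out = composite_focus_context(focus, context, mask)
```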
The concepts and implementations of interactive image synthesis for focus+context visualization, and their selected applications, enable a more effective communication of spatial information and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
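As a toy illustration of the non-planar projections mentioned in the third bullet above, a cylindrical projection can be sketched by mapping a normalized view direction to an azimuth angle around the vertical axis. The conventions assumed here (y up, -z forward) are mine; this is not the image-based deformation pipeline of the thesis.

```python
import numpy as np

def cylindrical_project(direction, fov_h=np.pi, fov_v=np.pi / 2.0):
    """Map a normalized view-space direction (y up, -z forward) to
    cylindrical image coordinates in [-1, 1]^2: azimuth around the
    vertical axis horizontally, scaled height vertically."""
    x, y, z = direction
    theta = np.arctan2(x, -z)                      # azimuth on the cylinder
    u = theta / (fov_h / 2.0)
    v = y / (np.hypot(x, z) * np.tan(fov_v / 2.0))
    return u, v

d = np.array([0.5, 0.2, -1.0])
u, v = cylindrical_project(d / np.linalg.norm(d))
```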
|
13 |
A Content-Aware Design Approach to Multiscale Navigation
Pindat, Cyprien, 20 December 2013
Computer screens are very small compared to the large information spaces that arise in many domains. Visualizing such datasets requires multiscale navigation capabilities, enabling users to switch between zoomed-in detailed views and zoomed-out contextual views of the data. Designing interfaces that allow users to quickly identify objects of interest, get detailed views of those objects, relate them, and put them in a broader spatial context raises challenging issues. Multiscale interfaces have been the focus of much research effort over the last twenty years.

There are several design approaches to multiscale navigation. In this thesis, we review and categorize these approaches according to their level of content awareness. We identify two main approaches: content-driven, which optimizes interfaces for navigation in specific content, and content-agnostic, which applies to any type of data. We introduce the content-aware design approach, which dynamically adapts the interface to the content and can be used to design multiscale navigation techniques in both 2D and 3D spaces.

We introduce Arealens and Pathlens, two content-aware fisheye lenses that dynamically adapt their shape to the underlying content to better preserve the visual appearance of objects of interest. We describe the techniques and their implementation, and report on a controlled experiment that evaluates the usability of Arealens compared to regular fisheye lenses, showing clear performance improvements with the new technique for a multiscale visual search task. We introduce a new distortion-oriented presentation library enabling the design of fisheye lenses with several foci of arbitrary shapes. Finally, we introduce Gimlens, a multi-view detail-in-context visualization technique that enables users to navigate complex 3D models by drilling holes into their outer layers to reveal objects buried in the scene. Gimlens adapts to the geometry of objects of interest so as to better manage visual occlusion, selection mechanisms, and the coordination of lenses.
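The fisheye lenses that Arealens and Pathlens build upon magnify a region around a focus point and compress its surroundings. The sketch below follows the classic Sarkar-Brown distortion profile as an assumed baseline; it deliberately omits the content-aware shape adaptation that is the contribution of this thesis.

```python
import numpy as np

def fisheye(points, focus, radius, d=4.0):
    """Sarkar-Brown style graphical fisheye: points within `radius`
    of `focus` are pushed outward, magnifying the focus region,
    while points outside the lens stay in place. `points` is Nx2
    and `d` controls the distortion strength."""
    offsets = points - focus
    dist = np.linalg.norm(offsets, axis=1, keepdims=True)
    x = np.clip(dist / radius, 0.0, 1.0)            # normalized lens distance
    g = (d + 1.0) * x / (d * x + 1.0)               # classic distortion profile
    new_dist = np.where(x < 1.0, g * radius, dist)  # unchanged outside the lens
    return focus + offsets * (new_dist / np.maximum(dist, 1e-9))

pts = np.array([[0.1, 0.0], [0.5, 0.5], [2.0, 0.0]])
warped = fisheye(pts, focus=np.array([0.0, 0.0]), radius=1.0)
```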
|
14 |
Cutaway Algorithm with Context Preservation for Reservoir Model Visualization
Luiz Felipe Netto, 11 January 2017
Numerical simulation of black-oil reservoirs is widely used in the oil and gas industry. The reservoir is represented by a model of hexahedral cells with associated properties, and numerical simulation is used to predict the fluid behavior in the model. Specialists analyze such simulations by inspecting the three-dimensional model in an interactive graphical environment. In this work, we propose a new cutaway algorithm with context preservation to aid the inspection of the model. The main goal is to allow the specialist to visualize the wells and their vicinity. The wells represent the object of interest, which must be visible, while the three-dimensional model (the context) is preserved in the vicinity as far as possible. In this way, it is possible to visualize the distribution of cell properties together with the object of interest. The proposed algorithm makes use of graphics processing units and is valid for arbitrary objects of interest. We also propose an extension of the algorithm that decouples the cut section from the camera, allowing analysis of the cut model from different points of view. The effectiveness of the proposed algorithm is demonstrated by a set of results based on actual reservoir models.
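The cutaway principle can be pictured as an image-space depth test: a context fragment is discarded when it lies in front of the object of interest at the same pixel. The sketch below assumes precomputed depth images and ignores the widening of the cut region around the wells that a practical context-preserving cutaway performs; it illustrates the idea, not the GPU algorithm of the thesis.

```python
import numpy as np

def cutaway_keep_mask(context_depth, interest_depth, eps=1e-3):
    """Boolean mask of context fragments to keep. A fragment is cut
    away when it would occlude the object of interest, i.e. it is
    closer to the camera than the interest surface at that pixel.
    Pixels where the interest object does not project (depth = inf)
    keep all context fragments."""
    occludes = np.isfinite(interest_depth) & (context_depth < interest_depth - eps)
    return ~occludes

# Example: a 1D slice of depth buffers (smaller = closer to the camera);
# the well projects onto the two middle pixels
ctx = np.array([0.2, 0.4, 0.9, 0.5])
well = np.array([np.inf, 0.6, 0.6, np.inf])
keep = cutaway_keep_mask(ctx, well)   # -> [True, False, True, True]
```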
|
15 |
Occlusion Management in Conventional and Head-Mounted Display Visualization through the Relaxation of the Single Viewpoint/Timepoint Constraint
Meng-Lin Wu, 16 August 2019
In conventional computer graphics and visualization, images are synthesized following the planar pinhole camera (PPC) model. The PPC approximates physical imaging devices such as cameras and the human eye, which sample the scene with linear rays that originate from a single viewpoint, i.e., the pinhole. In addition, the PPC takes a snapshot of the scene, sampling it at a single instant in time, or timepoint, for each image. Images synthesized with these single-viewpoint and single-timepoint constraints are familiar to the user, as they emulate images captured with cameras or perceived by the human visual system. However, visualization using the PPC model suffers from the limitation of occlusion, when a region of interest (ROI) is not visible due to obstruction by other data. The conventional solution to the occlusion problem is to rely on the user to change the view interactively to gain line of sight to the scene ROIs. This approach of sequential navigation has the shortcomings of (1) inefficiency, as navigation is wasted when circumventing an occluder does not reveal an ROI, (2) inefficacy, as a moving or transient ROI can hide or disappear before the user reaches it, or as scene understanding requires visualizing multiple distant ROIs in parallel, and (3) user confusion, as back-and-forth navigation for systematic scene exploration can hinder spatio-temporal awareness.

In this thesis we propose a novel paradigm for handling occlusions in visualization based on generalizing an image to incorporate samples from multiple viewpoints and multiple timepoints. The image generalization is implemented at the camera-model level, by removing the single-timepoint restriction and by removing the linear-ray restriction, allowing for curved rays that are routed around occluders to reach distant ROIs. The paradigm offers the opportunity to greatly increase the information bandwidth of images, which we have explored in the context of both desktop and head-mounted display visualization, as needed in virtual and augmented reality applications. The challenges of multi-viewpoint, multi-timepoint visualization are (1) routing the non-linear rays to find all ROIs or to reach all known ROIs, (2) making the generalized image easy to parse by enforcing spatial and temporal continuity and non-redundancy, (3) rendering the generalized images quickly, as required by interactive applications, and (4) developing algorithms and user interfaces for the intuitive navigation of the compound cameras with tens of degrees of freedom. We have addressed these challenges (1) by developing a multiperspective visualization framework based on a hierarchical camera model with PPC and non-PPC leaves, (2) by routing multiple inflection-point rays with direction coherence, which enforces visualization continuity, and without intersection, which enforces non-redundancy, (3) by designing our hierarchical camera model to provide closed-form projection, which enables porting generalized image rendering to the traditional and highly efficient projection-followed-by-rasterization pipeline implemented by graphics hardware, and (4) by devising naturalistic user interfaces based on tracked head-mounted displays that allow deploying and retracting the additional perspectives intuitively and without simulator sickness.
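A ray with a single inflection point, as used for routing around occluders, can be pictured as two line segments joined at an intermediate point. The sketch below merely samples such a bent ray; choosing the inflection points under the direction-coherence and non-intersection constraints described above is the hard part and is not shown. All names and coordinates are illustrative.

```python
import numpy as np

def bent_ray_points(eye, inflection, target, n=16):
    """Sample a two-segment ray eye -> inflection -> target. The
    inflection point lets the ray reach a region of interest that
    a straight eye -> target ray could not see."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    seg1 = (1.0 - t) * eye + t * inflection
    seg2 = (1.0 - t) * inflection + t * target
    return np.vstack([seg1, seg2[1:]])     # drop the duplicated joint

eye = np.array([0.0, 0.0, 0.0])
roi = np.array([4.0, 0.0, -8.0])           # hidden behind an occluder
via = np.array([2.5, 1.5, -4.0])           # joint that routes above it
ray = bent_ray_points(eye, via, roi)
```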
|
16 |
Comparison and combination of visual and audio renderings for the design of human-computer interfaces: from human factors to distortion-based presentation strategies
Bouchara, Tifanie, 29 October 2012
Although more and more sound and audiovisual data are available, the majority of interfaces for accessing them rely solely on visual presentation. Many visualization techniques have been proposed that present multiple documents simultaneously and use distortions to highlight the most relevant information. We propose to define equivalent audio techniques for the presentation of several competing sound files, and to optimally combine such audio and visual presentation strategies for multimedia documents. To better adapt these strategies to the user, we studied the attentional and perceptual processes involved in listening to and watching simultaneous audiovisual objects, focusing on the interactions between the two modalities.

Combining the parameters of visual size and sound level, we extended the visual concept of the magnifying lens, used in visual focus+context methods, to the auditory and audiovisual modalities. Exploiting this concept, we developed an application for navigating a collection of video documents. We compared our tool with another rendering mode called Pan&Zoom through a usability study. The results, especially the subjective ones, encourage further research into multimodal presentation strategies that add an audio rendering to the visual renderings already available.

A second study concerned the identification of environmental sounds in a noisy environment in the presence of a visual context. The noise simulated the presence of multiple competing sound sources, as would occur in an interface where several audio and audiovisual documents are presented together. The results of this experiment confirmed the advantage of multimodality under degraded audio conditions. Moreover, beyond the primary goals of the thesis, the study confirmed the importance of semantic congruency between the visual and auditory components for object recognition and deepened our knowledge of the auditory perception of environmental sounds.

Finally, we investigated the attentional processes involved in searching for one object among many, in particular the "pop-out" phenomenon whereby a salient object automatically attracts attention. In vision, a sharp object among blurred objects attracts attention, and some visual presentation strategies already exploit this parameter. We therefore extended the notion of blur to the auditory and audiovisual modalities by analogy. A series of perceptual experiments confirmed that a sharp object among blurred objects attracts attention, regardless of the modality. Search and identification are accelerated when the sharpness cue is applied to the target, but slowed when it is applied to a distractor, revealing an involuntary guidance effect. Concerning crossmodal interaction, a redundant combination of audio and visual blur proved even more effective than a unimodal presentation. The results also indicate that an optimal combination does not necessarily require applying a distortion to both modalities.
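One plausible reading of the audio magnifying lens that combines visual size and sound level is a mix in which the focused source keeps full level while competing context sources are attenuated. The sketch below is that analogy only; the gain values, the normalization, and the mixing rule are my assumptions, not the mappings evaluated in the thesis.

```python
import numpy as np

def audio_focus_mix(sources, focus_index, focus_gain=1.0, context_gain=0.25):
    """Mix competing mono sources with an 'audio lens': the focused
    source keeps full level while context sources are attenuated,
    an auditory analogue of visual magnification."""
    mix = np.zeros_like(sources[0])
    for i, src in enumerate(sources):
        gain = focus_gain if i == focus_index else context_gain
        mix += gain * src
    return mix / max(len(sources), 1)      # crude normalization of the sum

# Example: three 1-second sine-tone "documents" at 44.1 kHz, focus on the second
sr = 44100
t = np.arange(sr) / sr
sources = [np.sin(2.0 * np.pi * f * t) for f in (220.0, 440.0, 660.0)]
out = audio_focus_mix(sources, focus_index=1)
```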
|
17 |
Interactive Visual Clutter Management in Scientific Visualization
Tong, Xin, January 2016
No description available.
|