  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

High-resolution splatting

Kulka, Peter January 2001 (has links)
Volume rendering is a research area within scientific visualisation, where images are computed from volumetric data sets for visual exploration. Such data sets are typically generated by Computed Tomography, Magnetic Resonance Imaging or Positron Emission Tomography, or gained from simulations. The data sets are usually interpreted using optical models that assign optical properties to the volume and define the illumination and shading behaviour. Volume rendering techniques may be divided into three classes: object-order, image-order and hybrid methods. Image-order, or ray casting, methods shoot rays from the view plane into the volume and simulate the variation of light intensities along those rays. Object-order techniques traverse the volume data set and project each volume element onto the view plane. Hybrid volume rendering techniques combine these two approaches. A very popular object-order rendering method is called splatting. This technique traverses the volume data set and projects the optical properties of each volume element onto the view plane. This thesis consists of two parts. The first part introduces two new splatting methods, collectively called high-resolution splatting, which are based on standard splatting. Both high-resolution splatting methods correct errors of standard splatting through substantial modifications. We propose the first method, called fast high-resolution splatting, as an alternative to standard splatting. It may be used for quick previewing, since it is faster than standard splatting and the resulting images are significantly sharper. Our second method, called complete high-resolution splatting, improves the volume reconstruction, which results in images that are very close to those produced by ray casting methods. The second part of the thesis incorporates wavelet analysis into high-resolution splatting. 
Wavelet analysis is a mathematical theory that decomposes volumes into multi-resolution hierarchies, which may be used to find coherence within volumes. The combination of wavelets with the high-resolution splatting method has two advantages. Firstly, the extended splatting method, called high-resolution wavelet splatting, can be applied directly to wavelet-transformed volume data sets without performing an inverse transform. Secondly, when visualising wavelet-compressed volumes, only a small fraction of the wavelet coefficients needs to be projected. For all three versions of the new high-resolution splatting method, complexity analyses, comprehensive error and performance analyses, as well as implementation details, are discussed.
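The object-order splatting idea this abstract describes — traverse the volume and project each voxel's contribution onto the view plane through a small footprint — can be sketched in a few lines. This is an illustrative toy, not the thesis's algorithm: the orthographic z-axis projection and the Gaussian footprint parameters are assumptions.

```python
import numpy as np

def splat_volume(volume, kernel_radius=1, sigma=0.8):
    """Render a volume by orthographic splatting along the z axis.

    Each voxel's value is spread onto the image plane through a small
    Gaussian footprint (the "splat") and accumulated additively.
    """
    nz, ny, nx = volume.shape
    image = np.zeros((ny, nx))
    # Precompute the 2D Gaussian footprint shared by all voxels.
    r = kernel_radius
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    footprint = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    footprint /= footprint.sum()
    for z in range(nz):                      # object-order traversal
        for y in range(ny):
            for x in range(nx):
                v = volume[z, y, x]
                if v == 0.0:
                    continue                 # skip empty voxels
                # Clip the footprint against the image borders.
                y0, y1 = max(0, y - r), min(ny, y + r + 1)
                x0, x1 = max(0, x - r), min(nx, x + r + 1)
                fy0, fx0 = y0 - (y - r), x0 - (x - r)
                image[y0:y1, x0:x1] += v * footprint[fy0:fy0 + (y1 - y0),
                                                     fx0:fx0 + (x1 - x0)]
    return image
```

A real splatter would also composite front-to-back with opacity rather than accumulate additively; the sketch only shows the traversal-and-project structure.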
22

Performance Modeling of In Situ Rendering

Larsen, Matthew 01 May 2017 (has links)
With the push to exascale, in situ visualization and analysis will play an increasingly important role in high performance computing. Tightly coupling in situ visualization with simulations constrains resources for both, and these constraints force a complex balance of trade-offs. A performance model that provides an a priori answer for the cost of using an in situ approach for a given task would assist in managing the trade-offs between simulation and visualization resources. In this work, we present new statistical performance models, based on algorithmic complexity, that accurately predict the run-time cost of a set of representative rendering algorithms, an essential in situ visualization task. To train and validate the models, we create data-parallel rendering algorithms within a light-weight in situ infrastructure, and we conduct a performance study of an MPI+X rendering infrastructure used in situ with three HPC simulation applications. We then explore feasibility issues using the model for selected in situ rendering questions.
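A statistical performance model of the kind described — run-time predicted a priori from complexity-derived features — can be illustrated with a least-squares fit. This is a minimal sketch under assumed features (e.g. pixel count, primitive count), not the models from the dissertation.

```python
import numpy as np

def fit_render_cost_model(features, times):
    """Fit a linear run-time model  t ≈ w·x + b  over complexity features.

    `features` is an (n_samples, n_features) array of per-run measurements
    such as output pixel count or primitive count; `times` are observed
    run times. Returns the weight vector with the intercept appended.
    """
    X = np.column_stack([features, np.ones(len(times))])
    coeffs, *_ = np.linalg.lstsq(X, times, rcond=None)
    return coeffs

def predict_cost(coeffs, feature_row):
    """A priori run-time prediction for an unseen configuration."""
    return float(np.dot(coeffs[:-1], feature_row) + coeffs[-1])
```

In practice the features would come from the rendering algorithm's complexity analysis, and the model would be validated against held-out runs, as the abstract describes.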
23

Naturlig haptisk kraftåterkoppling från volymdata / Natural haptic feedback from volumetric density data

Lundin (Palmerius), Karljohan January 2001 (has links)
As volumes enter the world of computer graphics, pure volume visualisation becomes an increasingly important tool in, for example, research and medical applications. The advance of haptics --- force feedback from the computer --- lags behind, however. In volume haptics, no equivalent of the proxy method so popular in surface haptics has yet emerged; some implementations of volume haptics even use surfaces as intermediate representations so that surface haptics can be applied. The intention of this work was to create natural-feeling haptic feedback from volumetric density data using pure volume haptics. The haptic algorithm was implemented in the Reachin API for the Reachin Desktop Display, together with other parts needed to build a usable volume visualisation environment. To achieve a feeling of stiffness and friction dependent on tissue type, a proxy-based method was developed: in the volume, the proxy is constrained by virtual surfaces defined by the local gradient. This algorithm was implemented in a volume haptics node, and a volume renderer node was implemented for visualisation. These nodes can be used to set up different volume visualisation environments using VRML.
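The core of a gradient-constrained proxy method can be sketched as one update step: the proxy follows the probe freely within the virtual surface (the plane orthogonal to the local gradient) but is held back along the gradient, and a spring between proxy and probe produces the feedback force. This is a hypothetical sketch, not the thesis's Reachin implementation; the stiffness value and update rule are assumptions.

```python
import numpy as np

def proxy_haptic_force(probe, proxy, gradient, stiffness=200.0):
    """One step of a gradient-constrained proxy update.

    The local gradient defines a virtual surface normal. The proxy moves
    with the tangential component of the probe displacement only, so the
    spring force resists motion along the gradient, giving a surface-like
    feel inside a pure volume.
    """
    g = np.asarray(gradient, dtype=float)
    n = g / (np.linalg.norm(g) + 1e-12)          # surface normal from gradient
    d = np.asarray(probe, dtype=float) - np.asarray(proxy, dtype=float)
    tangential = d - np.dot(d, n) * n            # free motion in the surface
    new_proxy = np.asarray(proxy, dtype=float) + tangential
    force = stiffness * (new_proxy - np.asarray(probe, dtype=float))
    return new_proxy, force
```

Pushing the probe straight along the gradient therefore yields a pure restoring force, while sliding sideways yields none — the stiffness-and-friction behaviour the abstract aims for would modulate these terms by tissue type.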
24

A rendering method for simulated emission nebulae

Carlson, Adam January 2011 (has links)
Emission nebulae are some of the most beautiful stellar phenomena. The newly formed hot stars inside a nebula ionize the surrounding gas, making it glow in a variety of colors. The focus of this work is to find a method for interactive rendering of simulated emission nebulae. A rendering program has been developed to generate and render nebulae. The emission light color is evaluated as a function of the accumulated density between the gas and the ionizing star. The program can render a large variety of nebulae from any viewpoint with interactive performance on PC hardware, and the method proposed in this work is visually faithful to real nebulae.
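The key idea — color as a function of density accumulated between a gas sample and the ionizing star — can be sketched as a line integral through the density grid plus a toy color ramp. The sampling scheme and the red-to-blue ramp are illustrative assumptions, not the thesis's calibrated emission model.

```python
import numpy as np

def accumulated_density(density, point, star, steps=64):
    """Integrate density along the segment from a gas sample to the star."""
    total = 0.0
    for t in np.linspace(0.0, 1.0, steps):
        p = point + t * (star - point)
        # Nearest-voxel lookup, clamped to the grid bounds.
        i, j, k = np.clip(np.round(p).astype(int), 0,
                          np.array(density.shape) - 1)
        total += density[i, j, k]
    return total * np.linalg.norm(star - point) / steps

def emission_color(tau):
    """Map accumulated optical depth to an RGB emission color (toy ramp)."""
    a = np.exp(-tau)               # attenuation of the ionizing radiation
    return np.array([1.0 - a, 0.2 * a, a])
```

Gas with an unobstructed view of the star (tau near 0) comes out blue-tinted, while heavily shielded gas shifts red — a stand-in for the ionization-dependent line emission the abstract describes.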
25

Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

Sicat, Ronell Barrera 25 November 2015 (has links)
The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps, and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has some shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings. Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to any coarse resolution levels in standard representations. Particularly, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. 
These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel and voxel footprints in input images and volumes. We show that the continuous pdfs encoded in the sparse pdf map representation enable accurate multi-resolution non-linear image operations on gigapixel images. Similarly, we show that sparse pdf volumes enable more consistent multi-resolution volume rendering compared to standard approaches, on both artificial and real world large-scale volumes. The supplementary videos demonstrate our results. In the standard approach, users heavily rely on panning and zooming interactions to navigate the data within the limits of their display devices. However, panning across the whole spatial domain and zooming across all resolution levels of large-scale images to search for interesting regions is not practical. Assisted exploration techniques allow users to quickly narrow down millions to billions of possible regions to a more manageable number for further inspection. However, existing approaches are not fully user-driven because they typically already prescribe what being of interest means. To address this, we introduce the patch sets representation for large-scale images. Patches inside a patch set are grouped and encoded according to similarity via a permutohedral lattice (p-lattice) in a user-defined feature space. Fast set operations on p-lattices facilitate patch set queries that enable users to describe what is interesting. In addition, we introduce an exploration framework—GigaPatchExplorer—for patch set-based image exploration. We show that patch sets in our framework are useful for a variety of user-driven exploration tasks in gigapixel images and whole collections thereof.
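The non-commutativity problem the dissertation addresses can be demonstrated in a few lines: applying a non-linear operation to a pre-filtered (mipmapped) coarse level does not equal pre-filtering the operated-on full-resolution data. The 2x2 box filter and the squaring operation below are illustrative stand-ins for a mipmap pre-filter and a non-linear image operation.

```python
import numpy as np

def downsample2(img):
    """2x2 box-filter downsampling, the usual mipmap pre-filter."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

# A non-linear per-pixel operation, e.g. a gamma-like mapping.
op = lambda x: x ** 2

rng = np.random.default_rng(0)
img = rng.random((8, 8))

coarse_then_op = op(downsample2(img))   # what a standard mipmap gives you
op_then_coarse = downsample2(op(img))   # the correct coarse-level result
error = np.abs(coarse_then_op - op_then_coarse).max()
```

The two results differ whenever a 2x2 block is non-constant (by Jensen's inequality for the convex op), which is exactly the inaccuracy that encoding per-footprint pdfs, as in sparse pdf maps, is designed to avoid.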
26

Multigraph visualization for feature classification of brain network data

Wang, Jiachen 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A multigraph is a set of graphs with a common set of nodes but different sets of edges. Multigraph visualization has not received much attention so far. In this thesis, I introduce an interactive application in brain network data analysis that has a strong need for multigraph visualization. For this application, a multigraph was used to represent the brain connectome networks of multiple human subjects. A volumetric data set was constructed from the matrix representation of the multigraph, and a volume visualization tool was then developed to help the user interactively and iteratively detect network features that may contribute to certain neurological conditions. I applied this technique to a brain connectome dataset for feature detection in the classification of Alzheimer's disease (AD) patients. Preliminary results showed significant improvements when interactively selected features were used.
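The construction the abstract mentions — a volumetric data set built from the matrix representation of a multigraph — amounts to stacking one adjacency matrix per subject along a third axis, so standard volume tools apply. A minimal sketch (the stacking order and dtype are assumptions):

```python
import numpy as np

def multigraph_volume(adjacency_list):
    """Stack per-subject adjacency matrices into an (n_subjects, n, n) volume.

    All graphs in a multigraph share the node set, so every matrix must
    have the same shape; axis 0 then indexes subjects, and a voxel
    (s, i, j) holds the edge weight between nodes i and j for subject s.
    """
    mats = [np.asarray(a, dtype=float) for a in adjacency_list]
    n = mats[0].shape[0]
    if not all(m.shape == (n, n) for m in mats):
        raise ValueError("multigraph requires a common node set")
    return np.stack(mats, axis=0)
```

Slicing the volume along axis 0 recovers one subject's network; slicing along the other axes exposes how a single edge varies across subjects, which is the kind of feature a classifier could use.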
27

Real-time Rendering of Burning Objects in Video Games

Amarasinghe, Dhanyu Eshaka 08 1900 (has links)
In recent years there has been growing interest in ever greater realism in computer graphics applications. Among these, my foremost concentration falls on complex physical simulation and modeling, with diverse applications in the gaming industry. Various simulations have succeeded by replicating the details of a physical process; as a result, some are convincing enough to lure the user into believable virtual worlds. In this research, I focus on fire simulation and its deformation of various virtual objects. In most game engines, model loading takes place at the beginning of the game or when the game transitions between levels, and game models are stored in large data structures. Changing or adjusting a large data structure while the game is running may adversely affect performance, so developers may choose to avoid procedural simulations to save resources and avoid interruptions to performance. I introduce a process to implement real-time model deformation while maintaining performance. It is a challenging task to achieve high-quality simulation while utilizing minimal resources to represent multiple events in a timely manner; especially in video games, the method must be robust enough to sustain the player's willing suspension of disbelief. I have implemented and tested my method on a relatively modest GPU using CUDA. My experiments conclude that this method gives a believable visual effect while using a small fraction of CPU and GPU resources.
28

Parzsweep: A Novel Parallel Algorithm for Volume Rendering of Regular Datasets

Ramswamy, Lakshmy 10 May 2003 (has links)
The sweep paradigm for volume rendering has previously been applied successfully to irregular grids. This thesis describes a parallel volume rendering algorithm for regular grids, called PARZSweep, that utilizes the sweep paradigm. In the sweep paradigm, a plane sweeps through the data volume parallel to the viewing direction. As the sweep proceeds in order of increasing z, the faces incident on each vertex are projected onto the view plane and contribute to the image. The sweep ensures that all faces are projected in the correct order, so the resulting image is very accurate in its details. PARZSweep is an extension of a serial algorithm for regular grids called RZSweep. The hypothesis of this research is that a parallel version of RZSweep can be designed and implemented that uses multiple processors to reduce rendering times. PARZSweep follows an approach called image-based task scheduling, or tiling: it divides image space into tiles and allocates each tile to a processor for individual rendering, and the subimages are then composited to form the final image. PARZSweep uses a shared-memory architecture in order to take advantage of inherent cache coherency for faster communication between processors. Experiments compared RZSweep and PARZSweep with respect to prerendering times, rendering times and image quality: the two have approximately the same prerendering costs and produce exactly the same images, and PARZSweep substantially reduces rendering times. PARZSweep was also evaluated for scalability with respect to the number of tiles and the number of processors; scalability results were disappointing due to uneven data distribution.
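The image-based task scheduling (tiling) step described here is easy to sketch: partition the image plane into tiles and hand them out to processors. The round-robin assignment below is an illustrative policy, not necessarily the one PARZSweep uses; note that a static scheme like this is exactly where the abstract's uneven-data-distribution problem would show up.

```python
def tile_image(width, height, tile_w, tile_h):
    """Partition the image plane into (x, y, w, h) tiles for per-tile rendering.

    Edge tiles are clipped, so the tiling covers the image exactly even
    when the tile size does not divide the image size.
    """
    tiles = []
    for y0 in range(0, height, tile_h):
        for x0 in range(0, width, tile_w):
            tiles.append((x0, y0,
                          min(tile_w, width - x0),
                          min(tile_h, height - y0)))
    return tiles

def assign_round_robin(tiles, n_procs):
    """Static tile-to-processor assignment: processor p gets every n-th tile."""
    return {p: tiles[p::n_procs] for p in range(n_procs)}
```

Each processor renders its tiles independently, and the subimages are composited into the final image; a dynamic work queue instead of the static assignment would mitigate load imbalance from uneven data.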
29

Visibility acceleration for large-scale volume visualization

Gao, Jinzhu 20 July 2004 (has links)
No description available.
30

Remote user-driven exploration of large scale volume data

Shareef, Naeem O. 14 July 2005 (has links)
No description available.
