
Web-based Stereo Rendering for Visualization and Annotation of Scientific Volumetric Data

Advances in high-throughput microscopy technologies such as Knife-Edge
Scanning Microscopy (KESM) are enabling the production of massive amounts of high-resolution,
high-quality volumetric data of biological microstructures. To be fully
utilized, these data should be efficiently distributed to the scientific research community
over the Internet and should be easy to visualize, annotate, and analyze.
Given the volumetric nature of the data, visualizing them in 3D is important. However,
since we cannot assume that every end user has high-end hardware, an approach
with minimal hardware and software requirements is needed, such as one based on a
standard web browser running on a typical personal computer. There are several web
applications that facilitate the viewing of large collections of images. Google Maps
and Google Maps-like interfaces such as Brainmaps.org allow users to pan and zoom
2D images efficiently. However, they do not yet support the rendering of volumetric
data in their standard web interface.
The goal of this thesis is to develop a lightweight volumetric image viewer using
existing web technologies such as HTML, CSS, and JavaScript, while exploiting the
properties of stereo vision to facilitate the viewing and annotation of volumetric data.
Stereograms were chosen over other techniques because they allow the raw image stacks
produced by the 3D microscope to be used without any extra computation on the data.
Generating stereo images from 2D image stacks involves two operations: distance
attenuation and binocular disparity. Using computationally cheap HTML and JavaScript
operations, both tasks can be accomplished dynamically in a standard web browser,
by overlaying the images with intervening semi-opaque layers.
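
As a concrete illustration, the following sketch shows one way such a view could be assembled in plain JavaScript. It assumes the slice images are served as individual files and that the container element is positioned and sized to fit them; the names (buildEyeView, sliceUrls, eyeSign, maxDisparityPx) and the linear disparity model are illustrative assumptions, not the thesis implementation.

    // Build one eye's view of the stereogram. Slice images are stacked as
    // absolutely positioned layers inside a position:relative container;
    // a semi-opaque "veil" between consecutive slices provides distance
    // attenuation, and a small per-slice horizontal shift provides
    // binocular disparity. eyeSign is -1 for the left view, +1 for the right.
    function buildEyeView(container, sliceUrls, eyeSign) {
      var maxDisparityPx = 8;      // assumed total shift from front to back
      var n = sliceUrls.length;
      // Paint from the deepest slice (z = n - 1) to the nearest (z = 0) so
      // that nearer layers, and their attenuating veils, end up on top.
      for (var z = n - 1; z >= 0; z--) {
        var img = document.createElement('img');
        img.src = sliceUrls[z];
        img.style.position = 'absolute';
        img.style.top = '0px';
        // Deeper slices are shifted farther for this eye (disparity cue).
        img.style.left = (eyeSign * maxDisparityPx * z / (n - 1)) + 'px';
        container.appendChild(img);
        if (z > 0) {
          // Semi-opaque layer covering this slice and everything deeper,
          // so deeper structures appear progressively dimmer.
          var veil = document.createElement('div');
          veil.style.position = 'absolute';
          veil.style.top = '0px';
          veil.style.left = '0px';
          veil.style.width = '100%';
          veil.style.height = '100%';
          veil.style.background = 'rgba(255, 255, 255, 0.15)';
          container.appendChild(veil);
        }
      }
    }

Two such views rendered side by side (eyeSign = -1 and +1) form a stereo pair that the viewer can fuse to perceive depth, with no server-side processing of the raw image stack.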
An annotation framework has also been implemented and tested. For annotation to
work in this environment, the annotations themselves must also take the form of
stereograms and should aid the merging of the stereo pairs. The current technique
allows users to place a mark (dot) on one image stack; its projected position on
the other image stack is calculated dynamically on the client side. Additional
metadata, such as textual descriptions, can be entered by the user as well. To cope
with the occlusion problem caused by changes in the z direction, the structure traced
by the user is displayed alongside the data stacks. Using the same stereogram
creation techniques, the traces made by the user are dynamically generated and shown
as stereograms.
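
A minimal sketch of that client-side projection, assuming the same linear per-slice disparity model as in the sketch above; the function name and parameters are illustrative, not taken from the thesis code.

    // Given a mark placed at pixel (x, y) on slice z in the left-eye view,
    // return where the corresponding mark should appear in the right-eye
    // view so that the fused pair is perceived at the depth of slice z.
    // Only the horizontal parallax changes; the vertical position is kept.
    function projectMarkToRightEye(x, y, z, nSlices, maxDisparityPx) {
      var disparity = 2 * maxDisparityPx * (z / (nSlices - 1));
      return { x: x + disparity, y: y };
    }

    // Example: a dot placed at (120, 85) on slice 10 of a 50-slice stack.
    // var p = projectMarkToRightEye(120, 85, 10, 50, 8);
    // Drawing the original dot in the left view and a dot at (p.x, p.y) in
    // the right view makes the annotation fuse at the same apparent depth
    // as the structure being traced.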
We expect the approach presented in this thesis to be applicable to broader
scientific domains, including geology and meteorology.

Identifier: oai:union.ndltd.org:tamu.edu/oai:repository.tamu.edu:1969.1/ETD-TAMU-2008-12-226
Date: 16 January 2010
Creators: Eng, Daniel C.
Contributors: Choe, Yoonsuck
Source Sets: Texas A and M University
Language: en_US
Detected Language: English
Type: Book, Thesis, Electronic Thesis
Format: application/pdf
