51 |
Theoretical and experimental concepts to increase the performance of structured illumination microscopy / Ströhl, Florian January 2018 (has links)
The aim of the work described in this thesis is to improve the understanding, implementation, and overall capabilities of structured illumination microscopy (SIM). SIM is a superresolution technique that excels at gentle live-cell volumetric imaging. Many modalities of SIM have been developed over the last decade, tailoring SIM into the versatile and powerful technique that it is today. Nevertheless, the field continues to evolve and there is plenty of room for novel concepts. Specifically, in this thesis, a generalised framework for the theoretical description of SIM variants is introduced; the constraints on optical components for a flexible SIM system are discussed and such a set-up is realised; the important aspect of deconvolution in SIM is highlighted and further developed; and finally, novel SIM modalities are introduced that improve its time resolution, gentleness, and volumetric imaging capabilities.
Based on the generalised theory, the computational steps for extracting superresolution information from SIM raw data are outlined, and the essential concept of spatial frequency un-mixing is explained for standard SIM as well as for multifocal SIM. Multifocal SIM hereby acts as both a parallelised confocal and a widefield technique and thus serves as a link between the two modalities. Using this scheme, deconvolution methods for SIM are then further developed to allow a holistic reconstruction procedure. Deconvolution is of great importance in the SIM reconstruction process, and hence rigorous derivations of advanced deconvolution methods are provided and extended to enable generalised ‘multi-image’ Richardson-Lucy deconvolution in SIM, called joint Richardson-Lucy deconvolution (jRL). This approach is demonstrated to robustly produce optically sectioned multifocal SIM images and, through the incorporation of a 3D imaging model, also volumetric standard SIM images within the jRL framework. For standard SIM, this approach enabled a doubling of acquisition speed, because constrained jRL made it possible to recover superresolved images from a reduced number of raw frames. The method is validated in silico and in vitro.
For the study of yet faster-moving samples, deconvolution microscopy is found to be the method of choice. To enable optical sectioning, a key feature of SIM, in deconvolution microscopy, a new modality of optical sectioning microscopy is introduced that can be implemented as a single-shot technique. The theoretical framework, based on polarised excitation and detection in orthogonal directions in conjunction with structured illumination, is rigorously derived and validated.
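To make the joint (multi-image) Richardson-Lucy idea concrete, the following is a minimal sketch of such an update for SIM-like data, assuming a shift-invariant detection PSF and known illumination patterns; the function name, arguments, and initialisation are illustrative assumptions, not code from the thesis.

```python
import numpy as np
from scipy.signal import fftconvolve

def joint_richardson_lucy(frames, patterns, psf, n_iter=50, eps=1e-12):
    """Sketch of a joint (multi-image) Richardson-Lucy deconvolution.

    frames   : list of raw SIM images d_k
    patterns : list of illumination patterns m_k (same shape as the frames)
    psf      : detection point spread function (assumed shift-invariant)
    """
    psf = psf / psf.sum()                      # normalise the PSF
    psf_mirror = psf[::-1, ::-1]               # adjoint of the convolution
    estimate = np.full_like(frames[0], frames[0].mean(), dtype=float)

    for _ in range(n_iter):
        numerator = np.zeros_like(estimate)
        for d_k, m_k in zip(frames, patterns):
            # forward model: illuminate the estimate, then blur with the PSF
            blurred = fftconvolve(m_k * estimate, psf, mode="same")
            ratio = d_k / np.maximum(blurred, eps)
            # back-project the correction of every frame
            numerator += m_k * fftconvolve(ratio, psf_mirror, mode="same")
        denominator = np.maximum(sum(patterns), eps)
        estimate *= numerator / denominator    # multiplicative RL update
    return estimate
```

Each iteration pushes the current estimate through every frame's forward model, compares it with the raw data, and folds all back-projected corrections into a single multiplicative update.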
|
52 |
Generating Light Estimation for Mixed-reality Devices through Collaborative Visual Sensing / January 2018 (has links)
abstract: Mixed reality mobile platforms co-locate virtual objects with physical spaces, creating immersive user experiences. To create visual harmony between virtual and physical spaces, the virtual scene must be accurately illuminated with realistic physical lighting. To this end, a system was designed that Generates Light Estimation Across Mixed-reality (GLEAM) devices to continually sense realistic lighting of a physical scene in all directions. GLEAM optionally operates across multiple mobile mixed-reality devices to leverage collaborative multi-viewpoint sensing for improved estimation. The system implements policies that prioritize resolution, coverage, or update interval of the illumination estimation depending on the situational needs of the virtual scene and physical environment.
To evaluate the runtime performance and perceptual efficacy of the system, GLEAM was implemented on the Unity 3D Game Engine. The implementation was deployed on Android and iOS devices. On these implementations, GLEAM can prioritize dynamic estimation with update intervals as low as 15 ms or prioritize high spatial quality with update intervals of 200 ms. User studies across 99 participants and 26 scene comparisons reported a preference towards GLEAM over other lighting techniques in 66.67% of the presented augmented scenes and indifference in 12.57% of the scenes. A controlled lighting user study on 18 participants revealed a general preference for policies that strike a balance between resolution and update rate. / Dissertation/Thesis / Masters Thesis Computer Science 2018
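As a small illustration of the collaborative multi-viewpoint idea, the sketch below folds radiance samples observed by several devices into one shared cubemap estimate; the data layout, function name, and newest-sample-wins rule are assumptions for illustration only, not GLEAM's actual policies or implementation.

```python
import numpy as np

def merge_viewpoint_samples(cubemap, samples):
    """Fold radiance samples from several devices into one shared cubemap.
    Each sample is (face, u, v, rgb); the newest observation of a texel wins."""
    for face, u, v, rgb in samples:
        cubemap[face, v, u] = rgb          # overwrite with the latest observation
    return cubemap

# six cubemap faces of 16x16 texels, RGB
shared_map = np.zeros((6, 16, 16, 3))
device_a = [(0, 3, 4, np.array([0.9, 0.8, 0.7]))]   # sample seen by device A
device_b = [(2, 7, 1, np.array([0.2, 0.3, 0.8]))]   # sample seen by device B
shared_map = merge_viewpoint_samples(shared_map, device_a + device_b)
```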
|
53 |
Multiresolution image-space rendering for interactive global illumination / Nichols, Gregory Boyd 01 July 2010 (has links)
Global illumination adds tremendous visual richness to rendered images. Unfortunately, such illumination proves quite costly to compute, and is therefore often coarsely approximated by interactive applications, or simply omitted altogether. Global illumination is often quite low-frequency, aside from sharp changes at discontinuities. This thesis describes three novel multiresolution image-space methods that exploit this characteristic to accelerate rendering speeds. These techniques run completely on the GPU at interactive rates and require no precomputation, allowing fully dynamic lighting, geometry, and camera.
The first approach, multiresolution splatting, is a novel multiresolution method for rendering indirect illumination. This work extends reflective shadow maps, an image space method that splats contributions from secondary light sources into eye-space. Splats are refined into multiresolution patches, rendering indirect contributions at low resolution where lighting changes slowly and at high resolution near discontinuities; this greatly reduces GPU fill rate and enhances performance.
The second method, image space radiosity, significantly improves the performance of multiresolution splatting, introducing an efficient stencil-based parallel refinement technique. This method also adapts ideas from object-space hierarchical radiosity methods to image space, introducing two adaptive sampling methods that allow much finer sampling of the reflective shadow map where needed. These modifications significantly improve temporal coherence while maintaining performance.
The third approach adapts these techniques to accelerate the rendering of direct illumination from large area light sources. Visibility is computed using a coarse screen-space voxelization technique, allowing binary visibility queries using ray marching. This work also proposes a new incremental refinement method that considers both illumination and visibility variations. Both diffuse and non-diffuse surfaces are supported, and illumination can vary over the surface of the light, enabling dynamic content such as video screens.
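To illustrate the visibility step described above, the following sketch performs a binary visibility query by ray marching through a coarse boolean voxel grid; the grid layout, function name, and fixed sampling scheme are assumptions and not the thesis' GPU implementation.

```python
import numpy as np

def voxel_visibility(grid, start, end, steps=64):
    """Binary visibility query: march a ray from `start` to `end` through a
    coarse voxel grid (True = occupied) and report whether it stays clear."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    for t in np.linspace(0.0, 1.0, steps)[1:-1]:      # skip both endpoints
        p = start + t * (end - start)
        i, j, k = np.floor(p).astype(int)
        if grid[i, j, k]:                             # ray hits an occupied voxel
            return False                              # the light sample is blocked
    return True                                       # unoccluded

grid = np.zeros((32, 32, 32), dtype=bool)
grid[16, 16, 16] = True                               # a single blocker
print(voxel_visibility(grid, (2, 2, 2), (30, 30, 30)))  # False: blocked
```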
|
54 |
Image Analysis using the Physics of Light Scattering / Nillius, Peter January 2004 (has links)
Any generic computer vision algorithm must be able to cope with the variations in appearance of objects due to different illumination conditions. While these variations in the shading of a surface may seem a nuisance, they in fact contain information about the world. This thesis tries to provide an understanding of what information can be extracted from the shading in a single image and how to achieve this. One of the challenges lies in finding accurate models for the wide variety of conditions that can occur.
Frequency space representations are powerful tools for analyzing shading theoretically. Surfaces act as low-pass filters on the illumination, making the reflected light band-limited. Hence, it can be represented by a finite number of components in the Fourier domain, despite having arbitrary illumination. This thesis derives a basis for shading by representing the illumination in spherical harmonics and the BRDF in a basis for isotropic reflectance. By analyzing the contributing variance of this basis it is shown how to create finite dimensional representations for any surface with isotropic reflectance.
The finite representation is used to analytically derive a principal component analysis (PCA) basis of the set of images due to the variations in the illumination and BRDF. The PCA is performed model-based so that the variations in the images are described by the variations in the illumination and the BRDF. This has a number of advantages. The PCA can be performed over a wide variety of conditions, more than would be practically possible if the images were captured or rendered. Also, there is an explicit mapping between the principal components and the illumination and BRDF so that the PCA basis can be used as a physical model.
By combining a database of captured illumination and a database of captured BRDFs a general basis for shading is created. This basis is used to investigate material classification from a single image with known geometry but arbitrary unknown illumination. An image is classified by estimating the coefficients in this basis and comparing them to a database. Experiments on synthetic data show that material classification from reflectance properties is hard. There are mis-classifications and the materials seem to cluster into groups. The materials are grouped using a greedy algorithm. Experiments on real images show promising results.
Keywords: computer vision, shading, illumination, reflectance, image irradiance, frequency space representations, spherical harmonics, analytic PCA, model-based PCA, material classification, illumination estimation
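As a small illustration of representing illumination in spherical harmonics, the sketch below projects an arbitrary radiance function on the sphere onto low-order SH coefficients by Monte Carlo integration; the function names and the example illumination are assumptions, and the thesis' own basis construction (BRDF basis, model-based PCA) is not reproduced.

```python
import numpy as np
from scipy.special import sph_harm

def sh_project(radiance_fn, l_max=2, n_samples=20000, seed=0):
    """Project an illumination function on the sphere onto spherical harmonics
    up to band l_max, using uniform Monte Carlo sampling of directions."""
    rng = np.random.default_rng(seed)
    u, v = rng.random(n_samples), rng.random(n_samples)
    phi = np.arccos(1.0 - 2.0 * u)          # polar angle in [0, pi]
    theta = 2.0 * np.pi * v                 # azimuth in [0, 2*pi)
    values = radiance_fn(theta, phi)
    coeffs = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuth, polar)
            y_lm = sph_harm(m, l, theta, phi)
            coeffs[(l, m)] = 4.0 * np.pi * np.mean(values * np.conj(y_lm))
    return coeffs

# e.g. a simple "sky brighter than ground" illumination
coeffs = sh_project(lambda th, ph: 1.0 + np.cos(ph))
```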
|
55 |
Vector occluders: an empirical approximation for rendering global illumination effects in real-time / Sherif, William 01 February 2013 (has links)
Precomputation has been previously used as a means to get global illumination effects in real-time on consumer hardware of the day. Our work uses Sloan’s 2002 PRT method as a starting point, and builds on it with two new ideas.
We first explore an alternative representation for PRT data. “Cpherical harmonics” (CH) are introduced as an alternative to spherical harmonics, by substituting the Chebyshev polynomial in the place of the Legendre polynomial as the orthogonal polynomial in the spherical harmonics definition. We show that CH can be used instead of SH for PRT with near-equivalent performance.
“Vector occluders” (VO) are introduced as a novel, precomputed, real-time, empirical technique for adding global illumination effects including shadows, caustics and interreflections to a locally illuminated scene on static geometry. VO encodes PRT data as simple vectors instead of using SH. VO can handle point lights, whereas a standard SH implementation cannot. / UOIT
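To illustrate the substitution the abstract describes, the sketch below compares a zonal (m = 0) spherical harmonic, built on the Legendre polynomial, with a hypothetical Chebyshev-based counterpart; the normalisation and naming are assumptions, not the thesis' exact definition of CH.

```python
import numpy as np
from scipy.special import eval_legendre, eval_chebyt

def zonal_sh(l, theta):
    """Standard zonal (m = 0) spherical harmonic, built on the Legendre polynomial."""
    norm = np.sqrt((2 * l + 1) / (4 * np.pi))
    return norm * eval_legendre(l, np.cos(theta))

def zonal_ch(l, theta):
    """Chebyshev polynomial of the first kind substituted for the Legendre
    polynomial; the normalisation here is an assumption for illustration."""
    norm = np.sqrt((2 * l + 1) / (4 * np.pi))
    return norm * eval_chebyt(l, np.cos(theta))

theta = np.linspace(0.0, np.pi, 5)
print(zonal_sh(2, theta))   # Legendre-based basis values
print(zonal_ch(2, theta))   # Chebyshev-based counterpart
```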
|
56 |
Global illumination and approximating reflectance in real-time / Nowicki, Tyler B. 10 April 2007 (has links)
Global illumination techniques are used to improve the realism of 3D scenes. Calculating accurate global illumination requires a method for solving the rendering equation; however, its integral form cannot, in general, be evaluated analytically. This thesis presents research in non-real-time illumination techniques that are evaluated with a finite number of light rays. This includes a new technique which improves the realism of the scene over traditional techniques.
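For reference, a standard hemispherical form of the rendering equation referred to above (conventional notation; not necessarily the formulation used in the thesis):

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\,
    (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```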
All computer rendering requires distortion-free texture mapping to appear plausible to the eye. Inverse texture mapping, however, can be numerically unstable and computationally expensive. Alternative techniques for texture mapping and texture coordinate generation were developed to simplify rendering.
Real-time rendering is improved by pre-calculating non-real-time reflections. The results of this research demonstrate that a polynomial approximation of reflected light can be more accurate than a constant approximation. The solution improves realism and makes use of new features in graphics hardware. / May 2007
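A minimal numerical illustration of the claim that a polynomial approximation of reflected light can beat a constant one, using synthetic radiance samples rather than any data from the thesis:

```python
import numpy as np

# Fit reflected radiance, sampled over the view angle, with a constant and with a
# cubic polynomial, and compare the maximum error. The samples are synthetic.
angles = np.linspace(0.0, np.pi / 2, 64)
radiance = 0.2 + 0.8 * np.cos(angles) ** 3          # assumed glossy-ish falloff

const_fit = np.polynomial.polynomial.polyfit(angles, radiance, 0)
cubic_fit = np.polynomial.polynomial.polyfit(angles, radiance, 3)

err_const = np.max(np.abs(np.polynomial.polynomial.polyval(angles, const_fit) - radiance))
err_cubic = np.max(np.abs(np.polynomial.polynomial.polyval(angles, cubic_fit) - radiance))
print(err_const, err_cubic)   # the cubic approximation has much lower error
```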
|
57 |
Gaze strategies for coping with glare under intense contra light viewing conditions – A pilot study / Lorentz, Nicholas January 2011 (has links)
Purpose: This is a pilot study to investigate gaze strategies for coping with glare when performing a simple visual task under intense contra light viewing conditions.
Method: Twenty-four normally sighted participants were recruited for this study. They consisted of a young subgroup (n=12), aged 21-29 (mean = 25.3 ± 2.5), and an older subgroup (n=12), aged 51-71 (mean = 57.3 ± 6.1). Visual acuity (VA) and Brightness Acuity testing (BAT) were used to assess central vision. Participants were required to locate and approach (from 15m) a small platform that was contra lit by a powerful light source. Upon arrival at the platform, participants were required to insert a small ball into a similarly sized receptacle. An ASL Mobile Eye (Bedford, MA) eye tracker was used to monitor gaze position throughout until the task was completed. Scene and pupil videos were recorded for each participant and analyzed frame by frame to locate the participant’s eye movements.
Results: Two participants (one from each subgroup) adopted aversion gaze strategies, wherein they avoided looking at the contra lit task for more than 50% of the task completion time. For the remainder of the experimental trial, these two participants were either looking toward the glare source or blinking. The other twenty-two participants opted to endure the contra light condition by gazing directly into the glare for the majority of the task completion time. A t-test between the younger subgroup’s BA scores and the older subgroup’s BA scores was statistically significant (p<0.05).
Significantly poorer BAT scores were found in the older subgroup; however, individual participants’ BAT scores did not necessarily predict the ability to cope with a contra lit glare source. Although statistically significant differences were not found between the two subgroups in their VA or in the time taken to complete the course, a trend was observed: the older subgroup consistently had poorer VA scores and took longer to complete the course.
Further research with a larger sample size is needed to fully understand the glare aversion strategies one must employ when dealing with a contra lit glare source within the built environment, and to confirm the three glare strategies proposed by this pilot study.
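For readers unfamiliar with the between-subgroup comparison mentioned above, a minimal sketch of one way to make it, an independent-samples t-test; the numbers are placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Independent-samples t-test on brightness acuity (BAT) scores of two subgroups.
young_bat = np.array([0.10, 0.12, 0.08, 0.11, 0.09, 0.13, 0.10, 0.12, 0.11, 0.09, 0.10, 0.12])
older_bat = np.array([0.18, 0.22, 0.25, 0.20, 0.19, 0.24, 0.21, 0.23, 0.26, 0.20, 0.22, 0.25])

t_stat, p_value = stats.ttest_ind(young_bat, older_bat)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant difference
```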
|
59 |
The iconography of the creation of Adam and Eve in early Christian manuscript recensions / Swern, Marjorie Carol. January 1965 (has links)
Thesis (M.A.)--Ohio State University, 1965. / Available online via OhioLINK's ETD Center
|