111.
Surface Reflectance Estimation and Natural Illumination Statistics. Dror, Ron O., Adelson, Edward H., Willsky, Alan S. 01 September 2001.
Humans recognize optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination proves highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
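The abstract describes learning relationships between reflectance and image statistics, plus automatic feature selection. A minimal sketch of the idea follows; the specific statistics (moments of the intensity and gradient distributions) and the nearest-centroid rule are illustrative stand-ins, not the paper's actual learned features or classifier:

```python
import numpy as np

def image_statistics(img):
    """Feature vector of simple luminance and gradient statistics.

    These particular statistics are illustrative stand-ins; the paper
    learns its own set of image statistics and selects among them."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    feats = []
    for x in (img.ravel(), grad.ravel()):
        m, s = x.mean(), x.std() + 1e-12
        z = (x - m) / s
        feats += [m, s, (z ** 3).mean(), (z ** 4).mean()]  # mean, std, skew, kurtosis
    return np.array(feats)

def nearest_centroid_classify(train_feats, train_labels, feats):
    """Assign the reflectance class whose mean feature vector is closest."""
    classes = sorted(set(train_labels))
    cents = {c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c], axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(feats - cents[c]))

# Toy data: "matte" images are smooth, "shiny" images have bright speckles
# standing in for specular highlights.
rng = np.random.default_rng(0)
def render(kind):
    img = rng.uniform(0.3, 0.5, (32, 32))
    if kind == "shiny":
        img[rng.random((32, 32)) < 0.05] = 1.0
    return img

train = [("matte", render("matte")) for _ in range(10)] + \
        [("shiny", render("shiny")) for _ in range(10)]
feats = [image_statistics(i) for _, i in train]
labels = [l for l, _ in train]
pred = nearest_centroid_classify(feats, labels, image_statistics(render("shiny")))
print(pred)
```

The sparse highlights in the "shiny" images make the intensity and gradient distributions heavy-tailed, so the kurtosis features separate the two classes.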
112.
How do Humans Determine Reflectance Properties under Unknown Illumination? Fleming, Roland W., Dror, Ron O., Adelson, Edward H. 21 October 2001.
Under normal viewing conditions, humans find it easy to distinguish between objects made out of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflection and illumination are consistent with a given image. In order to work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and a number of photographically-captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve the ill-posed problem.
113.
Recognition of Surface Reflectance Properties from a Single Image under Unknown Real-World Illumination. Dror, Ron O., Adelson, Edward H., Willsky, Alan S. 21 October 2001.
This paper describes a machine vision system that classifies reflectance properties of surfaces such as metal, plastic, or paper, under unknown real-world illumination. We demonstrate performance of our algorithm for surfaces of arbitrary geometry. Reflectance estimation under arbitrary omnidirectional illumination proves highly underconstrained. Our reflectance estimation algorithm succeeds by learning relationships between surface reflectance and certain statistics computed from an observed image, which depend on statistical regularities in the spatial structure of real-world illumination. Although the algorithm assumes known geometry, its statistical nature makes it robust to inaccurate geometry estimates.
114.
Surface Reflectance Recognition and Real-World Illumination Statistics. Dror, Ron O. 01 October 2002.
Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting, it could mimic the appearance of a matte ping-pong ball. Yet, humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination.
For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
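One regularity from the natural image statistics literature that the thesis invokes is that wavelet coefficients of real-world images and illumination maps are heavy-tailed (high kurtosis). A toy sketch of measuring this, using pairwise column differences as finest-scale Haar coefficients and a synthetic edge-dominated pattern standing in for a real illumination map:

```python
import numpy as np

def haar_detail(img):
    """Finest-scale horizontal Haar wavelet coefficients (pairwise differences)."""
    img = img.astype(float)
    return (img[:, 0::2] - img[:, 1::2]).ravel() / np.sqrt(2)

def kurtosis(x):
    z = (x - x.mean()) / (x.std() + 1e-12)
    return (z ** 4).mean()

rng = np.random.default_rng(1)
# A crude stand-in for an illumination map: piecewise-constant rows with
# sparse sharp jumps, versus white Gaussian noise of the same size.
illum = np.cumsum(rng.standard_normal((64, 64)) * (rng.random((64, 64)) < 0.05),
                  axis=1)
noise = rng.standard_normal((64, 64))

k_illum = kurtosis(haar_detail(illum))
k_noise = kurtosis(haar_detail(noise))
print(k_illum, k_noise)  # heavy-tailed vs. near the Gaussian value of 3
```

Edge-dominated patterns yield mostly near-zero coefficients with occasional large ones, hence kurtosis well above the Gaussian value of 3; this is the kind of prior statistical structure the reflectance classifier exploits.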
115.
Perceiving Illumination Inconsistencies in Scenes. Ostrovsky, Yuri, Cavanagh, Patrick, Sinha, Pawan. 05 November 2001.
The human visual system is adept at detecting and encoding statistical regularities in its spatio-temporal environment. Here we report an unexpected failure of this ability in the context of perceiving inconsistencies in illumination distributions across a scene. Contrary to predictions from previous studies [Enns and Rensink, 1990; Sun and Perona, 1996a, 1996b, 1997], we find that the visual system displays a remarkable lack of sensitivity to illumination inconsistencies, both in experimental stimuli and in images of real scenes. Our results allow us to draw inferences regarding how the visual system encodes illumination distributions across scenes. Specifically, they suggest that the visual system does not verify the global consistency of locally derived estimates of illumination direction.
116.
A graphics architecture for ray tracing and photon mapping. Ling, Junyi. 01 November 2005.
Recently, methods were developed to render various global illumination effects with rasterization GPUs, among them hardware-based ray tracing and photon mapping. However, due to the current GPU's inherent architectural limitations, the efficiency and throughput of these methods remained low. In this thesis, we propose a coherent rendering system that addresses these issues. First, we introduce new photon mapping and ray tracing acceleration algorithms that facilitate data coherence and spatial locality and eliminate unnecessary random memory accesses. A high-level abstraction of the combined ray tracing and photon mapping streaming pipeline is introduced. Based on this abstraction, an efficient ray tracing and photon mapping GPU is designed. Using an event-driven simulator developed for this GPU, we verify and validate the proposed algorithms and architecture. Simulation results demonstrate better interactive performance than current GPUs.
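The thesis designs hardware, but the memory-access pattern it must accelerate is the photon-gather at the core of photon mapping: a k-nearest-neighbor query followed by a density estimate. A software sketch of that step, on a synthetic 2D photon map (the scene, photon counts, and scipy kd-tree are illustrative choices, not the thesis's pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree

def radiance_estimate(photon_power, tree, x, k=20):
    """Classic k-nearest-photon density estimate: gather the k photons
    closest to surface point x and divide their total power by the disc
    area pi * r^2 spanned by the farthest of them."""
    dists, idx = tree.query(x, k=k)
    r = dists[-1]
    return photon_power[idx].sum() / (np.pi * r ** 2)

rng = np.random.default_rng(2)
# Synthetic photon map on a plane: photons clustered near the origin,
# mimicking a light source overhead.
n = 20000
pos = rng.standard_normal((n, 2)) * 0.5
power = np.full(n, 1.0 / n)              # equal-power photons
tree = cKDTree(pos)

center = radiance_estimate(power, tree, np.array([0.0, 0.0]))
edge = radiance_estimate(power, tree, np.array([2.0, 2.0]))
print(center > edge)  # brighter directly under the light
```

Each query touches k scattered photons, which is exactly the kind of incoherent memory traffic the proposed architecture reorders for data coherence and spatial locality.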
117.
Automatic segmentation of skin lesions from dermatological photographs. Glaister, Jeffrey Luc. January 2013.
Melanoma is the deadliest form of skin cancer if left untreated. Incidence rates of melanoma have been increasing, especially among young adults, but survival rates are high if it is detected early. Unfortunately, the time and cost required for dermatologists to screen all patients for melanoma are prohibitive. There is a need for an automated system to assess a patient's risk of melanoma using photographs of their skin lesions. Dermatologists could use such a system to aid their diagnosis without the need for special or expensive equipment.
One challenge in implementing such a system is locating the skin lesion in the digital image. Most existing skin lesion segmentation algorithms are designed for images taken using a special instrument called the dermatoscope. The presence of illumination variation in digital images such as shadows complicates the task of finding the lesion. The goal of this research is to develop a framework to automatically correct and segment the skin lesion from an input photograph. The first part of the research is to model illumination variation using a proposed multi-stage illumination modeling algorithm and then using that model to correct the original photograph. Second, a set of representative texture distributions are learned from the corrected photograph and a texture distinctiveness metric is calculated for each distribution. Finally, a texture-based segmentation algorithm classifies regions in the photograph as normal skin or lesion based on the occurrence of representative texture distributions. The resulting segmentation can be used as an input to separate feature extraction and melanoma classification algorithms.
The proposed segmentation framework is tested by comparing lesion segmentation results and melanoma classification results to results using other state-of-the-art algorithms. The proposed framework has better segmentation accuracy compared to all other tested algorithms. The segmentation results produced by the tested algorithms are used to train an existing classification algorithm to identify lesions as melanoma or non-melanoma. Using the proposed framework produces the highest classification accuracy and is tied for the highest sensitivity and specificity.
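The first stage above models and removes illumination variation before segmentation. A minimal sketch of the general idea using homomorphic filtering (heavily blurred log-image taken as the illumination component); this simple baseline is a stand-in for, not a description of, the thesis's multi-stage illumination modeling algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(img, sigma=15.0):
    """Homomorphic-style correction: treat the heavily blurred log-image as
    the slowly varying illumination component and remove it, keeping the
    reflectance detail. A simple stand-in, not the thesis's algorithm."""
    log_img = np.log(img + 1e-6)
    illum = gaussian_filter(log_img, sigma)
    corrected = np.exp(log_img - illum)
    return corrected / corrected.max()

# Synthetic photograph: a dark "lesion" disc under a left-to-right shadow.
y, x = np.mgrid[0:128, 0:128]
reflectance = np.where((y - 64) ** 2 + (x - 64) ** 2 < 20 ** 2, 0.3, 0.8)
shading = np.linspace(0.4, 1.0, 128)[None, :]   # illumination variation
photo = reflectance * shading

corrected = correct_illumination(photo)
# Background columns far apart should now have similar brightness,
# while the lesion stays darker than the surrounding skin.
print(abs(corrected[5, 10] - corrected[5, 118]))
```

Removing the shading gradient first prevents shadows from being mistaken for lesion texture in the later texture-distinctiveness stage.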
118.
Fast photorealistic techniques to simulate global illumination in videogames and virtual environments. Méndez Feliu, Àlex. 15 June 2007.
To compute global illumination solutions for rendering virtual scenes, physically accurate methods based on radiosity or ray tracing are usually employed. These methods, though powerful and capable of generating images with high realism, are very costly. In this thesis, some techniques to simulate and/or accelerate the computation of global illumination are studied. The obscurances technique is based on the supposition that the more occluded a point in the scene is, the darker it should appear. It is computed by analyzing the geometric environment of the point and gives a value for the point's indirect illumination that is not physically accurate but is visually realistic. This technique is enhanced and improved for real-time environments such as videogames. It is also applied to ray-tracing frameworks to generate realistic images. In this last context, sequences of frames for animations of lights and cameras are dramatically accelerated by reusing information between frames.
Obscurances simulate the indirect illumination of a scene; the direct lighting is computed separately and independently. This decoupling of direct and indirect lighting is a big advantage, and we take advantage of it: we can easily add color bleeding effects without adding computation time. Another advantage is that computing the obscurances only requires analyzing a limited environment around the point. For diffuse virtual scenes, the radiosity can be precomputed and the scene navigated with a realistic appearance, but when a small object moves in a dynamic real-time virtual environment, such as a videogame, recomputing the global illumination of the scene is prohibitive. Thanks to the limited reach of the obscurances computation, we can recompute the obscurances only in the limited environment of the moving object every frame and still achieve real-time frame rates. Obscurances can also be used to compute high-quality images, or sequences of images for an animation, in a ray-tracing-like framework. This allows us to deal with non-diffuse materials and to study the use of a typically diffuse technique such as obscurances in general environments. For static cameras, light animation affects only the direct lighting; if we use obscurances for the indirect lighting, then thanks to the decoupling of direct and indirect illumination, computing a series of frames for the animation is very fast. The next step is to add camera animation, reusing the obscurance values between frames. This last technique, reusing the illumination of the hit points between frames, can also be applied to accurate global illumination techniques such as path tracing, and we study how to reuse this information in an unbiased way. In addition, different sampling techniques for the hemisphere are studied, and the obscurances are computed with a new technique based on depth peeling on the GPU.
119.
Real-time DVR Illumination Methods for Ultrasound Data. Sundén, Erik. January 2010.
Ultrasound (US) volume data is noisy, so traditional methods for direct volume rendering (DVR) are less appropriate; improved or new techniques are required. Furthermore, high performance and limited pre-processing are required for interactive use, since the volume data may be time-varying. Numerous techniques exist for improving the visual perception of volume rendering, and while some perform well and produce visually enhanced results, many are designed and evaluated on medical data with a high signal-to-noise ratio. This master's thesis describes and compares recent methods for DVR illumination, in the form of ambient occlusion or direct/indirect lighting from an external light source. New designs and modifications are introduced to efficiently and effectively enhance the visual quality of DVR with US data. Furthermore, this thesis addresses how clipping, which is commonly used in ultrasound visualization, is performed during rendering and in the different illumination techniques. This diploma work was conducted at Siemens Corporate Research in Princeton, NJ, where the partially open-source framework XIP is developed. The framework was extended to include modern methods for DVR illumination that are described in detail within this thesis. Finally, the presented results show that several methods can be used to visually enhance the visualization at highly interactive frame rates.
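All the DVR illumination methods compared in the thesis sit on top of the standard front-to-back emission-absorption compositing along each viewing ray. A minimal single-ray sketch of that baseline (the two-sample "scene" is made up; shading, gradients, and clipping masks would modify the per-sample color and opacity before this step):

```python
import numpy as np

def composite_ray(samples_rgb, samples_alpha):
    """Front-to-back emission-absorption compositing along one ray,
    with early ray termination once the ray is nearly opaque."""
    color = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(samples_rgb, samples_alpha):
        color += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-3:
            break
    return color, transmittance

# Two samples: a half-opaque red sample in front of a fully opaque blue one.
rgb = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
alpha = np.array([0.5, 1.0])
color, t = composite_ray(rgb, alpha)
print(color, t)  # red contributes 0.5, blue the remaining 0.5
```

Clipping in this formulation amounts to zeroing the opacity of samples on the clipped side of a plane, which is why the thesis must specify how each illumination technique interacts with it.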
120.
Surface Light Field Generation, Compression and Rendering. Miandji, Ehsan. January 2012.
We present a framework for generating, compressing and rendering Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods; thus the SLF data is generated directly instead of by re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered into low-frequency groups of points across all directions; then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally, we reconstruct the CPCA-encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method to fast, high-quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by the complexity of materials or light sources, enabling us to render high-quality images describing the full global illumination in a scene.
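The cluster-then-PCA compression can be sketched on a toy matrix. Here a tiny farthest-point-initialized k-means and a per-cluster truncated SVD stand in for the thesis's clustering and PCA stages; the matrix sizes, cluster count, and component count are made up for illustration:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with farthest-point initialization (deterministic)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels

def cpca_compress(X, k=2, n_components=2):
    """Cluster rows, then keep a few principal components per cluster.
    What is stored per cluster: mean, basis, and projection coefficients."""
    labels = kmeans(X, k)
    model = []
    for j in range(k):
        block = X[labels == j]
        mean = block.mean(0)
        U, S, Vt = np.linalg.svd(block - mean, full_matrices=False)
        basis = Vt[:n_components]               # principal directions
        coeffs = (block - mean) @ basis.T       # per-row coefficients
        model.append((mean, basis, coeffs))
    return labels, model

def cpca_reconstruct(labels, model, shape):
    Xr = np.zeros(shape)
    for j, (mean, basis, coeffs) in enumerate(model):
        Xr[labels == j] = mean + coeffs @ basis
    return Xr

rng = np.random.default_rng(6)
# Synthetic SLF matrix: two well-separated groups of rows, each rank 2
# about its mean, so two components per cluster suffice.
def cluster_block(offset):
    return offset + rng.standard_normal((50, 2)) @ rng.standard_normal((2, 16))
X = np.vstack([cluster_block(0.0), cluster_block(10.0)])

labels, model = cpca_compress(X)
err = np.abs(X - cpca_reconstruct(labels, model, X.shape)).max()
print(err)  # near zero: each cluster really is rank 2 about its mean
```

Clustering first is what makes the truncation cheap: each cluster is nearly low-rank on its own, so a few components per cluster reconstruct data that a single global PCA would need many more components to capture.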