1. Active Illumination for the Real World. Achar, Supreeth, 01 July 2017
Active illumination systems use a controllable light source and a light sensor to measure properties of a scene. For such a system to work reliably across a wide range of environments it must be able to handle the effects of global light transport, bright ambient light, interference from other active illumination devices, defocus, and scene motion. The goal of this thesis is to develop computational techniques and hardware arrangements to make active illumination devices based on commodity-grade components that work under real-world conditions. We aim to combine the robustness of a scanning laser rangefinder with the speed, measurement density, compactness, and economy of a consumer depth camera. Towards this end, we have made four contributions. The first is a computational technique for compensating for the effects of motion while separating the direct and global components of illumination. The second is a method that combines triangulation and depth from illumination defocus cues to increase the working range of a projector-camera system. The third is a new active illumination device that can efficiently image the epipolar component of light transport between a source and sensor. The device can measure depth using active stereo or structured light and is robust to many global light transport effects. Most importantly, it works outdoors in bright sunlight despite using a low-power source. Finally, we extend the proposed epipolar-only imaging technique to time-of-flight sensing and build a low-power sensor that is robust to sunlight, global illumination, multi-device interference, and camera shake. We believe that the algorithms and sensors proposed and developed in this thesis could find applications in a diverse set of fields including mobile robotics, medical imaging, gesture recognition, and agriculture.
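As a point of reference for the first contribution: direct/global separation is commonly performed with high-frequency structured illumination, where each pixel's maximum and minimum observations across shifted patterns yield the two components. The static-scene Python sketch below (assuming roughly half the scene points are lit in each frame) illustrates only that baseline; the thesis's contribution, compensating for scene motion during this separation, is not reproduced here.

import numpy as np

def separate_direct_global(images):
    """Static-scene direct/global separation from images captured under
    shifted high-frequency binary illumination patterns (about half the
    scene lit in each frame).

    images: array-like of shape (K, H, W), radiance under K shifted patterns.
    Returns (direct, global_) maps. Assumes no scene motion between captures;
    handling motion is the thesis's contribution and is not shown here.
    """
    stack = np.asarray(images, dtype=np.float64)
    i_max = stack.max(axis=0)   # lit observation: direct + ~half the global light
    i_min = stack.min(axis=0)   # unlit observation: ~half the global light only
    direct = i_max - i_min
    global_ = 2.0 * i_min
    return direct, global_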
2. Active scene illumination methods for privacy-preserving indoor occupant localization. Zhao, Jinyuan, 29 September 2019
Indoor occupant localization is a key component of location-based smart-space applications. Such applications are expected to save energy and provide productivity gains and health benefits. Many traditional camera-based indoor localization systems use visual information to detect and analyze the states of room occupants. These systems, however, may not be acceptable in privacy-sensitive scenarios since high-resolution images may reveal room and occupant details to eavesdroppers. To address visual privacy concerns, approaches have been developed using extremely-low-resolution light sensors, which provide limited visual information and preserve privacy even if hacked. These systems preserve visual privacy and are reasonably accurate, but they fail in the presence of noise and ambient light changes.
This dissertation focuses on two-dimensional localization of an occupant on the floor plane, where three goals are considered in the development of an indoor localization system: accuracy, robustness, and visual privacy preservation. Unlike techniques that preserve user privacy by degrading full-resolution data, this dissertation focuses on an array of single-pixel light sensors. Furthermore, to make the system robust to noise, ambient light changes and sensor failures, the scene is actively illuminated by modulating an array of LED light sources, which allows algorithms to use the light transported from sources to sensors (described by a light transport matrix) instead of raw sensor readings. Finally, to ensure accurate localization, both principled model-based algorithms and learning-based approaches via active scene illumination are proposed.
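The abstract does not specify the acquisition scheme, but one simple way to obtain a source-to-sensor light transport matrix with modulated LEDs is to activate one source at a time and subtract an all-off (ambient) frame. The Python sketch below is illustrative only; the callbacks read_sensors and set_leds are hypothetical, and a real system might use coded or multiplexed modulation instead of sequential scanning.

import numpy as np

def measure_transport_matrix(read_sensors, set_leds, num_leds, num_sensors):
    """Estimate the light transport matrix A (num_sensors x num_leds), where
    A[j, i] is the contribution of LED i to sensor j.

    read_sensors(): hypothetical callback returning a length-num_sensors array.
    set_leds(mask): hypothetical callback switching on the LEDs selected by a
                    boolean mask.
    Sequential (one-LED-at-a-time) acquisition is shown for clarity.
    """
    set_leds(np.zeros(num_leds, dtype=bool))
    ambient = read_sensors()                    # ambient light plus sensor offset
    A = np.zeros((num_sensors, num_leds))
    for i in range(num_leds):
        mask = np.zeros(num_leds, dtype=bool)
        mask[i] = True
        set_leds(mask)
        A[:, i] = read_sensors() - ambient      # isolate LED i's contribution
    return A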
In the proposed model-based algorithm, the appearance of an object is modeled as a change in floor reflectivity over some area. A ridge regression algorithm is developed to estimate the change in floor reflectivity from the change in the light transport matrix caused by the appearance of the object. The region of largest reflectivity change identifies the object's location. Experimental validation demonstrates that the proposed algorithm can accurately localize both flat objects and human occupants, and is robust to noise, illumination changes, and sensor failures. In addition, a sensor design using aperture grids is proposed that further improves localization accuracy. As for learning-based approaches, this dissertation proposes a convolutional neural network that reshapes the input light transport matrix to take advantage of spatial correlations between sensors. As a result, the proposed network can accurately localize human occupants both in simulation and on a real testbed with a small number of training samples. Moreover, unlike model-based approaches, the proposed network does not require modeling assumptions or knowledge of the room, sources, and sensors.
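As a rough illustration of the model-based pipeline described above, the Python sketch below estimates a per-cell change in floor reflectivity from the change in the transport matrix via ridge regression and reports the cell with the largest change. The linearized sensitivity matrix J and the regularization weight are assumptions introduced for illustration; the dissertation's actual radiometric model and solver details are not reproduced here.

import numpy as np

def localize_from_transport_change(delta_A, J, grid_shape, lam=1e-2):
    """Locate an object from the change in the light transport matrix.

    delta_A:    (num_sensors, num_leds) change in the measured transport matrix.
    J:          (num_sensors * num_leds, num_cells) assumed linear sensitivity of
                each source-sensor pair to a reflectivity change in each floor
                cell, with rows ordered to match delta_A.reshape(-1).
    grid_shape: (rows, cols) of the floor grid.
    Returns (row, col) of the cell with the largest estimated reflectivity change.
    """
    b = delta_A.reshape(-1)                              # stack the measurements
    # Ridge regression: minimize ||J r - b||^2 + lam * ||r||^2
    r = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ b)
    cell = int(np.argmax(np.abs(r)))                     # largest reflectivity change
    return np.unravel_index(cell, grid_shape)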
3. Fatigue Monitoring System. Ratecki, Tomasz, 14 May 2010
This work provides an innovative solution for monitoring fatigue in users working at workstations. A web camera was adjusted to operate in the near-infrared range, and a system of 880 nm IR diodes was implemented to create an IR vision system that localizes and tracks the eye pupils. The software developed monitors and tracks the eyes for signs of fatigue by measuring PERCLOS. The software runs on the workstation and is designed to draw limited computational power so as not to interfere with the user's task. To overcome the low frame rate imposed by hardware limitations and to improve real-time monitoring, a two-phase detection and tracking algorithm is implemented. The proposed system successfully monitors fatigue at a rate of 8 fps. The system is well suited to monitoring users in command centers, flight control centers, airport traffic dispatch centers, military operation and command centers, etc., but the work can be extended to wearable devices and other environments.
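PERCLOS is commonly defined as the percentage of time, over a sliding window, during which the eyes are (nearly) closed. A minimal rolling-window sketch in Python is given below; the openness signal, closure threshold, and window length are illustrative assumptions, not the parameters used in this work.

from collections import deque

class PerclosMonitor:
    """Rolling PERCLOS estimate: the fraction of recent frames in which the
    eyes are judged closed (openness below a threshold). At 8 fps, a 60 s
    window corresponds to 480 frames."""

    def __init__(self, fps=8, window_seconds=60, closed_threshold=0.2):
        self.frames = deque(maxlen=int(fps * window_seconds))
        self.closed_threshold = closed_threshold

    def update(self, eye_openness):
        """eye_openness: 0.0 (fully closed) to 1.0 (fully open), e.g. derived
        from the tracked pupil/eyelid geometry. Returns PERCLOS in [0, 1]."""
        self.frames.append(eye_openness < self.closed_threshold)
        return sum(self.frames) / len(self.frames)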
4. Programmable Image-Based Light Capture for Previsualization. Lindsay, Clifford, 02 April 2013
Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his "mind's eye". Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called On-set previs, which provides a real-time view with little to no processing. Other types of previs, such as Technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than by cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, displayed interactively on a mobile capture and rendering platform.
This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework, which is the main contribution of this dissertation, comprises: 1) a novel programmable camera architecture that provides programmability of low-level features and a visual programming interface, 2) new algorithms that analyze and decompose the scene photometrically, and 3) a previs interface that leverages the previous two components to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique called Symmetric lighting, which can be used on our programmable camera to relight a scene with multiple illuminants with respect to color, intensity, and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. We found that, since our method is based on a Lambertian reflectance assumption, it works well under that assumption, but scenes with large amounts of specular reflection can exhibit higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes containing lights whose colors are too similar can lead to degenerate cases in relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged to perform multi-illuminant white balancing and light color estimation in a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white balance methods and show that it is superior when at least one of the light colors is known a priori.
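The abstract does not give the Symmetric lighting formulation, so the Python sketch below only illustrates the general kind of relighting it enables: a Lambertian scene lit by two illuminants, with a per-pixel mixture weight describing each light's contribution, re-rendered under new light colors. The alpha map, the mixing model, and all names are assumptions introduced for illustration and should not be read as the dissertation's derivation.

import numpy as np

def relight_two_light_scene(image, alpha, old_colors, new_colors, eps=1e-6):
    """Generic two-illuminant Lambertian relighting (illustrative only).

    image:      (H, W, 3) image captured under two lights with colors old_colors.
    alpha:      (H, W) assumed per-pixel fraction of illumination contributed by
                light 1 (the remainder comes from light 2).
    old_colors: (2, 3) RGB colors of the two lights at capture time.
    new_colors: (2, 3) desired RGB colors of the two lights.
    Returns the relit (H, W, 3) image.
    """
    a = alpha[..., None]
    old_mix = a * old_colors[0] + (1.0 - a) * old_colors[1]  # effective illuminant color per pixel
    new_mix = a * new_colors[0] + (1.0 - a) * new_colors[1]
    # Divide out the capture-time illuminant color and multiply in the new one.
    return image * new_mix / (old_mix + eps)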
5. A novel 3D recovery method by dynamic (de)focused projection. Lertrusdachakul, Intuon, 30 November 2011
This paper presents a novel 3D recovery method based on structured light. This method unifies depth from focus (DFF) and depth from defocus (DFD) techniques through the use of a dynamic (de)focused projection. With this approach, the image acquisition system is specifically constructed to keep the whole object sharp in all of the captured images. Therefore, only the projected patterns experience different defocus deformations according to the object's depths. When the projected patterns are out of focus, their Point Spread Function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed by analyzing the relationship between the sets of PSFs obtained from different blurs and the variation of the object's depths. Our new depth estimation can be employed as a stand-alone strategy. It is unaffected by occlusion and correspondence issues. Moreover, it handles textureless and partially reflective surfaces. The experimental results on real objects demonstrate the effective performance of our approach, providing reliable depth estimation and competitive computation time. It uses fewer input images than DFF, and unlike DFD, it ensures that the PSF is locally unique.
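As a rough sketch of the defocus cue described above: if the projected pattern blurs with a depth-dependent Gaussian PSF while the object itself stays sharp, depth can be recovered by finding, per pixel, which calibrated blur level best explains the observed pattern. The brute-force matching below is an assumption-laden Python illustration (it ignores surface reflectance and shading, and presumes a calibrated sigma-to-depth table), not the method developed in this work.

import numpy as np
from scipy import ndimage

def depth_from_pattern_defocus(observed, sharp_pattern, calib_sigmas, calib_depths,
                               window=15):
    """Per-pixel depth from defocus of a projected pattern (illustrative).

    observed:      image of the scene under the pattern; only the pattern is
                   defocused, the object is kept sharp by the acquisition setup.
    sharp_pattern: image of the same pattern when perfectly in focus (same size).
    calib_sigmas:  candidate Gaussian PSF sigmas from calibration.
    calib_depths:  depth associated with each sigma.
    For each candidate sigma, blur the sharp pattern, compare it with the
    observation over a local window, and keep the best-matching sigma's depth.
    """
    obs = observed.astype(np.float64)
    best_err = np.full(obs.shape, np.inf)
    depth = np.zeros(obs.shape)
    for sigma, d in zip(calib_sigmas, calib_depths):
        candidate = ndimage.gaussian_filter(sharp_pattern.astype(np.float64), sigma)
        err = ndimage.uniform_filter((obs - candidate) ** 2, size=window)  # local SSD
        closer = err < best_err
        best_err[closer] = err[closer]
        depth[closer] = d
    return depth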