
Edge-resolved non-line-of-sight imaging

Over the past decade, the possibility of forming images of objects hidden from line-of-sight (LOS) view has emerged as an intriguing and potentially important expansion of computational imaging and computer vision technology. This capability could help soldiers anticipate danger in a tunnel system, autonomous vehicles avoid collision, and first responders safely traverse a building. In many scenarios where non-line-of-sight (NLOS) vision is desired, the LOS view is obstructed by a wall with a vertical edge. In this thesis we show that through modeling and computation, the impediment to LOS itself can be exploited for enhanced resolution of the hidden scene.

NLOS methods may be active, using controlled illumination of the hidden scene, or passive, relying only on light sources already present. In both active and passive NLOS imaging, measured light returns to the sensor after multiple diffuse bounces. Each bounce scatters light in all directions, eliminating directional information. When the scene is hidden behind a wall with a vertical edge, that edge occludes light as a function of its incident azimuthal angle around the edge. Measurements acquired on the floor adjacent to the occluding edge thus contain rich azimuthal information about the hidden scene. In this thesis, we explore several edge-resolved NLOS imaging systems that exploit the occlusion provided by a vertical edge. In addition to demonstrating novel edge-resolved NLOS imaging systems with real experimental data, this thesis includes modeling, performance bound analyses, and inversion algorithms for the proposed systems.

We first explore the use of a single vertical edge to form a 1D (in azimuthal angle) reconstruction of the hidden scene. Prior work demonstrated that temporal variation in a video of the floor may be used to image moving components of the hidden scene. In contrast, our algorithm reconstructs both moving and stationary hidden scenery from a single photograph, without assuming uniform floor albedo. We derive a forward model that describes the measured photograph as a nonlinear combination of the unknown floor albedo and the light from behind the wall. The inverse problem, which is the joint estimation of floor albedo and a 1D reconstruction of the hidden scene, is solved via optimization, where we introduce regularizers that help separate the variations in the measured photograph caused by the floor pattern from those caused by the hidden scene.
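The flavor of this forward model can be sketched in a few lines (a simplified illustration in our own notation, not the thesis's model): a floor pixel at azimuth θ around the edge receives hidden-scene light only from angles below θ, so the edge acts as a cumulative (lower-triangular) operator on the 1D angular scene, and the unknown floor albedo multiplies the result.

```python
import numpy as np

def corner_camera_forward(albedo, scene, ambient=0.0):
    """Simplified 1D corner-camera forward model (illustrative only).

    albedo  -- floor albedo sampled at increasing azimuth around the edge
    scene   -- hidden-scene radiance per azimuthal bin
    ambient -- light reaching the floor regardless of the edge

    A floor pixel at the i-th azimuthal angle is lit by scene bins
    0..i (the edge occludes the rest), so the occlusion acts as a
    cumulative sum; the floor albedo then multiplies the result,
    making the measurement a nonlinear (bilinear) function of the
    two unknowns.
    """
    return albedo * (ambient + np.cumsum(scene))
```

Recovering both `albedo` and `scene` from a single such measurement is the bilinear joint-estimation problem the regularized optimization addresses.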

Next, we combine the resolving power of a vertical edge with information from the relationship between intensity and radial distance to form 2D reconstructions from a single passive photograph. We derive a new forward model, accounting for radial falloff, and propose two inversion algorithms to form 2D reconstructions from a single photograph of the penumbra. The performance of both algorithms is demonstrated on experimental data corresponding to several different hidden scene configurations. A Cramér-Rao bound analysis further demonstrates the feasibility and limitations of this 2D corner camera.
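A discretized version of such a forward operator might look as follows (geometry and names are our assumptions for illustration: the occluding edge at the origin, azimuth measured with `arctan2`, a scene point visible to a floor pixel only when its azimuth is smaller): each matrix entry combines an edge-occlusion indicator with inverse-square radial falloff.

```python
import numpy as np

def corner2d_matrix(floor_pts, scene_pts):
    """Illustrative discretized 2D corner-camera response matrix.

    floor_pts -- (M, 2) floor pixel positions on the visible side
    scene_pts -- (N, 2) candidate hidden-scene positions
    Both are in a coordinate frame with the occluding edge at the origin.

    Entry (i, j) is nonzero only when scene point j's azimuth is below
    floor pixel i's (the vertical edge blocks everything else), and is
    weighted by 1/r^2 falloff in the distance between the two points.
    """
    th_floor = np.arctan2(floor_pts[:, 1], floor_pts[:, 0])
    th_scene = np.arctan2(scene_pts[:, 1], scene_pts[:, 0])
    visible = th_scene[None, :] < th_floor[:, None]
    r2 = np.sum((floor_pts[:, None, :] - scene_pts[None, :, :]) ** 2, axis=2)
    return visible / r2
```

The occlusion indicator supplies the azimuthal resolution; the radial falloff term is what makes range (and hence a full 2D reconstruction) identifiable from a single photograph.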

Our doorway camera exploits the occlusion provided by the two vertical edges of a doorway for more robust 2D reconstruction of the hidden scene. This work introduces and demonstrates a novel inversion algorithm to jointly estimate two views of change in the hidden scene, using the temporal difference between photographs acquired on the visible side of the doorway. A Cramér-Rao bound analysis is used to demonstrate the 2D resolving power of the doorway camera over other passive acquisition strategies and to motivate the novel biangular reconstruction grid.
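The joint estimation can be caricatured as follows (a hypothetical linear discretization for illustration, not the thesis's algorithm): with one response matrix per doorway edge, the temporal-difference image is modeled as the sum of the two edge responses, and the two angular views of scene change are recovered together.

```python
import numpy as np

def solve_doorway(delta, A_left, A_right):
    """Jointly estimate two 1D angular views of scene change from a
    temporal-difference image, under the stacked linear model
    delta = A_left @ x_left + A_right @ x_right.

    Stacks the two per-edge response matrices side by side and solves
    one least-squares problem, so information from both penumbras
    constrains both views simultaneously.
    """
    A = np.hstack([A_left, A_right])
    x, *_ = np.linalg.lstsq(A, delta, rcond=None)
    return np.split(x, [A_left.shape[1]])
```

Solving for the two views jointly, rather than from each penumbra independently, is what lets the two edges act as a stereo-like pair and motivates reconstructing on a biangular grid.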

Lastly, we present the active corner camera. Most existing active NLOS methods illuminate the hidden scene using a pulsed laser directed at a relay surface and collect time-resolved measurements of returning light. The prevailing approaches are inherently limited by the need for laser scanning, a process that is generally too slow to image hidden objects in motion. Methods that avoid laser scanning track the moving parts of the hidden scene as one or two point targets. In this work, based on more complete optical response modeling yet still without multiple illumination positions, we demonstrate accurate reconstructions of objects in motion and a 'map' of the stationary scenery behind them. This new ability to count, localize, and characterize the sizes of hidden objects in motion, combined with mapping of the stationary hidden scene, could greatly improve indoor situational awareness in a variety of applications.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/45475
Date: 17 January 2023
Creators: Seidel, Sheila W.
Contributors: Goyal, Vivek K.
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
