41 |
High-Speed, Large Depth-of-Field and Automated Microscopic 3D Imaging. Liming Chen (18419367), 22 April 2024 (has links)
<p dir="ltr">Over the last few decades, three-dimensional (3D) optical imaging and sensing techniques have attracted much attention from both academia and industries. Owing to its capability of gathering more information than conventional 2D imaging, it has been successfully adopted in many applications on the macro scale which ranges from sub-meters to meters such as entertainment, commercial electronics, manufacturing, and construction. For example, the iPhone “FaceID” sensor is used for facial recognition, and the Microsoft Kinect is used to track body motion in video games. With recent advances in many technical fields, such as semiconductor packaging, additive manufacturing, and micro-robots, there is an increasing need for microscopic 3D imaging, and several techniques including interferometry, confocal microscopy, focus variation, and structured light have been developed and adopted in these industries. Among these techniques, the structured light 3D imaging technique is considered one of the most promising techniques for in-situ metrology, owing to its advantage of simple configuration and high measurement speed. However, several challenges must be addressed in employing the structured-light 3D imaging technique in these fields.</p><p dir="ltr">The first challenge is the limited measurement range caused by the limited depth of field (DOF). Given the necessity for large magnification in the microscopic structured light system, the DOF becomes notably shallow, especially when pin-hole lenses are adopted. This issue is exacerbated by the fact that the measured objects in the aforementioned industries could contain miniaturized features spanning a broad height range. To address this problem, we introduce the idea of the focus stacking technique, wherein the focused pixels gathered from various focus settings are merged to form an all-in-focus image, into the structured-light 3D imaging. We further developed a computational framework that utilizes the phase information and fringe contrast of the projected fringe patterns to mitigate the influence of object textures.</p><p dir="ltr">The second challenge is the 3D imaging speed. The 3D measurement speed is a crucial factor for in-situ applications. We improved the large DOF 3D imaging speed by reducing the required fringe images from two aspects: 1) We developed a calibration method for multifocus pin-hole mode, which can eliminate the necessity of the 2D image alignment. The conventional method based on circle patterns will be affected during the feature extraction process by the significant camera defocusing. In contrast, our proposed method is more robust since it uses virtual features extracted from a reconstructed white flat surface under a pre-calibrated focus setting. 2)We developed a phase unwrapping method with the assistance of the electrically tunable lens (ETL), which is an optical component we used to capture fringe images under various focus settings. The proposed phase unwrapping method leverages the focal plane position of each focus setting to estimate a rough depth map for the geometric-constraint phase unwrapping algorithm. By doing this, the method eliminates the limitation on the effective working depth range and becomes feasible in large DOF 3D imaging.</p><h4>Even with all previous methodologies, the efficiency of large DOF 3D imaging is still not high enough under certain circumstances. 
One of the major reasons is that we can still only use a series of pre-defined focus settings to run the focus stacking, since we have no prior on the measured objects. This issue could lead to low measurement efficiency when the depth range of the measured objects does not cover the whole enlarged DOF. To improve the performance of the system under such situations, we developed a method that introduces another computational imaging technique: the focal sweep technique, to help determine the optimal focus settings adapting to different measured objects.</h4><h4>In summary, this dissertation contributed to high-speed, large depth-of-field, and automated 3D imaging, which can be used in micro-scale applications from the following aspects: (1) enlarging the DOF of the microscopic 3D imaging using the focus stacking technique; (2) developing methods to improve the speed of large DOF microscopic 3D imaging; and (3) developing a method to improve the efficiency of the focus stacking under certain circumstances. These contributions can potentially enable the structured-light 3D imaging technique to be an alternative 3D microscopy approach for many academic studies and industry applications.</h4><p></p>
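The focus-stacking step described above can be sketched in a few lines: for every pixel, pick the focus setting whose fringe contrast (data modulation) is highest and keep the phase measured there. The three-step phase-shifting formulas and array layout below are illustrative assumptions, not the dissertation's exact framework.

    import numpy as np

    def three_step_phase_and_modulation(i1, i2, i3, eps=1e-9):
        """Wrapped phase and data modulation for three fringe images with
        phase shifts of 2*pi/3 (standard phase-shifting formulas)."""
        num = np.sqrt(3.0) * (i1 - i3)
        den = 2.0 * i2 - i1 - i3
        phase = np.arctan2(num, den)                    # wrapped phase in (-pi, pi]
        modulation = np.sqrt(num**2 + den**2) / (i1 + i2 + i3 + eps)
        return phase, modulation

    def focus_stack(fringe_sets):
        """fringe_sets: list over focus settings, each a tuple (i1, i2, i3)
        of float images. Returns an all-in-focus wrapped phase map and the
        index of the focus setting chosen at every pixel."""
        phases, mods = [], []
        for i1, i2, i3 in fringe_sets:
            p, m = three_step_phase_and_modulation(i1, i2, i3)
            phases.append(p)
            mods.append(m)
        mods = np.stack(mods)                           # (n_focus, H, W)
        best = np.argmax(mods, axis=0)                  # sharpest setting per pixel
        stacked_phase = np.take_along_axis(np.stack(phases), best[None], axis=0)[0]
        return stacked_phase, best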
|
42 |
Linear, Discrete, and Quadratic Constraints in Single-image 3D Reconstruction. Ecker, Ady, 14 February 2011 (has links)
In this thesis, we investigate the formulation, optimization and ambiguities in single-image 3D surface reconstruction from geometric and photometric constraints. We examine linear, discrete and quadratic constraints for shape from planar curves, shape from texture, and shape from shading.
The problem of recovering 3D shape from the projection of planar curves on a surface is strongly motivated by perception studies. Applications include single-view modeling and uncalibrated structured light. When the curves intersect, the problem leads to a linear system for which a direct least-squares method is sensitive to noise. We derive a more stable solution and show examples where the same method produces plausible surfaces from the projection of parallel (non-intersecting) planar cross sections.
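As a rough illustration of why a direct least-squares solve can be fragile and how it can be stabilized, the sketch below assumes the curve constraints have already been assembled into a linear system A x = b (the assembly itself is thesis-specific and not shown) and applies a Tikhonov-damped SVD solve instead of a plain pseudo-inverse.

    import numpy as np

    def regularized_lstsq(A, b, lam=1e-3):
        """Tikhonov-regularized least squares: minimizes ||Ax - b||^2 + lam*||x||^2.
        Small singular values of A, which amplify noise in a plain least-squares
        solve, are damped by the factor s / (s^2 + lam)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        filt = s / (s**2 + lam)            # damped inverse singular values
        return Vt.T @ (filt * (U.T @ b))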
The problem of reconstructing a smooth surface under constraints that have discrete ambiguities arises in areas such as shape from texture, shape from shading, photometric stereo, and shape from defocus. While the problem is computationally hard, heuristics based on semidefinite programming may reveal the shape of the surface.
Finally, we examine the shape from shading problem without boundary conditions as a polynomial system. This formulation allows, in generic cases, a complete solution for ideal polyhedral objects. For the general case we propose a semidefinite programming relaxation procedure, and an exact line search iterative procedure with a new smoothness term that favors folds at edges. We use this numerical technique to inspect shading ambiguities.
|
43 |
[en] A STUDY OF TECHNIQUES FOR SHAPE ACQUISITION USING STEREO AND STRUCTURED LIGHT AIMED FOR ENGINEERING / [pt] UM ESTUDO DAS TÉCNICAS DE OBTENÇÃO DE FORMA A PARTIR DE ESTÉREO E LUZ ESTRUTURADA PARA ENGENHARIA. GABRIEL TAVARES MALIZIA ALVES, 26 August 2005 (has links)
There has been a growing demand for the creation of computer models that represent real objects for engineering projects. A cheap and effective alternative is to use Computer Vision techniques based on cameras and projectors available in the personal-computer market. This work evaluates an active stereo optical system for capturing the geometric shape of objects using a pair of cameras and a single digital projector. The system builds on ideas from previous works, and this dissertation makes two contributions. The first is a more robust technique for detecting salient points (corners) in the camera calibration patterns. The second is a new method for cylinder fitting, aimed at applying the studied system to the inspection of industrial piping installations. The conclusions evaluate the robustness and precision of the proposed system as a measurement instrument for Engineering.
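The cylinder-fitting contribution could be prototyped, for example, as a nonlinear least-squares fit of point-to-axis distances over a point cloud; the parameterization, initialization, and use of SciPy below are illustrative assumptions, not the dissertation's actual method.

    import numpy as np
    from scipy.optimize import least_squares

    def cylinder_residuals(params, pts):
        """params = [px, py, pz, ax, ay, az, r]: a point on the axis, the axis
        direction, and the radius. Residual = distance from each point to the
        axis, minus the radius."""
        p0, a, r = params[0:3], params[3:6], params[6]
        a = a / np.linalg.norm(a)
        d = pts - p0
        dist = np.linalg.norm(np.cross(d, a), axis=1)   # point-to-axis distance
        return dist - r

    def fit_cylinder(pts, p0, axis0, r0):
        """Nonlinear least-squares cylinder fit from an initial guess."""
        x0 = np.hstack([p0, axis0, [r0]])
        sol = least_squares(cylinder_residuals, x0, args=(pts,))
        c = sol.x
        return c[0:3], c[3:6] / np.linalg.norm(c[3:6]), c[6]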
|
44 |
Multi-Scale, Multi-Modal, High-Speed 3D Shape Measurement. Yatong An (6587408), 10 June 2019 (has links)
As robots expand into more and more scenarios, practical problems from those scenarios are challenging current 3D measurement techniques. For instance, infrastructure inspection robots need large-scale, high-spatial-resolution 3D data for crack and defect detection; medical robots need 3D data well registered with temperature information; and warehouse robots need multi-resolution 3D shape measurement to adapt to different tasks. In the past decades, much progress has been made in improving the performance of 3D shape measurement methods. Yet measurement scale, measurement speed, and the fusion of multiple sensing modalities remain vital aspects to be improved before robots can have a more complete perception of the real scene. This dissertation focuses on the digital fringe projection technique, which can usually achieve high-accuracy 3D data, and expands its capability toward complicated robot applications by 1) extending the measurement scale, 2) registering with multi-modal information, and 3) improving the measurement speed.

The measurement scale of the digital fringe projection technique has mainly been limited to a small scale, from several centimeters to tens of centimeters, due to the lack of a flexible and convenient calibration method for large-scale systems. In this study, we first developed such a calibration method and then extended the measurement scale of the digital fringe projection technique to several meters, as needed in many large-scale robot applications, including infrastructure inspection. Our method includes two steps: 1) accurately calibrate the intrinsics (i.e., focal lengths and principal points) with a small calibration board at close range, where both the camera and the projector are out of focus; and 2) calibrate the extrinsic parameters (translation and rotation) from camera to projector with the assistance of a low-accuracy, large-scale 3D sensor (e.g., Microsoft Kinect). The two-step strategy avoids fabricating a large, accurate calibration target, which is usually expensive and inconvenient for pose adjustments. With a small calibration board and a low-cost 3D sensor, we calibrated a large-scale 3D shape measurement system with a FOV of 1120 x 1900 x 1000 mm^3 and verified the correctness of our method.

Multi-modal information is required in applications such as medical robots, which may need both to capture the 3D geometry of objects and to monitor their temperature. To allow robots to have a more complete perception of the scene, we further developed a hardware system that achieves real-time 3D geometry and temperature measurement. Specifically, we proposed a holistic approach to calibrate both a structured light system and a thermal camera under exactly the same world coordinate system, even though these two sensors do not share the same wavelength, and a computational framework to determine the sub-pixel corresponding temperature for each 3D point, as well as to discard occluded points. Since the thermal 2D imaging and visible 3D imaging systems do not share the same spectrum of light, they can perform sensing simultaneously in real time. The resulting system achieved real-time 3D geometry and temperature measurement at 26 Hz with 768 x 960 points per frame.

In dynamic applications, where the measured object or the 3D sensor may be in motion, measurement speed becomes an important consideration. Previous methods projected additional fringe patterns for absolute phase unwrapping, which slowed the measurement down. To achieve higher speed, we developed a method to unwrap the phase pixel by pixel using solely the geometric constraints of the structured light system, without requiring additional image acquisition. Specifically, an artificial absolute phase map $\Phi_{min}$, at a given virtual depth plane $z = z_{min}$, is created from the geometric constraints of the calibrated structured light system, such that the wrapped phase can be unwrapped pixel by pixel by referring to $\Phi_{min}$. Since $\Phi_{min}$ is defined in the projector space, the unwrapped phase obtained from this method is an absolute phase for each pixel. Experimental results demonstrate the success of this absolute-phase unwrapping method. However, the geometric-constraint-based phase unwrapping using a virtual plane is restricted to a certain depth range, which causes difficulties in two measurement scenarios: measuring an object with large depth variation, and measuring a dynamic object that could move beyond the depth range. To address this limitation, we further propose taking advantage of an additional 3D scanner, whose external information extends the maximum measurement range of the pixel-wise phase unwrapping method. The additional 3D scanner provides a more detailed reference phase map $\Phi_{ref}$ that assists absolute phase unwrapping without the depth constraint. Experiments demonstrate that our method, assisted by an additional 3D scanner, works over a large depth range, and that the maximum speed of the low-cost 3D scanner is not necessarily an upper bound on the speed of the structured light system. Assisted by a Kinect V2, our structured light system achieved 53 Hz at a resolution of 1600 x 1000 pixels when measuring dynamic objects moving across a large depth range.

In summary, we significantly advanced 3D shape measurement technology, giving robots a more complete perception of the scene by enhancing the digital fringe projection technique in measurement scale (space domain), speed (time domain), and fusion with other sensing modalities. This research can potentially enable robots to better understand the scene for more complicated tasks, and broadly impact many other academic studies and industrial practices.
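The pixel-wise unwrapping against $\Phi_{min}$ reduces to choosing, per pixel, the integer fringe order that lifts the wrapped phase to just above the minimum phase map. A minimal sketch, assuming $\Phi_{min}$ has already been generated from the calibrated system at $z = z_{min}$ (array names are illustrative):

    import numpy as np

    def unwrap_with_min_phase(phi_wrapped, phi_min):
        """Pixel-wise absolute phase unwrapping against a minimum phase map.
        phi_wrapped: wrapped phase in (-pi, pi]; phi_min: artificial absolute
        phase map created at the virtual plane z = z_min. The fringe order k
        is the smallest integer such that phi_wrapped + 2*pi*k >= phi_min."""
        k = np.ceil((phi_min - phi_wrapped) / (2.0 * np.pi))
        return phi_wrapped + 2.0 * np.pi * k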
|
45 |
Interactive Holographic Cinema. Portales, Christopher, May 2012 (has links)
In mainstream media and entertainment, holography is often misrepresented as single-perspective, non-stereoscopic imagery that merely suggests three-dimensionality. Traditional holographic artists, by contrast, use a laser setup to record and reconstruct wavefronts that describe a scene with multi-perspective, natural-parallax ("auto-stereoscopic") vision. Although these approaches are mutually exclusive in practice, they share the goal of staging three-dimensional (3D) imagery for a window-like viewing experience. This thesis presents a non-waveform, digital-computer approach for recording, reconstructing, and experiencing holographic visualizations in a cinematic context. By recording 3D information from a scene with the structured light method, a custom computer program performs stereoscopic reconstruction in real time during presentation. Artists and computer users can then use a hardware device, such as the Microsoft Kinect, to explore the holographic cinematic form interactively.
|
46 |
Erfassungsplanung nach dem Optimierungsprinzip am Beispiel des Streifenprojektionsverfahrens [Acquisition planning based on the optimization principle, using fringe projection as an example]. Holtzhausen, Stefan, 08 September 2015 (has links) (PDF)
This thesis deals with the acquisition of surfaces by means of the fringe projection method. A computational model is developed that calculates and evaluates the region of the object surface captured by a single scan. With an optimal positioning of the individual scans, an object can be captured in a time-saving manner under given boundary conditions.
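In a very reduced form, the captured-region evaluation could be approximated by testing each surface sample for lying inside the sensor's working volume and facing the sensor; the sketch below makes that simplification (it ignores occlusion and uses invented parameter names) and is not the thesis's actual computation model.

    import numpy as np

    def coverage(points, normals, sensor_pos, sensor_dir,
                 near, far, half_fov_deg=25.0, max_incidence_deg=60.0):
        """Fraction of surface samples captured from one sensor pose.
        A sample counts as captured if it lies in the depth range [near, far],
        inside the cone of half-angle half_fov_deg around the unit vector
        sensor_dir, and its normal faces the sensor within max_incidence_deg.
        Occlusion between samples is ignored in this sketch."""
        v = points - sensor_pos
        dist = np.linalg.norm(v, axis=1)
        v_unit = v / dist[:, None]
        in_depth = (dist >= near) & (dist <= far)
        in_fov = v_unit @ sensor_dir >= np.cos(np.radians(half_fov_deg))
        facing = np.einsum('ij,ij->i', normals, -v_unit) >= np.cos(
            np.radians(max_incidence_deg))
        captured = in_depth & in_fov & facing
        return captured.mean(), captured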
|
47 |
Real-time 3-D Reconstruction by Means of Structured Light Illumination. Liu, Kai, 01 January 2010 (has links)
Structured light illumination (SLI) is the process of projecting a series of striped light patterns such that, when viewed at an angle, a digital camera can reconstruct a 3-D model of a target object's surface. Because it relies on a series of time-multiplexed patterns, SLI is not typically associated with video applications. To acquire 3-D video, a common SLI technique is to drive the projector/camera pair at very high frame rates so that any object motion is small over the pattern set. But at these high frame rates, the speed at which the incoming video can be processed becomes an issue, so much so that many video-based SLI systems record camera frames to memory and then apply off-line processing. In order to overcome this processing bottleneck and produce 3-D point clouds in real time, we present a lookup-table (LUT) based solution that, in our experiments using a 640 by 480 video stream, can generate intermediate phase data at 1063.8 frames per second and full 3-D coordinate point clouds at 228.3 frames per second. These rates are 25 and 10 times faster than those of previously reported studies. At the same time, a novel dual-frequency pattern is developed that combines a high-frequency sinusoid component with a unit-frequency sinusoid component, where the high-frequency component is used to generate robust phase information and the unit-frequency component is used to reduce phase unwrapping ambiguities. Finally, we developed a gamma model for SLI, which can correct the non-linear distortion caused by the optical devices. For three-step phase measuring profilometry (PMP), analysis of the root-mean-squared error of the corrected phase showed a 60x reduction in phase error when the gamma calibration is performed, versus a 33x reduction without calibration.
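The LUT idea can be illustrated for three-step phase measuring profilometry: with 8-bit camera images, the arctangent depends only on two small integer differences, so the wrapped phase can be pre-tabulated and looked up per pixel. The table layout below is an assumption for illustration, not the paper's implementation.

    import numpy as np

    # Pre-compute the arctangent table once. For 8-bit images the numerator
    # difference (i1 - i3) lies in [-255, 255] and the denominator term
    # (2*i2 - i1 - i3) lies in [-510, 510], so a 511 x 1021 table covers
    # every possible pixel.
    _D1 = np.arange(-255, 256)
    _D2 = np.arange(-510, 511)
    _PHASE_LUT = np.arctan2(np.sqrt(3.0) * _D1[:, None], _D2[None, :]).astype(np.float32)

    def pmp_phase_lut(i1, i2, i3):
        """Wrapped phase for three-step PMP via table lookup.
        i1, i2, i3: uint8 camera images of the three phase-shifted patterns."""
        d1 = i1.astype(np.int32) - i3.astype(np.int32)                             # -255..255
        d2 = 2 * i2.astype(np.int32) - i1.astype(np.int32) - i3.astype(np.int32)   # -510..510
        return _PHASE_LUT[d1 + 255, d2 + 510]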
|
49 |
Reconstrução tridimensional digital de objetos à curta distância por meio de luz estruturada [Digital three-dimensional reconstruction of objects at close range by means of structured light]. Reiss, Mário Luiz Lopes, January 2007 (has links)
This work presents the development and evaluation of a structured light 3D reconstruction system. The system, named Scan3DSL, is based on an off-the-shelf small-format digital camera and a pattern projector. The mathematical model for 3D reconstruction is based on the parametric equation of the projected light ray combined with the collinearity equations. A pattern codification strategy was developed to allow fully automatic recognition of the projected patterns. A calibration methodology enables the determination of the direction vector of each projected pattern and the coordinates of the perspective centre of the pattern projector; the calibration is carried out by acquiring several images of a flat calibration surface from different distances and orientations. A set of image processing algorithms was implemented to provide precise localization of the patterns and of features such as centers of mass and corners. To assess the accuracy and potential of the methodology, a prototype was built integrating a pattern projector and a single digital camera in one mount. Experiments with real data indicate that a surface model can be obtained in a total processing time of less than 10 seconds, with a depth accuracy of about 0.2 mm, demonstrating the potential of the system for a variety of applications.
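The ray-based reconstruction model can be illustrated by intersecting the camera ray given by the collinearity equations with the parametric line of the projected pattern; since noisy rays rarely meet exactly, the sketch below returns the midpoint of their common perpendicular. Variable names and the least-squares formulation are illustrative assumptions, not the exact model of the thesis.

    import numpy as np

    def intersect_rays(c_cam, d_cam, c_proj, d_proj):
        """Closest point between the camera ray (from the collinearity
        equations) and the projected light ray (parametric line of one
        pattern). c_*: ray origins (3,), d_*: unit direction vectors (3,).
        Returns the midpoint of the common perpendicular, a standard
        surrogate for the intersection under noise."""
        # Solve for s, t minimizing |(c_cam + s*d_cam) - (c_proj + t*d_proj)|
        A = np.column_stack([d_cam, -d_proj])          # 3 x 2
        b = c_proj - c_cam
        (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
        p_cam = c_cam + s * d_cam
        p_proj = c_proj + t * d_proj
        return 0.5 * (p_cam + p_proj)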
|