1. The Application of the Depth-from-Focus Techniques
Pan, Jia-Wei, 12 July 2000
This thesis presents three topics, each with its own application. The first is the implementation of a PC-based vision inspection machine. The second develops an advanced auto-focusing technique. The third concerns the implementation of a depth-from-focus technique.
2. Absolute Scale Estimation Using Passive Monofocal Vision and its Application to 3D Measurement of Neoplasias in Colonoscopy
Chadebecq, François, 4 November 2015
Vision-based metrology devices generally embed stereoscopic sensors or active measurement systems. Most passive 3D reconstruction techniques (Structure-from-Motion, Shape-from-Shading) adapted to monocular vision suffer from scale ambiguity: because image acquisition implies the loss of depth information, the relationship between the depth of a scene and the size of an imaged object is ambiguous.

This study deals with the estimation of the absolute scale of a scene using passive monofocal vision, i.e., a monocular system whose optical parameters are fixed. Such optical systems are notably embedded within the endoscopes used in colonoscopy. This minimally invasive procedure allows endoscopists to explore the colon cavity and remove neoplasias (abnormal growths of tissue). Neoplasia size is an essential diagnostic criterion for estimating the rate of malignancy, but it is difficult to assess, and erroneous visual estimates lead to inappropriate surveillance intervals. The need for a system that estimates the size of colonic lesions is the core motivation for this study.

In the first part of this manuscript, we review state-of-the-art vision-based metrology devices to position our study in context. We then introduce monofocal optical systems and the image-formation model associated with them, which underpins the work carried out in this thesis. The second part presents our main contribution. We first review in detail state-of-the-art DfD (Depth-from-Defocus) and DfF (Depth-from-Focus) approaches: passive computer vision techniques that, under certain camera-control constraints, can resolve scale ambiguity. We then define the Infocus-Breakpoint (IB), the lower limit of the optical system's depth of field, which allows scale to be resolved from a regular video of the optics approaching a region of interest whose dimensions are to be estimated. Because the optical system is monofocal, this unique point corresponds to a reference depth that can be calibrated. Our system relies on two novel modules: Blur-Estimating Tracking (BET), which simultaneously tracks an area of interest and estimates its optical blur across the video, and Blur-Model Fitting (BMF), which robustly extracts the IB by fitting an optical blur model to the blur measurements produced by BET. An evaluation of the system applied to estimating the size of colonic lesions demonstrates its feasibility under the constraints of colonoscopy examination.

The last part of the manuscript is devoted to a prospective extension of our approach by a generative method. We present, as a preliminary theoretical study, a new NRSfM (Non-Rigid Structure-from-Motion) method allowing the scaled Euclidean 3D reconstruction of deformable surfaces, based on the simultaneous estimation of dense depth maps corresponding to a set of deformations and of the fully in-focus color map of the flattened surface. We first review state-of-the-art methods for 3D reconstruction of deformable surfaces, then introduce our generative model together with an alternation method for inferring it.
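A minimal sketch of the BMF step, under simplifying assumptions: the per-frame blur measurements from a tracked region are modeled as a flat in-focus plateau followed by a linear blur increase, and the breakpoint of that hinge is taken as the IB. The hinge model and the robust loss are illustrative choices for this example, not the thesis's exact optical blur model.

```python
# Sketch of Blur-Model Fitting (BMF): given per-frame blur measurements produced
# by a tracker (the BET module in the thesis), locate the Infocus-Breakpoint --
# the frame where the tracked region leaves the depth of field and blur starts
# to grow. The hinge model below is a simplifying assumption.
import numpy as np
from scipy.optimize import least_squares

def hinge_model(params, t):
    """Flat in-focus plateau followed by a linear blur increase after the IB."""
    t_ib, plateau, slope = params
    return plateau + slope * np.maximum(t - t_ib, 0.0)

def fit_infocus_breakpoint(blur):
    """Fit the hinge model to a 1-D array of per-frame blur measurements.

    Returns the estimated breakpoint index (float). A robust loss keeps
    outlier frames (motion blur, specular highlights) from biasing the fit.
    """
    t = np.arange(len(blur), dtype=float)
    x0 = [len(blur) / 2.0, float(np.min(blur)), 0.1]  # rough initial guess
    res = least_squares(lambda p: hinge_model(p, t) - blur, x0, loss="soft_l1")
    return res.x[0]

# Toy usage: blur stays flat while the region is in focus, then grows.
frames = np.arange(100)
blur = 1.0 + 0.05 * np.maximum(frames - 60, 0) + 0.02 * np.random.randn(100)
print(f"estimated IB near frame {fit_infocus_breakpoint(blur):.1f}")  # ~60
```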
3. 3-D Scene Reconstruction for Passive Ranging Using Depth from Defocus and Deep Learning
Emerson, David R., Indiana University-Purdue University Indianapolis (IUPUI), 08 1900
Depth estimation is becoming increasingly important in computer vision. Autonomous systems must gauge their surroundings in order to avoid obstacles and prevent damage to themselves, other systems, or people. Depth measurement/estimation systems that use multiple cameras and multiple views can be expensive and extremely complex, and as autonomous systems decrease in size and available power, the supporting sensors required to estimate depth must also shrink in size and power consumption.
This research concentrates on a single passive method known as Depth from Defocus (DfD), which uses an in-focus and an out-of-focus image to infer the depth of objects in a scene. The major contribution of this research is a new Deep Learning (DL) architecture, the DfD-Net, that processes the in-focus and out-of-focus images to produce a depth map for the scene, improving both speed and performance over a range of lighting conditions. Compared to the previous state-of-the-art multi-label graph-cuts algorithm applied to the synthetically blurred dataset, the DfD-Net produced a 34.30% improvement in the average Normalized Root Mean Square Error (NRMSE) and a 76.69% improvement in the average Normalized Mean Absolute Error (NMAE). Only the Structural Similarity Index (SSIM) showed a small average decrease, of 2.68%, relative to the graph-cuts algorithm. This slight reduction results from the SSIM metric penalizing images that appear noisy: in some instances the DfD-Net output is mottled, which the SSIM metric interprets as noise.
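For reference, a sketch of the three metrics as they are commonly computed for depth maps; the normalization by the ground-truth depth range is an assumption, since the convention is not spelled out here, and SSIM is taken from scikit-image.

```python
# Evaluation metrics for predicted vs. ground-truth depth maps.
# Normalizing by the ground-truth range is an assumed convention.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nrmse(pred, gt):
    """Root-mean-square error normalized by the ground-truth depth range."""
    return np.sqrt(np.mean((pred - gt) ** 2)) / (gt.max() - gt.min())

def nmae(pred, gt):
    """Mean absolute error normalized by the ground-truth depth range."""
    return np.mean(np.abs(pred - gt)) / (gt.max() - gt.min())

# Toy usage with a noisy prediction of a random ground-truth map.
gt = np.random.rand(128, 128)
pred = gt + 0.05 * np.random.randn(128, 128)
print(nrmse(pred, gt), nmae(pred, gt),
      ssim(gt, pred, data_range=float(pred.max() - pred.min())))
```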
This research introduces two methods of deep-learning architecture optimization. The first employs a variant of the Particle Swarm Optimization (PSO) algorithm to improve the performance of the DfD-Net architecture. The PSO algorithm searched over the number of convolutional filters, the filter sizes, the activation layers used, the use of batch normalization between filters, and the size of the input image used during training, and found an architecture whose average NRMSE was approximately 6.25% better than the baseline DfD-Net and whose average NMAE was 5.25% better. Only the SSIM metric did not improve, dropping by 0.26% relative to the baseline DfD-Net average SSIM value.
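A bare-bones sketch of this kind of search: standard global-best PSO over a continuous relaxation of the hyperparameters. The real objective would train a DfD-Net variant and return its validation NRMSE; the cheap stand-in objective, bounds, and coefficients below are assumptions made only to keep the example runnable.

```python
# Global-best PSO over a 2-D hyperparameter space (e.g. filter count, filter size).
import numpy as np

def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))        # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest = x.copy()                                       # per-particle bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)]                          # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                         # keep inside bounds
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy stand-in for "validation NRMSE as a function of (n_filters, filter_size)".
toy = lambda p: (p[0] - 48) ** 2 / 1e3 + (p[1] - 5) ** 2 / 10
best, best_f = pso(toy, np.array([[8.0, 128.0], [3.0, 9.0]]))
print(best, best_f)   # converges near (48, 5)
```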
The second method uses a Self-Organizing Map (SOM) clustering method to reduce the number of convolutional filters in the DfD-Net, reducing the overall run time of the architecture while retaining the network performance exhibited prior to the reduction. This produces a reduced DfD-Net architecture whose run time decreases by between 14.91% and 44.85%, depending on the hardware running the network. The final reduced DfD-Net had an average NRMSE approximately 3.4% lower than that of the baseline, unaltered DfD-Net, while its NMAE and SSIM results were 0.65% and 0.13% below the baseline results, respectively. This illustrates that reducing architecture complexity does not necessarily entail a corresponding reduction in performance.
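The pruning idea can be sketched as follows: flatten each convolutional filter into a vector, cluster the vectors with a small self-organizing map, and keep one representative filter per map node. The 1-D SOM below is a minimal implementation written for this example, not the thesis's code.

```python
# SOM-based filter reduction: cluster flattened filters, keep one per node.
import numpy as np

def train_som(data, n_nodes=8, iters=500, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    nodes = data[rng.choice(len(data), n_nodes, replace=False)].copy()
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))   # best-matching unit
        lr = lr0 * np.exp(-t / iters)                        # decaying learn rate
        sigma = sigma0 * np.exp(-t / iters)                  # shrinking radius
        dist = np.abs(np.arange(n_nodes) - bmu)              # 1-D grid distance
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))          # neighborhood weight
        nodes += lr * h[:, None] * (x - nodes)
    return nodes

# Toy stand-in: 64 conv filters of shape 3x3x16, flattened to vectors.
filters = np.random.randn(64, 3 * 3 * 16)
nodes = train_som(filters)
# Keep the filter closest to each trained node as its representative.
keep = {int(np.argmin(np.linalg.norm(filters - n, axis=1))) for n in nodes}
print(f"retained {len(keep)} of {len(filters)} filters")
```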
Finally, this research introduces a new real-world dataset captured with a camera fitted with a voltage-controlled microfluidic lens for the visual data and a 2-D scanning LIDAR for the ground-truth data. The visual data consist of images captured at seven different exposure times, with 17 discrete voltage steps per exposure time. The objects in this dataset were arranged into four repeating scene patterns that reuse the same surfaces, located between 1.5 and 2.5 meters from the camera and LIDAR, so that any deep-learning algorithm tested would see the same texture at multiple depths and multiple blurs. The DfD-Net architecture was employed in two separate tests using this real-world dataset.
The first test synthetically blurred the real-world dataset and assessed the performance of the DfD-Net trained on the Middlebury dataset. For scenes between 1.5 and 2.2 meters from the camera, the Middlebury-trained DfD-Net produced average NRMSE, NMAE, and SSIM values that exceeded its results on the Middlebury test set. The second test trained and tested solely on the real-world dataset. Analysis of the camera and lens behavior led to an optimal lens-voltage step configuration of 141 and 129. With this configuration, training the DfD-Net resulted in an average NRMSE, NMAE, and SSIM of 0.0660, 0.0517, and 0.8028, with standard deviations of 0.0173, 0.0186, and 0.0641, respectively.
4. Using a Focus Measure to Automate the Location of Biological Tissue Surfaces in Brightfield Microscopy
Elozory, Daniel Toby, 01 January 2011
The study of microstructures in brightfield microscopy using unbiased stereology plays a large and growing role in bioscience research. Stereology enables objective, quantitative analysis of biological structures within a tissue sample. A first step in the stereology process is to calculate the thickness of a tissue sample by locating its top and bottom surfaces. The aim of this project is to fully automate this location process by using a relative optical focus measure as an indicator of the tissue surface boundary.
The current method for identifying the focus bounding planes requires a trained user to manually select the top and bottom of the tissue at each sample position examined. To automate finding the correct focal planes, i.e., the "just out of focus" planes at the top and bottom surfaces of the tissue sections, a novel approach was developed. Several gray-scale focusing functions were analyzed; but while the traditional emphasis of microscopy focus functions is to find global maxima on the focus curve, the aim here was to find the sharp "knees" of the curve. Starting with a low focus-measure value when the focal plane of the objective lens lies above the tissue sample, the objective focal plane is moved downward through the tissue. The ideal focus measure should increase sharply as the upper surface of the tissue passes into the depth of field of the objective lens. As the focal plane moves through the tissue, the focus measure rises and falls as objects within the tissue come in and out of focus. As the bottom tissue surface passes into the depth of field, the ideal focus measure should still reflect some level of focus, then drop precipitously as the surface passes out of the depth of field into the unfocused region below the tissue.
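A minimal sketch of this procedure: variance-of-Laplacian as the gray-scale focus measure over a z-stack, with the extrema of the focus curve's first difference standing in for the two knees. Both choices are simplifications of the thesis's method, made for illustration.

```python
# Locate tissue surfaces in a z-stack from the knees of the focus curve.
import numpy as np
from scipy.ndimage import laplace

def focus_measure(img):
    """Variance of the Laplacian: high for sharp slices, low for defocused ones."""
    return laplace(img.astype(float)).var()

def surface_planes(z_stack):
    """Return (top, bottom) slice indices estimated from the focus-curve knees."""
    curve = np.array([focus_measure(s) for s in z_stack])
    d = np.diff(curve)
    top = int(np.argmax(d)) + 1   # first slice after the steep rise (top surface)
    bottom = int(np.argmin(d))    # last slice before the steep fall (bottom surface)
    return top, bottom

# Toy z-stack: weak texture above and below a block of strongly textured "tissue".
rng = np.random.default_rng(1)
stack = [rng.normal(0, 0.05, (64, 64)) for _ in range(10)]   # above the tissue
stack += [rng.normal(0, 1.0, (64, 64)) for _ in range(20)]   # inside the tissue
stack += [rng.normal(0, 0.05, (64, 64)) for _ in range(10)]  # below the tissue
print(surface_planes(np.array(stack)))   # roughly (10, 29)
```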
5. Focus Techniques of Optical Measurement of 3D Features
Macháček, Jan, January 2021
This thesis deals with optical distance measurement and 3D scene measurement using focusing techniques, with a focus on confocal microscopy, depth from focus, and depth from defocus. The theoretical part covers different approaches to depth-map generation, as well as a micro-image defocusing technique for measuring the refractive index of transparent materials, and then describes camera calibration for focus-based techniques. The next part describes the experimental verification of the depth-from-focus and depth-from-defocus techniques: for the first, results of depth-map generation are shown; for the second, measured distances are compared with the true distances. Finally, the discussed techniques are compared and evaluated.
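As an illustration of the depth-from-focus idea verified here, a per-pixel sketch: for every pixel, pick the focus-stack slice where a local sharpness measure peaks, then map the slice index to a calibrated distance. The modified-Laplacian operator is one common choice of focus operator, assumed for this example rather than taken from the thesis.

```python
# Per-pixel depth from focus over a focus stack.
import numpy as np
from scipy.ndimage import uniform_filter

def sharpness(img, win=9):
    """Modified-Laplacian focus measure, averaged over a local window."""
    lap_x = np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    lap_y = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
    return uniform_filter(lap_x + lap_y, size=win)

def depth_from_focus(stack, z_positions):
    """stack: (n_slices, H, W) focus stack; z_positions: distance per slice."""
    scores = np.stack([sharpness(s.astype(float)) for s in stack])
    return np.asarray(z_positions)[np.argmax(scores, axis=0)]  # (H, W) depth map

# Toy usage with a random stack and evenly spaced focus distances (in mm).
stack = np.random.rand(15, 64, 64)
depth_map = depth_from_focus(stack, np.linspace(100.0, 240.0, 15))
print(depth_map.shape, depth_map.min(), depth_map.max())
```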