51 |
Desenvolvimento e análise de um digitalizador câmera-projetor de alta definição para captura de geometria e fotometria. Silva, Roger Correia Pinheiro. 26 August 2011.
Previous issue date: 2011-08-26 / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / A camera-projector system is capable of capturing three-dimensional geometric information
of objects and real-world environments. The capture of geometry in such a system
is based on the projection of structured light over an object by the projector, and the
capture of the modulated scene through the camera. With a calibrated system, the deformation
of the projected light caused by the object provides the information needed to
reconstruct its geometry through triangulation.
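In its simplest form, the triangulation described above reduces to intersecting a camera ray with the plane swept out by a projected stripe. A minimal sketch of that ray-plane intersection (the coordinate convention and all numbers are hypothetical, not taken from the thesis):

```python
import numpy as np

def triangulate(ray_dir, plane_n, plane_d):
    """Intersect a camera ray through the origin with the projector
    stripe plane n.x + d = 0; returns the 3D surface point."""
    t = -plane_d / (plane_n @ ray_dir)
    return t * ray_dir

# Hypothetical stripe plane z = 2 (n = (0, 0, 1), d = -2) and a camera
# ray toward the pixel direction (0.1, 0, 1).
point = triangulate(np.array([0.1, 0.0, 1.0]),
                    np.array([0.0, 0.0, 1.0]), -2.0)  # → (0.2, 0, 2)
```

In a calibrated system, the stripe plane is known from the projector calibration and the ray from the camera intrinsics, so each decoded stripe-pixel pair yields one 3D point.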
The present work describes the development of a high definition camera-projector system
(with resolutions up to 1920x1080 and 1280x720). The steps and processes that lead
to the reconstruction of geometry, such as camera-projector calibration, color calibration,
image processing and triangulation, are detailed. The developed scanner uses the
(b; s)-BCSL structured light coding, which employs the projection of a sequence of colored
vertical stripes on the scene. This coding scheme offers a flexible number of stripes for
projection: the higher the number of stripes, the more detailed the captured geometry.
One of the objectives of this work is to estimate the limit number of (b; s)-BCSL
stripes possible within the current resolutions of high definition video. This limit number
is the one that provides dense geometry reconstruction while at the same time having a low error level. To
evaluate the geometry reconstructed by the scanner for different numbers of stripes, we
propose a protocol for error measurement. The developed protocol uses planes as objects
to measure the quality of geometric reconstruction. From the point cloud generated by
the scanner, the plane equation is estimated by least squares. For a fixed
number of stripes, five independent scans of the plane are made: each scan leads to one
equation; the mean plane, estimated from the union of the five point clouds, is also
computed. A distance metric in the projective space is used to evaluate the precision and
the accuracy of each number of projected stripes.
In addition to the quantitative evaluation, the geometry of many objects is presented
for qualitative evaluation. The results show that the limit number of stripes for high
resolution video allows high density of points even on surfaces with high color variation.
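The plane fit at the heart of the evaluation protocol can be sketched as a total-least-squares fit via SVD (a minimal sketch on synthetic data; the thesis' exact estimator and its projective-space distance metric are not reproduced here):

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through a point cloud: returns a unit
    normal n and offset d such that n.p + d ≈ 0 for all points p."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                      # direction of least variance
    return n, -float(n @ c)

# Noise-free synthetic "scan" of the plane z = 3.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(100, 2))
pts = np.column_stack([xy, np.full(100, 3.0)])
n, d = fit_plane(pts)
rms = float(np.sqrt(np.mean((pts @ n + d) ** 2)))
```

On real scans, the RMS residual of each of the five independent fits quantifies precision, while the deviation of each fitted plane from the mean plane quantifies accuracy.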
|
52 |
Bringing 3D and quantitative data in flexible endoscopy. Mertens, Benjamin. 10 July 2014.
In the near future, computational power will be widely available in endoscopy rooms. It will enable the augmented reality already implemented in some surgical procedures. A preliminary step toward this is the development of a 3D-reconstruction endoscope. In addition, endoscopists suffer from a lack of quantitative data for evaluating dimensions and distances, notably for polyp size measurement.
In this thesis, a contribution toward a more robust endoscopic 3D-reconstruction device is proposed. A structured-light technique is implemented using a diffractive optical element. Two patterns are developed and compared: the first is based on the spatial-neighbourhood coding strategy, the second on the direct-coding strategy. The latter is implemented on a diffractive optical element and used in an endoscopic 3D-reconstruction device. It is tested in several conditions and shows excellent quantitative results, but its robustness against bad visual conditions (occlusions, liquids, specular reflections, ...) must be improved.
Based on this technology, an endoscopic ruler is developed, dedicated to answering endoscopists' lack of a measurement system. The pattern is simplified to a single line to be more robust. Quantitative data show sub-pixel accuracy, and the device is robust in all tested cases. The system has been validated with a gastroenterologist for measuring polyps. Compared to the literature in this field, this device performs better and is more accurate. / Doctorat en Sciences de l'ingénieur
|
53 |
Simultaneous real-time object recognition and pose estimation for artificial systems operating in dynamic environments. Van Wyk, Frans Pieter. January 2013.
Recent advances in technology have increased awareness of the necessity for automated systems in
people’s everyday lives. Artificial systems are more frequently being introduced into environments
previously thought to be too perilous for humans to operate in. Some robots can be used to extract
potentially hazardous materials from sites inaccessible to humans, while others are being developed
to aid humans with laborious tasks.
A crucial aspect of all artificial systems is the manner in which they interact with their immediate surroundings.
Developing such a deceptively simple aspect has proven to be significantly challenging, as
it not only entails the methods through which the system perceives its environment, but also its ability
to perform critical tasks. These undertakings often involve the coordination of numerous subsystems,
each performing its own complex duty. To complicate matters further, it is nowadays becoming
increasingly important for these artificial systems to be able to perform their tasks in real-time.
The task of object recognition is typically described as the process of retrieving the object in a database
that is most similar to an unknown, or query, object. Pose estimation, on the other hand, involves
estimating the position and orientation of an object in three-dimensional space, as seen from an observer’s
viewpoint. These two tasks are regarded as vital to many computer vision techniques and
regularly serve as input to more complex perception algorithms.
An approach is presented which regards the object recognition and pose estimation procedures as
mutually dependent. The core idea is that dissimilar objects might appear similar when observed
from certain viewpoints. A feature-based conceptualisation, which makes use of a database, is implemented
and used to perform simultaneous object recognition and pose estimation. The design
incorporates data compression techniques, originally suggested by the image-processing community,
to facilitate fast processing of large databases.
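One common way such a compressed feature database works is PCA compression followed by nearest-neighbour lookup. The sketch below is a generic illustration under that assumption, not the thesis' actual design; all dimensions and data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical database: 50 feature vectors, one per object/view pair.
db = rng.normal(size=(50, 64))

# PCA compression to 8 dimensions, in the spirit of image-processing
# practice: project onto the top principal directions of the database.
mean = db.mean(axis=0)
_, _, vt = np.linalg.svd(db - mean, full_matrices=False)
basis = vt[:8]                       # top principal directions
db_c = (db - mean) @ basis.T         # compressed database

def recognise(query):
    """Index of the most similar database entry in compressed space."""
    q = (query - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(db_c - q, axis=1)))

# A slightly noisy observation of entry 17 should match entry 17.
idx = recognise(db[17] + 0.01 * rng.normal(size=64))
```

Because distances are computed in the 8-dimensional compressed space rather than the 64-dimensional original, large databases can be searched much faster, at the cost of some discriminative power.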
System performance is quantified primarily on object recognition, pose estimation and execution time
characteristics. These aspects are investigated under ideal conditions by exploiting three-dimensional
models of relevant objects. The performance of the system is also analysed for practical scenarios
by acquiring input data from a structured light implementation, which resembles that obtained from
many commercial range scanners.
Practical experiments indicate that the system was capable of performing simultaneous object recognition
and pose estimation in approximately 230 ms once a novel object has been sensed. An average
object recognition accuracy of approximately 73% was achieved. The pose estimation results were
reasonable but prompted further research. The results are comparable to what has been achieved using
other suggested approaches such as Viewpoint Feature Histograms and Spin Images. / Dissertation (MEng)--University of Pretoria, 2013. / Electrical, Electronic and Computer Engineering
|
54 |
Geometric And Radiometric Estimation In A Structured-Light 3D Scanner. Dhillon, Daljit Singh J S. 05 1900.
Measuring 3D surface geometry with precision and accuracy is an important part of many engineering and scientific tasks. 3D Scanning techniques measure surface geometry by estimating the locations of sampled surface points. In recent years, Structured-Light 3D scanners have gained significant popularity owing to their ability to produce highly accurate scans in real-time at a low cost. In this thesis we describe an approach for Structured-Light 3D scanning using a digital camera and a digital projector. We utilise the projective geometric relationships between the projector and the camera to carry out both an implicit calibration of the system and to solve for 3D structure. Our approach to geometric calibration is flexible, reliable and amenable to robust estimation. In addition, we model and account for the radiometric non-linearities in the projector such as gamma distortion. Finally, we apply a post-processing step to efficiently smooth out high-frequency surface noise while retaining the structural details. Consequently, the proposed work reduces the computational load and set-up time of a Structured-Light 3D scanner; thereby speeding up the whole scanning process while retaining the ability to generate highly accurate results. We demonstrate the accuracy of our scanning results on real-world objects of varying degrees of surface complexity.
Introduction
The projective geometry for a pair of pin-hole viewing devices is completely defined by their intrinsic calibration and their relative motion, or extrinsic calibration, in the form of matrices. For a Euclidean reconstruction, the geometric elements represented by the calibration matrices must be parameterised and estimated in some form. The use of a projector as the ‘second viewing’ device has led to numerous approaches to model and estimate its intrinsic parameters and relative motion with respect to the camera's 3D co-ordinate system. The proposed thesis work draws on the benefits of projective-geometry constructs such as homography and the invariance of cross-ratios to simplify the system calibration and 3D estimation processes through an implicit modeling of the projector's intrinsic parameters and its relative motion. Though linear modeling of the projective geometry between a camera-projector view-pair captures the most essential aspects of the underlying geometry, it does not accommodate system non-linearities due to radiometric distortions of a projector device. We propose an approach that uses parametric splines to model the systematic errors introduced by radiometric non-linearities and thus correct for them. For 3D surfaces reconstructed as point clouds, noise manifests itself as high-frequency variations in the resulting mesh. Various pre- and/or post-processing techniques are proposed in the literature to model and minimize the effects of noise. We use simple bilateral filtering of the depth map for the reconstructed surface to smooth the surface while retaining its structural details.
Modeling Projective Relations
In our approach for calibrating the projective-geometric structure of a projector-camera view-pair, the frame of reference for measurements is attached to the camera. The camera is calibrated using a commonly used method. To calibrate the scanner system, one common approach is to project sinusoidal patterns onto the reference planes to generate reference phase maps. By relating the phase information between the projector and image pixels, a dense mapping is obtained. However, this is an over-parameterisation of the calibration information. Since the reference object is a plane, we can use the projective relationships induced by a plane to implicitly calibrate the projector geometry. For the estimation of the three-dimensional structure of the imaged object, we utilise the invariance of cross-ratios along with the calibration information of two reference planes. Our formulation is also extensible to utilise more than two reference planes to compute more than one estimate of the location of an unknown surface point. Such estimates are amenable to statistical analysis, which allows us to derive both the shape of an object and associate reliability scores to each estimated point location.
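The projective invariance of the cross-ratio that this formulation relies on can be checked numerically. The sketch below uses an arbitrary 1D projectivity, not the calibration geometry of the thesis:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points (as scalars)."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projectivity(x, h):
    """A 1D projective map x -> (h0*x + h1) / (h2*x + h3)."""
    return (h[0] * x + h[1]) / (h[2] * x + h[3])

pts = [0.0, 1.0, 2.0, 5.0]
h = (2.0, 1.0, 0.5, 3.0)                  # an arbitrary projectivity
mapped = [projectivity(x, h) for x in pts]
cr_before = cross_ratio(*pts)             # → 1.6
cr_after = cross_ratio(*mapped)           # equal, up to rounding
```

Because projection through a pin-hole device is exactly such a projective map, a cross-ratio measured among image points equals the cross-ratio of the corresponding scene points, which is what lets known reference planes anchor the depth of an unknown surface point.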
Radiometric Correction
Structured-light based 3D scanners commonly employ phase-shifted sinusoidal patterns to solve for the correspondence problem. For scanners using projective geometry between a camera and a projector, the projector's radiometric non-linearities introduce systematic errors in establishing correspondences. Such errors manifest as visual artifacts which become pronounced when fewer phase-shifted sinusoidal patterns are used. While these artifacts can be avoided by using a large number of phase-shifts, doing so also increases the acquisition time. We propose to model and rectify such systematic errors using parametric representations. Consequently, while some existing methods retain the complete reference phase maps to account for such distortions, our approach describes the deviations using a few model parameters. The proposed approach can be used to reduce the number of phase-shifted sinusoidal patterns required for codification while suppressing systematic artifacts. Additionally, our method avoids the 1D search steps that are needed when a complete reference phase map is used, thus reducing the computational load for 3D estimation. The effectiveness of our method is demonstrated with reconstruction of some geometric surfaces and a cultural figurine.
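For context, the standard N-step phase-shifting estimator recovers the wrapped phase at a pixel from the shifted sinusoidal intensities; gamma distortion of those intensities biases exactly this estimate, which is what the spline-based correction addresses. A generic sketch (not the thesis' corrected formulation; all values hypothetical):

```python
import numpy as np

def wrapped_phase(intensities, shifts):
    """Least-squares phase from N phase-shifted sinusoidal samples
    I_k = A + B*cos(phi + delta_k) taken at one pixel."""
    s = sum(I * np.sin(d) for I, d in zip(intensities, shifts))
    c = sum(I * np.cos(d) for I, d in zip(intensities, shifts))
    return float(np.arctan2(-s, c))

# Hypothetical pixel with true phase 1.0 rad, four shifts of pi/2.
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
true_phi = 1.0
imgs = [0.5 + 0.4 * np.cos(true_phi + d) for d in shifts]
phi = wrapped_phase(imgs, shifts)
```

With an ideal projector the estimator is exact; a non-linear (e.g. gamma-distorted) projector response injects harmonics into I_k, and the resulting phase error is the systematic artifact that grows as fewer shifts are used.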
Filtering Noise
For a structured-light 3D scanner, various sources of noise in the environment and the devices lead to inaccuracies in estimating the codewords (phase map) for an unknown surface, during reconstruction. We examine the effects of such noise factors on our proposed methods for geometric and radiometric estimation. We present a quantitative evaluation for our proposed method by scanning the objects of known geometric properties or measures and then computing the deviations from the expected results. In addition, we evaluate the errors introduced due to inaccuracies in system calibration by computing the variance statistics from multiple estimates for the reconstructed 3D points, where each estimate is computed using a different pair of reference planes. Finally, we discuss the efficacy of certain filtering techniques in reducing the high-frequency surface noise when applied to: (a) the images of the unknown surface at a pre-processing stage, or (b) the respective phase (or depth) map at a post-processing stage.
Conclusion
In this thesis, we motivate the need for a procedurally simple and computationally less demanding approach for projector calibration. We present a method that uses homographies induced by a pair of reference planes to calibrate a structured-light scanner. By using the projective invariance of the cross-ratio, we solve for the 3D geometry of a scanned surface. We demonstrate that 3D geometric information can be derived using our approach with accuracy on the order of 0.1 mm. The proposed method reduces the image acquisition time for calibration and the computational needs for 3D estimation. We demonstrate an approach to effectively model radiometric distortions for the projector using cubic splines. Our approach is shown to give significant improvement over the use of complete reference phase maps, and its performance is comparable to that of a state-of-the-art method, both quantitatively and qualitatively. In contrast with that method, the proposed method is computationally less expensive, procedurally simpler and exhibits consistent performance even at relatively high levels of noise in phase estimation. Finally, we use simple bilateral filtering on the depth map for the region of interest. Bilateral filtering provides the best trade-off between surface smoothing and the preservation of structural details. Our filtering approach avoids computationally expensive surface-normal estimation algorithms completely while improving surface fidelity.
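The bilateral filtering used in the post-processing step can be sketched in a few lines. This is a naive O(N·r²) illustration with assumed parameter values, not the thesis' implementation:

```python
import numpy as np

def bilateral_depth(depth, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Edge-preserving bilateral filter on a 2D depth map: each pixel is
    averaged with neighbours weighted by both spatial distance and
    depth similarity, so step edges are not blurred."""
    h, w = depth.shape
    out = np.empty_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(depth, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(win - depth[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

# A depth step with small noise: filtering flattens each side while
# leaving the step itself sharp.
rng = np.random.default_rng(2)
step = np.where(np.arange(20) < 10, 1.0, 2.0)
noisy = np.tile(step, (20, 1)) + 0.01 * rng.normal(size=(20, 20))
smoothed = bilateral_depth(noisy)
```

The range weight is what distinguishes this from a plain Gaussian blur: neighbours across the depth step receive near-zero weight, which is why structural details survive while high-frequency noise is suppressed.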
|
55 |
Projekce dat do scény / Projector camera cooperation. Walter, Viktor. January 2016.
The focus of this thesis is the cooperation of cameras and projectors in projection of data into a scene. It describes the means and theory necessary to achieve such cooperation, and suggests tasks for demonstration. A part of this project is also a program capable of using a camera and a projector to obtain necessary parameters of these devices. The program can demonstrate the quality of this calibration by projecting a pattern onto an object according to its current pose, as well as reconstruct the shape of an object with structured light. The thesis also describes some challenges and observations from development and testing of the program.
|
57 |
Reverse Engineering med hjälp av 3D-skanning / Reverse Engineering using 3D-scanning. Wu, Christy. January 2021.
In the field of mechanical engineering, there is an increasing interest in Reverse Engineering using 3D-scanning. The technology is based on creating Computer-Aided-Design (CAD) models of real objects. The present project was carried out at the Department of Applied Physics and Electronics at Umeå University in order to evaluate the performance of Reverse Engineering of objects that are challenging to draw directly in CAD programs. Four different physical objects were selected for analysis: a bolt, a hex socket, a propeller and a worm wheel; the latter provided by the company Rototilt Group AB. A structured light 3D-scanner with a specified accuracy of 0,04 mm was used to image the objects.
The 3D images were then post-processed and transferred to CAD software to create the CAD drawings. Finally, the CAD-models were printed with a 3D printer and a tolerance analysis with a limit of 0,2 mm was performed to compare the dimensions of the original objects, the different digital models and the printed objects. The results show that Reverse Engineering (with some limitations) is a good method for objects that are difficult to model in CAD. The technique is well-suited to reconstruct physical objects into CAD-models quickly and with high accuracy.
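The tolerance analysis amounts to checking that each measured dimension deviates from its nominal CAD value by at most the stated limit. A sketch with entirely hypothetical dimensions (the project's measured values are not reproduced here):

```python
# Hypothetical tolerance check for one scanned part: nominal CAD
# dimensions vs. those measured on the 3D-printed copy, in mm.
TOL = 0.2
nominal = {"head_width": 17.0, "shaft_dia": 10.0, "length": 50.0}
printed = {"head_width": 17.08, "shaft_dia": 9.95, "length": 50.12}

# True where the printed part is within tolerance of the CAD model.
within_tol = {k: abs(printed[k] - nominal[k]) <= TOL for k in nominal}
```

The same comparison is repeated between the original object, the digital models, and the printed copy to localise where in the pipeline the error is introduced.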
|
58 |
Submillimeter 3D surface reconstruction of concrete floors. Hagström, Björn; Wallström, Hampus. January 2022.
During the creation of any concrete floor, the concrete needs to be ground down from its very rough newly poured form to a more usable floor surface. Concrete floor grinding is special in that the work area is often immensely large while the height differences on the surface are incredibly small; in fact, the largest local difference on the surface from a peak to a valley during the grinding process is submillimeter and goes down to the micrometer scale. Today's methods for measuring concrete surfaces are very few, and all output one-dimensional profiles of the surface in very time-consuming processes, which makes them unsuitable for real-time analysis of the surfaces during the grinding process. Because of this, the effectiveness of the work depends on the experience and intuition of the operator of the grinding machine, as they have to decide when to move on to the next step in the grinding process. It is therefore desirable to create a better method for concrete surface measurement that can measure large areas in a short period of time. In this project, a structured light method using sinusoidal phase shifting is implemented and evaluated with an easily movable setup that can measure the height of a concrete surface over an area. The method works by encoding the surface with a phase using a projector and analysing how the phase encoding warps when imaged from an angle. By triangulation, this can be turned into a height map of the measured area. The end results show that the method is promising for this application and can detect the submillimeter differences. However, more suitable hardware and a more reliable calibration procedure are required to move this prototype towards a more practical measuring device.
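Sinusoidal phase shifting yields a phase known only modulo 2π, so producing a height map requires phase unwrapping plus a calibrated phase-to-height factor. A one-dimensional sketch with assumed values (not the project's calibration):

```python
import numpy as np

# Wrapped-phase profile of a gently sloping surface (hypothetical):
# the true phase ramps past 2*pi, so the measured phase wraps.
true_phase = np.linspace(0.0, 12.0, 200)
wrapped = np.angle(np.exp(1j * true_phase))    # values in (-pi, pi]

unwrapped = np.unwrap(wrapped)                 # remove the 2*pi jumps

# An assumed, calibrated phase-to-height factor turns the unwrapped
# phase into a height profile in mm; the value is purely illustrative.
mm_per_rad = 0.05
height = mm_per_rad * unwrapped
```

Simple unwrapping like this assumes the phase changes by less than π between neighbouring samples, which holds for a smoothly ground surface but is one reason calibration and hardware choices matter for a practical device.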
|
59 |
Investigations of Flow Patterns in Ventilated Rooms Using Particle Image Velocimetry : Applications in a Scaled Room with Rapidly Varying Inflow and over a Wall-Mounted Radiator. Sattari, Amir. January 2015.
This thesis introduces and describes a new experimental setup for examining the effects of pulsating inflow to a ventilated enclosure. The study aimed to test the hypothesis that a pulsating inflow has potential to improve ventilation quality by reducing the stagnation zones through enhanced mixing. The experimental setup, which was a small-scale, two-dimensional (2D), water-filled room model, was successfully designed and manufactured to be able to capture two-dimensional velocity vectors of the entire field using Particle Image Velocimetry (PIV). Using in-house software, it was possible to conclude that for an increase in pulsation frequency or alternatively in the flow rate, the stagnation zones were reduced in size, the distribution of vortices became more homogeneous over the considered domain, and the number of vortices in all scales had increased. Considering the occupied region, the stagnation zones were moved away in a favorable direction from a mixing point of view. In addition, statistical analysis unveiled that in the far-field occupied region of the room model, stronger eddies were developed that we could expect to give rise to improved mixing. As a fundamental experimental study performed in a 2D, small-scale room model with water as operating fluid, we can logically conclude that the positive effect of enhanced mixing through increasing the flow rate could equally be accomplished through applying a pulsating inflow. In addition, this thesis introduces and describes an experimental setup for study of air flow over a wall-mounted radiator in a mockup of a real room, which has been successfully designed and manufactured. In this experimental study, the airflow over an electric radiator without forced convection, a common room-heating technique, was measured and visualized using the 2D PIV technique. Surface blackening due to particle deposition calls for monitoring in detail the local climate over a heating radiator. 
One mechanism causing particle deposition is turbophoresis, which occurs when the flow is turbulent. Because turbulence plays a role in particle deposition, it is important to identify where the laminar flow over the radiator becomes turbulent. The results from several visualization techniques and PIV measurements indicated that, for a room with typical radiator heating, the flow over the radiator became turbulent after a dimensionless length of 5.0–6.25, based on the radiator thickness. Surface properties are among the factors influencing particle deposition; therefore, the geometrical properties of different finishing techniques were investigated experimentally using a structured-light 3D scanner, which revealed differences in roughness among the surface finishing techniques. To investigate the resistance to airflow along the surface and the turbulence generated by the surfaces, we recorded the boundary-layer flow over the surfaces in a special flow rig. This showed that the surface finishing methods differed very little in their flow resistance, so their influence on the deposition velocity is probably small. / The overall aim of the first study in the thesis was to test the hypothesis that a pulsating inflow to a ventilated space has the potential to improve ventilation quality by reducing stagnation zones and thereby increasing mixing. For this study, an experimental setup was built in the form of a two-dimensional (2D) small-scale model of a ventilated room. The flow medium in the model was water. The two-dimensional velocity field was recorded over the whole model using Particle Image Velocimetry (PIV). With a steady inflow, a stagnation region forms at the centre of the room model. With a pulsating inflow, secondary vortices were generated. With software developed in-house, it was possible to quantify the statistics of the vortices.
The pulsating inflow meant that, in the region where a stagnation zone had existed under steady inflow, the number of vortices of all sizes increased and the vortex distribution became more homogeneous than before. This can be expected to improve mixing. Based on a fundamental experimental study carried out in a small-scale two-dimensional room model with water as the flow medium, we can reasonably conclude that a pulsating supply flow has the potential to improve mixing. In a subsequent study in the thesis, the velocity field in the airflow over a wall-mounted radiator was visualized and measured with the 2D PIV technique, after which statistics such as the mean velocity, the standard deviation, and the shear stress of the velocity fluctuations were computed. The background to the study is that a contributing cause of particle deposition on wall surfaces is turbophoresis, which occurs in turbulent airflow. The study was carried out in a full-scale room model. Since turbulence plays a role in particle deposition through turbophoresis, it is important to identify where the laminar flow over the radiator becomes turbulent. The results based on visualization and PIV measurements indicated that, for a room with this type of radiator heating, the flow over the radiator became turbulent after a dimensionless length equal to 5.0–6.25 times the radiator thickness. Surface properties are important for particle deposition. Therefore, the geometrical properties of several different surface finishing methods were investigated experimentally using a structured-light 3D scanner. The results show differences in roughness among the surface finishing methods. To investigate the resistance to airflow along the surface and the turbulence generated by the surfaces, we recorded the boundary-layer flow over the surfaces in a special flow rig.
This showed that the resistance of the different surface finishing methods differed very little, and their influence on the deposition velocity is therefore probably very small. / QC 20150525
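The statistics named above (mean velocity, standard deviation, and shear stress of the velocity fluctuations) can be sketched for a stack of PIV snapshots as follows. This is a generic illustration only: the grid sizes, signal levels, and variable names are invented and do not come from the thesis.

```python
import numpy as np

# Synthetic stand-in for a series of 2D PIV velocity fields:
# 200 snapshots on a 16x32 measurement grid.
rng = np.random.default_rng(0)
n_frames, ny, nx = 200, 16, 32

# Instantaneous velocity components: u (streamwise), v (wall-normal).
u = 0.5 + 0.05 * rng.standard_normal((n_frames, ny, nx))
v = 0.0 + 0.02 * rng.standard_normal((n_frames, ny, nx))

u_mean = u.mean(axis=0)   # time-averaged velocity field
u_std = u.std(axis=0)     # RMS of the streamwise fluctuations

# Fluctuating parts u' = u - <u>, v' = v - <v>.
u_f = u - u_mean
v_f = v - v.mean(axis=0)

# Reynolds (turbulent) shear stress per unit density: -<u'v'>.
reynolds_stress = -(u_f * v_f).mean(axis=0)

print(u_mean.shape, reynolds_stress.shape)
```

For uncorrelated synthetic fluctuations the shear stress averages to roughly zero; in a real turbulent boundary layer it would show the correlated structure that drives turbophoretic deposition.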
|
60 |
Reconstruction active par projection de lumière non structurée / Martin, Nicolas 04 1900 (has links)
This thesis deals with active 3D reconstruction using a camera and a projector. Standard reconstruction methods use coded light patterns, which have their strengths and weaknesses. We introduce new patterns based on unstructured light to overcome the shortcomings of existing methods. The work presented revolves around three axes: robustness, precision, and finally the comparison of unstructured light patterns with other methods.
Unstructured light patterns are distinguished first of all by their robustness to interreflections and depth discontinuities. They are designed to homogenize the amount of indirect illumination caused by projection onto difficult surfaces. In return, matching the projected and captured images is more complex than with so-called structured methods. An efficient probabilistic matching method is proposed to solve this problem.
Another important aspect of reconstructions based on unstructured light is the ability to recover subpixel correspondences, that is, correspondences at a precision finer than the pixel. We present a method for generating very long codes from unstructured light patterns. These codes have the twofold advantage of allowing more precise correspondences to be extracted while requiring fewer images. This contribution places our method among the best in terms of precision while guaranteeing very good robustness.
Finally, the last part of this thesis is concerned with comparing existing methods, in particular with regard to the relationship between the number of projected images and the quality of the reconstruction. Although some methods require a constant number of images, others, such as ours, can make do with fewer at the cost of lower quality. We propose a simple method for establishing an optimal correspondence map that can serve as a reference for comparison purposes. Finally, we present hybrid methods that give very good results with few images. / This thesis deals with active 3D reconstruction from camera-projector systems. Standard reconstruction methods use coded light patterns that come with their strengths and weaknesses. We introduce unstructured light patterns that feature several improvements over the current state of the art. The research presented revolves around three main axes: robustness, precision, and the comparison of unstructured light patterns to existing methods.
Unstructured light patterns stand out first and foremost by their robustness to interreflections and depth discontinuities. They are specifically designed to homogenize the indirect lighting generated by their projection on hard-to-scan surfaces. The downside of these patterns is that matching projected and captured images is no longer straightforward. A probabilistic correspondence method is formulated to solve this problem efficiently.
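To illustrate the matching problem, here is a minimal sketch that pairs camera pixels with projector positions by comparing the intensity sequence ("code") each one exhibits across a series of patterns. It uses a simple zero-normalized correlation score rather than the probabilistic formulation actually proposed in the thesis, and all sizes, noise levels, and names are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_patterns, n_proj = 20, 100   # 20 projected patterns, 100 projector columns

# Random unstructured patterns: one intensity per projector column per pattern.
proj_codes = rng.random((n_proj, n_patterns))

# Simulate camera observations: each of 50 camera pixels sees the code of
# some projector column, with a gain/offset change and a little noise.
true_match = rng.integers(0, n_proj, size=50)
cam_codes = 0.8 * proj_codes[true_match] + 0.1
cam_codes += 0.01 * rng.standard_normal(cam_codes.shape)

def normalize(codes):
    """Zero-mean, unit-norm rows, so correlation ignores gain and offset."""
    codes = codes - codes.mean(axis=1, keepdims=True)
    return codes / np.linalg.norm(codes, axis=1, keepdims=True)

# Best match for each camera pixel = projector column with the highest
# normalized correlation between the two observed code sequences.
scores = normalize(cam_codes) @ normalize(proj_codes).T
matched = scores.argmax(axis=1)

print((matched == true_match).mean())
```

Even with random patterns, a modest number of projections makes the per-pixel codes distinctive enough to match reliably, which is the intuition behind unstructured-light correspondence.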
Another important aspect of reconstructions obtained with unstructured light patterns is their ability to recover subpixel correspondences, that is, correspondences with a precision finer than the pixel level. We present a method to produce long codes using unstructured light. These codes enable us to extract more precise correspondences while requiring fewer patterns. This contribution makes our method one of the most accurate active reconstruction methods in the domain, while remaining robust to the standard challenges.
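A common generic way to obtain subpixel precision, shown here purely as an illustration and not as the long-code method of the thesis, is to fit a parabola through the matching scores of three neighbouring integer candidates and take the parabola's peak:

```python
import numpy as np

def subpixel_peak(s_left, s_center, s_right):
    """Offset (in [-0.5, 0.5]) of the parabola peak from the center sample."""
    denom = s_left - 2.0 * s_center + s_right
    if denom == 0.0:
        return 0.0
    return 0.5 * (s_left - s_right) / denom

# Scores sampled at integer positions from a peak truly located at x = 10.3.
true_peak = 10.3
xs = np.array([9.0, 10.0, 11.0])
scores = -(xs - true_peak) ** 2   # quadratic score around the true peak

offset = subpixel_peak(scores[0], scores[1], scores[2])
refined = 10.0 + offset
print(refined)   # recovers 10.3 exactly, since the score here is a parabola
```

For a truly quadratic score the fit is exact; on real matching costs it gives a good fraction-of-a-pixel estimate at negligible cost.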
Finally, the last part of this thesis addresses the comparison of existing reconstruction methods on several aspects, but mainly on the impact of using fewer and fewer patterns on the quality of the reconstruction. While some methods need a fixed number of images, others, like ours, can accommodate fewer patterns in exchange for some quality loss. We devise a simple method to capture an optimal correspondence map that can be used as a ground truth for comparison purposes. Last, we present several hybrid methods that perform quite well even with few images.
|