151

Recherche de neutrino stérile par l'expérience STEREO : optimisation du blindage et calibration de l'échelle d'énergie / Search for a sterile neutrino with the STEREO experiment : shielding optimisation and energy calibration

Kandzia, Felix 11 December 2017 (has links)
Light sterile neutrinos are currently an actively discussed topic in neutrino physics. One indication of their possible existence and their participation in neutrino oscillations is the Reactor Antineutrino Anomaly, a deficit of about 6% between the predicted and observed antineutrino fluxes in short-baseline reactor neutrino experiments. The STEREO experiment addresses this anomaly by searching for neutrino oscillations at baselines of 8.9–11.1 m from the compact core of the research reactor of the Institut Laue Langevin (ILL), Grenoble, France. For this purpose a Gd-loaded liquid scintillator detector was designed with an active target mass of about 2 t. The target volume is subdivided into six optically separated cells along the line of propagation of the neutrinos. The electron antineutrinos emitted from the reactor are detected via inverse beta decay on hydrogen nuclei, producing a positron and a neutron. These two particles are detected in the scintillator in delayed coincidence, with a prompt signal from the positron and a delayed signal from the neutron capture. The scintillation light is read out by photomultiplier tubes (PMTs) on top of the detector cells. The detector is completed by a gamma catcher surrounding the target and by a muon veto.

This manuscript covers parts of the preparation and commissioning of the STEREO experiment. As a basis for the design of the magnetic shielding of STEREO's PMTs, a series of finite element simulations was performed; studies of different general layouts, of the required material qualities and of details of the final design are summarised. Based on these studies the collaboration opted for a double-layer shielding, with an outer soft-iron layer covering the detector and muon veto and an inner mu-metal layer around the target, which reduces the magnetic field at the position of the target PMTs to below 60 μT for all known external field configurations. This limits the PMT gain change induced by variations of the external magnetic fields to less than 2%.

Furthermore, several studies of the on-site background were performed. A mapping of the gamma-ray background was conducted with high-purity germanium detectors and a NaI scintillator detector in order to validate the efficiency of the installed shielding, with a focus on the count rate in the neutron capture energy window. An estimate of the background rate is presented and compared to the rate measured with STEREO; at the current state of the analysis, the background from accidental coincidences is a minor contribution compared to the correlated background induced by cosmic muons. In addition, a series of MCNP neutron and photon simulations was performed to determine the impact on the reactor-related background of the removal of an end-of-life beamtube in front of STEREO. The beamtube had been closed by a dedicated plug optimised for background reduction for STEREO, which could not be reinstalled after the tube was removed, and a new shielding at the end of the former beamtube was proposed by the ILL. Its shielding effect was studied with MCNP and compared to the previous configuration in order to assess whether the proposed shielding suffices; according to these simulations the background situation is expected to improve.

Finally, a procedure is proposed and applied for the analysis of the energy calibration data of the STEREO detector. The procedure is designed to be applicable to all available calibration sources and to minimise systematic uncertainties. Its result can be used to adjust the parameters of the collaboration's Geant4-based simulation of the detector by comparison to measured data, and subsequently to determine the energy scale with the required precision of < 2%.
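As a rough illustration of the short-baseline oscillation signature that STEREO searches for, the two-flavour survival probability P = 1 − sin²(2θ)·sin²(1.27·Δm²·L/E) can be evaluated across the 8.9–11.1 m baselines. The sketch below uses assumed, purely illustrative oscillation parameters and is not the collaboration's analysis code.

```python
import numpy as np

def survival_probability(E_MeV, L_m, sin2_2theta, dm2_eV2):
    """Two-flavour electron-antineutrino survival probability.

    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 [eV^2] * L [m] / E [MeV])
    """
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Illustrative (assumed) oscillation parameters, not STEREO results.
sin2_2theta, dm2 = 0.1, 1.7          # mixing amplitude, mass splitting in eV^2
energies = np.linspace(2.0, 8.0, 7)  # antineutrino energies in MeV

for L in (8.9, 11.1):                # front and back cell baselines in metres
    p = survival_probability(energies, L, sin2_2theta, dm2)
    print(f"L = {L:4.1f} m:", np.round(p, 3))
```

The energy and baseline dependence of this probability across the six cells is what distinguishes a genuine oscillation from a flat rate deficit.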
152

Polarization stereoscopic imaging prototype / Prototype d'imagerie polarimétrique stéréoscopique

Iqbal, Mohammad 02 November 2011 (has links)
The polarization of light, a well-understood physical phenomenon, was introduced into the field of imaging only about ten years ago. Like the human eye, imaging sensors are not, by construction, sensitive to the polarization of light; this particularly useful property can only be measured by adding optical components to conventional cameras. The objective of this thesis is to develop a system that is both stereoscopic and sensitive to the polarization state of light. Many insects in nature, such as bees, are able to orient themselves in space by extracting relevant information from polarization. The prototype developed here must therefore reconstruct points of interest in three dimensions while associating with each point a set of parameters describing its polarization state. The proposed system consists of two cameras, each equipped with two liquid crystal components that provide two images with different polarization orientations. For each acquisition, four images are therefore obtained: two per camera. The main difficulty addressed here is the possibility of recovering polarization information from two different cameras. After an initial geometric and photometric calibration step, matching points of interest is made delicate by the optical components placed in front of the lenses. A detailed study of different matching methods allowed the method least sensitive to polarization effects to be selected. Once the points are matched, the polarization parameters of each point are computed from the four values taken from the four acquired images. Results on real scenes show the feasibility and the interest of such a system for robotic applications.
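The last step described above, recovering the polarization state of each matched point from four intensity values, is conventionally expressed through the linear Stokes parameters. The sketch below is a generic illustration of that computation for intensities measured behind analyzers at 0°, 45°, 90° and 135°; the orientations are an assumption and this is not the prototype's actual processing code.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four analyzer orientations (degrees)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 deg vs -45 deg preference
    dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization (radians)
    return s0, s1, s2, dolp, aop

# Example: one matched point seen through the four analyzer orientations.
print(linear_stokes(0.80, 0.55, 0.20, 0.45))
```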
153

Codage audio stéréo avancé / Advanced stereo audio coding

Capobianco, Julien 03 June 2015 (has links)
During the last ten years, joint coding techniques exploiting the relations and redundancies between channels have been developed in order to further reduce the amount of information needed to represent multichannel audio signals. In this thesis we focus on the coding of stereo audio signals when no prior information is available on the nature of the sources present, their number, or the way they are spatialized. Such signals represent the vast majority of commercial recordings in the music industry and in multimedia in general. To address the coding of these signals, we study parametric and signal-domain approaches, which are often combined. In this context, three types of approaches are used. The spatial parametric approach reduces the number of audio channels of the signal to encode and recreates the original number of channels from the reduced channels and from spatial parameters extracted from the original channels. The signal approach keeps the original number of channels but encodes channels built from combinations of the original ones that contain less redundancy. Finally, the hybrid approach introduced in the MPEG USAC standard keeps two channels: a mono downmix and a residual signal resulting from a prediction on the downmix, whose prediction parameters are encoded as side information.

We first analyse the characteristics of a stereo signal coming from a commercial recording and the associated production techniques. This study leads us to consider the relations between emitter parametric models, derived from our analysis of commercial production techniques, and the receiver models that are at the heart of spatial parametric coding. In the light of these considerations, we present and study the three approaches mentioned above. For the purely parametric approach, we show that transparency cannot be achieved for most stereo audio signals; we then examine parametric representations and propose techniques to reduce the bitrate of their parameters and to improve the audio quality. These improvements rely on a better segmentation of the audio signal based on transients, on perceptual characteristics of certain spatial cues, and on a better estimation of the spatial cues. Since the hybrid approach has recently been standardised in MPEG USAC, we review it in detail and then propose a new coding technique that optimises the allocation of the residual to frequency bands when the residual is not used over the whole bandwidth of the signal. We conclude by discussing the future of general-purpose spatial audio coding and by stressing the importance of developing audio classification and segmentation techniques to optimise the quality/bitrate trade-off.
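The spatial cues mentioned above are typically a per-band channel level difference and an inter-channel correlation measured on the two input channels before downmixing. The following sketch illustrates such a computation on a single frame; the band edges and test signal are assumptions made for illustration, and the code is not taken from the thesis or from the MPEG USAC reference software.

```python
import numpy as np

def stereo_cues(left, right, n_fft=1024, bands=(0, 4, 16, 64, 513)):
    """Per-band channel level difference (dB) and inter-channel correlation."""
    L = np.fft.rfft(left * np.hanning(n_fft), n_fft)
    R = np.fft.rfft(right * np.hanning(n_fft), n_fft)
    cld, icc = [], []
    for lo, hi in zip(bands[:-1], bands[1:]):
        e_l = np.sum(np.abs(L[lo:hi]) ** 2) + 1e-12
        e_r = np.sum(np.abs(R[lo:hi]) ** 2) + 1e-12
        cross = np.sum(L[lo:hi] * np.conj(R[lo:hi]))
        cld.append(10.0 * np.log10(e_l / e_r))          # level difference in dB
        icc.append(np.abs(cross) / np.sqrt(e_l * e_r))  # correlation in [0, 1]
    return np.array(cld), np.array(icc)

# Toy frame: a source panned towards the left channel plus independent noise.
t = np.arange(1024) / 48000.0
src = np.sin(2 * np.pi * 440 * t)
left = 0.9 * src + 0.05 * np.random.randn(1024)
right = 0.4 * src + 0.05 * np.random.randn(1024)
print(stereo_cues(left, right))
```

A parametric decoder then reconstructs the stereo image by reapplying such cues to the transmitted downmix, which is why coarse cue quantisation or estimation errors limit the achievable quality.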
154

Výpočet mapy disparity ze stereo obrazu / Disparity Map Estimation from Stereo Image

Tábi, Roman January 2017 (has links)
The master thesis focuses on disparity map estimation using a convolutional neural network. It discusses the problem of using convolutional neural networks for image comparison and disparity computation from a stereo image, as well as existing approaches to this problem. It also proposes and implements a system consisting of a convolutional neural network that measures the similarity between two image patches, together with filtering and smoothing methods to improve the resulting disparity map. Experiments show that the best-quality disparity maps are computed using a CNN on input patches of 9x9 pixels, combined with matching cost aggregation, a correction algorithm and a bilateral filter.
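For context, the core of such a system is a cost volume built by scoring candidate disparities patch by patch, followed by a winner-take-all selection. The sketch below uses a plain sum of absolute differences over 9x9 windows as a stand-in for the CNN similarity measure described in the thesis, and it omits the aggregation, correction and bilateral-filtering stages.

```python
import numpy as np
from scipy.signal import convolve2d

def disparity_map(left, right, max_disp=64, patch=9):
    """Winner-take-all disparity from patch matching costs (left image is the reference).

    A sum of absolute differences over patch-sized windows stands in for the
    CNN similarity score used in the thesis.
    """
    h, w = left.shape
    kernel = np.ones((patch, patch), dtype=np.float32)
    cost = np.full((h, w, max_disp), np.inf, dtype=np.float32)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d])          # compare left[x] with right[x - d]
        cost[:, d:, d] = convolve2d(diff, kernel, mode="same")  # aggregate over the window
    return np.argmin(cost, axis=2)                              # best disparity per pixel

# Tiny synthetic example: the right image is the left image shifted by 5 pixels.
left = np.random.rand(60, 80).astype(np.float32)
right = np.roll(left, -5, axis=1)
print(np.bincount(disparity_map(left, right, max_disp=16).ravel()).argmax())  # ~5
```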
155

A Novel Approach for Spherical Stereo Vision / Ein Neuer Ansatz für Sphärisches Stereo Vision

Findeisen, Michel 27 April 2015 (has links) (PDF)
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress. For example, innovative active techniques such as the "structured light" principle are able to measure even homogeneous surfaces and are currently finding their way into the consumer electronics market in the form of Microsoft's Kinect®. Furthermore, high-resolution optical sensors enable powerful, passive stereo vision systems in the field of indoor surveillance, thereby opening up new application domains such as security and assistance systems for domestic environments. However, the constrained field of view can still be considered an essential limitation of all these technologies. For instance, in order to measure a volume the size of a living space, two to three 3D sensors have to be deployed nowadays. This is because the commonly used perspective projection principle constrains the visible area to a field of view of approximately 120°. In contrast, novel fish-eye lenses allow the realisation of omnidirectional projection models, with which the visible field of view can be enlarged to more than 180°. In combination with a 3D measurement approach, the number of sensors required for entire room coverage can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on the combination of the established stereo vision principle and omnidirectional projection methods. The complete 3D measurement of a living space by means of one single sensor can be considered the major objective.

As a starting point for this thesis, Chapter 1 discusses the underlying requirements, referring to various relevant fields of application, and states the purpose of the present work. The necessary mathematical foundations of computer vision are then reviewed in Chapter 2. Based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modelling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations; in addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated. Chapter 4 then addresses methods to convert between different projection models. The example of mapping an omnidirectional to a perspective projection is used to develop a method for accelerating this process and thereby reducing the associated computational load. The errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application is used to demonstrate to what extent the use of "virtual views" can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, an extensive survey of omnidirectional stereo vision techniques is conducted in Chapter 5. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map, for which three cameras have to be combined to form a trinocular stereo vision system.

As a basis for further research, a known trinocular stereo vision method is selected. Furthermore, it is hypothesised that, by applying a modified geometric constellation of cameras, more precisely in the form of an equilateral triangle, and by using an alternative method to determine the depth map, the performance can be increased considerably. A novel method is presented which requires fewer operations to calculate the distance information and which avoids the computationally costly depth map fusion step necessary in the comparative method. In order to evaluate the presented approach as well as the hypotheses, a hemispherical depth map is generated in Chapter 6 by means of the new method. Simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to a subsequent error estimate. A demonstrator for generating real measurement information is introduced in Chapter 7, together with the methods applied for calibrating the system intrinsically as well as extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently. Initial measurements produce a hemispherical depth map and thus confirm the operativeness of the concept, but they also identify the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarises the results obtained and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations carried out produced a saving of up to 30% in stereo correspondence operations in comparison with the reference trinocular method. Furthermore, the concept introduced avoids a weighted averaging step for depth map fusion based on precision values that would have to be computed at significant cost, while the achievable accuracy remains comparable for both trinocular approaches. In summary, within the context of the present thesis a measurement system has been developed which has great potential for future application fields in industry, security in public spaces as well as home environments.
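For readers unfamiliar with the projection models contrasted above, the sketch below compares an ideal equiangular (f-theta) fish-eye projection, which can represent a field of view beyond 180°, with the classical pinhole projection. It is a generic textbook illustration under ideal-lens assumptions, not code from the thesis.

```python
import numpy as np

def equiangular_project(X, f_pix, cx, cy):
    """Project a 3D point with an ideal equiangular (f-theta) fish-eye model.

    The image radius is proportional to the angle from the optical axis,
    r = f * theta, so directions with theta >= 90 degrees remain representable.
    """
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)      # angle from the optical axis
    phi = np.arctan2(y, x)                     # azimuth around the axis
    r = f_pix * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def perspective_project(X, f_pix, cx, cy):
    """Classical pinhole projection for comparison (only valid for z > 0)."""
    x, y, z = X
    return cx + f_pix * x / z, cy + f_pix * y / z

point = np.array([0.5, 0.2, 1.0])              # assumed example point in metres
print(equiangular_project(point, 320.0, 640.0, 480.0))
print(perspective_project(point, 320.0, 640.0, 480.0))
```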
156

Towards Visual-Inertial SLAM for Dynamic Environments Using Instance Segmentation and Dense Optical Flow

Sarmiento Gonzalez, Luis Alejandro January 2021 (has links)
Dynamic environments pose an open problem for the performance of visual SLAM systems in real-life scenarios. Such environments involve dynamic objects that can cause pose estimation errors. Recently, deep learning semantic segmentation networks have been employed to identify potentially moving objects in visual SLAM; however, semantic information is subject to misclassifications and does not by itself yield motion information. This thesis presents a hybrid method that employs semantic information and dense optical flow to determine moving objects through a motion likelihood. The proposed approach builds on stereo-inertial ORBSLAM 3, adding dynamic object detection to allow more robust performance in dynamic scenarios. The system is evaluated on the OpenLORIS dataset, which provides stereo-inertial data in challenging scenes. The impact of dynamic objects on the system's performance is studied through the absolute trajectory error (ATE), the relative pose error (RPE) and the Correctness Rate. A comparison is made between the original ORBSLAM 3, ORBSLAM 3 considering only semantic information, and the hybrid approach; the comparison helps identify the benefits and limitations of the proposed method. Results suggest an improvement in ATE for the hybrid approach with respect to the original ORBSLAM 3 in dynamic scenes.
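A minimal sketch of the kind of motion-likelihood test described above is given below: measured dense optical flow is compared with the flow predicted from the estimated ego-motion, and segmented instances with a large residual are flagged as dynamic. The function names, thresholds and the Gaussian likelihood shape are assumptions for illustration; this is not the thesis implementation.

```python
import numpy as np

def dynamic_instances(flow, predicted_flow, instance_masks, sigma=1.5, threshold=0.5):
    """Flag segmented instances as dynamic from the residual optical flow.

    flow, predicted_flow : (H, W, 2) measured flow and flow predicted from the
                           estimated camera ego-motion and scene depth.
    instance_masks       : list of boolean (H, W) masks from instance segmentation.
    Returns one boolean per instance: True if it is likely moving.
    """
    residual = np.linalg.norm(flow - predicted_flow, axis=2)          # unexplained motion (px)
    motion_likelihood = 1.0 - np.exp(-0.5 * (residual / sigma) ** 2)  # 0 = static, 1 = moving
    return [motion_likelihood[mask].mean() > threshold for mask in instance_masks]

# Toy example: a 4x4 scene where one 2x2 instance moves 3 px more than predicted.
flow = np.zeros((4, 4, 2)); flow[:2, :2, 0] = 3.0
pred = np.zeros((4, 4, 2))
mask = np.zeros((4, 4), bool); mask[:2, :2] = True
print(dynamic_instances(flow, pred, [mask]))   # -> [True]
```

Features falling inside flagged instances can then be excluded from tracking and mapping, which is the mechanism by which such a front-end filter improves trajectory accuracy in dynamic scenes.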
157

Case Studies in Classical Location Recording Using Improvised Techniques

Van Dyne, Steven R. 29 April 2015 (has links)
No description available.
158

A Novel Approach for Spherical Stereo Vision

Findeisen, Michel 23 April 2015 (has links)
The Professorship of Digital Signal Processing and Circuit Technology of Chemnitz University of Technology conducts research in the field of three-dimensional space measurement with optical sensors. In recent years this field has made major progress. For example, innovative active techniques such as the "structured light" principle are able to measure even homogeneous surfaces and are currently finding their way into the consumer electronics market in the form of Microsoft's Kinect®. Furthermore, high-resolution optical sensors enable powerful, passive stereo vision systems in the field of indoor surveillance, thereby opening up new application domains such as security and assistance systems for domestic environments. However, the constrained field of view can still be considered an essential limitation of all these technologies. For instance, in order to measure a volume the size of a living space, two to three 3D sensors have to be deployed nowadays. This is because the commonly used perspective projection principle constrains the visible area to a field of view of approximately 120°. In contrast, novel fish-eye lenses allow the realisation of omnidirectional projection models, with which the visible field of view can be enlarged to more than 180°. In combination with a 3D measurement approach, the number of sensors required for entire room coverage can thus be reduced considerably. Motivated by the requirements of indoor surveillance, the present work focuses on the combination of the established stereo vision principle and omnidirectional projection methods. The complete 3D measurement of a living space by means of one single sensor can be considered the major objective.

As a starting point for this thesis, Chapter 1 discusses the underlying requirements, referring to various relevant fields of application, and states the purpose of the present work. The necessary mathematical foundations of computer vision are then reviewed in Chapter 2. Based on the geometry of the optical imaging process, the projection characteristics of the relevant principles are discussed and a generic method for modelling fish-eye cameras is selected. Chapter 3 deals with the extraction of depth information using classical (perspectively imaging) binocular stereo vision configurations; in addition to a complete recap of the processing chain, the measurement uncertainties that occur are investigated. Chapter 4 then addresses methods to convert between different projection models. The example of mapping an omnidirectional to a perspective projection is used to develop a method for accelerating this process and thereby reducing the associated computational load. The errors that occur, as well as the necessary adjustment of image resolution, are an integral part of the investigation. As a practical example, a person-tracking application is used to demonstrate to what extent the use of "virtual views" can increase the recognition rate of people detectors in the context of omnidirectional monitoring. Subsequently, an extensive survey of omnidirectional stereo vision techniques is conducted in Chapter 5. It turns out that the complete 3D capture of a room is achievable by generating a hemispherical depth map, for which three cameras have to be combined to form a trinocular stereo vision system.

As a basis for further research, a known trinocular stereo vision method is selected. Furthermore, it is hypothesised that, by applying a modified geometric constellation of cameras, more precisely in the form of an equilateral triangle, and by using an alternative method to determine the depth map, the performance can be increased considerably. A novel method is presented which requires fewer operations to calculate the distance information and which avoids the computationally costly depth map fusion step necessary in the comparative method. In order to evaluate the presented approach as well as the hypotheses, a hemispherical depth map is generated in Chapter 6 by means of the new method. Simulation results, based on artificially generated 3D space information and realistic system parameters, are presented and subjected to a subsequent error estimate. A demonstrator for generating real measurement information is introduced in Chapter 7, together with the methods applied for calibrating the system intrinsically as well as extrinsically. It turns out that the calibration procedure used cannot estimate the extrinsic parameters sufficiently. Initial measurements produce a hemispherical depth map and thus confirm the operativeness of the concept, but they also identify the drawbacks of the calibration used. The current implementation of the algorithm shows almost real-time behaviour. Finally, Chapter 8 summarises the results obtained and discusses them in the context of comparable binocular and trinocular stereo vision approaches. For example, the simulations carried out produced a saving of up to 30% in stereo correspondence operations in comparison with the reference trinocular method. Furthermore, the concept introduced avoids a weighted averaging step for depth map fusion based on precision values that would have to be computed at significant cost, while the achievable accuracy remains comparable for both trinocular approaches. In summary, within the context of the present thesis a measurement system has been developed which has great potential for future application fields in industry, security in public spaces as well as home environments.

Contents (chapter level): Abstract; Zusammenfassung; Acronyms; Symbols; Acknowledgement; 1 Introduction; 2 Fundamentals of Computer Vision Geometry; 3 Fundamentals of Stereo Vision; 4 Virtual Cameras; 5 Omnidirectional Stereo Vision; 6 A Novel Spherical Stereo Vision Algorithm; 7 Stereo Vision Demonstrator; 8 Discussion and Outlook; A Relevant Mathematics; B Further Relevant Publications; Bibliography; List of Figures; List of Tables; Affidavit; Theses; Thesen; Curriculum Vitae.
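One of the measurement uncertainties analysed in the thesis, the disparity quantization error, follows directly from the depth equation Z = f·b/d of a rectified stereo pair. The sketch below evaluates the depth interval corresponding to one disparity step for assumed example parameters; it is a generic illustration, not the thesis's error model for the spherical configuration.

```python
def depth_quantization_error(Z, focal_px, baseline_m, disp_step_px=1.0):
    """Depth uncertainty caused by quantized disparity in a rectified stereo pair.

    From Z = f*b/d it follows that |dZ/dd| = Z^2 / (f*b), so one disparity step
    maps to a depth interval of roughly Z^2 / (f*b) * disp_step.
    """
    return Z ** 2 / (focal_px * baseline_m) * disp_step_px

# Assumed example numbers: 40 cm baseline, 500 px focal length.
for Z in (1.0, 2.5, 5.0):   # depths in metres
    print(Z, "m ->", round(depth_quantization_error(Z, 500.0, 0.40), 3), "m per pixel")
```

The quadratic growth with distance is why the range field of measurement and the achievable accuracy are treated together in Chapter 3.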
159

Design, implementation & analysis of a low-cost, portable, medical measurement system through computer vision

Van der Westhuizen, Gareth 03 1900 (has links)
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: In the Physiotherapy Division of the Faculty of Health Sciences on the Tygerberg Hospital Campus of the University of Stellenbosch, the challenge arose to develop a portable, affordable and yet accurate 3D measurement machine for the assessment of posture in school children in their classroom environment. The Division already uses a state-of-the-art VICON commercial medical measuring machine to measure human posture in 3D in its physiotherapy clinic, but that system is not portable and is too expensive to transport to different testing sites. To respond to this challenge, this Master's thesis designed and analysed a machine and its supporting system through both research on stereo-vision methodologies and empirical appraisal in the field. In the development process, the research had to overcome the limitations posed by the small image resolutions and lens distortions that are typical of cheap cameras. The academic challenge lay in the development of an error prediction model, through Jacobian derivation and the error propagation law, to predict the uncertainties of the angular measurements calculated by the system. The research culminated in a system whose accuracy is comparable to the VICON to within 3 mm and which has 1.5 mm absolute accuracy within its own system for a measurement volume radius of 2.5 m. The developed error model is an exact predictor of the angular error to within 0.02° of arc. These results, for both the system accuracy and the error model, exceed the expectations set by the initial challenge. The development of the machine was successful in providing a prototype tool suitable for commercial development and for use by physiotherapists in human posture measurement and assessment. In its current form, the machine will also serve the Engineering Faculty as a fundamental three-dimensional measuring apparatus built on only the basic theories and algorithms of stereo vision, thereby providing a basic experimental platform from which further scientific research on the theory and application of computer vision can be conducted.
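The "Jacobian derivation and error propagation law" mentioned above is, in its general first-order form, the rule var(θ) = J Σ Jᵀ applied to the quantity of interest. The sketch below propagates an assumed isotropic 1.5 mm marker uncertainty through a joint-angle computation using a numerical Jacobian; it is a generic illustration, not the thesis's error model.

```python
import numpy as np

def angle_at_b(points):
    """Angle (radians) at marker B formed by 3D markers A, B, C (flattened 9-vector)."""
    a, b, c = points[:3], points[3:6], points[6:9]
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def propagated_angle_std(points, point_std, eps=1e-6):
    """First-order error propagation: var(theta) = J * Sigma * J^T."""
    J = np.zeros(9)
    for i in range(9):                       # numerical Jacobian of the angle
        dp = np.zeros(9)
        dp[i] = eps
        J[i] = (angle_at_b(points + dp) - angle_at_b(points - dp)) / (2 * eps)
    sigma = np.eye(9) * point_std**2         # assume isotropic, uncorrelated marker noise
    return np.sqrt(J @ sigma @ J)

# Assumed marker positions (metres) and a 1.5 mm per-coordinate uncertainty.
markers = np.array([0.0, 0.3, 2.5,  0.0, 0.0, 2.5,  0.25, 0.0, 2.5])
print(np.degrees(propagated_angle_std(markers, point_std=0.0015)), "deg")
```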
160

Navegação de robôs móveis utilizando visão estéreo / Mobile robot navigation using stereo vision

Mendes, Caio César Teodoro 26 April 2012 (has links)
Autonomous navigation is a broad topic that has received increasing attention from the mobile robotics community over the years. The problem is to guide a robot intelligently along a given route without human help. This dissertation presents a navigation system for open environments based on stereo vision. A stereo camera is used to capture images of the environment and, based on the disparity map generated by a semi-global stereo method, two obstacle detection methods are used to segment the images into navigable and non-navigable regions. This classification is then employed in conjunction with an obstacle avoidance method, resulting in a complete autonomous navigation system. The results include an evaluation of two stereo methods, which favoured the semi-global method employed. Tests were performed to evaluate the quality and computational cost of two obstacle detection methods, one plane-based and one cone-based. These tests made the limitations of both methods clear and led to a parallel implementation of the cone-based method. Using a graphics processing unit, the parallel version of the cone-based method achieved a speed-up of approximately ten times. Finally, the results demonstrate the complete system in operation, where the robotic platform used, an electric vehicle, was able to avoid people and cones and reach its goal safely.
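A common form of the plane-based obstacle detection mentioned above fits a ground plane to the 3D points reconstructed from the disparity map and labels points lying well above it as non-navigable. The sketch below is a generic illustration of that idea with a simple RANSAC loop and assumed thresholds; it is not the dissertation's implementation, and the cone-based method and the GPU parallelisation are not shown.

```python
import numpy as np

def ground_plane_ransac(points, n_iter=200, dist_thresh=0.05, rng=np.random.default_rng(0)):
    """Fit a ground plane (n . p + d = 0) to 3D points with a basic RANSAC loop."""
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue                              # degenerate (collinear) sample
        n /= np.linalg.norm(n)
        d = -np.dot(n, sample[0])
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers:
            best_model, best_inliers = (n, d), inliers.sum()
    return best_model

def obstacle_mask(points, plane, clearance=0.15):
    """Label points lying more than `clearance` metres off the ground plane as obstacles."""
    n, d = plane
    height = points @ n + d
    if np.mean(height) < 0:                       # orient the normal so "above ground" is positive
        height = -height
    return height > clearance

# Toy scene: a flat ground patch plus a 0.5 m tall box-shaped obstacle.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-2, 2, 500), np.zeros(500), rng.uniform(1, 6, 500)])
box = np.column_stack([rng.uniform(0.4, 0.8, 50), rng.uniform(0.0, 0.5, 50), rng.uniform(2, 2.4, 50)])
pts = np.vstack([ground, box]) + rng.normal(0, 0.01, (550, 3))
plane = ground_plane_ransac(pts)
print(obstacle_mask(pts, plane).sum(), "obstacle points")
```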
