  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
21

Auto-calibration d'une multi-caméra omnidirectionnelle grand public fixée sur un casque / Self-calibration for consumer omnidirectional multi-camera mounted on a helmet

Nguyen, Thanh-Tin 19 December 2017 (has links)
Les caméras sphériques et 360 deviennent populaires et sont utilisées notamment pour la création de vidéos immersives et la génération de contenu pour la réalité virtuelle. Elles sont souvent composées de plusieurs caméras grand-angles/fisheyes pointant dans différentes directions et rigidement liées les unes aux autres. Cependant, il n'est pas si simple de les calibrer complètement car ces caméras grand public sont rolling shutter et peuvent être mal synchronisées. Cette thèse propose des méthodes permettant de calibrer ces multi-caméras à partir de vidéos sans utiliser de mire de calibration. On initialise d'abord un modèle de multi-caméra grâce à des hypothèses appropriées à un capteur omnidirectionnel sans direction privilégiée : les caméras ont les mêmes réglages (dont la fréquence et l'angle de champ de vue) et sont approximativement équiangulaires. Deuxièmement, sachant que le module de la vitesse angulaire est le même pour deux caméras au même instant, nous proposons de synchroniser les caméras à une image près à partir des vitesses angulaires estimées par structure-from-motion monoculaire. Troisièmement, les poses inter-caméras et les paramètres intrinsèques sont estimés par structure-from-motion et ajustement de faisceaux multi-caméras avec les approximations suivantes : la multi-caméra est centrale, global shutter ; et la synchronisation précédente est imposée. Enfin, nous proposons un ajustement de faisceaux final sans ces approximations, qui raffine notamment la synchronisation (à précision sous-trame), le coefficient de rolling shutter et les autres paramètres (intrinsèques, extrinsèques, 3D). On expérimente dans un contexte que nous pensons utile pour des applications comme les vidéos 360 et la modélisation 3D de scènes : plusieurs caméras grand public ou une caméra sphérique fixée(s) sur un casque et se déplaçant le long d'une trajectoire de quelques centaines de mètres à quelques kilomètres. 
/ 360-degree and spherical multi-cameras, built by rigidly fixing together several consumer cameras, have become popular and are convenient for recent applications like immersive videos, 3D modeling and virtual reality. This type of camera captures the whole scene in a single view. When the goal is to merge the monocular videos into one cylindrical video or to obtain 3D information about the environment, several basic steps should be performed beforehand. Among these tasks, we consider the synchronization between cameras; the calibration of the multi-camera system, including intrinsic and extrinsic parameters (i.e. the relative poses between cameras); and the rolling shutter calibration. The goal of this thesis is to develop and apply user-friendly methods. Our approach does not require a calibration pattern. First, the multi-camera model is initialized thanks to assumptions that are suitable for an omnidirectional camera without a privileged direction: the cameras have the same settings (frequency, image resolution, field of view) and are roughly equiangular. Second, a frame-accurate synchronization is estimated from the instantaneous angular velocities of each camera provided by monocular Structure-from-Motion. Third, both inter-camera poses and intrinsic parameters are refined using multi-camera Structure-from-Motion and bundle adjustment. Last, we introduce a bundle adjustment that estimates not only the usual parameters but also a subframe-accurate synchronization and the rolling shutter coefficient. We experiment in a context that we believe useful for applications (3D modeling and 360 videos): several consumer cameras or a spherical camera mounted on a helmet and moving along trajectories of several hundred meters to several kilometers.
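The frame-accurate synchronization step of this abstract, matching the angular-speed profiles that monocular structure-from-motion yields for each camera, can be sketched as a search for the integer frame lag maximising the normalised cross-correlation of the two profiles. The function below is an illustrative sketch with names of our choosing, not the author's code:

```python
import numpy as np

def frame_offset(speed_a, speed_b, max_lag):
    """Estimate the integer frame offset between two cameras by
    comparing their angular-speed profiles (moduli of the angular
    velocities from monocular structure-from-motion), exploiting the
    fact that rigidly linked cameras share the same angular speed at
    the same instant. Returns the lag of camera B behind camera A."""
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = speed_a[lag:], speed_b
        else:
            a, b = speed_a, speed_b[-lag:]
        n = min(len(a), len(b))
        a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
        # zero-normalised cross-correlation as the similarity score
        score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

For example, if camera B starts three frames after camera A, the recovered lag is 3; the thesis then refines this to sub-frame accuracy inside the final bundle adjustment.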
22

A self calibration technique for a DOA array in the presence of mutual coupling and resonant scatterers

Horiki, Yasutaka 22 September 2006 (has links)
No description available.
23

INTELLIGENT DATA ACQUISITION TECHNOLOGY

Powell, Rick, Fitzsimmons, Chris 10 1900 (has links)
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Telemetry & Instrumentation, in conjunction with NASA’s Kennedy Space Center, has developed a commercial, intelligent, data acquisition module that performs all functions associated with acquiring and digitizing a transducer measurement. These functions include transducer excitation, signal gain and anti-aliasing filtering, A/D conversion, linearization and digital filtering, and sample rate decimation. The functions are programmable and are set up from information stored in a local Transducer Electronic Data Sheet (TEDS). In addition, the module performs continuous self-calibration and self-test to maintain 0.01% accuracy over its entire operating temperature range for periods of one year without manual recalibration. The module operates in conjunction with a VME-based data acquisition system.
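As a rough illustration of the digital-filtering and sample-rate-decimation functions listed above, the sketch below low-pass filters with a simple moving average (only a stand-in for the module's real anti-alias filter) before keeping every factor-th sample; the function name and interface are ours, not part of the described product:

```python
import numpy as np

def filter_and_decimate(x, factor):
    """Digital low-pass filtering followed by sample-rate decimation:
    attenuate content above the new Nyquist rate (here with a crude
    moving-average kernel), then keep every factor-th sample."""
    kernel = np.ones(factor) / factor
    smoothed = np.convolve(x, kernel, mode="same")
    return smoothed[::factor]
```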
24

Geospatial Processing Full Waveform Lidar Data

Qinghua Li (5929958) 16 January 2019 (has links)
This thesis focuses on comprehensive and thorough studies of the geospatial processing of airborne (full) waveform lidar data, including waveform modeling, direct georeferencing, and precise georeferencing with self-calibration.

Both parametric and nonparametric approaches to waveform decomposition are studied. The traditional parametric approach assumes that the returned waveforms follow a Gaussian mixture model where each component is a Gaussian. However, many real examples show that the waveform components can be neither Gaussian nor symmetric. To address this problem, this thesis proposes a nonparametric mixture model that represents lidar waveforms without any constraints on the shape of the waveform components. To decompose the waveforms, a fuzzy mean-shift algorithm is then developed. This approach has the following properties: 1) it does not assume that the waveforms follow any parametric or functional distribution; 2) the waveform decomposition is treated as a fuzzy data clustering problem and the number of components is determined during the decomposition; 3) neither peak selection nor noise floor filtering is needed prior to the decomposition; and 4) the range measurement is not affected by the noise filtering process. In addition, the fuzzy mean-shift approach is about three times faster than the conventional expectation-maximization algorithm and tends to produce fewer artifacts in the resulting digital elevation model.

This thesis also develops a framework and methodology for self-calibration that simultaneously determines the waveform geospatial positions and the boresight angles. Besides using the flight trajectory and plane attitude recorded by the onboard GPS receiver and inertial measurement unit, the framework makes use of publicly accessible digital elevation models as control over the study area. Compared to conventional calibration and georeferencing methods, the new development has minimal ground-truth requirements: no extra ground control, no planar objects, and no overlapping flight strips are needed. Furthermore, it can solve the clock synchronization and boresight calibration problems simultaneously. Through a two-stage optimization strategy, the self-calibration approach resolves both the time synchronization bias and the boresight misalignment angles to achieve a stable and correct solution. As a result, a consistency of 0.8662 m is achieved between the waveform-derived digital elevation model and the reference one, without a systematic trend. These experiments demonstrate that the developed method is a necessary and more economical alternative to the conventional, highly demanding georeferencing and calibration approach, especially when no or limited ground control is available.
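The mean-shift idea behind the nonparametric decomposition above can be caricatured in one dimension: each sample bin of the waveform drifts toward the kernel-weighted mean of its neighbourhood, and the distinct convergence points give the component centres without assuming Gaussian shapes. This sketch simplifies the actual fuzzy algorithm (no fuzzy memberships, and low-density convergence points are pruned for robustness):

```python
import numpy as np

def mean_shift_modes(positions, weights, bandwidth, iters=50):
    """Toy 1D mean-shift mode seeking on a sampled waveform: bins are
    (position, intensity) pairs; each point iteratively moves to the
    Gaussian-kernel weighted mean of the data, converging to modes of
    the underlying intensity distribution."""
    x = positions.astype(float).copy()
    for _ in range(iters):
        d = x[:, None] - positions[None, :]
        k = weights[None, :] * np.exp(-0.5 * (d / bandwidth) ** 2)
        x = (k * positions[None, :]).sum(axis=1) / k.sum(axis=1)
    # prune points that converged in near-zero density (saddle regions),
    # then merge convergence points closer than the bandwidth
    d = x[:, None] - positions[None, :]
    dens = (weights[None, :] * np.exp(-0.5 * (d / bandwidth) ** 2)).sum(axis=1)
    modes = []
    for v in np.sort(x[dens > 0.1 * dens.max()]):
        if not modes or v - modes[-1] > bandwidth:
            modes.append(v)
    return modes
```

On a waveform with two overlapping echoes, the number of recovered modes equals the number of components, determined during the decomposition itself rather than fixed in advance.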
25

Construction de modèles 3D à partir de données vidéo fisheye : application à la localisation en milieu urbain / Construction of 3D models from fisheye video data—Application to the localisation in urban area

Moreau, Julien 07 June 2016 (has links)
Cette recherche vise à la modélisation 3D depuis un système de vision fisheye embarqué, utilisée pour une application GNSS dans le cadre du projet Predit CAPLOC. La propagation des signaux satellitaires en milieu urbain est soumise à des réflexions sur les structures, altérant la précision et la disponibilité de la localisation. L’ambition du projet est (1) de définir un système de vision omnidirectionnelle capable de fournir des informations sur la structure 3D urbaine et (2) de montrer qu’elles permettent d’améliorer la localisation. Le mémoire expose les choix en (1) calibrage automatique, (2) mise en correspondance entre images, (3) reconstruction 3D ; chaque algorithme est évalué sur images de synthèse et réelles. De plus, il décrit une manière de corriger les réflexions des signaux GNSS depuis un nuage de points 3D pour améliorer le positionnement. En adaptant le meilleur de l’état de l’art du domaine, deux systèmes sont proposés et expérimentés. Le premier est un système stéréoscopique à deux caméras fisheye orientées vers le ciel. Le second en est l’adaptation à une unique caméra. Le calibrage est assuré à travers deux étapes : l’algorithme des 9 points adapté au modèle « équisolide » couplé à un RANSAC, suivi d’un affinement par optimisation Levenberg-Marquardt. L’effort a été porté sur la manière d’appliquer la méthode pour des performances optimales et reproductibles. C’est un point crucial pour un système à une seule caméra car la pose doit être estimée à chaque nouvelle image. Les correspondances stéréo sont obtenues pour tout pixel par programmation dynamique utilisant un graphe 3D. Elles sont assurées le long des courbes épipolaires conjuguées projetées de manière adaptée sur chaque image. Une particularité est que les distorsions ne sont pas rectifiées afin de ne pas altérer le contenu visuel ni diminuer la précision. Dans le cas binoculaire il est possible d’estimer les coordonnées à l’échelle. En monoculaire, l’ajout d’un odomètre permet d’y arriver. 
Les nuages successifs peuvent être calés pour former un nuage global en SfM. L’application finale consiste en l’utilisation du nuage 3D pour améliorer la localisation GNSS. Il est possible d’estimer l’erreur de pseudodistance d’un signal après multiples réflexions et d’en tenir compte pour une position plus précise. Les surfaces réfléchissantes sont modélisées grâce à une extraction de plans et de l’empreinte des bâtiments. La méthode est évaluée sur des paires d’images fixes géo-référencées par un récepteur bas-coût et un récepteur GPS RTK (vérité terrain). Les résultats montrent une amélioration de la localisation en milieu urbain. / This research deals with 3D modelling from an embedded fisheye vision system, used for a GNSS application as part of the CAPLOC project. Satellite signal propagation in urban areas involves reflections on structures, impairing the accuracy and availability of localisation. The purpose of the project is (1) to define an omnidirectional vision system able to provide information on the urban 3D structure and (2) to demonstrate that this information improves localisation. This thesis addresses the problems of (1) self-calibration, (2) matching between images, and (3) 3D reconstruction; each algorithm is assessed on computer-generated and real images. Moreover, it describes a way to correct GNSS signal reflections from a 3D point cloud to improve positioning. We propose and evaluate two systems based on state-of-the-art methods. The first is a stereoscopic system made of two sky-facing fisheye cameras. The second is its adaptation to a single camera. Calibration is handled by a two-step process: the 9-point algorithm fitted to the “equisolid” model coupled with RANSAC, followed by a Levenberg-Marquardt optimisation refinement. We focused on how to apply the method for optimal and repeatable performance. 
It is a crucial point for a system composed of only one camera because the pose must be estimated for every new image. Stereo matches are obtained for every pixel by dynamic programming on a 3D graph. Matching is done along conjugated epipolar curves projected in a suitable manner onto each image. A distinctive feature is that distortions are not rectified, so as neither to degrade the visual content nor to decrease accuracy. In the binocular case it is possible to estimate full-scale coordinates. In the monocular case, this is achieved by adding odometer information. Successive local clouds can be registered through SfM to form a global cloud. The end application is the use of the 3D cloud to improve GNSS localisation. It is possible to estimate the pseudorange error of a signal after multiple reflections and to take it into account for a more accurate position. Reflecting surfaces are modelled through plane fitting and building footprints. The method is evaluated on fixed image pairs georeferenced by a low-cost receiver and a GPS RTK receiver (ground truth). The results show improved localisation in urban environments.
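The calibration pipeline of this abstract wraps a minimal solver (the 9-point fisheye algorithm) in RANSAC before the Levenberg-Marquardt refinement. The generic RANSAC loop underneath can be sketched as follows, with placeholder fit and residual callbacks standing in for the actual fisheye model:

```python
import numpy as np

def ransac(data, fit, residual, min_samples, thresh, iters=200, seed=0):
    """Generic RANSAC loop of the kind wrapped around the 9-point
    algorithm in the abstract: repeatedly fit a model to a minimal
    random sample, count inliers, and keep the best hypothesis,
    refitting on all its inliers at the end."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(data), bool)
    for _ in range(iters):
        sample = rng.choice(len(data), size=min_samples, replace=False)
        model = fit(data[sample])
        inliers = residual(model, data) < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    # refit on all inliers (the thesis follows with Levenberg-Marquardt)
    if best_inliers.any():
        best_model = fit(data[best_inliers])
    return best_model, best_inliers
```

The test below exercises the loop on 2D line fitting with gross outliers, the same robustness mechanism as in the calibration setting.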
26

以自率光束法提升四旋翼UAV航拍影像之定位精度 / Using self-calibration to promote the positioning accuracy of images acquired from a quadrotor UAV

謝幸宜, Hsieh, Hsing Yi Unknown Date (has links)
整合了GPS、INS的無人飛行載具(Unmanned Aerial Vehicles, UAVs),可提供安全、快速的資料蒐集方法,而能執行自動駕駛(automatic pilot)功能的UAV系統,更可提高資料蒐集的自動化程度。資料收集時,UAV系統中的GPS天線、INS系統以及像機的透視中心並不一致,欲以UAV系統執行航測任務時,須先了解UAV的系統幾何與特性,才能從GPS、INS的記錄資料中取得適當的外方位參數參考值。此外,目前的UAV系統多搭載非量測型像機(non-metric camera)獲取影像,但非量測型像機的內方位參數常以近景攝影測量的方式率定而得。然而,能以近景攝影測量方式獲得內方位參數的商業軟體很多,其所使用的函數模式卻未必完全相同,將影響內方位參數的率定成果,若再於空三平差過程中把不同軟體解得的內方位參數視為固定值,將使空三平差的結果產生較大的影像定位誤差。而自率光束法除了可用於近景攝影測量中的像機率定,也能應用於航空攝影測量中,將航測作業中的像坐標系統誤差模式化並加以改正,以提升該次作業的空三平差精度。因此,本研究以較安全的四旋翼UAV系統搭載非量測型像機獲取影像,比較:(1)一般航測方法(即光束法)執行空三平差、(2)使用自率光束法的空三平差、(3)先將所有影像觀測量以熟知的系統誤差模式改正後,再使用自率光束法的空三平差(以下簡稱預改正(pre-corrected)的自率光束法空三平差)所能達到的精度。測試結果顯示:使用預改正的自率光束法空三平差時,使用Brown(1976)與Ebner(1976)兩種附加參數模式,皆可得到最佳的空三平差精度,而使用Brown附加參數模式的自率光束法空三平差精度次之,且均比一般航測方法的空三平差精度佳。但於自率光束法的空三平差過程中使用Ebner的附加參數模式,所得的空三平差精度則最差。 / Unmanned aerial vehicles (UAVs) integrating GPS and INS provide a safe and fast method for data acquisition, and UAV systems capable of automatic pilot raise the level of automation of data collection. In UAV systems, the GPS antenna and the INS are not aligned with the camera's perspective center, so the GPS and INS records must be adjusted according to the geometry of the UAV system to obtain exterior orientation references. Moreover, the cameras carried by UAVs are usually non-metric cameras, whose interior orientation parameters are calibrated with close-range photogrammetry software. However, the various software packages use different camera models, and the interior parameters calibrated by different packages will not be the same, so the interior parameters of a non-metric camera should not be regarded as constant in aerotriangulation. Self-calibration can not only calibrate the camera in close-range photogrammetry but also model and compensate the departures from collinearity in aerotriangulation to improve the positioning accuracy. 
This study uses images acquired from a safe UAV system, a quadrotor UAV, and compares the results of different aerotriangulation procedures. The optimal accuracy is obtained by using self-calibration in the bundle adjustment with all measurements pre-corrected for radial and decentering lens distortion; with this pre-correction, Brown's (1976) and Ebner's (1976) additional-parameter models both perform best. The suboptimal accuracy is obtained by using Brown's (1976) added parameters in the bundle adjustment, which is still better than the ordinary bundle adjustment. Using Ebner's (1976) added parameters alone in the bundle adjustment, however, does not improve the positioning accuracy.
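The pre-correction step mentioned above, removing radial and decentering lens distortion before the bundle adjustment, is commonly written with Brown's model. The sketch below applies one common form of that model; coefficient names and sign conventions vary between implementations, so treat it as illustrative rather than the study's exact formulation:

```python
import numpy as np

def brown_correct(x, y, k1, k2, p1, p2, x0=0.0, y0=0.0):
    """Correct image coordinates for radial (k1, k2) and decentering
    (p1, p2) lens distortion with a Brown-style model. Coordinates are
    taken relative to the principal point (x0, y0); the correction is
    added back to the measured coordinates."""
    xb, yb = x - x0, y - y0
    r2 = xb ** 2 + yb ** 2
    radial = k1 * r2 + k2 * r2 ** 2          # radial distortion profile
    dx = xb * radial + p1 * (r2 + 2 * xb ** 2) + 2 * p2 * xb * yb
    dy = yb * radial + p2 * (r2 + 2 * yb ** 2) + 2 * p1 * xb * yb
    return x + dx, y + dy
```

With all coefficients zero the coordinates pass through unchanged, which is a convenient sanity check when wiring such a correction in front of a bundle adjustment.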
27

Development of ultra-precision tools for metrology and lithography of large area photomasks and high definition displays

Ekberg, Lars Peter January 2013 (has links)
Large area flat displays are nowadays considered a commodity. After the era of bulky CRT TV technology, LCD and OLED have taken over as the most prevalent technologies for high quality image display devices. An important factor underlying the success of these technologies has been the development of high performance photomask writers in combination with a precise photomask process. Photomask manufacturing can be regarded as an art, highly dependent on qualified and skilled workers in a few companies located in Asia. The manufacturing yield in the photomask process depends to a great extent on several steps of measurements and inspections. Metrology, which is the focus of this thesis, is the science of measurement and is a prerequisite for maintaining high quality in all manufacturing processes. The details and challenges of performing critical measurements over large area photomasks of square meter sizes will be discussed. In particular the development of methods and algorithms related to the metrology system MMS15000, the world standard for large area photomask metrology today, will be presented. The most important quality of a metrology system is repeatability. Achieving good repeatability requires a stable environment, carefully selected materials, sophisticated mechanical solutions, precise optics and capable software. Properties of the air, including humidity, CO2 level, pressure and turbulence, are other factors that can impact repeatability and accuracy if not handled properly. Besides these qualities, the behavior of the photomask itself needs to be carefully handled in order to achieve a good correspondence to the Cartesian coordinate system. An uncertainty specification below 100 nm (3σ) over an area measured in square meters cannot be fulfilled unless special care is taken to compensate for gravity-induced errors from the photomask itself when it is resting on the metrology tool stage. 
Calibration is therefore a considerable challenge over these large areas. A novel method for self-calibration will be presented and discussed in the thesis. This is a general method that has proven to be highly robust even in cases when the self-calibration problem is close to being underdetermined. A random sampling method based on massive averaging in the time domain will be presented as the solution for achieving precise spatial measurements of the photomask patterns. This method has been used for detection of the position of chrome or glass edges on the photomask with a repeatability of 1.5 nm (3σ), using a measurement time of 250 ms. The method has also been used for verification of large area measurement repeatability of approximately 10 nm (3σ) when measuring several hundred measurement marks covering an area of 0.8 × 0.8 m². The measurement of linewidths, referred to in the photomask industry as critical dimension (CD) measurements, is another important task for the MMS15000 system. A threshold-based inverse convolution method will be presented that enhances resolution down to 0.5 µm without requiring a change to the numerical aperture of the system. As already mentioned, metrology is very important for maintaining high quality in a manufacturing environment. In the mask manufacturing industry in particular, the cost of poor quality (CoPQ) is extremely high. Besides the high materials cost, there are also the stringent requirements placed on CD and mask overlay, along with the need for zero defects that make the photomask industry unique. This topic is discussed further, and is shown to be a strong motivation for the development of the ultra-precision metrology built into the MMS15000 system.
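The random-sampling method based on massive averaging in the time domain can be illustrated in miniature: repeating a noisy acquisition n times and averaging shrinks the random error roughly as 1/sqrt(n), which is how nanometre-level edge repeatability can emerge from a much noisier single read-out. A hypothetical sketch, with names of our choosing:

```python
import numpy as np

def averaged_position(measure, n):
    """Average n repeated noisy acquisitions of the same quantity.
    `measure` is a placeholder for one noisy position read-out; the
    returned standard error shrinks roughly as 1/sqrt(n)."""
    samples = np.array([measure() for _ in range(n)])
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n)
```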
28

Design Considerations for Wide Bandwidth Continuous-Time Low-Pass Delta-Sigma Analog-to-Digital Converters

Padyana, Aravind 1983- 14 March 2013 (has links)
Continuous-time (CT) delta-sigma (ΔΣ) analog-to-digital converters (ADC) have emerged as the popular choice to achieve high resolution and large bandwidth due to their low cost, power efficiency, inherent anti-alias filtering and digital post-processing capabilities. This work presents a detailed system-level design methodology for a low-power CT ΔΣ ADC. Design considerations and trade-offs at the system level are presented. A novel technique to reduce the sensitivity of the proposed ADC to clock-jitter-induced feedback charge variations by employing a hybrid digital-to-analog converter (DAC) based on switched-capacitor circuits is also presented. The proposed technique provides a clock jitter tolerance of up to 5 ps (rms). The system is implemented using a 5th-order active-RC loop filter, 9-level quantizer and DAC, achieving 74 dB SNDR over a 20 MHz signal bandwidth at a 400 MHz sampling frequency in a 1.2 V, 90 nm CMOS technology. A novel technique to improve the linearity of the feedback digital-to-analog converters (DAC) in a target 11-bit resolution, 100 MHz bandwidth, 2 GHz sampling frequency CT ΔΣ ADC is also presented in this work. DAC linearity is improved by combining dynamic element matching and automatic background calibration to achieve up to 18 dB improvement in the SNR. Transistor-level circuit implementation of the proposed technique was done in a 1.8 V, 0.18 μm BiCMOS process.
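A first-order discrete-time loop is the textbook entry point to the delta-sigma principle used above (the actual design is a 5th-order continuous-time loop): the integrator accumulates the error between the input and the 1-bit feedback DAC, and the mean of the output bitstream tracks the input while quantisation noise is pushed to high frequencies. An illustrative sketch:

```python
import numpy as np

def first_order_delta_sigma(u):
    """Toy discrete-time first-order delta-sigma modulator: integrate
    the error between input and feedback, quantise to one bit, feed
    the quantised value back. The running average of the +/-1 output
    bitstream tracks the (bounded) input signal."""
    integ, v, out = 0.0, 0.0, []
    for sample in u:
        integ += sample - v                # loop-filter integrator
        v = 1.0 if integ >= 0.0 else -1.0  # 1-bit quantiser / DAC feedback
        out.append(v)
    return np.array(out)
```

Feeding a DC value of 0.5 yields a bitstream whose mean settles at 0.5, the basic property that the decimation filter after the modulator exploits.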
29

Méthodes de reconstruction tridimensionnelle intégrant des points cycliques : application au suivi d’une caméra / Structure-from-Motion paradigms integrating circular points : application to camera tracking

Calvet, Lilian 23 January 2014 (has links)
Cette thèse traite de la reconstruction tridimensionnelle d’une scène rigide à partir d’une collection de photographies numériques, dites vues. Le problème traité est connu sous le nom du "calcul de la structure et du mouvement" (structure-and/from-motion) qui consiste à "expliquer" des trajectoires de points dits d’intérêt au sein de la collection de vues par un certain mouvement de l’appareil (dont sa trajectoire) et des caractéristiques géométriques tridimensionnelles de la scène. Dans ce travail, nous proposons les fondements théoriques pour étendre certaines méthodes de calcul de la structure et du mouvement afin d’intégrer comme données d’entrée, des points d’intérêt réels et des points d’intérêt complexes, et plus précisément des images de points cycliques. Pour tout plan projectif, les points cycliques forment une paire de points complexes conjugués qui, par leur invariance par les similitudes planes, munissent le plan projectif d’une structure euclidienne. Nous introduisons la notion de marqueurs cycliques qui sont des marqueurs plans permettant de calculer sans ambiguïté les images des points cycliques de leur plan de support dans toute vue. Une propriété de ces marqueurs, en plus d’être très "riches" en information euclidienne, est que leurs images peuvent être appariées même si les marqueurs sont disposés arbitrairement sur des plans parallèles, grâce à l’invariance des points cycliques. Nous montrons comment utiliser cette propriété dans le calcul projectif de la structure et du mouvement via une technique matricielle de réduction de rang, dite de factorisation, de la matrice des données correspondant aux images de points réels, complexes et/ou cycliques. Un sous-problème critique abordé dans le calcul de la structure et du mouvement est celui de l’auto-calibrage de l’appareil, problème consistant à transformer un calcul projectif en un calcul euclidien. 
Nous expliquons comment utiliser l’information euclidienne fournie par les images des points cycliques dans l’algorithme d’auto-calibrage opérant dans l’espace projectif dual et fondé sur des équations linéaires. L’ensemble de ces contributions est finalement utilisé pour une application de suivi automatique de caméra utilisant des marqueurs formés par des couronnes concentriques (appelés CCTags), où il s’agit de calculer le mouvement tridimensionnel de la caméra dans la scène à partir d’une séquence vidéo. Ce type d’application est généralement utilisé dans l’industrie du cinéma ou de la télévision afin de produire des effets spéciaux. Le suivi de caméra proposé dans ce travail a été conçu pour proposer le meilleur compromis possible entre flexibilité d’utilisation et précision des résultats obtenus. / The thesis deals with the problem of 3D reconstruction of a rigid scene from a collection of views acquired by a digital camera. This problem, referred to as the Structure-from-Motion (SfM) problem, consists in computing the camera motion (including its trajectory) and the 3D characteristics of the scene from the 2D trajectories of imaged features through the collection. We propose theoretical foundations to extend some SfM paradigms in order to integrate both real and complex imaged features as input data, and more specifically imaged circular points. The circular points of a projective plane form a complex conjugate point-pair which is fixed under plane similarities, thus endowing the plane with a Euclidean structure. We introduce the notion of circular markers, planar markers that allow the imaged circular points of their supporting plane to be computed without ambiguity in all views. Aside from providing very “rich” Euclidean information, such features can be matched even if they are arbitrarily positioned on parallel planes, thanks to their invariance under plane similarities; this increases their visibility compared to natural features. 
We show how to benefit from this geometric property when solving the projective SfM problem via a rank-reduction technique, referred to as projective factorization, applied to the matrix whose entries are the images of real, complex and/or circular features. A critical issue in such an SfM paradigm is the self-calibration problem, which consists in upgrading a projective reconstruction to a Euclidean one. We explain how to use the Euclidean information provided by imaged circular points in a self-calibration algorithm operating in the dual projective space and relying on linear equations. All these contributions are finally used in an automatic camera-tracking application relying on markers made up of concentric circles (called C2Tags). The problem consists in computing the 3D camera motion from a video sequence. This kind of application is generally used in the cinema and TV industry to create special effects. The camera tracking proposed in this work is designed to provide the best compromise between flexibility of use and accuracy.
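The projective factorization referred to above reduces the measurement matrix to rank 4. With the projective depths assumed already recovered, the core step is a truncated SVD; the sketch below is illustrative, not the author's formulation:

```python
import numpy as np

def factorize_rank4(W):
    """Rank-4 factorisation at the heart of projective SfM
    factorisation methods: a (3m x n) measurement matrix of rescaled
    imaged features is split into stacked 3x4 camera matrices and a
    4 x n matrix of homogeneous scene points via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    P = U[:, :4] * s[:4]   # (3m) x 4 stacked camera matrices
    X = Vt[:4, :]          # 4 x n homogeneous scene points
    return P, X
```

The decomposition is only defined up to a 4x4 projective transformation, which is precisely the ambiguity that the self-calibration step described above resolves.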
30

Système de vision hybride à fovéation pour la vidéo-surveillance et la navigation robotique / Hybrid foveated vision system for video surveillance and robotic navigation

Rameau, François 02 December 2014 (has links)
L'objectif principal de ce travail de thèse est l'élaboration d'un système de vision binoculaire mettant en oeuvre deux caméras de types différents. Le système étudié est constitué d'une caméra de type omnidirectionnelle associée à une caméra PTZ. Nous appellerons ce couple de caméras un système de vision hybride. L'utilisation de ce type de capteur fournit une vision globale de la scène à l'aide de la caméra omnidirectionnelle tandis que l'usage de la caméra mécanisée permet une fovéation, c'est-à-dire l'acquisition de détails, sur une région d'intérêt détectée depuis l'image panoramique.Les travaux présentés dans ce manuscrit ont pour objet, à la fois de permettre le suivi d'une cible à l'aide de notre banc de caméras mais également de permettre une reconstruction 3D par stéréoscopie hybride de l'environnement nous permettant d'étudier le déplacement du robot équipé du capteur. / The primary goal of this thesis is to elaborate a binocular vision system using two different types of camera. The system studied here is composed of one omnidirectional camera coupled with a PTZ camera. This heterogeneous association of cameras having different characteristics is called a hybrid stereo-vision system. The couple composed of these two cameras combines the advantages given by both of them, that is to say a large field of view and an accurate vision of a particular Region of interest with an adjustable level of details using the zoom. In this thesis, we are presenting multiple contributions in visual tracking using omnidirectional sensors, PTZ camera self calibration, hybrid vision system calibration and structure from motion using a hybrid stereo-vision system.
