  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Handling of Rolling Shutter Effects in Monocular Semi-Dense SLAM Algorithms

Tallund, Lukas January 2016 (has links)
Since most people now carry a high-performing computing device with an attached camera in their pocket, in the form of a smartphone, robotics and computer vision researchers are excited about the possibilities this creates. Such devices have previously been used in robotics to create 3D maps of environments and objects by feeding the camera data to a 3D reconstruction algorithm. The big downside of smartphones is that their cameras use a different sensor than what is usually used in robotics, namely a rolling shutter camera. These cameras are cheaper to produce but are not as well suited for general 3D reconstruction algorithms as the global shutter cameras typically used in robotics research. One recent, accurate and computationally efficient 3D reconstruction method which could be used on a mobile device, if tweaked, is LSD-SLAM. This thesis takes the LSD-SLAM method developed for global shutter cameras and incorporates additional methods developed to allow the use of rolling shutter data. The developed method is evaluated by counting the number of failed 3D reconstructions before a successful one is obtained when using rolling shutter data. The result is a method which improves this metric by about 70% compared to the unmodified LSD-SLAM method.
2

Self-calibration for consumer omnidirectional multi-camera mounted on a helmet

Nguyen, Thanh-Tin 19 December 2017 (has links)
360-degree and spherical multi-cameras, built by fixing together several consumer cameras, have become popular and are convenient for recent applications like immersive videos, 3D modeling and virtual reality. This type of camera captures the whole scene in a single view. Whether the goal is to merge the monocular videos into one cylindrical video or to obtain 3D information about the environment, several basic steps should be performed beforehand. Among these tasks, we consider the synchronization between cameras; the calibration of the multi-camera system, including intrinsic and extrinsic parameters (i.e. the relative poses between cameras); and the rolling shutter calibration. The goal of this thesis is to develop and apply a user-friendly method that does not require a calibration pattern. First, the multi-camera model is initialized thanks to assumptions that are suitable to an omnidirectional camera without a privileged direction: the cameras have the same settings (frequency, image resolution, field of view) and are roughly equiangular. Second, a frame-accurate synchronization is estimated from the instantaneous angular velocities of each camera provided by monocular structure-from-motion. Third, both the inter-camera poses and the intrinsic parameters are refined using multi-camera structure-from-motion and bundle adjustment, under the approximations that the multi-camera is central and global shutter, and that the previous synchronization holds. Last, we introduce a final bundle adjustment without these approximations, which refines not only the usual parameters (intrinsic, extrinsic, 3D) but also a subframe-accurate synchronization and the rolling shutter coefficient. We experiment in a context that we believe useful for applications such as 360-degree videos and 3D scene modeling: several consumer cameras or a spherical camera mounted on a helmet and moving along trajectories of a few hundred meters to several kilometers.
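The frame-accurate synchronization step can be illustrated with a short sketch (a toy illustration of the idea, not the thesis code): since the magnitude of the angular velocity is the same for rigidly attached cameras at the same instant, the integer frame offset can be found by maximizing the correlation between the per-camera angular-speed sequences estimated by monocular structure-from-motion.

```python
import numpy as np

def frame_offset(speed_a, speed_b, max_shift=10):
    """Return the integer shift s that best aligns speed_a[i + s] with
    speed_b[i], by maximizing the normalized cross-correlation of the
    two angular-speed sequences. Toy sketch of the synchronization idea."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = speed_a[s:], speed_b[:len(speed_b) - s]
        else:
            a, b = speed_a[:len(speed_a) + s], speed_b[-s:]
        n = min(len(a), len(b))
        score = np.corrcoef(a[:n], b[:n])[0, 1]
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Camera B starts 3 frames after camera A: its speed sequence is delayed.
rng = np.random.default_rng(0)
speed_a = rng.random(200)
speed_b = np.roll(speed_a, 3)   # speed_b[i] == speed_a[i - 3]
```

In the thesis the residual sub-frame part of the offset is handled later, inside the final bundle adjustment; this sketch only covers the frame-accurate stage.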
3

3D Vision Geometry for Rolling Shutter Cameras

Lao, Yizhen 16 May 2019 (has links)
Many modern CMOS cameras are equipped with Rolling Shutter (RS) sensors, which make for low-cost, low-consumption and fast cameras. In this acquisition mode, the pixel rows are exposed sequentially from the top to the bottom of the image. Therefore, images captured by moving RS cameras exhibit distortions (e.g. wobble and skew) which make classic algorithms at best less precise, at worst unusable due to singularities or degenerate configurations. The goal of this thesis is to propose a general framework for modelling and solving structure-from-motion (SfM) with RS cameras, addressing each sub-task of the SfM pipeline (namely image correction, absolute and relative pose estimation, and bundle adjustment) and proposing improvements.
The first part of this manuscript presents a novel RS correction method which uses line features. Unlike existing methods, which use iterative solutions and make the Manhattan World (MW) assumption, our method, R4C, computes the camera's instantaneous motion linearly from a few image features and imposes no constraint on the orientation of the 3D lines. In addition, the method is integrated into a RANSAC-like framework which detects curves that correspond to actual 3D straight lines and rejects outlier curves, making image correction more robust and fully automated. The second part revisits bundle adjustment (BA) for RS images. It addresses a limitation of existing RS bundle adjustment methods in the case of close read-out directions among RS views, a common configuration in many real-life applications (for instance, cameras mounted on a vehicle). We propose a novel camera-based RS projection algorithm, in which the row index of a projected point varies during the optimization rather than being fixed to its measured value, and incorporate it into RS BA to compute reprojection errors; this makes SfM survive the degenerate configuration mentioned above. The third part extends the concept of homography to RS images, showing that the point-to-point relation between two RS views of a coplanar point cloud can be expressed by 3 to 7 matrices of size 3x3, depending on the motion model. Linear solvers for the computation of these matrices are presented, including a practical solver using 13 point correspondences, together with two essential applications in computer vision that use the RS homography: plane-based RS relative pose estimation and RS image stitching. The last part of this thesis studies the absolute camera pose problem (PnP) and SfM with RS effects by drawing analogies with non-rigid vision, namely Shape-from-Template (SfT) and Non-Rigid SfM (NRSfM) respectively: an RS image of a rigid scene in motion can be interpreted as a Global Shutter (GS) image of a virtually deformed surface, so the virtual deformations are first computed with SfT or NRSfM under a classic GS model, then reinterpreted as the result of motion during acquisition. Unlike existing methods, which perform 3D-2D registration after augmenting the GS projection model with velocity parameters under various kinematic models, we propose to use local differential constraints. The proposed methods outperform the state of the art, exhibit better convergence properties, and handle configurations that are critical for existing methods.
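The core geometric difficulty addressed throughout this thesis can be made concrete with a small sketch, assuming a pinhole camera translating at constant velocity (an illustrative model, not code from the thesis): under rolling shutter, the pose used to project a 3D point depends on the image row where the point lands, which itself depends on the pose, so the projection must be solved by fixed-point iteration.

```python
import numpy as np

def rs_project(X, f, row_time, velocity, rows=480, iters=10):
    """Project 3D point X with a rolling-shutter pinhole camera (focal
    length f, principal point at the image center row) that translates
    at constant `velocity` while rows are exposed top to bottom.
    The row index and the exposure time depend on each other, so we
    iterate. Illustrative sketch with simplified conventions."""
    v = rows / 2.0                        # initial guess: middle row
    for _ in range(iters):
        t = v * row_time                  # exposure time of the guessed row
        Xc = X - velocity * t             # camera moved; point in camera frame
        u = f * Xc[0] / Xc[2]
        v = rows / 2.0 + f * Xc[1] / Xc[2]
    return u, v

# With zero velocity this reduces to an ordinary global-shutter projection.
u_gs, v_gs = rs_project(np.array([1.0, 0.0, 5.0]), 500.0, 1e-4, np.zeros(3))
# A sideways-moving camera shifts the projection of the same point.
u_rs, v_rs = rs_project(np.array([1.0, 0.5, 5.0]), 500.0, 1e-4,
                        np.array([1.0, 0.0, 0.0]))
```

Letting the row index vary during optimization, as in the loop above, is exactly what distinguishes the thesis's bundle adjustment from methods that freeze the row index at its measured value.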
4

Dynamic pose estimation with CMOS cameras using sequential acquisition

Magerand, Ludovic 18 December 2014 (has links)
Computer vision, a field of computer science, is about extracting information from cameras. Their sensors can be produced using CMOS technology, which is widely used in mobile devices due to its low cost and small footprint. This technology allows fast acquisition of an image by sequentially exposing the scan-lines. However, this method produces deformations in the image if there is motion between the camera and the filmed scene. This effect is known as rolling shutter, and various methods have tried to remove the resulting artifacts. Instead of correcting it, previous works have shown how to extract information about the motion from this effect. These methods rely on an extension of the usual geometric camera model that takes into account the sequential acquisition and the motion, assumed uniform, between the sensor and the scene. From this model, it is possible to extend the usual pose estimation (estimation of the position and orientation of the camera in the scene) to also estimate the motion parameters. Following on from this approach, we present a generalization to non-uniform motions based on smoothing the derivatives of the motion parameters. We then present a polynomial model of the rolling shutter and a global optimization method to estimate its parameters; properly implemented, this makes it possible to establish an automatic matching between the 3D model and the image. We conclude with a comparison of these methods on both simulated and real data.
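The generalization to non-uniform motion rests on penalizing the derivatives of the motion parameters. A minimal sketch of that idea (a generic smoothing regularization, not the thesis's estimator) fits noisy per-row motion values while keeping their finite differences small:

```python
import numpy as np

def smooth_motion(observed, lam=10.0):
    """Solve min_x ||x - observed||^2 + lam * ||D x||^2, where D is the
    finite-difference operator: a least-squares fit whose derivatives
    are penalized, sketching the regularization of non-uniform motion."""
    n = len(observed)
    D = np.diff(np.eye(n), axis=0)        # (n-1, n) finite differences
    A = np.eye(n) + lam * D.T @ D         # normal equations
    return np.linalg.solve(A, observed)

rng = np.random.default_rng(2)
true = np.linspace(0.0, 1.0, 50)          # slowly varying motion parameter
noisy = true + 0.1 * rng.standard_normal(50)
smoothed = smooth_motion(noisy)
```

A constant sequence passes through unchanged (its derivatives are zero, so the penalty vanishes), while noisy estimates are pulled toward a smooth trajectory.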
5

High Dynamic Range Video for Photometric Measurement of Illumination

Unger, Jonas, Gustavson, Stefan, Ynnerman, Anders January 2007 (has links)
We describe the design and implementation of a high dynamic range (HDR) imaging system capable of capturing RGB color images with a dynamic range of 10,000,000 : 1 at 25 frames per second. We use a highly programmable camera unit with high throughput A/D conversion, data processing and data output. HDR acquisition is performed by multiple exposures in a continuous rolling shutter progression over the sensor. All the different exposures for one particular row of pixels are acquired head to tail within the frame time, which means that the time disparity between exposures is minimal, the entire frame time can be used for light integration and the longest exposure is almost the entire frame time. The system is highly configurable, and trade-offs are possible between dynamic range, precision, number of exposures, image resolution and frame rate.
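The multiple-exposure scheme can be illustrated with a hedged sketch of the standard HDR combination step (a textbook illustration, not this system's pipeline): each pixel's radiance is estimated as a weighted average of its exposures, with saturated and near-dark samples down-weighted.

```python
import numpy as np

def hdr_combine(samples, exposure_times, sat=0.95, floor=0.05):
    """Fuse several exposures of the same pixels into one radiance
    estimate: divide each sample by its exposure time and average,
    down-weighting saturated or near-dark samples. A standard
    multi-exposure HDR sketch, not the described system's firmware."""
    samples = np.asarray(samples, dtype=float)      # (n_exposures, n_pixels)
    times = np.asarray(exposure_times, dtype=float)[:, None]
    usable = (samples > floor) & (samples < sat)
    w = np.where(usable, 1.0, 1e-6)                 # tiny weight, never zero
    return np.sum(w * samples / times, axis=0) / np.sum(w, axis=0)

# A pixel of radiance 2.0 seen with two exposure times, both unsaturated:
r1 = hdr_combine([[0.2], [0.6]], [0.1, 0.3])
# A bright pixel (radiance 5.0) saturates the long exposure, which is ignored:
r2 = hdr_combine([[0.5], [1.0]], [0.1, 0.3])
```

Capturing the exposures for a row head to tail, as the paper describes, keeps the samples being fused close in time, which is what makes this kind of per-pixel fusion valid for moving scenes.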
6

3D mapping with iPhone

Lundqvist, Tobias January 2011 (has links)
Today, 3D models of cities are created from aerial images using a camera rig. The images, together with sensor data from the flights, are stored for further processing when building 3D models. However, there is a market demand for a more mobile solution of satisfactory quality. If the camera position can be calculated for each image, an existing algorithm is available for the creation of 3D models. This master thesis project aims to investigate whether the iPhone 4 offers image and sensor data of good enough quality from which 3D models can be created. Calculations of movements and rotations from sensor data form the foundation of the image processing and should refine the camera position estimates. In the end, the 3D models are built from image processing only, since the sensor data cannot be used due to poor accuracy. Because of that, the scaling of the 3D models is unknown, and a measurement of the real objects is needed to make scaling possible. Compared to a test algorithm that calculates 3D models from images only, already available in the SBD's system, the quality of the 3D model in this master thesis project is, as judged by the human eye, almost the same or in some respects even better.
7

Video stabilization and rectification for handheld cameras

Jia, Chao 26 June 2014 (has links)
Video data has increased dramatically in recent years due to the prevalence of handheld cameras. Such videos, however, are usually shakier compared to videos shot by tripod-mounted cameras or cameras with mechanical stabilizers. In addition, most handheld cameras use CMOS sensors. In a CMOS sensor camera, different rows in a frame are read/reset sequentially from top to bottom. When there is fast relative motion between the scene and the video camera, a frame can be distorted because each row was captured under a different 3D-to-2D projection. This kind of distortion is known as rolling shutter effect. Digital video stabilization and rolling shutter rectification seek to remove the unwanted frame-to-frame jitter and rolling shutter effect, in order to generate visually stable and pleasant videos. In general, we need to (1) estimate the camera motion, (2) regenerate camera motion, and (3) synthesize new frames. This dissertation aims at improving the first two steps of video stabilization and rolling shutter rectification. It has been shown that the inertial sensors in handheld devices can provide more accurate and robust motion estimation compared to vision-based methods. This dissertation proposes an online camera-gyroscope calibration method for sensor fusion while a user is capturing video. The proposed method uses an implicit extended Kalman filter and is based on multiple-view geometry in a rolling shutter camera model. It is able to estimate the needed calibration parameters online with all kinds of camera motion. Given the camera motion estimated from inertial sensors after the proposed calibration method, this dissertation first proposes an offline motion smoothing algorithm based on a 3D rotational camera motion model. The offline motion smoothing is formulated as a geodesic-convex regression problem on the manifold of rotation matrix sequences. The formulated problem is solved by an efficient two-metric projection algorithm on the manifold.
The geodesic-distance-based smoothness metric better exploits the manifold structure of sequences of rotation matrices. Then this dissertation proposes two online motion smoothing algorithms that are also based on a 3D rotational camera motion model. The first algorithm extends IIR filtering from Euclidean space to the nonlinear manifold of 3D rotation matrices. The second algorithm uses unscented Kalman filtering on a constant angular velocity model. Both offline and online motion smoothing algorithms are constrained to guarantee that no black borders intrude into the stabilized frames.
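The online IIR smoothing on the rotation manifold can be sketched as follows (a minimal illustration of the manifold-IIR idea using quaternions, not the dissertation's implementation): each smoothed pose moves a fixed fraction of the way toward the newest measurement along the geodesic, a first-order low-pass filter transplanted from Euclidean space to the space of 3D rotations.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                    # take the shorter arc
        q1, dot = -q1, -dot
    dot = min(dot, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-8:
        return q0
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * q0 + s1 * q1

def smooth_rotations(quats, alpha=0.2):
    """First-order IIR low-pass on a rotation sequence: each smoothed
    pose moves a fraction `alpha` toward the new sample along the
    geodesic. A sketch of the manifold-IIR idea, not the thesis code."""
    out = [np.asarray(quats[0], dtype=float)]
    for q in quats[1:]:
        out.append(slerp(out[-1], np.asarray(q, dtype=float), alpha))
    return out
```

A constant input passes through unchanged, and a sudden 90-degree rotation is attenuated to an 18-degree step for alpha = 0.2, exactly as a scalar IIR filter would attenuate a step input.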
8

Coded Acquisition of High Speed Videos with Multiple Cameras

Pournaghi, Reza 10 April 2015 (has links)
High frame rate video (HFV) is an important investigational tool in science, engineering and the military. In ultrahigh-speed imaging, the obtainable temporal, spatial and spectral resolutions are limited by the sustainable throughput of in-camera mass memory, the lower bound of exposure time, and illumination conditions. In order to break these bottlenecks, we propose a new coded video acquisition framework that employs K>1 cameras, each of which makes random measurements of the video signal in both the temporal and spatial domains. For each of the K cameras, this multi-camera strategy greatly relaxes the stringent requirements on memory speed, shutter speed, and illumination strength. The recovery of HFV from these random measurements is posed and solved as a large-scale l1 minimization problem by exploiting the joint temporal and spatial sparsity of the 3D signal. Three coded video acquisition techniques with varied trade-offs between performance and hardware complexity are developed: frame-wise coded acquisition, pixel-wise coded acquisition, and column-row-wise coded acquisition. The performance of these techniques is analyzed in relation to the sparsity of the underlying video signal. To make ultra-high-speed cameras with coded exposure more practical and affordable, we develop a coded exposure video/image acquisition system by an innovative assembly of multiple rolling shutter cameras. Each of the constituent rolling shutter cameras adopts a random pixel read-out mechanism by simply changing the read-out order of pixel rows from sequential to random. Simulations of these new image/video coded acquisition techniques are carried out and experimental results are reported.
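The l1-minimization recovery at the core of the framework can be sketched with a generic iterative soft-thresholding (ISTA) solver for min_x ||Ax - y||^2 + lam * ||x||_1, run here on a toy compressed-sensing instance (a textbook illustration, not the thesis's large-scale solver):

```python
import numpy as np

def ista(A, y, lam=0.005, iters=2000):
    """Iterative soft-thresholding: a gradient step on the quadratic
    data term followed by the l1 proximal operator (shrinkage).
    Textbook sketch of the kind of solver behind sparse recovery."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

# Toy instance: recover a 3-sparse signal from 40 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -0.8, 0.6]
x_hat = ista(A, A @ x_true)
```

The same principle scales to the thesis's setting, where the unknown is a whole 3D video volume and the measurement operator encodes the random temporal/spatial exposure codes of the K cameras.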
9

Verification Method for Time of Capture of a Rolling Shutter Image

Johansson, Filip, Johansson, Alexander January 2023 (has links)
Modern automotive systems increasingly depend on camera sensors to gather safety-critical data used in driver-assisting features of the system. These features can consist of, for example, lane-keeping assist and automatic braking, where the sensors register objects within certain distances. When these camera sensors gather information, the time of the image is critical for the calculation of speeds, distances, and the size of any potential registered object in the frame. Limitations of bandwidth and computing in such vehicles create a need to use special cameras that do not capture the whole image simultaneously but instead capture the images piecewise. These cameras are called rolling shutter cameras. This puts pressure on defining when an image was captured, since different parts of the image were captured at different points in time. For this thesis, this point in time is defined as the chronological middle point between the camera starting to capture an image and when it has collected the final part of it. This thesis performs a mapping study to evaluate methods to verify the timestamp of an image generated by rolling shutter cameras. Further, this thesis proposes a new method using multiple digital clocks and presents its performance using a proof-of-concept implementation to prove the method's ability to accurately represent time with sub-millisecond accuracy.
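The thesis's definition of an image's time of capture, the chronological middle between the start and the end of the rolling shutter readout, can be written as a one-line sketch (the parameter names and the timing model are illustrative assumptions, not the thesis's implementation):

```python
def capture_midpoint_us(t_start_us, n_rows, row_readout_us, exposure_us=0.0):
    """Midpoint timestamp (microseconds) of a rolling-shutter capture:
    halfway between the first row starting to expose and the last row
    finishing. Illustrative timing model; names are hypothetical."""
    t_end_us = t_start_us + exposure_us + (n_rows - 1) * row_readout_us
    return (t_start_us + t_end_us) / 2.0
```

For a 1080-row sensor read out at 10 microseconds per row, the midpoint lags the start of the first row by roughly 5.4 milliseconds, which is the kind of offset the proposed verification method must resolve with sub-millisecond accuracy.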
10

Strobed IR Illumination for Image Quality Improvement in Surveillance Cameras

Darmadi, Steve January 2018 (has links)
Infrared (IR) illumination is commonly found in surveillance cameras to improve night-time recording quality. However, the limited power available from the Power over Ethernet (PoE) connection in network-enabled cameras restricts the possibility of increasing image quality by allocating more power to the illumination system. This thesis explores an alternative way to improve image quality by using strobed IR illumination. Different strobing methods are discussed in relation to the rolling shutter timing commonly used in CMOS sensors. The method that benefits the evaluation scenario the most was implemented in a prototype based on a commercial fixed-box camera from Axis. The prototype demonstrates how synchronization of the sensor and the strobing illumination system can be achieved. License plate recognition (LPR) on a dark highway was chosen as the evaluation scenario, and an analysis of the car movements was done in pursuit of creating an indoor test. The indoor test provided a controlled environment, while the outdoor test exposed the prototype to real-life conditions. The test results show that with strobed IR, the output image exhibits improved brightness and reduced rolling shutter artifacts compared to constant IR. Theoretical calculations also show that these improvements do not compromise the average power consumption or the eye-safety level of the illumination system.
