1

Automated vehicle follower system based on a monocular camera / Automatiserat fordonsystem för följning baserat på en monokulär kamera

JOHANSSON, JACOB, SCHRÖDER, JOEL January 2016 (has links)
This report proposes a solution for an automated vehicle follower based on one front-facing monocular camera that can be used to achieve platooning at a lower cost than the systems available on the market today. The sensor is local to the automated follower vehicle, i.e. there is no Vehicle-to-Vehicle (V2V) communication. A state-of-the-art chapter describes different aspects of platooning, computer vision techniques, state-of-the-art hardware developed especially for autonomous driving, and systems closely related to the proposed solution. The theory behind the implementation, such as trajectory generation, control, image operations and vehicle models, is presented, followed by a chapter dedicated to the actual implementation. The experimental vehicle used to validate the solution was a modified 1/12-scale radio-controlled (RC) car. An Arduino controls the steering and driving motor, and a PC mounted on the vehicle uses a webcam to capture images. The preceding vehicle's position relative to the follower vehicle was calculated from the captured images, and a trajectory towards the preceding vehicle's path was generated from a cubic curve. Measurements from a stereo vision system were used to evaluate the accuracy of the follower vehicle and the minimal spacing needed between the follower and the preceding vehicle. The follower vehicle exhibits the desired following behaviour, but its accuracy should be improved to generate a more precise trajectory before testing on a larger-scale vehicle. The solution shows that a monocular camera can be used to follow a vehicle, and with the addition of a GPS module and a fuzzy velocity controller it could be tested on a full-sized vehicle.
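The cubic-curve trajectory generation mentioned in the abstract can be illustrated with a short sketch. The following Python example is an assumption of how such a curve might be fitted, not the thesis code: it solves for a cubic y(x) from the follower's pose to the detected pose of the preceding vehicle, constraining position and heading at both ends.

```python
import numpy as np

def cubic_trajectory(target_x, target_y, target_heading, n_points=20):
    """Fit y(x) = a*x^3 + b*x^2 + c*x + d in the follower's frame
    (origin at the follower, heading along +x).

    Boundary conditions:
      y(0) = 0,  y'(0) = 0           (start at the follower, aligned with it)
      y(X) = Y,  y'(X) = tan(psi)    (end at the target, aligned with its path)
    """
    X, Y, dY = target_x, target_y, np.tan(target_heading)
    A = np.array([
        [0.0,     0.0,  0.0, 1.0],   # y(0)  = 0
        [0.0,     0.0,  1.0, 0.0],   # y'(0) = 0
        [X**3,   X**2,    X, 1.0],   # y(X)  = Y
        [3*X**2, 2*X,   1.0, 0.0],   # y'(X) = dY
    ])
    coeffs = np.linalg.solve(A, np.array([0.0, 0.0, Y, dY]))
    xs = np.linspace(0.0, X, n_points)
    return np.column_stack([xs, np.polyval(coeffs, xs)])

# Example: preceding vehicle detected 2 m ahead, 0.3 m to the left,
# heading 10 degrees off the follower's axis.
path = cubic_trajectory(2.0, 0.3, np.radians(10))
```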
2

MonoDepth-vSLAM: A Visual EKF-SLAM using Optical Flow and Monocular Depth Estimation

Dey, Rohit 04 October 2021 (has links)
No description available.
3

Visual-Inertial SLAM Using a Monocular Camera and Detailed Map Data

Ekström, Viktor, Berglund, Ludvig January 2023 (has links)
The most commonly used localisation methods, such as GPS, rely on external signals to estimate the location. Systems that are independent of external signals are needed to increase the robustness of localisation. In this thesis, a visual-inertial SLAM-based localisation system which utilises detailed map, image, IMU, and odometry data is presented and evaluated. The system utilises factor graphs through the Georgia Tech Smoothing and Mapping (GTSAM) library, developed at the Georgia Institute of Technology. The thesis contributes performance evaluations for different camera and landmark settings in a localisation system based on GTSAM. Within the visual SLAM field, it also contributes a sparse landmark selection and a low image frequency approach to the localisation problem. A variety of camera-related settings, such as image frequency and the number of visible landmarks per image, are used to evaluate the system. The findings show that the estimate improves with a higher image frequency, and also improves if the image frequency is held constant along the tracks. Having more than one landmark per image results in a significantly better estimate. The estimate is not accurate when only a single distant landmark is used throughout the track, but it is significantly better if two complementary landmarks are identified briefly along the tracks. The system can also maintain a good estimate through periods in which no landmarks can be identified.
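To make the factor-graph formulation concrete, here is a minimal GTSAM sketch in Python. It is an illustrative assumption of how such a localisation system might be wired up, not the thesis implementation; the noise sigmas, intrinsics and measurements are invented. Odometry enters as between-factors on successive poses, and a landmark known from the detailed map enters with a tight position prior plus a pixel-measurement factor.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, L

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise models (sigmas are illustrative guesses).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 3 + [0.3] * 3))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 3 + [0.2] * 3))
pix_noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)   # 1 px reprojection noise
map_noise = gtsam.noiseModel.Isotropic.Sigma(3, 0.1)   # landmark known from map
K = gtsam.Cal3_S2(500.0, 500.0, 0.0, 320.0, 240.0)     # assumed pinhole intrinsics

# Anchor the first pose and chain odometry between consecutive poses.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
odom = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))  # 1 m forward
for i in range(3):
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), odom, odom_noise))
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), np.array([float(i), 0.0, 0.0])))
initial.insert(X(3), gtsam.Pose3(gtsam.Rot3(), np.array([3.0, 0.0, 0.0])))

# One map landmark, observed as a pixel measurement in the second image.
landmark = np.array([1.0, 0.5, 10.0])
graph.add(gtsam.PriorFactorPoint3(L(0), landmark, map_noise))
graph.add(gtsam.GenericProjectionFactorCal3_S2(
    np.array([330.0, 250.0]), pix_noise, X(1), L(0), K))
initial.insert(L(0), landmark)

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(3)))
```

A real system along the lines of the thesis would add IMU preintegration factors and many such landmark observations, but the structure of the graph stays the same.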
4

Stanovení pozice objektu / Detection of object position

Baáš, Filip January 2019 (has links)
This master's thesis deals with object pose estimation using a monocular camera. An object is taken to be any rigid, shape-fixed entity with strong edges, ideally textureless. The object's position is represented by a transformation matrix describing its translation and rotation with respect to the world coordinate system. The first chapter explains the theory of geometric transformations and the intrinsic and extrinsic camera parameters. It also describes the Chamfer matching detection algorithm, which is used in this work. The second chapter describes the development tools used. The third, fourth and fifth chapters are dedicated to the practical realization of the thesis goal and the achieved results. The last chapter describes the created application, which performs pose estimation of a known object in a scene.
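The core of Chamfer matching is scoring the template's edge points against a distance transform of the scene's edge map. A minimal sketch using OpenCV follows; the thresholds and the helper itself are illustrative assumptions, not the thesis code.

```python
import cv2
import numpy as np

def chamfer_score(scene_gray, template_edge_points, offset):
    """Mean distance from the template's edge points (shifted by offset)
    to the nearest scene edge; lower means a better match."""
    edges = cv2.Canny(scene_gray, 50, 150)
    # distanceTransform measures the distance to the nearest zero pixel,
    # so invert the edge map: edge pixels become 0.
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)
    pts = (template_edge_points + np.asarray(offset)).astype(int)  # (N, 2) x,y
    h, w = dist.shape
    inside = ((pts[:, 0] >= 0) & (pts[:, 0] < w) &
              (pts[:, 1] >= 0) & (pts[:, 1] < h))
    pts = pts[inside]
    if len(pts) == 0:
        return np.inf
    return dist[pts[:, 1], pts[:, 0]].mean()
```

In a full pose-estimation pipeline, this score would be evaluated over a set of template renderings (candidate poses) and image offsets, and the pose with the minimum score kept.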
5

Fusing Visual and Inertial Information

Zachariah, Dave January 2011 (has links)
6

Geometrical and contextual scene analysis for object detection and tracking in intelligent vehicles / Analyse de scène contextuelle et géométrique pour la détection et le suivi d'objets dans les véhicules intelligents

Wang, Bihao 08 July 2015 (has links)
For autonomous or semi-autonomous intelligent vehicles, perception is the first fundamental task to be performed before decision and action/control. Through the analysis of video, Lidar and radar data, it provides a specific representation of the environment and of its state by extracting key properties from sensor data with time integration of sensor information. Compared to other perception modalities such as GPS, inertial or range sensors (Lidar, radar, ultrasonic), cameras offer the greatest amount of information. Thanks to their versatility, cameras allow intelligent systems to obtain both high-level contextual and low-level geometrical information about the observed scene, at high speed and low cost. Furthermore, the passive sensing technology of cameras enables low energy consumption and facilitates small-size system integration. The use of cameras is, however, not trivial and poses a number of theoretical issues related to how this sensor perceives its environment. In this thesis, we propose a vision-only system for moving object detection. Indeed, within the natural and constrained environments observed by an intelligent vehicle, moving objects represent high-risk collision obstacles and have to be handled robustly. We approach the problem of detecting moving objects by first extracting the local context using a color-based road segmentation. After transforming the color image into an illuminant-invariant image, shadows, as well as their negative influence on the detection process, can be removed. Then, from the features automatically selected on the road, a region of interest (ROI), where moving objects can appear with a high collision risk, is extracted. Within this area, the moving pixels are identified using a plane+parallax approach. To this end, potential moving and parallax pixels are detected using a background subtraction method; three different geometrical constraints, the epipolar constraint, the structural consistency constraint and the trifocal tensor, are then applied to these pixels to filter out the parallax ones. Likelihood equations are also introduced to combine the constraints in a complementary and effective way. When stereo vision is available, the road segmentation and on-road obstacle detection can be refined by means of the disparity map with geometrical cues. Moreover, in this case, a robust tracking algorithm combining image and depth information has been proposed. If one of the two cameras fails, the system can fall back to a monocular operation mode, which is an important feature for perception system reliability and integrity. The proposed algorithms have been tested on public image datasets with an evaluation against state-of-the-art approaches and ground-truth data. The obtained results are promising and show that the proposed methods are effective and robust in different traffic scenarios and can achieve reliable detections in ambiguous situations.
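Of the three geometrical constraints listed, the epipolar constraint is the simplest to sketch: a static-scene point tracked between two frames must lie close to the epipolar line induced by its first-frame location, while a point on a moving object generally violates this. The following Python example is illustrative only; the thresholds are assumptions.

```python
import cv2
import numpy as np

def epipolar_outlier_mask(pts1, pts2, threshold_px=1.5):
    """Flag correspondences whose second-frame point is far from its
    epipolar line (candidate moving pixels, still mixed with parallax).

    pts1, pts2: (N, 2) float arrays of corresponding pixel coordinates.
    """
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return np.zeros(len(pts1), dtype=bool)
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    lines = x1 @ F.T                      # epipolar lines in image 2
    # Point-to-line distance |ax + by + c| / sqrt(a^2 + b^2).
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.linalg.norm(lines[:, :2], axis=1)
    return num / den > threshold_px
```

As the abstract notes, points flagged this way still include parallax effects; the structural consistency constraint and the trifocal tensor are what filter those out.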
7

Hluboké neuronové sítě pro klasifikaci objektů v obraze / Deep Neural Networks for Classifying Objects in an Image

Mlynarič, Tomáš January 2018 (has links)
This paper deals with classifying objects using deep neural networks. Whole-scene segmentation is used as the main algorithm for classification; it works with video sequences and exploits information shared between two video frames. Optical flow is used to obtain this information, and the feature maps of a neural network are warped based on it. Two neural network architectures were adjusted to work with videos and experimented with. The results show that using videos for image segmentation improves accuracy (IoU) compared to the same architecture working with single images.
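The flow-based feature warping described above can be sketched as follows. This is an assumed illustration of the mechanism, not the thesis code: dense optical flow from the current frame to the previous one is used to backward-warp the previous frame's feature map into the current frame.

```python
import cv2
import numpy as np

def warp_features_with_flow(prev_features, curr_gray, prev_gray):
    """Backward-warp prev_features (H x W x C, float32) into the current
    frame. cv2.remap handles small C; high-dimensional CNN feature maps
    would be warped per channel group or with a framework's grid_sample."""
    # Flow from the current frame to the previous one, so that
    # warped(p) = prev_features(p + flow(p)) is a backward warp.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_features, map_x, map_y, cv2.INTER_LINEAR)
```

The warped feature map is then presumably combined with features computed on the current frame, so that temporal information improves the per-pixel predictions.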
8

Lokalizace mobilního robota pomocí kamery / Mobile Robot Localization Using Camera

Vaverka, Filip January 2015 (has links)
This thesis describes the design and implementation of an approach to mobile robot localization. The proposed method is based purely on images taken by a monocular camera. The described solution treats localization as an association problem and therefore falls into the category of topological localization methods. It is based on a generative probabilistic model of the environment's appearance. The proposed solution is capable of eliminating some of the difficulties common in traditional localization approaches.
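Topological localization of this kind is commonly posed as Bayesian filtering over a discrete set of places: each place's appearance model assigns a likelihood to the current image, and a transition model smooths the belief over time. The sketch below is schematic; the likelihoods stand in for the generative appearance model of the thesis.

```python
import numpy as np

def update_place_belief(belief, likelihoods, stay_prob=0.7):
    """One step of discrete Bayesian filtering over N topological places.

    belief:      (N,) current probability of being at each place
    likelihoods: (N,) p(image | place) from an appearance model
    """
    n = len(belief)
    # Simple transition model: stay put, or move to any other place uniformly.
    transition = np.full((n, n), (1.0 - stay_prob) / (n - 1))
    np.fill_diagonal(transition, stay_prob)
    predicted = transition.T @ belief      # motion update
    posterior = predicted * likelihoods    # measurement update
    return posterior / posterior.sum()

# Example: four places, and the current image matches place 2 best.
belief = np.full(4, 0.25)
belief = update_place_belief(belief, np.array([0.1, 0.2, 0.9, 0.1]))
print(belief.argmax())  # -> 2
```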
9

Fusion of Stationary Monocular and Stereo Camera Technologies for Traffic Parameters Estimation

Ali, Syed Musharaf 07 March 2017 (has links)
Modern-day intelligent transportation systems (ITS) rely on reliably and accurately estimated traffic parameters. Travel speed, traffic flow, and traffic state classification are the main parameters of interest. These parameters can be estimated through efficient vision-based algorithms and appropriate camera sensor technology. With the advances in camera technology and increasing computing power, the use of monocular vision, stereo vision, and camera sensor fusion has been an active research area in ITS. In this thesis, we investigate stationary monocular and stereo camera technology for traffic parameter estimation. Stationary camera sensors provide large spatial-temporal information about the road section at relatively low installation cost. Two novel scientific contributions for vehicle detection and recognition are proposed: the first is the use of stationary stereo camera technology, and the second is the fusion of monocular and stereo camera technologies. A vision-based ITS consists of several hardware and software components, and the overall performance of such a system depends not only on these individual modules but also on their interaction. Therefore, a systematic approach considering all essential modules was chosen instead of focusing on one element of the complete system chain. This leads to detailed investigations of several core algorithms, e.g. background subtraction, histogram-based fingerprints, and data fusion methods. From experimental results on standard datasets, we conclude that the proposed fusion-based approach, combining monocular and stereo camera technologies, performs better than either technology alone for vehicle detection and recognition. Moreover, this work has the potential to provide a low-cost vision-based solution for online traffic monitoring in urban and rural environments.
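Two of the core algorithms named above are straightforward to sketch with OpenCV: background subtraction to detect vehicles in a stationary view, and a colour-histogram "fingerprint" to re-recognise a detected vehicle. The parameters below are illustrative assumptions, not values from the thesis.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def detect_vehicles(frame, min_area=800):
    """Return bounding boxes of moving blobs seen by a stationary camera."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def fingerprint(frame, box):
    """Normalised hue-saturation histogram of a vehicle patch."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

# Fingerprints of two detections can then be compared, e.g. with
# cv2.compareHist(f1, f2, cv2.HISTCMP_BHATTACHARYYA).
```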
10

Robust Learning of a depth map for obstacle avoidance with a monocular stabilized flying camera / Apprentissage robuste d'une carte de profondeur pour l'évitement d'obstacle dans le cas des cameras volantes, monoculaires et stabilisées

Pinard, Clément 24 June 2019 (has links)
Consumer unmanned aerial vehicles (UAVs) are mainly flying cameras. They have democratized aerial footage, but with their success came safety concerns. This work aims at improving UAV safety with obstacle avoidance while keeping the flight smooth for the user. In this context, we use only one stabilized camera, because of weight and cost constraints. For their robustness in computer vision and their capacity to solve complex tasks, we chose to use convolutional neural networks (CNNs). Our strategy is based on incrementally learning tasks of increasing complexity, the first step of which is to construct a depth map from the stabilized camera. This thesis focuses on studying the ability of CNNs to train for this task. In the case of stabilized footage, the depth map is closely linked to optical flow. We thus adapt FlowNet, a CNN known for optical flow, to output depth directly from two stabilized frames. This network is called DepthNet. The experiment succeeded with synthetic footage, but the network is not robust enough to be used directly on real videos. Consequently, we consider self-supervised training on real videos, based on differentiable image reprojection. As this training method for CNNs is rather novel in the literature, a thorough study is needed in order not to depend too much on heuristics. Finally, we developed a depth fusion algorithm to use DepthNet efficiently on real videos: multiple frame pairs are fed to DepthNet to obtain a large depth-sensing range.
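For a rotation-stabilized camera translating by a known displacement, depth is inversely proportional to the optical-flow magnitude, which is why feeding DepthNet frame pairs with different temporal baselines extends its sensing range. The toy sketch below illustrates that fusion idea under an assumed pure lateral translation and pinhole model; it is a geometric simplification, not the DepthNet architecture or the thesis algorithm.

```python
import numpy as np

def depth_from_flow(flow_mag, baseline_m, focal_px, eps=1e-6):
    """Z = f * t / |flow| for a purely translating, rotation-free camera
    (flow magnitude in pixels, baseline in metres)."""
    return focal_px * baseline_m / np.maximum(flow_mag, eps)

def fuse_depths(flow_mags, baselines_m, focal_px, flow_lo=1.0, flow_hi=30.0):
    """Per pixel, keep the depth from the first frame pair whose flow
    magnitude is trustworthy (too small: noisy; too large: bad matching)."""
    fused = np.full(flow_mags[0].shape, np.nan)
    for mag, base in zip(flow_mags, baselines_m):
        depth = depth_from_flow(mag, base, focal_px)
        ok = (mag >= flow_lo) & (mag <= flow_hi) & np.isnan(fused)
        fused[ok] = depth[ok]
    return fused
```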
