71

Automated vehicle follower system based on a monocular camera / Automatiserat fordonssystem för följning baserat på en monokulär kamera

Johansson, Jacob, Schröder, Joel January 2016 (has links)
This report proposes a solution for an automated vehicle follower based on one front-facing monocular camera that can be used to achieve platooning at a lower cost than the systems available on the market today. The sensor is local to the automated follower vehicle, i.e. no Vehicle-to-Vehicle (V2V) communication is used. A state-of-the-art chapter describes different aspects of platooning, computer vision techniques, state-of-the-art hardware developed especially for autonomous driving, as well as systems closely related to the proposed solution. The theory behind the implementation, covering trajectory generation, control, image operations and vehicle models, is presented, followed by a chapter dedicated to the actual implementation. The experimental vehicle used to validate the solution was a modified 1/12-scale radio-controlled (RC) car. An Arduino controls the steering and driving motor, and a PC mounted on the vehicle uses a webcam to capture images. The preceding vehicle's position relative to the follower vehicle was calculated from the images captured by the webcam, and a trajectory towards the preceding vehicle's path was generated from a cubic curve. Measurements from a stereo vision system were used to evaluate the accuracy of the follower vehicle and the minimal spacing needed between the follower and the preceding vehicle. The follower vehicle exhibits the desired behavior of following a preceding vehicle, but its accuracy should be improved to generate a more accurate trajectory before the system is tested on a larger-scale vehicle. The solution shows that a monocular camera can be used to follow a vehicle, and with the addition of a GPS module and a fuzzy velocity controller it could be tested on a full-sized vehicle.
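The report's own trajectory generator is not given in the abstract; as a rough sketch of generating a path from a cubic curve, the snippet below fits y = a·x³ + b·x² + c·x + d between the follower (at the origin, heading along x) and the estimated position and heading of the preceding vehicle. The boundary conditions and the sample vehicle pose are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def cubic_path(target_x, target_y, target_heading, n_points=50):
    """Cubic y(x) from the follower's pose (origin, zero heading) to the target pose.

    Boundary conditions (assumed): y(0) = 0, y'(0) = 0 so the path starts aligned
    with the follower, and y(xt), y'(xt) match the preceding vehicle's pose.
    """
    A = np.array([
        [0.0,             0.0,           0.0,       1.0],   # y(0)  = d
        [0.0,             0.0,           1.0,       0.0],   # y'(0) = c
        [target_x**3,     target_x**2,   target_x,  1.0],   # y(xt)
        [3*target_x**2,   2*target_x,    1.0,       0.0],   # y'(xt)
    ])
    rhs = np.array([0.0, 0.0, target_y, np.tan(target_heading)])
    a, b, c, d = np.linalg.solve(A, rhs)
    xs = np.linspace(0.0, target_x, n_points)
    ys = a * xs**3 + b * xs**2 + c * xs + d
    return np.stack([xs, ys], axis=1)            # waypoints for the follower

# example: preceding vehicle estimated 2 m ahead, 0.5 m to the left, heading 10 degrees
waypoints = cubic_path(2.0, 0.5, np.radians(10.0))
```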
72

Monocular Depth Estimation: Datasets, Methods, and Applications

Bauer, Zuria 15 September 2021 (has links)
The World Health Organization (WHO) stated in February 2021 at the Seventy-Third World Health Assembly that, globally, at least 2.2 billion people have a near or distance vision impairment. They also noted the severe impact vision impairment has on the quality of life of the individuals suffering from this condition, how it affects their social well-being and economic independence in society, in some cases becoming an additional burden for the people in their immediate surroundings as well. In order to minimize the cost and intrusiveness of the applications and maximize the autonomy of the individual, the natural solution is to use systems that rely on computer vision algorithms. Systems that improve the quality of life of the visually impaired need to solve different problems, such as localization, path recognition, obstacle detection, environment description and navigation. Each of these topics involves an additional set of problems that have to be solved to address it. For example, for the task of object detection, there is a need for depth prediction to know the distance to the object, path recognition to know whether the user is on the road or on a pedestrian path, an alarm system to provide notifications of danger to the user, and trajectory prediction of the approaching obstacle, and those are only the main points. Taking a closer look at all of these topics, they have one key component in common: depth estimation/prediction. All of these topics need a correct estimate of the depth in the scene. In this thesis, our main focus is on addressing depth estimation in indoor and outdoor environments. Traditional depth estimation methods, like structure from motion and stereo matching, are built on feature correspondences from multiple viewpoints. Despite the effectiveness of these approaches, they need a specific type of data to perform properly. Since our main goal is to provide systems with minimal cost and intrusiveness that are also easy to handle, we decided to infer depth from single images: monocular depth estimation. Estimating the depth of a scene from a single image is a simple task for humans, but it is notoriously difficult for computational models to achieve high accuracy with low resource requirements. Monocular depth estimation is this very task of estimating depth from a single RGB image. Since only one image is needed, this approach is used in applications such as autonomous driving, scene understanding or 3D modeling where other types of information are not available. This thesis presents contributions towards solving this task using deep learning as the main tool. The four main contributions of this thesis are: first, we carry out an extensive review of the state of the art in monocular depth estimation; second, we introduce a novel large-scale, high-resolution outdoor stereo dataset able to provide enough image information to solve various common computer vision problems; third, we show a set of architectures able to predict monocular depth effectively; and finally, we propose two real-life applications of those architectures, addressing the topic of enhancing perception for the visually impaired using low-cost wearable sensors.
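The thesis's architectures are not reproduced in the abstract; as a rough illustration of what a deep-learning monocular depth estimator looks like, the sketch below is a minimal convolutional encoder-decoder in PyTorch that maps a single RGB image to a dense, positive depth map. The layer sizes, input resolution and log-depth L1 loss are assumptions for illustration, not the networks proposed in the thesis.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Minimal encoder-decoder: RGB image in, one-channel depth map out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # downsample 4x, learn features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # upsample back to input size
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Softplus(),  # depth > 0
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
rgb = torch.rand(1, 3, 128, 416)                 # dummy batch with one image
pred = model(rgb)                                # (1, 1, 128, 416) depth map
target = torch.rand_like(pred)                   # dummy ground-truth depth
loss = (pred.clamp(min=1e-6).log() - target.clamp(min=1e-6).log()).abs().mean()  # log-depth L1 (a common choice)
```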
73

Monocular Depth Estimation with Edge-Based Constraints and Active Learning

January 2019 (has links)
The ubiquity of single-camera systems in society has made improving monocular depth estimation a topic of increasing interest in the broader computer vision community. Inspired by recent work in sparse-to-dense depth estimation, this thesis focuses on sparse patterns generated from feature-detection-based algorithms as opposed to the regular grid sparse patterns used by previous work. This work focuses on using these feature-based sparse patterns to generate additional depth information by interpolating regions between clusters of samples that are in close proximity to each other. These interpolated sparse depths are used to enforce additional constraints on the network's predictions. In addition to the improved depth prediction performance observed from incorporating the sparse sample information in the network compared to pure RGB-based methods, the experiments show that actively retraining a network on a small number of samples that deviate most from the interpolated sparse depths leads to better depth prediction overall. This thesis also introduces a new metric, titled Edge, to quantify model performance in regions of an image that show the highest change in ground-truth depth values along either the x-axis or the y-axis. Existing metrics in depth estimation, like Root Mean Square Error (RMSE) and Mean Absolute Error (MAE), quantify model performance across the entire image and do not focus on specific regions of an image that are hard to predict. To this end, the proposed Edge metric focuses specifically on these hard-to-predict regions. The experiments also show that using the Edge metric as a small addition to existing loss functions, like the L1 loss in current state-of-the-art methods, leads to vastly improved performance in these hard-to-predict regions, while also improving performance across the board on every other metric. / Dissertation/Thesis / Master's Thesis, Computer Engineering, 2019
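The abstract does not give the exact formula for the Edge metric; the sketch below shows one plausible reading: mark the pixels where the ground-truth depth changes most along x or y, then report RMSE only over that mask. The percentile cut-off and the gradient-magnitude criterion are assumptions for illustration, not the thesis's definition.

```python
import numpy as np

def edge_metric(pred, gt, percentile=95):
    """RMSE restricted to pixels with the largest ground-truth depth gradients.

    pred, gt: (H, W) float arrays of predicted and ground-truth depth.
    percentile: keep only pixels whose gradient magnitude exceeds this percentile
    (an assumed way of selecting "high depth change" regions).
    """
    gy, gx = np.gradient(gt)                     # depth change along y and x
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    mask = grad_mag >= np.percentile(grad_mag, percentile)
    diff = pred[mask] - gt[mask]
    return float(np.sqrt(np.mean(diff ** 2)))    # RMSE over edge regions only

# toy usage with random depth maps
rng = np.random.default_rng(0)
gt = rng.random((240, 320)) * 10.0
pred = gt + rng.normal(scale=0.5, size=gt.shape)
print(edge_metric(pred, gt))
```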
74

Physiological Effects of Monocular Display Augmented, Articulated Arm-Based Laser Digitizing

Littell, William Neil 11 May 2013 (has links)
The process of capturing solid geometry as 3-dimensional data requires the use of laser-based reverse-engineering hardware known as a digitizer. Many digitizers are articulated coordinate measuring machines augmented with a laser, which forces the operator into many postures that are not ergonomically sound, particularly in the operator's upper body. This study analyzes the traditional method of laser digitizing using modern methods and technologies. An alternative user interface using a head-mounted monocular display is hypothesized and evaluated.
75

MonoDepth-vSLAM: A Visual EKF-SLAM using Optical Flow and Monocular Depth Estimation

Dey, Rohit 04 October 2021 (has links)
No description available.
76

Monocular Visual Odometry for Underwater Navigation: An examination of the performance of two methods / Monokulär visuell odometri för undervattensnavigation: En undersökning av två metoder

Voisin-Denoual, Maxime January 2018 (has links)
This thesis examines two methods for monocular visual odometry, FAST + KLT and ORBSLAM2, in the case of underwater environments. This is done by implementing and testing the methods on different underwater datasets. The results for FAST + KLT provide no evidence that this method is effective in underwater settings. However, the results for ORBSLAM2 indicate that good performance is possible when the system is properly tuned and provided with a good camera calibration. Still, there remain challenges related to, for example, sand-bottom environments and scale estimation in monocular setups. The conclusion is therefore that ORBSLAM2 is the more promising of the two methods tested for underwater monocular visual odometry.
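Neither method's implementation is given in the abstract; as a rough sketch of the FAST + KLT pipeline it refers to, the snippet below detects FAST corners in one frame, tracks them into the next frame with pyramidal Lucas-Kanade optical flow, and recovers the relative camera motion with OpenCV. The camera intrinsics and RANSAC settings are placeholders, not values from the thesis.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],              # placeholder camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(prev_gray, curr_gray):
    """Estimate relative camera rotation/translation between two grayscale frames."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    kps = fast.detect(prev_gray, None)
    p0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

    # Track the FAST corners into the next frame with pyramidal KLT optical flow.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

    # Essential matrix + cheirality check give R, t up to an unknown scale
    # (the monocular scale ambiguity mentioned in the thesis).
    E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    return R, t
```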
77

Implementation and Evaluation of Monocular SLAM

Martinsson, Jesper January 2022 (has links)
This thesis report aims to explain the research, implementation, and testing of a monocular SLAM system in an application developed by Voysys AB called Oden, as well as the making and investigation of a new data set used to test the SLAM system. The system uses CUDASIFT to find and match feature points, OpenCV to compute the initial guess, and the Ceres Solver to optimize the results. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
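The abstract names the pipeline but not its code; the sketch below illustrates the general shape of the last two steps under assumed inputs: an initial pose guess (e.g. from OpenCV's essential-matrix routines) refined by nonlinear least squares on reprojection error, with SciPy standing in for the Ceres Solver since the actual cost functions are not given.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def reproj_residuals(pose, pts3d, pts2d, K):
    """Pixel residuals of projecting pts3d with pose = [rvec(3), tvec(3)]."""
    rvec, tvec = pose[:3], pose[3:]
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    return (proj.reshape(-1, 2) - pts2d).ravel()

def refine_pose(pose0, pts3d, pts2d, K):
    """Refine an initial pose guess by minimizing reprojection error
    (a SciPy stand-in for the Ceres optimization step)."""
    result = least_squares(reproj_residuals, pose0, args=(pts3d, pts2d, K),
                           loss="huber", f_scale=2.0)   # robust to bad matches
    return result.x

# toy usage with synthetic points and placeholder intrinsics
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
pts3d = np.random.rand(50, 3) * np.array([4, 4, 10]) + np.array([-2, -2, 4])
true_pose = np.array([0.05, -0.02, 0.01, 0.1, 0.0, 0.2])
pts2d, _ = cv2.projectPoints(pts3d, true_pose[:3], true_pose[3:], K, None)
refined = refine_pose(np.zeros(6), pts3d, pts2d.reshape(-1, 2), K)
```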
78

The Effects of Binocular Vision Impairment on Adaptive Gait. The effects of binocular vision impairment due to monocular refractive blur on adaptive gait involving negotiation of a raised surface.

Vale, Anna January 2009 (has links)
Impairment of stereoacuity is common in the elderly population and is found to be a risk factor for falls. The purpose of these experiments was to extend knowledge regarding impairment of binocular vision and adaptive gait. This was done, first, by using a 3D motion analysis system to measure how impairment of stereopsis affected adaptive gait during negotiation of a step; second, by determining which clinical stereotest was the most reliable for measuring stereoacuity in elderly subjects; and finally, by investigating how manipulating the perceived height of a step under both binocular and monocular conditions affected negotiation of a step. In conditions of impaired stereopsis induced by acutely presented monocular blur, both young and elderly subjects adopted a safety strategy of increasing toe clearance of the step edge, even at low levels of monocular blur (+0.50 DS), and the effect was greater when the dominant eye was blurred. The same adaptation was not found for individuals with chronic monocular blur, where vertical toe clearance did not change but variability of toe clearance increased compared to full binocular correction. The findings indicate that stereopsis is important for accurately judging the height of a step, and offer support to epidemiological findings that impaired stereoacuity is a risk factor for falls. Poor agreement was found between clinical stereotests. The Frisby test was found to have the best repeatability. Finally, a visual illusion that caused a step to be perceived as taller led to increased toe elevation. This demonstrates a potential way of increasing toe clearance when stepping up and hence increasing safety on stairs. / The study data files are unavailable online.
79

Visual-Inertial SLAM Using a Monocular Camera and Detailed Map Data

Ekström, Viktor, Berglund, Ludvig January 2023 (has links)
The most commonly used localisation methods, such as GPS, rely on external signals to generate an estimate of the location. There is a need for systems that are independent of external signals in order to increase the robustness of localisation capabilities. In this thesis, a visual-inertial SLAM-based localisation system that utilises detailed map, image, IMU and odometry data is presented and evaluated. The system utilises factor graphs through the Georgia Tech Smoothing and Mapping (GTSAM) library, developed at the Georgia Institute of Technology. The thesis contributes performance evaluations for different camera and landmark settings in a localisation system based on GTSAM. Within the visual SLAM field, the thesis also contributes a sparse landmark selection and a low-image-frequency approach to the localisation problem. A variety of camera-related settings, such as image frequency and the number of visible landmarks per image, are used to evaluate the system. The findings show that the estimate improves with a higher image frequency, and also improves if the image frequency is held constant along the tracks. Having more than one landmark per image results in a significantly better estimate. The estimate is not accurate when only one distant landmark is used throughout the track, but it is significantly better if two complementary landmarks are identified briefly along the track. The system can also handle time periods where no landmarks can be identified while maintaining a good estimate.
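The abstract describes the system only at a high level; the sketch below is a minimal GTSAM factor graph in Python with a prior and odometry between-factors on 2D poses, just to show the building blocks such a system composes. The thesis's actual graph adds IMU, camera-landmark and map factors, and the pose and noise values here are made up.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Noise models: sigmas on (x, y, theta) — placeholder values, not from the thesis.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose, then chain odometry measurements between consecutive poses.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, 0.1), odom_noise))

# Initial guesses (deliberately off) that the optimizer will correct.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.1, -0.1, 0.05))
initial.insert(2, gtsam.Pose2(2.2, 0.1, -0.05))
initial.insert(3, gtsam.Pose2(4.1, 0.1, 0.15))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(3))   # smoothed estimate of the last pose
```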
80

Monocular and Binocular Visual Tracking

Salama, Gouda Ismail Mohamed 06 January 2000 (has links)
Visual tracking is one of the most important applications of computer vision. Several tracking systems have been developed which either focus mainly on the tracking of targets moving on a plane, or attempt to reduce the 3-dimensional tracking problem to the tracking of a set of characteristic points of the target. These approaches are seriously handicapped in complex visual situations, particularly those involving significant perspective, textures, repeating patterns, or occlusion. This dissertation describes a new approach to visual tracking for monocular and binocular image sequences, and for both passive and active cameras. The method combines Kalman-type prediction with steepest-descent search for correspondences, using 2-dimensional affine mappings between images. This approach differs significantly from many recent tracking systems, which emphasize the recovery of 3-dimensional motion and/or structure of objects in the scene. We argue that 2-dimensional area-based matching is sufficient in many situations of interest, and we present experimental results with real image sequences to illustrate the efficacy of this approach. Image matching between two images is a simple one-to-one mapping if there is no occlusion. In the presence of occlusion, wrong matches are inevitable. Few approaches have been developed to address this issue. This dissertation considers the effect of occlusion on tracking a moving object for both monocular and binocular image sequences. The visual tracking system described here attempts to detect occlusion based on the residual error computed by the matching method. If the residual matching error exceeds a user-defined threshold, this means that the tracked object may be occluded by another object. When occlusion is detected, tracking continues with the predicted locations based on Kalman filtering. This serves as a predictor of the target position until it reemerges from the occlusion. Although the method uses constant-image-velocity Kalman filtering, it has been shown to function reasonably well in non-constant-velocity situations. Experimental results show that tracking can be maintained during periods of substantial occlusion. The area-based approach to image matching often involves correlation-based comparisons between images, and this requires the specification of a size for the correlation windows. Accordingly, a new approach based on moment invariants was developed to select the window size adaptively. This approach is based on sudden increases or decreases in the first Maitra moment invariant. We applied a robust regression model to smooth the first Maitra moment invariant to make the method robust against noise. This dissertation also considers the effect of spatial quantization on several moment invariants. Of particular interest are the affine moment invariants, which have emerged in recent years as a useful tool for image reconstruction, image registration, and recognition of deformed objects. Traditional analysis assumes moments and moment invariants for images that are defined in the continuous domain. Quantization of the image plane is necessary, because otherwise the image cannot be processed digitally. Image acquisition by a digital system imposes spatial and intensity quantization that, in turn, introduce errors into moment and invariant computations. This dissertation also derives expressions for quantization-induced error in several important cases.
Although it considers spatial quantization only, this represents an important extension of work by other researchers. A mathematical theory for a visual tracking approach to a moving object is presented in this dissertation. This approach can track a moving object in an image sequence both when the camera is passive and when the camera is actively controlled. The algorithm used here is computationally cheap and suitable for real-time implementation. We implemented the proposed method on an active vision system, and carried out monocular and binocular tracking experiments for various kinds of objects in different environments. These experiments demonstrated very good performance using real images in fairly complicated situations. / Ph. D.
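The dissertation's own tracker is not reproduced here; the snippet below is a minimal sketch of the occlusion-handling idea the abstract describes, under assumed parameters: a constant-velocity Kalman filter predicts the target position each frame, and when the matching residual exceeds a user-defined threshold the measurement update is skipped so the tracker coasts on predictions until the target reappears.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # constant-velocity state transition
              [0, 1, 0, dt],      # state: [x, y, vx, vy]
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],       # only the position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01              # process noise (assumed)
R = np.eye(2) * 1.0               # measurement noise (assumed)
OCCLUSION_THRESHOLD = 50.0        # residual matching error cut-off (assumed units)

def track_step(x, P, measurement, residual_error):
    """One predict/update cycle; coast on the prediction while occluded."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update only if the correlation match looks trustworthy
    if residual_error < OCCLUSION_THRESHOLD:
        y = measurement - H @ x                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P
```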
