161

Radar and Thermopile Sensor Fusion for Pedestrian Detection

Rouhani, Shahin January 2005
During the last decades, great steps have been taken to decrease passenger fatalities in cars. Systems such as ABS and airbags have been developed for this purpose alone, but not much effort has been put into pedestrian safety. In traffic today, pedestrians are among the most endangered participants, and in recent years the European Enhanced Vehicle safety Committee has demanded increased pedestrian safety; the European New Car Assessment Programme has thereby developed tests in which pedestrian safety is rated. With this, detection of pedestrians has arisen as a part of automotive safety research.

This thesis surveys some of the research available in the area and gives a brief introduction to some of the readily available sensors. The objective of this work is to detect pedestrians in front of a vehicle by using thermoelectric infrared sensors fused with short-range radar sensors, while minimizing missed detections and false alarms. Extensive work has already been performed with the thermoelectric infrared sensors for this sole purpose, and this thesis builds on that work.

Information is provided about the sensors used, with an explanation of how they are set up in this work. The methods used for classifying objects are given, along with the assumptions made about pedestrians in this system. A basic tracking algorithm is used to track radar-detected objects in order to provide the fusion system with better data. The approach chosen for the sensor fusion is central-level fusion, where the probabilities for a pedestrian from the radars and the thermoelectric infrared sensors are combined using Dempster-Shafer theory and accumulated over time in the occupancy grid framework. Theories that are used extensively in this thesis are explained in detail and discussed in the corresponding chapters.

Finally, the experiments undertaken and the results attained with the presented system are shown. A comparison is made with the previous detection system, which uses only thermoelectric infrared sensors and on which this work builds. Conclusions are drawn regarding what this system is capable of, with its inherent strengths and weaknesses.
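As a rough illustration of the combination step this abstract describes, the sketch below applies Dempster's rule to two hypothetical sensor opinions over the frame {pedestrian, not-pedestrian}, with mass allowed on the ignorance set. It is a minimal sketch assuming simple scalar masses; the numbers and structure are illustrative, not code or values from the thesis.

```python
# Illustrative sketch of Dempster's rule of combination for two sensor
# opinions over the frame {pedestrian (P), not-pedestrian (N)}, with mass
# allowed on the ignorance set {P, N}. Values are hypothetical.

def dempster_combine(m1, m2):
    """Combine two mass functions given as dicts over 'P', 'N', 'PN'."""
    hyps = ['P', 'N', 'PN']
    combined = {h: 0.0 for h in hyps}
    conflict = 0.0
    for a in hyps:
        for b in hyps:
            product = m1[a] * m2[b]
            if a == b:
                combined[a] += product
            elif 'PN' in (a, b):            # intersection with ignorance
                combined[a if b == 'PN' else b] += product
            else:                            # P vs N: empty intersection
                conflict += product
    scale = 1.0 - conflict                   # normalize out the conflict
    return {h: combined[h] / scale for h in hyps}

# Radar is weakly confident, thermopile more so; combined belief sharpens.
radar = {'P': 0.4, 'N': 0.2, 'PN': 0.4}
thermopile = {'P': 0.7, 'N': 0.1, 'PN': 0.2}
print(dempster_combine(radar, thermopile))   # P mass rises to ~0.78
```

In the occupancy grid framework the abstract mentions, a combined mass of this kind would then be accumulated per grid cell over successive frames.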
162

Kalman Filter Based Fusion Of Camera And Inertial Sensor Measurements For Body State Estimation

Aslan Aydemir, Gokcen 01 September 2009
The focus of the present thesis is on the joint use of cameras and inertial sensors, an area of active research. Within our scope, the performance of body state estimation is investigated with isolated inertial sensors, with isolated cameras, and finally with a fusion of the two types of sensors within a Kalman filtering framework. The study consists of both simulation and real hardware experiments. The body state estimation problem is restricted to a single-axis rotation where we estimate turn angle and turn rate. This experimental setup provides a simple but effective means of assessing the benefits of the fusion process. Additionally, a sensitivity analysis is carried out in our simulation experiments to explore the sensitivity of the estimation performance to varying levels of calibration errors. It is shown by experiments that state estimation is more robust to calibration errors when the sensors are used jointly. For the fusion of sensors, the Indirect Kalman Filter is considered as well as the Direct Form Kalman Filter. This comparative study allows us to assess the contribution of an accurate system dynamical model to the final state estimates. Our simulation and real hardware experiments show that the fusion of the sensors eliminates the unbounded error growth characteristic of inertial sensors, while the final state estimates outperform those obtained from cameras alone. Overall, we demonstrate that Kalman-based fusion results in bounded-error, high-performance estimation of body state. The results are promising and suggest that these benefits can be extended to body state estimation for multiple degrees of freedom.
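A minimal sketch of the kind of direct-form Kalman filter the abstract compares, for the single-axis case: the gyro rate drives the prediction, a camera angle measurement bounds the drift, and the gyro bias is estimated as part of the state. The state layout and all noise levels are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Sketch (not the thesis code): direct-form Kalman filter for a single-axis
# rotation. State x = [angle, gyro_bias]; the gyro rate is the control input
# and the camera supplies absolute angle measurements.

dt = 0.01
F = np.array([[1.0, -dt],       # angle grows with (gyro_rate - bias)*dt
              [0.0, 1.0]])      # bias modeled as a slow random walk
B = np.array([[dt], [0.0]])
H = np.array([[1.0, 0.0]])      # camera observes the angle directly
Q = np.diag([1e-6, 1e-8])       # process noise (hypothetical)
R = np.array([[1e-3]])          # camera angle noise (hypothetical)

x = np.zeros((2, 1))
P = np.eye(2)

def predict(x, P, gyro_rate):
    x = F @ x + B * gyro_rate
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, cam_angle):
    y = np.array([[cam_angle]]) - H @ x     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Without the `update` step this reduces to pure inertial dead reckoning, which is exactly the unbounded-error behaviour the fusion is shown to eliminate.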
163

Triangulation Based Fusion of Sonar Data with Application in Mobile Robot Mapping and Localization

Wijk, Olle January 2001
No description available.
165

Sensor Fusion Navigation for Sounding Rocket Applications / Navigering med Sensorfusion i en Sondraket

Nilsson, Mattias, Vinkvist, Rikard January 2008
One of Saab Space's products is the S19 guidance system for sounding rockets. Today this system is based on an inertial navigation system that blindly calculates the position of the rocket by integrating sensor readings with unknown bias. The purpose of this thesis is to integrate a Global Positioning System (GPS) receiver into the guidance system to increase precision and robustness. There are mainly two problems involved in this integration. One is to integrate the GPS with sensor fusion into the existing guidance system. The second is to get the GPS satellite tracking to work under extremely high dynamics. The first of the two problems is solved by using an Extended Kalman Filter (EKF) with two different linearizations: one uses Euler angles and the other quaternions. The integration technique implemented in this thesis is a loose integration between the GPS receiver and the inertial navigation system. The main task of the EKF is to estimate the bias of the inertial navigation system sensors and correct it to eliminate drift in the position. The solution is verified by computing the position of a car using a GPS and an inertial measurement unit. Different solutions to the GPS tracking problem are proposed in a pre-study.
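As a hedged sketch of one building block behind the quaternion linearization the abstract mentions: the snippet below integrates the quaternion kinematics dq/dt = ½ Ω(ω) q from gyro rates, which a quaternion-based EKF prediction step typically rests on. The convention q = [w, x, y, z] and the first-order integration are assumptions, not details from the thesis.

```python
import numpy as np

# Sketch of discrete quaternion attitude propagation from gyro rates.
# q = [w, x, y, z]; omega = body-frame angular rates in rad/s.

def quat_propagate(q, omega, dt):
    """First-order integration of dq/dt = 0.5 * Omega(omega) * q."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)   # renormalize to keep a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])                       # identity attitude
q = quat_propagate(q, np.array([0.0, 0.0, 0.1]), 0.01)   # slow yaw rate
```

Unlike Euler angles, this parameterization has no gimbal-lock singularity, which is one common reason to prefer it under the high dynamics the abstract describes.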
166

Automatic geo-referencing by integrating camera vision and inertial measurements

Randeniya, Duminda I. B 01 June 2007
An alternative sensor system to an inertial measurement unit (IMU) is essential for intelligent land navigation systems when the vehicle travels in a GPS-deprived environment. The sensor system used to update the IMU for a reliable navigation solution has to be a passive one that does not depend on any outside signal. This dissertation presents the results of an effort where position and orientation data from vision and inertial sensors are integrated. Information from a sequence of images captured by a monocular camera attached to a survey vehicle, at a maximum frequency of 3 frames per second, was used to correct the inherent error accumulation of the inertial system installed in the same vehicle. Specifically, the rotations and translations estimated from point correspondences tracked through a sequence of images were used in the integration. For such an effort, two types of tasks need to be performed. The first task is calibration: estimating the intrinsic properties of the vision sensors (cameras), such as the focal length and lens distortion parameters, and determining the transformation between the camera and the inertial systems. Calibration of a two-sensor system under indoor conditions does not provide an appropriate and practical transformation for use in outdoor maneuvers, due to unavoidable differences between outdoor and indoor conditions. Also, the use of custom calibration objects in outdoor operational conditions is not feasible, due to the larger field of view that requires relatively large calibration objects. Hence calibration becomes one of the critical issues, particularly if the integrated system is used in Intelligent Transportation Systems applications. In order to successfully estimate the rotations and translations from the vision system, the calibration has to be performed prior to the integration process. The second task is the effective fusion of the inertial and vision sensor systems. The automated algorithm that identifies point correspondences in images enables its use in real-time autonomous driving maneuvers. In order to verify the accuracy of the established correspondences, independent constraints such as epipolar lines and correspondence flow directions were used. Also, a pre-filter was utilized to smooth out the noise associated with the vision sensor (camera) measurements. A novel approach was used to obtain the geodetic coordinates, i.e. latitude, longitude and altitude, from the normalized translations determined from the vision sensor. Finally, the position estimates based on the vision sensor were integrated with those of the inertial system in a decentralized format using a Kalman filter. The vision/inertial integrated position estimates compare successfully with those from 1) the inertial/GPS system output and 2) an actual survey performed on the same roadway. This comparison demonstrates that vision can in fact be used successfully to supplement the inertial measurements during potential GPS outages. The derived intrinsic properties and the transformation between the individual sensors are also verified during two separate test runs on an actual roadway section.
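The abstract notes that epipolar lines were among the independent constraints used to verify point correspondences. Below is a minimal sketch of such a test, assuming a fundamental matrix F is already available (e.g. from calibration or robust estimation); the pixel threshold is a made-up placeholder.

```python
import numpy as np

# Sketch of an epipolar consistency check: a correspondence (p1, p2)
# between two frames should satisfy p2^T F p1 ~ 0. F is assumed given.

def epipolar_residual(F, p1, p2):
    """Distance (pixels) from p2 to the epipolar line induced by p1."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    line = F @ x1                       # epipolar line in image 2: ax+by+c=0
    return abs(x2 @ line) / np.hypot(line[0], line[1])

def filter_matches(F, pts1, pts2, threshold_px=1.5):
    """Keep only correspondences consistent with the epipolar geometry."""
    return [(p1, p2) for p1, p2 in zip(pts1, pts2)
            if epipolar_residual(F, p1, p2) < threshold_px]
```

Rejecting correspondences that violate this geometric constraint keeps outlier tracks from corrupting the rotation and translation estimates fed into the fusion.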
167

Estimation of Local Map from Radar Data / Skattning av lokal karta från radardata

Moritz, Malte, Pettersson, Anton January 2014
Autonomous features in vehicles are already a big part of the automotive field, and many companies are now looking for ways to make vehicles fully autonomous. Autonomous vehicles need information about the surrounding environment. This information is extracted from exteroceptive sensors, and today vehicles often use laser scanners for this purpose. Laser scanners are very expensive and fragile, so it is interesting to investigate whether cheaper radar sensors could be used. One big challenge for autonomous vehicles is to use the exteroceptive sensors to extract a position for the vehicle while at the same time building a map of the environment. Simultaneous Localization and Mapping (SLAM) is a well-explored area when using laser scanners, but not as well explored when using radars. This thesis investigates whether it is possible to use radar sensors on a truck to create a map of the area where the truck drives. The truck has been equipped with ego-motion sensors and radars, and the data from them has been fused together to obtain a position for the truck and a map of the surrounding environment, i.e. a SLAM algorithm has been implemented. The map is represented by an Occupancy Grid Map (OGM), which should consist only of static objects. The OGM is updated probabilistically using a binary Bayes filter. To localize the truck with the help of motion sensors, an Extended Kalman Filter (EKF) is used together with a map and a scan-matching method. All these methods are put together to create a SLAM algorithm. A range-rate filter method is used to filter out noise and non-static measurements from the radar. The results of this thesis show that it is possible to use radar sensors to create a map of a truck's surroundings. The quality of the map is considered to be good, and details such as the space between parked trucks, signs and light posts can be distinguished. It has also been shown that methods with low performance on their own can, together with other methods, work very well in the SLAM algorithm. Overall the SLAM algorithm works well, but positioning problems might occur when driving in unexplored areas with a low number of objects. A real-time system has also been implemented, and the map can be viewed at the same time as the truck is manoeuvred.
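A minimal sketch of the binary Bayes occupancy grid update the abstract describes, in the usual log-odds form; the inverse sensor model probabilities and clamping limits are illustrative placeholders, not values from the thesis.

```python
import numpy as np

# Sketch of a binary Bayes filter update for an occupancy grid map.
# Cells store log-odds; 0 means unknown (p = 0.5).

L_OCC = np.log(0.7 / 0.3)     # increment for a cell hit by a radar return
L_FREE = np.log(0.35 / 0.65)  # decrement for a cell observed to be empty
L_MIN, L_MAX = -5.0, 5.0      # clamp so the map stays responsive to change

def update_cell(logodds, hit):
    """One binary Bayes update of a single grid cell."""
    logodds += L_OCC if hit else L_FREE
    return np.clip(logodds, L_MIN, L_MAX)

def occupancy_probability(logodds):
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))

grid = np.zeros((200, 200))                  # log-odds map, all unknown
grid[50, 80] = update_cell(grid[50, 80], hit=True)
print(occupancy_probability(grid[50, 80]))   # ~0.7 after one hit
```

Working in log-odds turns the Bayesian product of evidence into a simple addition per cell, which is what makes per-scan map updates cheap enough for the real-time system the abstract mentions.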
168

Improved detection and tracking of objects in surveillance video

Denman, Simon Paul January 2009
Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point each frame. Such systems, however, do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions, arising from occlusion and noise, that a particle filter provides. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
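As a rough sketch of the particle-filter machinery underlying trackers like the SCF described above: particles are propagated by a motion model, weighted by an appearance likelihood, and resampled. The Gaussian placeholder likelihood below stands in for whatever extracted image features a real tracker would score; all parameters are hypothetical.

```python
import numpy as np

# Generic sketch of one particle-filter (condensation) tracking step:
# propagate, weight by a likelihood, estimate, then resample.

rng = np.random.default_rng(0)
N = 500
particles = rng.normal([320.0, 240.0], 20.0, size=(N, 2))  # (x, y) in px

def likelihood(p, observation):
    """Placeholder appearance likelihood, e.g. colour-histogram similarity."""
    return np.exp(-0.5 * np.sum((p - observation) ** 2) / 15.0 ** 2)

def pf_step(particles, observation):
    particles = particles + rng.normal(0.0, 5.0, particles.shape)  # motion
    weights = np.array([likelihood(p, observation) for p in particles])
    weights /= weights.sum()
    estimate = weights @ particles          # weighted-mean state estimate
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate         # multinomial resampling

particles, estimate = pf_step(particles, observation=np.array([330.0, 238.0]))
```

Because each particle is scored independently, such filters degrade gracefully under partial occlusion and noise, which is the benefit the dissertation exploits within its tracking system.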
169

Markerless augmented reality on ubiquitous mobile devices with integrated sensors

Van Wyk, Carel 2011
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2011. The computational power of mobile smart-phone devices is ever increasing, and high-end phones become more popular amongst consumers every day. The technical specifications of a high-end smart-phone today rival those of a home computer system of only a few years ago. Powerful processors, combined with cameras and ease of development, encourage an increasing number of Augmented Reality (AR) researchers to adopt mobile smart-phones as an AR platform. Implementation of marker-based Augmented Reality systems on mobile phones is mostly a solved problem. Markerless systems still offer challenges due to increased processing requirements. Some researchers adopt purely computer-vision-based markerless tracking methods to estimate camera pose on mobile devices. In this thesis we propose the use of a hybrid system that employs both computer vision and the integrated sensors present in most new smartphones to facilitate pose estimation. We estimate three of the six degrees of freedom of pose using integrated sensors and estimate the remaining three using feature tracking. A proof-of-concept hybrid system is implemented as part of this thesis.
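One concrete instance of the "three degrees of freedom from integrated sensors" idea in this abstract: with the phone roughly static, the accelerometer's view of gravity yields roll and pitch directly (yaw additionally needs a magnetometer or gyro). The axis conventions below are assumptions for illustration, not taken from the thesis.

```python
import math

# Sketch: roll and pitch from a single accelerometer reading, assuming the
# device is near-static so the measured acceleration is dominated by gravity.

def roll_pitch_from_accel(ax, ay, az):
    """Roll and pitch (radians) from an accelerometer reading in m/s^2."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Device tilted: gravity split between the y and z axes.
print(roll_pitch_from_accel(0.0, 4.9, 8.5))   # roll ~ 0.52 rad (~30 deg)
```

Fixing these rotational degrees of freedom from sensors leaves the feature tracker with a smaller estimation problem, which is what makes the hybrid approach attractive on constrained mobile hardware.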
170

Tracking of Ground Vehicles : Evaluation of Tracking Performance Using Different Sensors and Filtering Techniques

Homelius, Marcus January 2018
It is crucial to find a good balance between positioning accuracy and cost when developing navigation systems for ground vehicles. In open sky, or even in a semi-urban environment, a single global navigation satellite system (GNSS) constellation performs sufficiently well. However, positioning accuracy decreases drastically in urban environments. Because of the limited tracking performance of standalone GNSS, particularly in cities, many solutions are now moving toward integrated systems that combine complementary sensors. In this master thesis, the improvement in tracking performance for a low-cost ground vehicle navigation system is evaluated when complementary sensors are added and different filtering techniques are used. The thesis explains how a GNSS-aided inertial navigation system (INS) is used to track ground vehicles. This has proven to be a very effective way of tracking a vehicle through GNSS outages. Measurements from an accelerometer and a gyroscope are used as inputs to the inertial navigation equations. GNSS measurements are then used to correct the tracking solution and to estimate the biases in the inertial sensors. When velocity constraints on the vehicle's motion along the y- and z-axes are included, the GNSS-aided INS has shown very good performance, even during long GNSS outages. Two versions of the Rauch-Tung-Striebel (RTS) smoother and a particle filter (PF) version of the GNSS-aided INS have also been implemented and evaluated. The PF has proven computationally demanding in comparison with the other approaches, and a real-time implementation on the considered embedded system is not feasible. The RTS smoother has been shown to give a smoother trajectory, but a lot of extra information needs to be stored and the position accuracy is not significantly improved. Moreover, map matching has been combined with GNSS measurements and estimates from the GNSS-aided INS. The Viterbi algorithm is used to output the road segment identification numbers of the most likely path, and the estimates are then matched to the closest positions on these roads. A suggested solution for acquiring reliable tracking with high accuracy in all environments is to run the GNSS-aided INS in real time in the vehicle and simultaneously send the horizontal position coordinates to a back office where map information is kept and map matching is performed.
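As a hedged sketch of the velocity-constraint idea the abstract credits with good performance during outages: the assumption that a ground vehicle neither slips sideways nor leaves the road surface can be applied as pseudo-measurements of zero lateral (y) and vertical (z) body-frame velocity in an ordinary Kalman update. The state layout and noise values below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Sketch: non-holonomic velocity constraints as a Kalman pseudo-measurement.
# x: state vector with navigation-frame velocity stored in x[3:6];
# P: state covariance; C_nb: rotation from navigation to body frame.

def constraint_update(x, P, C_nb, R=np.diag([0.1, 0.1])):
    """Apply v_body_y = v_body_z = 0 as a measurement update."""
    H = np.zeros((2, len(x)))
    H[:, 3:6] = C_nb[1:3, :]           # rows mapping v_nav to body y, z
    z = np.zeros(2)                    # pseudo-measurement: zero velocity
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Because these constraints are available at every time step regardless of satellite visibility, they keep the INS error growth in check during exactly the long GNSS outages the abstract describes.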
