161

A network based algorithm for aided navigation / En nätverksbaserad algoritm för navigeringsunderstöd

Magnusson, Daniel January 2012 (has links)
This thesis concerns the development of a navigation algorithm, primarily for the fighter aircraft SAAB JAS 39 Gripen operating in swarms of other units. The algorithm uses information from conventional navigation systems together with aiding information from a radio data link in the form of relative range measurements. Since GPS can be jammed, this group-tracking solution can improve navigation performance under such conditions. For simplicity, the simulations use simplified characteristics, with simple generated trajectories and measurements. The measurement information is fused using filter theory from the sensor fusion area together with statistical approaches. By using the radio data link and external information sources, i.e. other aircraft and different types of landmarks that often have good navigation performance, navigation is aided when GPS is unusable, e.g. under hostile GPS conditions. A number of operationally realistic scenarios were simulated to verify and study these conditions and to draw conclusions from the results. / © Daniel Magnusson.
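
The aiding step described above can be pictured as a single EKF measurement update with a data-link range. The sketch below is illustrative only, not the thesis algorithm; the 2-D state, geometry and noise values are assumptions:

```python
# Hypothetical sketch (not the thesis code): one EKF measurement update using a
# relative range to a cooperating unit as aiding information when GPS is jammed.
import numpy as np

def range_update(x, P, other_pos, r_meas, r_var):
    """Update own 2-D position estimate x (covariance P) from a measured
    range r_meas to a cooperating unit at known position other_pos."""
    d = x - other_pos
    r_pred = np.linalg.norm(d)          # predicted range
    H = (d / r_pred).reshape(1, 2)      # Jacobian of ||x - other_pos|| w.r.t. x
    S = H @ P @ H.T + r_var             # innovation covariance (scalar here)
    K = P @ H.T / S                     # Kalman gain
    x_new = x + (K * (r_meas - r_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# Example: a drifting dead-reckoned position corrected by one data-link range.
x = np.array([1000.0, 2000.0]); P = np.diag([400.0, 400.0])
x, P = range_update(x, P, other_pos=np.array([1500.0, 2600.0]),
                    r_meas=790.0, r_var=25.0)
print(x, np.diag(P))                    # covariance shrinks along the range axis
```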
162

Multiple Platform Bias Error Estimation / Estimering av Biasfel med Multipla Plattformar

Wiklund, Åsa January 2004 (has links)
Sensor fusion has long been recognized as a means to improve target tracking. Sensor fusion deals with merging several signals into one to get a better and more reliable result. To get an improved and more reliable result, the incoming data must be correct and free of unknown systematic errors. This thesis tries to find and estimate the size of the systematic errors that appear in a multi-platform environment where data is shared among the units. To be more precise, the error estimated within the scope of this thesis arises when platforms cannot determine their positions correctly and share target tracking data with their own corrupted position as the basis for determining the target's position. The algorithms developed in this thesis use Kalman filter theory, including the extended Kalman filter and the information filter, to estimate the platform location bias error. Three algorithms are developed, with satisfactory results. Depending on time constraints and computational demands, any one of the algorithms could be preferred.
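
A minimal way to picture the bias estimation is to augment a linear Kalman state with the platform offset. The sketch below is illustrative, not one of the thesis algorithms; the static target, the measurement mix and all noise levels are assumptions:

```python
# Illustrative sketch: a remote platform's shared track is offset by that
# platform's unknown location bias, so the state is augmented with the bias
# and both target position and bias are estimated jointly.
import numpy as np

H_own = np.hstack([np.eye(2), np.zeros((2, 2))])  # own track sees target only
H_rem = np.hstack([np.eye(2), np.eye(2)])         # remote track: target + bias
R = 25.0 * np.eye(2)                              # track report noise

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

x = np.zeros(4); P = np.diag([1e4] * 4)           # state: [target_xy, bias_xy]
true_target, true_bias = np.array([300.0, 400.0]), np.array([40.0, -25.0])
rng = np.random.default_rng(1)
for _ in range(50):  # alternate own and shared observations of a static target
    x, P = kf_update(x, P, true_target + rng.normal(0, 5, 2), H_own, R)
    x, P = kf_update(x, P, true_target + true_bias + rng.normal(0, 5, 2), H_rem, R)
print("estimated platform bias:", x[2:])          # approaches (40, -25)
```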
163

Radar and Thermopile Sensor Fusion for Pedestrian Detection

Rouhani, Shahin January 2005 (has links)
During the last decades, great steps have been taken to decrease passenger fatality in cars. Systems such as ABS and airbags have been developed for this purpose alone, but not much effort has been put into pedestrian safety. In traffic today, pedestrians are among the most endangered participants. In recent years there has been an increased demand for pedestrian safety from the European Enhanced Vehicle safety Committee, and the European New Car Assessment Programme has thereby developed tests where pedestrian safety is rated. With this, detection of pedestrians has arisen as a part of automotive safety research. This thesis surveys some of the research available in the area and gives a brief introduction to some of the sensors readily available. The objective of this work is to detect pedestrians in front of a vehicle by using thermoelectric infrared sensors fused with short-range radar sensors, and to minimize missed detections and false alarms. There has already been extensive work performed with the thermoelectric infrared sensors for this sole purpose, and this thesis builds on that work. Information is provided about the sensors used and how they are set up in this work, along with the methods used for classifying objects and the assumptions made about pedestrians in this system. A basic tracking algorithm is used to track radar-detected objects in order to provide the fusion system with better data. The approach chosen for the sensor fusion is central-level fusion, where the probabilities for a pedestrian from the radars and the thermoelectric infrared sensors are combined using Dempster-Shafer theory and accumulated over time in the occupancy grid framework. Theories that are extensively used in this thesis are explained in detail and discussed in the corresponding chapters. Finally, the experiments undertaken and the results attained from the presented system are shown. A comparison is made with the previous detection system, which uses only thermoelectric infrared sensors and which this work continues. Conclusions are drawn regarding what this system is capable of, with its inherent strengths and weaknesses.
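
The Dempster-Shafer combination step can be sketched generically. The code below implements Dempster's rule over a two-hypothesis frame; all mass values are invented for illustration, and the thesis' actual feature-to-mass mapping is not reproduced here:

```python
# Generic Dempster's rule of combination over the frame {pedestrian, other},
# with mass on the full frame 'theta' representing ignorance.
def ds_combine(m1, m2):
    """Combine two mass functions given as dicts over 'ped', 'other', 'theta'."""
    keys = ('ped', 'other', 'theta')
    raw = {k: 0.0 for k in keys}
    conflict = 0.0
    for a in keys:
        for b in keys:
            mass = m1[a] * m2[b]
            if a == b:
                raw[a] += mass
            elif 'theta' in (a, b):          # theta intersected with X is X
                raw[b if a == 'theta' else a] += mass
            else:                            # ped and other are disjoint
                conflict += mass
    return {k: v / (1.0 - conflict) for k, v in raw.items()}

radar      = {'ped': 0.30, 'other': 0.20, 'theta': 0.50}  # weak, moving object
thermopile = {'ped': 0.60, 'other': 0.10, 'theta': 0.30}  # warm, person-sized
print(ds_combine(radar, thermopile))  # belief in 'ped' exceeds either sensor's
```

Accumulating such combined evidence cell by cell over time is what the occupancy grid framework mentioned above then provides.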
164

Kalman Filter Based Fusion Of Camera And Inertial Sensor Measurements For Body State Estimation

Aslan Aydemir, Gokcen 01 September 2009 (has links) (PDF)
The focus of the present thesis is the joint use of cameras and inertial sensors, a recent area of active research. Within our scope, the performance of body state estimation is investigated with isolated inertial sensors, with isolated cameras, and finally with a fusion of the two types of sensors within a Kalman filtering framework. The study consists of both simulation and real hardware experiments. The body state estimation problem is restricted to a single-axis rotation where we estimate turn angle and turn rate. This experimental setup provides a simple but effective means of assessing the benefits of the fusion process. Additionally, a sensitivity analysis is carried out in our simulation experiments to explore the sensitivity of the estimation performance to varying levels of calibration errors. It is shown by experiments that state estimation is more robust to calibration errors when the sensors are used jointly. For the fusion of sensors, the indirect Kalman filter is considered as well as the direct-form Kalman filter. This comparative study allows us to assess the contribution of an accurate system dynamical model to the final state estimates. Our simulation and real hardware experiments effectively show that the fusion of the sensors eliminates the unbounded error growth characteristic of inertial sensors, while the final state estimation outperforms the use of cameras alone. Overall, we demonstrate that Kalman-based fusion results in bounded-error, high-performance estimation of body state. The results are promising and suggest that these benefits can be extended to body state estimation for multiple degrees of freedom.
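
A direct-form Kalman filter for the single-axis setup might look like the following sketch. It is an assumption-laden illustration (the gyro bias, rates and noise levels are made up), not the thesis code, but it shows how low-rate camera updates bound the drift of integrated gyro readings:

```python
# Direct-form Kalman filter for one rotation axis: state [angle, rate],
# predicted with the gyro and corrected by sparse camera angle measurements.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-rate motion model
Q = np.diag([1e-6, 1e-4])                  # process noise (assumed)
H_cam = np.array([[1.0, 0.0]])             # camera observes the angle
R_cam = np.array([[1e-3]])

x = np.zeros(2); P = np.eye(2)
rng = np.random.default_rng(0)
true_angle, true_rate = 0.0, 0.5           # rad, rad/s

for k in range(1000):
    true_angle += true_rate * dt
    gyro = true_rate + 0.05 + rng.normal(0, 0.02)   # biased, noisy gyro
    x = np.array([x[0] + gyro * dt, gyro])          # predict using the gyro
    P = F @ P @ F.T + Q
    if k % 33 == 0:                        # camera at a fraction of gyro rate
        z = true_angle + rng.normal(0, 0.03)
        S = H_cam @ P @ H_cam.T + R_cam
        K = P @ H_cam.T @ np.linalg.inv(S)
        x = x + (K @ (z - H_cam @ x)).ravel()
        P = (np.eye(2) - K @ H_cam) @ P

print("angle error:", x[0] - true_angle)   # bounded, unlike pure integration
```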
165

Triangulation Based Fusion of Sonar Data with Application in Mobile Robot Mapping and Localization

Wijk, Olle January 2001 (has links)
No description available.
167

Sensor Fusion Navigation for Sounding Rocket Applications / Navigering med Sensorfusion i en Sondraket

Nilsson, Mattias, Vinkvist, Rikard January 2008 (has links)
One of Saab Space's products is the S19 guidance system for sounding rockets. Today this system is based on an inertial navigation system that blindly calculates the position of the rocket by integrating sensor readings with unknown bias. The purpose of this thesis is to integrate a Global Positioning System (GPS) receiver into the guidance system to increase precision and robustness. There are mainly two problems involved in this integration. One is to integrate the GPS into the existing guidance system using sensor fusion. The second is to get the GPS satellite tracking to work under extremely high dynamics. The first of the two problems is solved by using an Extended Kalman filter (EKF) with two different linearizations: one uses Euler angles and the other quaternions. The integration technique implemented in this thesis is a loose integration between the GPS receiver and the inertial navigation system. The main task of the EKF is to estimate the bias of the inertial navigation system sensors and correct it to eliminate drift in the position. The solution is verified by computing the position of a car using a GPS and an inertial measurement unit. Different solutions to the GPS tracking problem are proposed in a pre-study.
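
The loose GPS/INS integration can be illustrated with a one-dimensional toy version. Everything below (state layout, rates, noise levels) is assumed for the sketch; it is not the S19 implementation:

```python
# 1-D loose GPS/INS integration sketch: the EKF state carries position,
# velocity and accelerometer bias; the INS propagates with raw accelerometer
# readings, and sparse GPS fixes let the filter estimate and remove the bias.
import numpy as np

dt = 0.01
F = np.array([[1, dt, -0.5 * dt * dt],
              [0, 1, -dt],
              [0, 0, 1]], dtype=float)     # bias couples into the mechanization
Q = np.diag([1e-6, 1e-5, 1e-8])
H = np.array([[1.0, 0.0, 0.0]])            # GPS observes position
R = np.array([[4.0]])

x = np.zeros(3); P = np.diag([10.0, 1.0, 1.0])
pos = vel = 0.0; bias = 0.3                # true accelerometer bias (assumed)
rng = np.random.default_rng(2)

for k in range(2000):
    acc_true = np.sin(0.01 * k)
    acc_meas = acc_true + bias + rng.normal(0, 0.05)
    pos += vel * dt + 0.5 * acc_true * dt * dt
    vel += acc_true * dt
    a = acc_meas - x[2]                    # mechanize with estimated bias removed
    x = np.array([x[0] + x[1] * dt + 0.5 * a * dt * dt, x[1] + a * dt, x[2]])
    P = F @ P @ F.T + Q
    if k % 100 == 0:                       # GPS fix at 1 Hz
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (pos + rng.normal(0, 2.0) - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P

print("bias estimate:", x[2])              # approaches the true 0.3
```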
168

Automatic geo-referencing by integrating camera vision and inertial measurements

Randeniya, Duminda I. B 01 June 2007 (has links)
An alternative sensor system to the inertial measurement unit (IMU) is essential for intelligent land navigation systems when the vehicle travels in a GPS-deprived environment. The sensor system used to update the IMU for a reliable navigation solution has to be a passive one that does not depend on any outside signal. This dissertation presents the results of an effort to integrate position and orientation data from vision and inertial sensors. Information from a sequence of images captured by a monocular camera attached to a survey vehicle, at a maximum frequency of 3 frames per second, was used to update the inertial system installed in the same vehicle and correct for its inherent error accumulation. Specifically, the rotations and translations estimated from point correspondences tracked through a sequence of images were used in the integration. For such an effort, two types of tasks need to be performed. The first task is calibration: estimating the intrinsic properties of the vision sensors (cameras), such as the focal length and lens distortion parameters, and determining the transformation between the camera and the inertial systems. Calibration of a two-sensor system under indoor conditions does not provide an appropriate and practical transformation for use in outdoor maneuvers due to invariable differences between outdoor and indoor conditions. Also, the use of custom calibration objects in outdoor operational conditions is not feasible due to the larger field of view, which requires relatively large calibration objects. Hence calibration becomes one of the critical issues, particularly if the integrated system is used in Intelligent Transportation Systems applications. In order to successfully estimate the rotations and translations from the vision system, the calibration has to be performed prior to the integration process. The second task is the effective fusion of the inertial and vision sensor systems. The automated algorithm that identifies point correspondences in images enables its use in real-time autonomous driving maneuvers. In order to verify the accuracy of the established correspondences, independent constraints such as epipolar lines and correspondence flow directions were used. Also, a pre-filter was utilized to smooth out the noise associated with the vision sensor (camera) measurements. A novel approach was used to obtain the geodetic coordinates, i.e. latitude, longitude and altitude, from the normalized translations determined from the vision sensor. Finally, the position estimates based on the vision sensor were integrated with those of the inertial system in a decentralized format using a Kalman filter. The vision/inertial integrated position estimates compare successfully with those from 1) the inertial/GPS system output and 2) an actual survey performed on the same roadway. This comparison demonstrates that vision can in fact be used successfully to supplement inertial measurements during potential GPS outages. The derived intrinsic properties and the transformation between individual sensors are also verified during two separate test runs on an actual roadway section.
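
The epipolar-line check used to verify correspondences can be sketched generically. The fundamental matrix, points and threshold below are toy values, not the dissertation's calibration results:

```python
# Verifying a tracked point correspondence against the epipolar constraint
# x2^T F x1 = 0 by measuring the distance from x2 to the epipolar line F x1.
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance in pixels from image-2 point x2 to the epipolar line of x1.
    x1, x2 are homogeneous 3-vectors; F is the 3x3 fundamental matrix."""
    line = F @ x1                           # epipolar line in image 2: ax+by+c=0
    return abs(line @ x2) / np.hypot(line[0], line[1])

def keep_correspondence(F, x1, x2, tol_px=1.5):
    return epipolar_distance(F, x1, x2) < tol_px

# Toy geometry: identity intrinsics and pure sideways translation t = (1,0,0),
# so F reduces to the skew-symmetric matrix [t]_x.
t = np.array([1.0, 0.0, 0.0])
F = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
x1 = np.array([0.2, 0.1, 1.0])
x2 = np.array([0.5, 0.1, 1.0])              # same row: consistent with the motion
print(keep_correspondence(F, x1, x2))       # True
```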
169

Estimation of Local Map from Radar Data / Skattning av lokal karta från radardata

Moritz, Malte, Pettersson, Anton January 2014 (has links)
Autonomous features in vehicles are already a big part of the automotive field, and many companies are now looking for ways to make vehicles fully autonomous. Autonomous vehicles need information about the surrounding environment. This information is extracted from exteroceptive sensors, and today vehicles often use laser scanners for this purpose. Laser scanners are very expensive and fragile; it is therefore interesting to investigate whether cheaper radar sensors could be used. One big challenge for autonomous vehicles is to use the exteroceptive sensors to extract a position of the vehicle while simultaneously building a map of the environment. Simultaneous Localization and Mapping (SLAM) is a well-explored area for laser scanners but not as well explored for radar. This thesis investigates whether it is possible to use radar sensors on a truck to create a map of the area where the truck drives. The truck was equipped with ego-motion sensors and radars, and the data from them was fused together to obtain a position of the truck and a map of the surrounding environment, i.e. a SLAM algorithm was implemented. The map is represented by an occupancy grid map (OGM) which should contain only static objects. The OGM is updated probabilistically using a binary Bayes filter. To localize the truck with the help of motion sensors, an extended Kalman filter (EKF) is used together with a map and a scan-matching method. All these methods are put together to create a SLAM algorithm. A range-rate filter method is used to filter out noise and non-static measurements from the radar. The results of this thesis show that it is possible to use radar sensors to create a map of a truck's surroundings. The quality of the map is considered good, and details such as the space between parked trucks, signs and light posts can be distinguished. It has also been shown that methods with low performance on their own can work very well together with other methods in the SLAM algorithm. Overall the SLAM algorithm works well, but positioning problems might occur when driving in unexplored areas with few objects. A real-time system has also been implemented, and the map can be viewed while the truck is manoeuvred.
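
The binary Bayes update of the occupancy grid is standard and compact in log-odds form. The sketch below is a generic illustration with assumed evidence weights, not the thesis implementation:

```python
# Binary Bayes occupancy-grid cell update in log-odds form: each radar return
# adds occupied or free evidence; clamping keeps cells from saturating so the
# map can still adapt if the world changes.
import numpy as np

L_OCC, L_FREE, L_MIN, L_MAX = 0.85, -0.4, -4.0, 4.0   # assumed weights

def update_cell(l, hit):
    """Add occupied/free evidence to a cell's log-odds and clamp it."""
    return float(np.clip(l + (L_OCC if hit else L_FREE), L_MIN, L_MAX))

def probability(l):
    return 1.0 - 1.0 / (1.0 + np.exp(l))   # log-odds back to P(occupied)

l = 0.0                                     # prior P(occupied) = 0.5
for hit in [True, True, False, True]:       # radar returns for one cell
    l = update_cell(l, hit)
print(probability(l))                       # evidence accumulated over scans
```

Working in log-odds turns the multiplicative Bayes update into a cheap addition per cell, which is why it is the usual choice for grid maps.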
170

Improved detection and tracking of objects in surveillance video

Denman, Simon Paul January 2009 (has links)
Surveillance networks are typically monitored by a few people viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point each frame. Such systems however do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and maintenance of identity is handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
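
The Condensation-style predict/weight/resample cycle underlying the SCF can be sketched generically; the motion model, the placeholder likelihood and all values below are assumptions for illustration, not the dissertation's filter:

```python
# Bootstrap (Condensation-style) particle filter for a single tracked object:
# predict particles with a motion model, weight them by a feature likelihood,
# then resample in proportion to the weights.
import numpy as np

rng = np.random.default_rng(3)
N = 500
particles = rng.normal(0.0, 5.0, N)        # spread around an initial detection

def likelihood(p, z, sigma=2.0):
    """Placeholder feature likelihood: a Gaussian around the detection z.
    A real tracker would score appearance features such as colour or shape."""
    return np.exp(-0.5 * ((p - z) / sigma) ** 2)

true_pos = 0.0
for t in range(50):
    true_pos += 1.0                            # object drifts right
    z = true_pos + rng.normal(0, 2.0)          # noisy detection
    particles += 1.0 + rng.normal(0, 1.0, N)   # predict with the motion model
    w = likelihood(particles, z)
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                # multinomial resampling
    particles = particles[idx]

print("estimate:", particles.mean(), "truth:", true_pos)
```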
