  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.
21

UKF and EKF with time dependent measurement and model uncertainties for state estimation in heavy duty diesel engines

Berggren, Henrik, Melin, Martin January 2011 (has links)
This thesis addresses the continuous challenge of decreasing emissions, sensor costs and fuel consumption in diesel engines. To reach higher goals in engine efficiency and environmental sustainability, the prediction of engine states is essential because of their importance in engine control and diagnosis. Model output is improved with the help of sensors, advanced mathematics and nonlinear Kalman filtering. The task consists of constructing nonlinear Kalman filters and adaptively weighting measurements against model output to increase estimation accuracy. This thesis shows how to improve estimates by nonlinear Kalman filtering and how to obtain additional information that can be used to achieve better accuracy when a sensor fails, or to replace existing sensors. The best-performing Kalman filter shows a 75 % decrease in root mean square error compared with the model output.
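The adaptive weighting of measurements against model output described in the abstract can be illustrated with a minimal scalar Kalman filter in which the measurement variance R_t changes at every time step. This is a generic sketch, not the filter from the thesis; the model (a random walk) and all numbers are invented for illustration.

```python
def kalman_step(x, P, z, Q, R_t):
    """One predict/update cycle of a scalar Kalman filter with a
    random-walk model (F = H = 1) and a time-dependent measurement
    variance R_t."""
    # Predict: state is assumed constant, uncertainty grows by Q.
    x_pred = x
    P_pred = P + Q
    # Update: the Kalman gain shrinks as R_t grows, so unreliable
    # measurements are automatically down-weighted.
    K = P_pred / (P_pred + R_t)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Example: trust the sensor less (larger R_t) in the second step.
x, P = 0.0, 1.0
x, P = kalman_step(x, P, z=1.0, Q=0.01, R_t=0.1)   # reliable reading
x, P = kalman_step(x, P, z=5.0, Q=0.01, R_t=10.0)  # unreliable reading
```

Because the second measurement carries a large variance, the gain is small and the outlying reading of 5.0 barely moves the estimate away from the first, trusted reading.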
22

Nonlinear and distributed sensory estimation

Sugathevan, Suranthiran 29 August 2005 (has links)
Methods to improve the performance of sensors with regard to sensor nonlinearity, sensor noise and sensor bandwidth are investigated, and new algorithms are developed. The necessity of the proposed research has evolved from the ever-increasing need for greater precision and improved reliability in sensor measurements. After describing the current state of the art of sensor-related issues such as nonlinearity and bandwidth, research goals are set to establish a new approach to the usage of sensors. We begin the investigation with a detailed distortion analysis of nonlinear sensors. The need for efficient distortion compensation procedures is further justified by showing how a slight deviation from the linearity assumption leads to very severe distortion in the time and frequency domains. It is argued that, with a suitable distortion compensation technique, the danger of infinite-bandwidth nonlinear sensory operation, which is dictated by nonlinear distortion, can be avoided. Several distortion compensation techniques are developed, and their performance is validated by simulation and experimental results. As with any other model-based technique, modeling errors and model uncertainty affect the performance of the proposed scheme; this motivates robust signal reconstruction. A treatment for this problem is given, and a novel technique is developed that uses a nominal model instead of an accurate model and produces results that are robust to model uncertainty. The means to attain a high operating bandwidth are developed by utilizing several low-bandwidth pass-band sensors. It is pointed out that instead of using a single sensor to measure a high-bandwidth signal, there are many advantages to using an array of several pass-band sensors. Having shown that the employment of sensor arrays is economical and practical, several multi-sensor fusion schemes are developed to facilitate their implementation.
Another aspect of this dissertation is to develop means of dealing with outliers in sensor measurements. As the detection of faulty sensor data is an essential element of multi-sensor network implementation, used to improve system reliability and robustness, several sensor scheduling configurations are derived to identify and remove outliers.
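One common way to identify and remove outliers across redundant sensors, shown here as a generic illustration rather than the dissertation's actual scheduling scheme, is a robust z-score test based on the median absolute deviation (MAD); the readings below are invented.

```python
import numpy as np

def reject_outliers(readings, k=3.5):
    """Flag readings whose robust z-score (based on the median and
    the median absolute deviation) exceeds k, then fuse the
    remaining readings with a plain mean."""
    r = np.asarray(readings, dtype=float)
    med = np.median(r)
    mad = np.median(np.abs(r - med)) or 1e-12  # guard against MAD == 0
    # 1.4826 scales the MAD to match the standard deviation for
    # Gaussian noise.
    keep = np.abs(r - med) / (1.4826 * mad) < k
    return r[keep].mean(), keep

# Five sensors observe the same quantity; one has failed.
fused, mask = reject_outliers([10.1, 9.9, 10.0, 10.2, 55.0])
```

Unlike a mean-and-standard-deviation test, the median-based statistic is not itself dragged toward the faulty reading, so a single failed sensor cannot mask its own rejection.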
23

Road Shape Estimation based on On-board Sensors and Map Data

Foborg, Felix January 2014 (has links)
The ability to acquire accurate information about the surrounding road environment is crucial for autonomous driving and advanced driver assistance systems. A method to estimate the shape of the road has been developed and evaluated. The estimate is based on fusion of data from a road marking detector, a radar tracker, map data, GPS, and inertial sensors. The method is intended for highway use, and the focus has been on increasing the availability of a sufficiently accurate road shape estimate in the event of sensor failures. To make use of past sensor measurements, an extended Kalman filter has been used together with dynamical models for the road and the ego vehicle. Results from a performance evaluation show that the road shape estimate clearly benefits from being based on a fusion of sensor data. The different sensors have also proven to be of varying importance to the different parameters that describe the road shape. / Vehicles that can drive autonomously, that is, without a driver, are a goal for the vehicle industry and a dream for many car owners. They would allow drivers to spend their time on other things and reduce personnel costs for transport companies. Safety on our roads could also be improved, since such a system can react faster than any human and does not suffer from fatigue or get distracted by other passengers. The ability to acquire and interpret information about the surrounding traffic situation is essential for developing autonomous vehicles, and it is also needed for more advanced modern safety systems, such as collision warning systems. An important part of this is being able to perceive the shape of the road. The goal of this thesis is to develop an algorithm that estimates the shape of the road based on a number of sensors mounted on a vehicle and information from a map database. 
The main emphasis has been on ensuring that the algorithm can always deliver a sufficiently good estimate, even in periods when sensor measurements are unavailable because sensors have failed. The intended setting is highway driving, mainly because it brings considerable simplifications compared with other types of roads. The main difficulty for such algorithms often lies in the fact that sensors suffer from various drawbacks. They measure only one specific thing, can have large measurement errors, are sensitive to different conditions, and have limited range. To exploit the sensors' different strengths and mitigate the effects of their weaknesses, several sensors have been used together. The thesis work was carried out at Scania and tested on their trucks. The types of sensors used are already, or are well on the way to becoming, standard equipment in their trucks and in many other modern vehicles. The algorithm uses measurements from a road marking detector, which provides the shape of the two nearest lane lines; a radar, which gives the position and motion of the vehicles ahead; a map database, which together with a GPS provides previously measured curvature at the vehicle's position; and internal sensors that measure the vehicle's own motion. To keep providing an estimate when measurements are unavailable, and to make the algorithm more robust to bad data, a method that exploits the information in earlier measurements has been used: an extended Kalman filter. This method requires a mathematical description of how the shape of the road ahead of the vehicle is expected to change over time, based on how the vehicle moves. The different types of sensor measurements are combined in the method and weighted according to how reliable the sensors are considered to be. The algorithm has been evaluated on measurements from public highways outside Södertälje. 
The results of this evaluation show that it is very advantageous to combine several different types of sensors in order to deliver a good estimate as often as possible. It also turns out that the different types of sensors are of varying importance for the different road shape parameters.
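The reliability-based weighting of sensor measurements described above reduces, in the static case, to inverse-variance fusion, the building block behind the Kalman update. The following is a generic sketch, not the thesis's filter; the curvature values and variances are invented.

```python
def fuse(estimates):
    """Inverse-variance (maximum-likelihood) fusion of independent
    estimates of the same quantity, each given as (value, variance).
    More reliable sensors (smaller variance) get larger weights."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # fused value and fused variance

# Hypothetical curvature readings (1/m): lane camera, radar track, map.
curv, var = fuse([(1.0e-3, 1e-8), (1.4e-3, 9e-8), (0.9e-3, 4e-8)])
```

Note that the fused variance is always smaller than the smallest input variance, which is the formal reason fusing several mediocre sensors can beat one good sensor.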
24

Real-Time Multi-Sensor Localisation and Mapping Algorithms for Mobile Robots

Matsumoto, Takeshi, takeshi.matsumoto@flinders.edu.au January 2010 (has links)
A mobile robot system provides a grounded platform on which a wide variety of interactive systems can be developed and deployed. The mobility provided by the robot presents unique challenges, as it must observe the state of its surroundings while observing its own state with respect to the environment. The scope of the discipline includes the mechanical and hardware issues, which limit and direct what the software can be expected to do. The systems integrated into the mobile robot platform include both specific task-oriented modules and fundamental modules that define the core behaviour of the robot. While the former can sometimes be developed separately and integrated at a later stage, the core modules are often custom designed early on to suit the individual robot system, depending on the configuration of the mechanical components. This thesis covers the issues encountered, and the resolutions implemented, during the development of a low-cost mobile robot platform using off-the-shelf sensors, with a particular focus on the algorithmic side of the system. The incrementally developed modules target the localisation and mapping aspects by incorporating a number of different sensors to gather information about the surroundings from different perspectives, simultaneously or sequentially combining the measurements so that they disambiguate and support each other. Although there is a heavy focus on image processing techniques, the integration with the other sensors and the characteristics of the platform itself are included in the designs and analyses of the core and interactive modules. A visual odometry technique is implemented for the localisation module, which includes calibration processes, feature tracking, synchronisation between multiple sensors, and short- and long-term landmark identification to calculate the relative pose of the robot in real time. 
The mapping module considers the interpretation and representation of sensor readings to simplify and speed up the interactions between multiple sensors, while selecting appropriate attributes and characteristics to construct a multi-attributed model of the environment. The modules developed are applied to realistic indoor scenarios, and some of the algorithms exploit known constraints of these scenarios to enhance performance. As the performance of the algorithms depends significantly on the hardware, the environment, and the number of concurrently running sensors and modules, comparisons are made against various implementations developed throughout the project.
25

Advantages and Risks of Sensing for Cyber-Physical Security

Han, Jun 01 May 2018 (has links)
With the emergence of the Internet-of-Things (IoT) and Cyber-Physical Systems (CPS), modern computing is transforming from residing only in the cyber domain to the cyber-physical domain. I focus on one important aspect of this transformation, namely the shortcomings of traditional security measures. Security research over the last couple of decades has focused on protecting data with regard to identities or similar static attributes. In the physical world, however, data depend more on physical relationships, which requires CPS to verify identities together with relative physical context in order to provide security guarantees. To enable such verification, devices must prove a unique relative physical context available only to the intended devices. In this work, I study how varying levels of constraint on the physical boundary of co-located devices determine the relative physical context. Specifically, I explore different application scenarios with varying levels of constraint – including smart homes, semi-autonomous vehicles, and in-vehicle environments – and analyze how different constraints affect the binding of identities to physical relationships, ultimately enabling IoT devices to perform such verification. Furthermore, I demonstrate that sensing may also pose risks for CPS by presenting an attack on personal privacy in a smart-home environment.
26

Human Friendly Robot

Hu, Yu January 2014 (has links)
In this project, a novel human-friendly mobile robot navigation controller is investigated. With this controller, the mobile robot is able to work in a complicated environment with several humans and other obstacles, avoiding them before a collision happens. The robot prefers avoiding humans over avoiding other obstacles, keeping human safety as its first consideration. To achieve this goal, three problems have to be solved. The first is that the robot should be able to “see” the environment and distinguish humans from obstacles. The functions of the human sensor and the sonar sensor are presented. A new sensor fusion method, based on Dempster-Shafer evidence theory, is proposed for combining the information collected by these two sorts of sensors. By using this sensor fusion method, the robot gains a better view of humans. The second problem is that the robot has to know how to avoid collisions. A new navigation algorithm, based on an improved velocity potential field method, is then described. The way of calculating avoidance distances for different kinds of obstacles is presented as well. The last problem is how to make the mobile robot put humans first when avoiding collisions. A summary of the methods used to protect humans is given. According to the simulation and experimental results, the new mobile robot navigation controller successfully led the robot to avoid collisions in complicated situations while always putting human safety first.
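Dempster-Shafer combination, which the abstract names as the basis of the fusion method, can be sketched generically as follows. The frame of discernment and the mass values here are illustrative assumptions, not the thesis's actual sensor models.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over
    the same frame of discernment. Masses are dicts mapping
    frozensets (focal elements) to belief mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    k = 1.0 - conflict  # normalise by the non-conflicting mass
    return {s: m / k for s, m in combined.items()}

# Hypothetical frame {human, obstacle}; two sensors report masses.
H, O = frozenset({"human"}), frozenset({"obstacle"})
U = H | O  # "unknown": mass assigned to the whole frame
m_ir    = {H: 0.6, U: 0.4}           # human-presence sensor
m_sonar = {H: 0.3, O: 0.5, U: 0.2}   # sonar
fused = dempster_combine(m_ir, m_sonar)
```

The appeal of this rule for the human-versus-obstacle problem is that each sensor can leave part of its mass on the whole frame ("unknown") instead of being forced to commit, and the combination concentrates belief only where the sensors agree.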
27

Multirobot Localization Using Heuristically Tuned Extended Kalman Filter

Masinjila, Ruslan January 2016 (has links)
A mobile robot needs to know its pose (position and orientation) in order to navigate and perform useful tasks. The problem of determining this pose with respect to a global or local frame is called localisation, and it is a key component in providing autonomy to mobile robots. Thus, localisation answers the question "Where am I?" from the robot's perspective. Localisation involving a single robot is a widely explored and documented problem in mobile robotics. The basic idea behind most documented localisation techniques is the optimal combination of the noisy and uncertain information that comes from a robot's various sensors. However, many complex robotic applications require multiple robots to work together and share information among themselves in order to accomplish certain tasks successfully and efficiently. This leads to research in collaborative localisation involving multiple robots. Several studies have shown that when multiple robots collaboratively localise themselves, the resulting accuracy of their estimated positions and orientations outperforms that of a single robot, especially in scenarios where the robots do not have access to information about their surrounding environment. This thesis presents the main theme of most existing collaborative multi-robot localisation solutions and proposes an alternative or complementary solution to some of the existing challenges in multirobot localisation. Specifically, a heuristically tuned Extended Kalman Filter is proposed to localise a group of mobile robots. Simulations show that when certain conditions are met, the proposed tuning method significantly improves the accuracy and reliability of the poses estimated by the Extended Kalman Filter. Real-world experiments performed on custom-made robotic platforms validate the simulation results.
28

Combinação de métodos de inteligência artificial para fusão de sensores / Combination of artificial intelligence methods for sensor fusion

Katti Faceli 23 March 2001 (has links)
Mobile robots rely on sensor data to build a representation of their environment. However, sensors usually provide incomplete, inconsistent or inaccurate information. Sensor fusion techniques have been successfully employed to enhance the accuracy of measures obtained with sensors. This work proposes and investigates the use of artificial intelligence techniques for sensor fusion, with the goal of improving the precision and accuracy of measures of the distance between a robot and an object in its work environment, obtained with different sensors. Several machine learning algorithms are investigated to fuse the sensor data. The best model generated with each algorithm is called an estimator. 
It is shown that the use of these estimators can significantly improve the performance achieved by each sensor alone. However, the machine learning algorithms employed have different characteristics, causing the estimators to behave differently in different situations. Aiming at more accurate and reliable behaviour, the estimators are combined in committees. The results obtained suggest that this combination can improve the reliability and precision of the distance measures relative to the individual sensors and the estimators used for sensor fusion.
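Combining estimators in committees, as described above, can be as simple as averaging their predictions: if the members' errors are uncorrelated, the average has lower variance than any single member. This is a minimal sketch with invented estimators, not the models trained in the work.

```python
import numpy as np

def committee_predict(estimators, x):
    """Combine a committee of trained estimators by averaging
    their predictions for input x."""
    preds = np.array([est(x) for est in estimators])
    return preds.mean(axis=0)

# Hypothetical estimators: each maps a raw sensor reading to a
# distance, with a different systematic error.
est_a = lambda x: 2.0 * x + 0.1
est_b = lambda x: 2.0 * x - 0.2
est_c = lambda x: 1.9 * x + 0.1
dist = committee_predict([est_a, est_b, est_c], np.array([1.0]))
```

In practice the members would be models trained by different learning algorithms, which is precisely what makes their errors tend to decorrelate.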
29

Cooperative Perception for Connected Autonomous Vehicle Edge Computing System

Chen, Qi 08 1900 (has links)
This dissertation first conducts a study on raw-data-level cooperative perception for enhancing the detection ability of self-driving systems for connected autonomous vehicles (CAVs). A LiDAR (Light Detection and Ranging) point-cloud-based 3D object detection method is deployed to enhance detection performance by expanding the effective sensing area, capturing critical information in multiple scenarios and improving detection accuracy. In addition, a point cloud feature based cooperative perception framework is proposed for an edge computing system for CAVs. This dissertation also exploits the features' intrinsically small size to achieve real-time edge computing without running the risk of congesting the network. In order to distinguish small objects such as pedestrians and cyclists in 3D data, an end-to-end multi-sensor fusion model is developed to implement 3D object detection from multi-sensor data. Experiments show that by solving multiple perception tasks on camera and LiDAR jointly, the detection model can leverage the advantages of high-resolution images and physical-world LiDAR mapping data, which leads the KITTI benchmark on 3D object detection. Finally, an application of cooperative perception is deployed on the edge to heal the live map for autonomous vehicles. Through 3D reconstruction and multi-sensor fusion detection, experiments on a real-world dataset demonstrate that a high-definition (HD) map on the edge can provide well-sensed local data for navigation to CAVs.
30

Drone Detection and Classification using Machine Learning and Sensor Fusion

Svanström, Fredrik January 2020 (has links)
This thesis explores the process of designing an automatic multi-sensor drone detection system using machine learning and sensor fusion. Besides the more common video and audio sensors, the system also includes a thermal infrared camera. The results show that utilizing an infrared sensor is a feasible solution to the drone detection task, and even with slightly lower resolution, the performance is just as good as that of a video sensor. The detector performance as a function of the sensor-to-target distance is also investigated. Using sensor fusion, the system is made more robust than the individual sensors. It is observed that when using the proposed sensor fusion approach, the output system results are more stable, and the number of false detections is mitigated. A video dataset containing 650 annotated infrared and visible videos of drones, birds, airplanes and helicopters is published. Additionally, an audio dataset with the classes drones, helicopters and backgrounds is also published.
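A fusion step that suppresses single-sensor false alarms can be sketched as a weighted soft vote over per-sensor detection confidences. This is a generic illustration only; the thesis's actual fusion approach may differ, and the sensor names, weights and confidence values below are invented.

```python
def fuse_detections(confidences, weights, threshold=0.5):
    """Weighted soft vote over per-sensor detection confidences
    in [0, 1]. A detection is declared only if the weighted
    average clears the threshold, which suppresses alarms raised
    by a single sensor."""
    total = sum(weights)
    score = sum(w * c for w, c in zip(weights, confidences)) / total
    return score >= threshold, score

# Hypothetical case: IR camera is confident, video and audio are not.
hit, score = fuse_detections([0.9, 0.2, 0.1], weights=[1.0, 1.0, 1.0])
```

With equal weights the lone confident sensor is outvoted, while two agreeing sensors (e.g. confidences 0.9 and 0.8) would push the score over the threshold.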
