31

Road Shape Estimation based on On-board Sensors and Map Data

Foborg, Felix January 2014 (has links)
The ability to acquire accurate information about the surrounding road environment is crucial for autonomous driving and advanced driver assistance systems. A method to estimate the shape of the road has been developed and evaluated. The estimate is based on fusion of data from a road marking detector, a radar tracker, map data, GPS, and inertial sensors. The method is intended for highway use, and the focus has been on increasing the availability of a sufficiently accurate road shape estimate in the event of sensor failures. To make use of past sensor measurements, an extended Kalman filter has been used together with dynamical models for the road and the ego vehicle. Results from a performance evaluation show that the road shape estimate clearly benefits from being based on a fusion of sensor data. The different sensors have also proven to be of varying importance to the different parameters that describe the road shape. / Vehicles that can drive autonomously, that is, without a driver, are a goal for the automotive industry and a dream for many car owners. They would allow drivers to spend their time on other things and would reduce staffing costs for transport companies. Road safety could also improve, since such a system can react faster than any human and is neither affected by fatigue nor distracted by other passengers. The ability to acquire and interpret information about the surrounding traffic situation is essential for developing autonomous vehicles, and is also needed for advanced modern safety systems such as collision warning systems. An important part of this is perceiving the shape of the road. The goal of this thesis is to develop an algorithm that estimates the shape of the road based on a number of vehicle-mounted sensors and information from a map database. The greatest emphasis has been placed on the algorithm always being able to deliver a sufficiently good estimate, even during periods when sensor measurements are unavailable because sensors fail. The intended setting is highway driving, mainly because it allows considerable simplifications compared with other road types. The main difficulty for such algorithms is often that sensors suffer from various drawbacks: each measures only one specific thing, may have large measurement errors, is sensitive to certain conditions, and has a limited range. To exploit the sensors' different strengths and mitigate the effects of their weaknesses, several sensors have been used together. The thesis work was carried out at Scania and tested on their trucks. The sensor types used are already, or are well on their way to becoming, standard equipment in their trucks and in many other modern vehicles. The algorithm uses measurements from a road marking detector, which provides the shape of the two nearest lane markings; a radar, which gives the position and motion of vehicles ahead; a map database, which together with a GPS provides previously measured curvature at the vehicle's position; and internal sensors that measure the ego vehicle's own motion. To keep producing an estimate when measurements are unavailable, and to make the algorithm more robust against bad data, a method that exploits the information in past measurements has been used: a so-called extended Kalman filter. This method requires a mathematical description of how the shape of the road ahead of the vehicle is expected to change over time, based on how the vehicle moves. The different types of sensor measurements are combined in the method and weighted according to how reliable the sensors are considered to be. The algorithm has been evaluated on measurements from public motorways outside Södertälje. The results of this evaluation show that combining several different sensor types is very beneficial for delivering a good estimate as often as possible. It also turns out that the different sensor types are of varying importance for the different road shape parameters.
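As a rough illustration of the approach described above, the sketch below runs a Kalman filter on an assumed four-parameter clothoid road state (lateral offset, relative heading, curvature, curvature rate), fusing a camera-style lane measurement with a map-curvature measurement and tolerating camera dropouts. The state vector, motion model and noise values are illustrative assumptions, not the thesis' actual design.

```python
import numpy as np

# Illustrative clothoid road-model filter. State x = [y0, psi, c0, c1]:
# lateral offset to the lane centre, relative heading, curvature at the
# ego position, and curvature rate along the road.

def predict(x, P, v, yaw_rate, dt, q=1e-4):
    # Advancing v*dt along the road: offset grows with heading, heading grows
    # with curvature (minus the ego vehicle's own rotation), curvature grows
    # with the curvature rate.
    F = np.array([[1.0, v*dt, 0.0,  0.0],
                  [0.0, 1.0,  v*dt, 0.0],
                  [0.0, 0.0,  1.0,  v*dt],
                  [0.0, 0.0,  0.0,  1.0]])
    x = F @ x + np.array([0.0, -yaw_rate*dt, 0.0, 0.0])
    P = F @ P @ F.T + q * np.eye(4)
    return x, P

def update(x, P, z, H, R):
    # Standard Kalman update; this toy road model is linear, so H is constant.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x, P = np.zeros(4), np.eye(4)
H_cam = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], float)  # lane marks
H_map = np.array([[0, 0, 1, 0]], float)                              # map: c0 only
true_c0 = 1e-3                                                       # gentle curve

for k in range(200):
    x, P = predict(x, P, v=25.0, yaw_rate=25.0 * true_c0, dt=0.05)
    if k % 50 < 40:  # pretend the camera drops out 20% of the time
        z = np.array([0.0, 0.0, true_c0]) + rng.normal(0, [0.1, 0.01, 1e-4])
        x, P = update(x, P, z, H_cam, np.diag([0.1**2, 0.01**2, 1e-8]))
    z = np.array([true_c0]) + rng.normal(0, 1e-4, 1)
    x, P = update(x, P, z, H_map, np.array([[1e-8]]))  # map still available

print("estimated curvature:", x[2])
```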
32

Real-Time Multi-Sensor Localisation and Mapping Algorithms for Mobile Robots

Matsumoto, Takeshi, takeshi.matsumoto@flinders.edu.au January 2010 (has links)
A mobile robot system provides a grounded platform on which a wide variety of interactive systems can be developed and deployed. The mobility the robot provides presents unique challenges, as it must observe the state of its surroundings while also tracking its own state with respect to the environment. The scope of the discipline includes mechanical and hardware issues, which limit and direct what the software can do. The systems integrated into the mobile robot platform include both task-specific modules and fundamental modules that define the robot's core behaviour. While the former can sometimes be developed separately and integrated at a later stage, the core modules are often custom designed early on to suit the individual robot system, depending on the configuration of its mechanical components. This thesis covers the issues encountered, and the resolutions implemented, during the development of a low-cost mobile robot platform using off-the-shelf sensors, with a particular focus on the algorithmic side of the system. The incrementally developed modules target the localisation and mapping aspects by incorporating a number of different sensors that gather information about the surroundings from different perspectives, combining the measurements simultaneously or sequentially so that they disambiguate and support each other. Although there is a heavy focus on image processing techniques, the integration with the other sensors and the characteristics of the platform itself are included in the designs and analyses of the core and interactive modules. A visual odometry technique is implemented for the localisation module, which includes calibration processes, feature tracking, synchronisation between multiple sensors, and short- and long-term landmark identification to calculate the relative pose of the robot in real time. The mapping module considers the interpretation and representation of sensor readings to simplify and speed up the interactions between multiple sensors, while selecting appropriate attributes and characteristics to construct a multi-attributed model of the environment. The developed modules are applied to realistic indoor scenarios, which some of the algorithms take into account to enhance performance through known constraints. As the performance of the algorithms depends significantly on the hardware, the environment, and the number of concurrently running sensors and modules, comparisons are made against various implementations developed throughout the project.
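A visual odometry loop of the kind mentioned above can be sketched roughly as follows: corners are tracked with pyramidal Lucas-Kanade optical flow and the relative pose is recovered from the essential matrix. The video path and camera intrinsics are placeholder assumptions, and the monocular scale ambiguity noted in the comments is exactly why such a module is combined with other sensors.

```python
import numpy as np
import cv2  # OpenCV

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics
cap = cv2.VideoCapture("robot_run.avi")                      # hypothetical video

ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pose_R, pose_t = np.eye(3), np.zeros((3, 1))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Short-term landmarks: corners tracked frame-to-frame with pyramidal LK.
    pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]

    # Relative pose from the essential matrix; RANSAC rejects bad tracks.
    E, mask = cv2.findEssentialMat(good0, good1, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K)

    # Accumulate pose; |t| = 1, so the true scale must come from other sensors.
    pose_t = pose_t + pose_R @ t
    pose_R = R @ pose_R
    prev = gray

print("final position (up to scale):", pose_t.ravel())
```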
33

Sensor Fusion : Applying sensor fusion in a district heating substation

Kangerud, Jim January 2005 (has links)
Many machines these days have sensors to collect information from the world they inhabit. The correctness of this information is crucial for correct operation. However, sensors are not always reliable: they are sometimes affected by noise and thus give incorrect information. Another drawback may be a lack of information due to a shortage of sensors. Sensor fusion tries to overcome these drawbacks by integrating or combining information from multiple sensors. The heating of a building is a slow and time-consuming process, i.e. neither the flow nor the energy consumption is subject to drastic changes. The tap water system, on the other hand, i.e. the heating of tap water, can be the source of severe changes in both flow and energy consumption. This is because the flow in the tap water system is stochastic: at any given time a tap may be opened or closed and thereby drastically change the flow. The purpose of this thesis is to investigate whether it is possible to use sensor fusion to get accurate continuous flow values from a district heating substation. This is done by integrating different sensor fusion algorithms in a district heating substation simulator.
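A minimal version of this kind of fusion is sketched below: two noisy flow readings are merged by inverse-variance weighting and smoothed by a one-dimensional Kalman filter whose process noise lets it follow the step-like tap-water changes. The sensor variances and flow profile are assumptions for illustration; the thesis evaluates its algorithms in a substation simulator instead.

```python
import numpy as np

rng = np.random.default_rng(1)
true_flow = np.repeat([0.2, 0.8, 0.3, 1.1], 50)        # taps opening and closing
z1 = true_flow + rng.normal(0, 0.10, true_flow.size)   # noisier sensor
z2 = true_flow + rng.normal(0, 0.05, true_flow.size)   # more precise sensor

x, P = 0.0, 1.0   # fused flow estimate and its variance
Q = 0.01          # process noise: lets the estimate follow the steps
fused = []
for a, b in zip(z1, z2):
    # Inverse-variance weighting merges the two readings into one measurement.
    R1, R2 = 0.10**2, 0.05**2
    z = (a / R1 + b / R2) / (1 / R1 + 1 / R2)
    R = 1.0 / (1 / R1 + 1 / R2)
    # One-dimensional Kalman filter smooths over time.
    P = P + Q
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    fused.append(x)

print("last fused flow value:", fused[-1])
```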
34

Advantages and Risks of Sensing for Cyber-Physical Security

Han, Jun 01 May 2018 (has links)
With the emergence of the Internet of Things (IoT) and cyber-physical systems (CPS), modern computing is transforming from residing only in the cyber domain to spanning the cyber-physical domain. I focus on one important aspect of this transformation, namely the shortcomings of traditional security measures. Security research over the last couple of decades has focused on protecting data with regard to identities or similar static attributes. In the physical world, however, data depend more on physical relationships, so CPS must verify identities together with relative physical context to provide security guarantees. To enable such verification, devices must prove unique relative physical context available only to the intended devices. In this work, I study how varying levels of constraints on the physical boundary of co-located devices determine the relative physical context. Specifically, I explore application scenarios with varying levels of constraints – including smart homes, semi-autonomous vehicles, and in-vehicle environments – and analyze how different constraints affect the binding of identities to physical relationships, ultimately enabling IoT devices to perform such verification. Furthermore, I demonstrate that sensing may also pose risks for CPS by presenting an attack on personal privacy in a smart home environment.
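The general idea of proving shared physical context can be illustrated with a toy check like the one below: two devices record the same ambient signal and compare fingerprints, and only devices inside the same physical boundary should correlate strongly. This is a simplified stand-in for the idea, not the thesis' actual pairing protocols; the signals and threshold are invented.

```python
import numpy as np

def similarity(a, b):
    # Normalised correlation of two recordings (1.0 = identical context).
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(2)
ambient = rng.normal(0, 1, 1000)                # events in the shared room
inside_a = ambient + rng.normal(0, 0.3, 1000)   # device A, same room
inside_b = ambient + rng.normal(0, 0.3, 1000)   # device B, same room
outside = rng.normal(0, 1, 1000)                # attacker outside the boundary

print("A vs B:      ", similarity(inside_a, inside_b))  # high -> accept pairing
print("A vs outside:", similarity(inside_a, outside))   # low  -> reject
```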
35

Human Friendly Robot

Hu, Yu January 2014 (has links)
In this project, a novel human-friendly mobile robot navigation controller is investigated. With this controller, the mobile robot is able to work in a complicated environment containing several humans and other obstacles, avoiding them before a collision happens. The robot prefers avoiding humans over other obstacles, keeping human safety as its first consideration. To achieve this goal, three problems have to be solved. The first is that the robot should be able to "see" the environment and distinguish humans from obstacles. The functions of the human sensor and the sonar sensor are presented, and a new sensor fusion method, based on Dempster-Shafer evidence theory, is proposed for combining the information collected by these two kinds of sensors; using it, the robot gets a better view of humans (see the sketch below). The second problem is that the robot has to know how to avoid collisions. A new navigation algorithm, based on an improved velocity potential field method, is then described, along with the way the avoidance distances are calculated for different kinds of obstacles. The last problem is how to make the mobile robot give humans first priority when avoiding collisions; the methods used to protect humans are summarised. According to the simulation and experimental results, the new navigation controller successfully led the robot to avoid collisions in complicated situations while always putting human safety first.
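Dempster's rule of combination, which underlies the fusion step named above, works roughly as sketched here: each sensor assigns belief mass over the frame {human, obstacle}, with mass on the full set expressing ignorance, and the rule intersects focal elements and renormalises by the conflict. The mass values are invented for illustration.

```python
from itertools import product

def combine(m1, m2):
    # Dempster's rule: intersect focal elements, renormalise by (1 - conflict).
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

H, O = frozenset({"human"}), frozenset({"obstacle"})
HO = H | O  # total ignorance

m_human_sensor = {H: 0.6, O: 0.1, HO: 0.3}  # heat signature detected
m_sonar        = {H: 0.2, O: 0.3, HO: 0.5}  # sonar alone cannot tell
print(combine(m_human_sensor, m_sonar))     # belief shifts toward "human"
```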
36

Multirobot Localization Using Heuristically Tuned Extended Kalman Filter

Masinjila, Ruslan January 2016 (has links)
A mobile robot needs to know its pose (position and orientation) in order to navigate and perform useful tasks. The problem of determining this pose with respect to a global or local frame is called localisation, and it is a key component in providing autonomy to mobile robots. Localisation thus answers the question "Where am I?" from the robot's perspective. Localisation involving a single robot is a widely explored and documented problem in mobile robotics. The basic idea behind most documented localisation techniques is the optimal combination of noisy and uncertain information coming from the robot's various sensors. However, many complex robotic applications require multiple robots to work together and share information among themselves in order to accomplish certain tasks successfully and efficiently. This has led to research in collaborative localisation involving multiple robots. Several studies have shown that when multiple robots localise themselves collaboratively, the resulting accuracy of their estimated positions and orientations outperforms that of a single robot, especially in scenarios where the robots have no access to information about their surrounding environment. This thesis presents the main theme of most existing collaborative multi-robot localisation solutions and proposes an alternative or complementary solution to some of the existing challenges in multirobot localisation. Specifically, a heuristically tuned extended Kalman filter is proposed to localise a group of mobile robots. Simulations show that when certain conditions are met, the proposed tuning method significantly improves the accuracy and reliability of the poses estimated by the extended Kalman filter. Real-world experiments performed on custom-made robotic platforms validate the simulation results.
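The principle of heuristic filter tuning can be shown in miniature: sweep a scale factor on a filter's process-noise covariance and keep the value that minimises pose error against simulated ground truth. The thesis' heuristics and multi-robot EKF are considerably richer than this one-dimensional toy, which is only meant to show the idea.

```python
import numpy as np

def run_filter(q_scale, zs, truth, R=0.5**2, Q=0.01):
    # One-dimensional Kalman filter with a tunable process-noise scale;
    # returns the mean squared pose error against ground truth.
    x, P, err = 0.0, 1.0, 0.0
    for z, t in zip(zs, truth):
        P += q_scale * Q            # tuned prediction step
        K = P / (P + R)
        x += K * (z - x)
        P *= (1 - K)
        err += (x - t) ** 2
    return err / len(zs)

rng = np.random.default_rng(3)
truth = np.cumsum(rng.normal(0, 0.2, 500))  # simulated robot drift
zs = truth + rng.normal(0, 0.5, 500)        # noisy position-like readings

scales = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
best = min(scales, key=lambda s: run_filter(s, zs, truth))
print("best process-noise scale:", best)
```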
37

Combination of artificial intelligence methods for sensor fusion

Katti Faceli 23 March 2001 (has links)
Mobile robots rely on sensor data to build a representation of their environment. However, sensors usually provide incomplete, inconsistent or inaccurate information. Sensor fusion has been successfully employed to enhance the accuracy of sensor measurements. This work proposes and investigates the use of artificial intelligence techniques for sensor fusion, with the main goal of improving the precision and accuracy of the distance between a robot and an object in its work environment, measured with different sensors. Several machine learning algorithms are investigated to fuse the sensor data; the best model generated with each algorithm is called an estimator. It is shown that employing these estimators can significantly improve the performance achieved by each sensor alone. However, the machine learning algorithms employed have different characteristics, causing the estimators to behave differently in different situations. Aiming at more accurate and reliable behaviour, the estimators are combined in committees. The results obtained suggest that this combination can improve the reliability and precision of the distance measurements over the individual sensors and the estimators used for sensor fusion.
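The estimator/committee idea can be sketched as follows: each learning algorithm is trained to map raw readings from several sensors to the true distance, and a committee averages the individual estimators. The synthetic sensor data and the particular regressors are assumptions standing in for the robot's real sensors and the algorithms studied in the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
true_dist = rng.uniform(0.2, 3.0, 500)
X = np.column_stack([
    true_dist + rng.normal(0, 0.15, 500),        # noisy sensor 1
    true_dist * 1.1 + rng.normal(0, 0.05, 500),  # biased but precise sensor 2
])
X_train, X_test = X[:400], X[400:]
y_train, y_test = true_dist[:400], true_dist[400:]

estimators = [LinearRegression(),
              KNeighborsRegressor(n_neighbors=5),
              DecisionTreeRegressor(max_depth=6)]
preds = []
for est in estimators:
    est.fit(X_train, y_train)           # each estimator learns the fusion map
    p = est.predict(X_test)
    preds.append(p)
    print(type(est).__name__, "MAE:", np.abs(p - y_test).mean())

committee = np.mean(preds, axis=0)      # simple committee: average the outputs
print("committee MAE:", np.abs(committee - y_test).mean())
```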
38

Cooperative Perception for Connected Autonomous Vehicle Edge Computing System

Chen, Qi 08 1900 (has links)
This dissertation first conducts a study on raw-data-level cooperative perception for enhancing the detection ability of self-driving systems for connected autonomous vehicles (CAVs). A LiDAR (Light Detection and Ranging) point-cloud-based 3D object detection method is deployed to enhance detection performance by expanding the effective sensing area, capturing critical information in multiple scenarios, and improving detection accuracy. In addition, a point-cloud-feature-based cooperative perception framework is proposed on an edge computing system for CAVs. This dissertation also exploits the features' intrinsically small size to achieve real-time edge computing without running the risk of congesting the network. In order to distinguish small objects such as pedestrians and cyclists in 3D data, an end-to-end multi-sensor fusion model is developed to perform 3D object detection from multi-sensor data. Experiments show that by solving perception on camera and LiDAR jointly, the detection model can leverage the advantages of high-resolution images and the LiDAR's physical-world mapping data, leading the KITTI benchmark on 3D object detection. Finally, an application of cooperative perception is deployed on the edge to heal the live map for autonomous vehicles. Through 3D reconstruction and multi-sensor fusion detection, experiments on a real-world dataset demonstrate that a high-definition (HD) map on the edge can provide well-sensed local data for CAV navigation.
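A toy sketch of feature-level cooperative perception follows: each vehicle voxelises its LiDAR points into a 2D feature grid over a shared map frame, and an edge node fuses the transmitted grids element-wise. The grid size and the max-fusion rule are illustrative assumptions; compact feature grids of this kind are what keep the transmission small enough for real-time edge computing.

```python
import numpy as np

GRID, CELL = 100, 0.5  # 50 m x 50 m area at 0.5 m resolution

def to_grid(points):
    # Voxelise LiDAR points into a density feature per cell.
    grid = np.zeros((GRID, GRID), np.float32)
    ij = np.clip((points[:, :2] / CELL).astype(int) + GRID // 2, 0, GRID - 1)
    for i, j in ij:
        grid[i, j] += 1.0
    return grid

rng = np.random.default_rng(5)
pts_vehicle_a = rng.uniform(-25, 5, (2000, 3))  # A sees the west side
pts_vehicle_b = rng.uniform(-5, 25, (2000, 3))  # B sees the east side

grid_a, grid_b = to_grid(pts_vehicle_a), to_grid(pts_vehicle_b)
fused = np.maximum(grid_a, grid_b)              # element-wise max fusion on edge
print("cells seen by A, B, fused:",
      (grid_a > 0).sum(), (grid_b > 0).sum(), (fused > 0).sum())
```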
39

Drone Detection and Classification using Machine Learning and Sensor Fusion

Svanström, Fredrik January 2020 (has links)
This thesis explores the process of designing an automatic multi-sensor drone detection system using machine learning and sensor fusion. Besides the more common video and audio sensors, the system also includes a thermal infrared camera. The results show that utilizing an infrared sensor is a feasible solution to the drone detection task, and even with slightly lower resolution, the performance is just as good as that of a video sensor. The detector performance as a function of the sensor-to-target distance is also investigated. Using sensor fusion, the system is made more robust than the individual sensors. It is observed that with the proposed sensor fusion approach, the system output is more stable and the number of false detections is mitigated. A video dataset containing 650 annotated infrared and visible videos of drones, birds, airplanes and helicopters is published. Additionally, an audio dataset with the classes drones, helicopters and backgrounds is also published.
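A late-fusion scheme of the general kind described can be sketched as below: each sensor emits a per-frame probability that a drone is present, and the scores are fused with a weighted average plus temporal smoothing, which suppresses single-sensor false alarms. The weights, threshold and synthetic scores are assumptions, not the thesis' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300
truth = np.zeros(T)
truth[100:200] = 1.0  # drone present in the middle of the sequence

def sensor(noise):
    # Noisy per-frame detection scores from one sensor.
    return np.clip(truth + rng.normal(0, noise, T), 0, 1)

scores = {"video": sensor(0.25), "ir": sensor(0.20), "audio": sensor(0.35)}
weights = {"video": 0.4, "ir": 0.4, "audio": 0.2}

fused = sum(weights[k] * scores[k] for k in scores)          # weighted average
smooth = np.convolve(fused, np.ones(9) / 9, mode="same")     # temporal smoothing
detections = smooth > 0.5

false_alarms = np.logical_and(detections, truth == 0).sum()
print("false-alarm frames after fusion:", false_alarms)
```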
40

Environment Mapping in Larger Spaces

Ciambrone, Andrew James 09 February 2017 (has links)
Spatial mapping, or environment mapping, is the process of exploring a real-world environment and creating its digital representation. To create convincing mixed reality programs, an environment mapping device must be able to detect a user's position and map the user's environment. Currently available commercial spatial mapping devices mostly use an infrared camera to obtain a depth map, which is effective only at short to medium distances (3-4 meters). This work describes an extension to existing environment mapping devices and techniques that enables the mapping of larger architectural environments using a combination of camera, Inertial Measurement Unit (IMU), and Light Detection and Ranging (LIDAR) devices, supported by sensor fusion and computer vision techniques. There are three main parts to the proposed system: the first is data collection and data fusion using embedded hardware, the second is data processing (segmentation), and the third is creating a geometry mesh of the environment. The developed system was evaluated on its ability to determine the dimensions of a room and of objects within the room. This low-cost system can significantly expand the mapping range of existing mixed reality devices such as the Microsoft HoloLens. / Master of Science
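The segmentation step can be illustrated with a plane-fitting sketch: RANSAC finds the dominant plane (e.g. a wall or the floor) in a point cloud, and repeating it on the remaining points yields the planes from which room dimensions can be read off. The thresholds and the synthetic wall below are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=7):
    # Fit the plane with the most inliers by repeated 3-point sampling.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate sample, try again
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p0) @ n)  # point-to-plane distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(7)
wall = np.column_stack([np.full(800, 4.0),        # wall at x = 4 m
                        rng.uniform(0, 6, 800),
                        rng.uniform(0, 2.5, 800)])
wall[:, 0] += rng.normal(0, 0.02, 800)            # LIDAR noise
clutter = rng.uniform(0, 6, (200, 3))
cloud = np.vstack([wall, clutter])

inliers = ransac_plane(cloud)
print("wall points found:", inliers.sum(), "of", len(cloud))
```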
