11

Toward Automatically Composed FPGA-Optimized Robotic Systems Using High-Level Synthesis

Lin, Szu-Wei 14 April 2023 (has links) (PDF)
Robotic systems are known to be computationally intensive. To improve performance, developers tend to implement custom robotic algorithms in hardware. However, a full robotic system typically consists of many interconnected algorithmic components that can easily max out FPGA resources, requiring the designer to adjust each algorithm design for each new robotic system in order to meet specific system requirements with limited resources. Furthermore, manual development of digital circuitry using a hardware description language (HDL) such as Verilog or VHDL is error-prone and time-consuming, often taking months or years of development and verification. Recent developments in high-level synthesis (HLS) enable automatic generation of digital circuit designs from high-level languages such as C or C++. In this thesis, we propose to develop a database of HLS-generated Pareto-optimal hardware designs for various robotic algorithms, such that a fully automated process can optimally compose a complete robotic system given a set of system requirements. In the first part of this thesis, we take a first step towards this goal by developing a system for automatic selection of an Occupancy Grid Mapping (OGM) implementation given specific system requirements and resource thresholds. We first generate hundreds of possible hardware designs via Vitis HLS as we vary parameters to explore the design space. We then present results which evaluate and explore the trade-offs of these designs with respect to accuracy, latency, resource utilization, and power. Using these results, we create a software tool which automatically selects an optimal OGM implementation. After implementing selected designs on a PYNQ-Z2 FPGA board, our results show that the runtime of the algorithm improves by 35x over a C++-based implementation.
In the second part of this thesis, we extend these techniques to the Particle Filter (PF) algorithm by implementing 7 different resampling methods and varying parameters in hardware, again via HLS. In this case, we are able to explore and analyze thousands of PF designs. Our evaluation results show that the runtime of the algorithm using the Local Selection Resampling method is the fastest on an FPGA and can be as much as 10x faster than in C++. Finally, we build another design-selection tool that automatically generates an optimal PF implementation from this design space for a given set of requirements.
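The resampling step is the part of a particle filter that such hardware design spaces vary. As a point of reference, here is a sketch of classical systematic resampling, one of the standard schemes such a study would include; the local-selection method evaluated in the thesis differs (each particle resamples from a fixed neighbourhood, which maps better to hardware), but the goal of duplicating high-weight particles is the same:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: draw N particle indices with one random offset."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize weights
    n = len(w)
    rng = rng or np.random.default_rng()
    # evenly spaced pointers with a single random offset in [0, 1/n)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(w)
    return np.searchsorted(cumsum, positions)

# a particle carrying 80% of the weight dominates the resampled set
idx = systematic_resample([0.8, 0.05, 0.05, 0.05, 0.05])
```

Because the pointers are evenly spaced, the number of copies of each particle is deterministic up to one, which is also why systematic resampling is a common hardware baseline.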
12

Radar and Thermopile Sensor Fusion for Pedestrian Detection

Rouhani, Shahin January 2005 (has links)
During the last decades, great steps have been taken to decrease passenger fatalities in cars. Systems such as ABS and airbags have been developed for this purpose alone, but not much effort has been put into pedestrian safety. In traffic today, pedestrians are among the most endangered participants. In recent years there has been an increased demand for pedestrian safety from the European Enhanced Vehicle-safety Committee, and the European New Car Assessment Programme has thereby developed tests in which pedestrian safety is rated. With this, the detection of pedestrians has arisen as a part of automotive safety research. This thesis surveys some of the research available in the area and gives a brief introduction to some of the readily available sensors. The objective of this work is to detect pedestrians in front of a vehicle using thermoelectric infrared sensors fused with short-range radar sensors, while minimizing missed detections and false alarms. Extensive work has already been performed with thermoelectric infrared sensors for this sole purpose, and this thesis builds on that work. Information is provided about the sensors used and how they are set up in this work, along with the methods used for classifying objects and the assumptions made about pedestrians in this system. A basic tracking algorithm is used to track radar-detected objects in order to provide the fusion system with better data. The approach chosen for the sensor fusion is central-level fusion, where the probabilities for a pedestrian from the radars and the thermoelectric infrared sensors are combined using Dempster-Shafer theory and accumulated over time in the occupancy grid framework. Theories used extensively in this thesis are explained in detail and discussed in the corresponding chapters. Finally, the experiments undertaken and the results obtained with the presented system are shown. A comparison is made with the previous detection system, which uses only thermoelectric infrared sensors and on which this work builds. Conclusions are drawn regarding what this system is capable of, with its inherent strengths and weaknesses.
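The Dempster-Shafer combination used in such fusion can be sketched for a two-hypothesis frame, pedestrian (P) versus not-pedestrian (N), with a third mass on ignorance (PN); the key names and mass values below are illustrative, not taken from the thesis:

```python
def dempster_combine(m1, m2):
    """Dempster's rule over the frame {P, N}; 'PN' carries ignorance.

    Conflicting mass (one sensor says P while the other says N) is
    discarded and the rest renormalized.
    """
    combos = {
        'P':  [('P', 'P'), ('P', 'PN'), ('PN', 'P')],
        'N':  [('N', 'N'), ('N', 'PN'), ('PN', 'N')],
        'PN': [('PN', 'PN')],
    }
    conflict = m1['P'] * m2['N'] + m1['N'] * m2['P']
    norm = 1.0 - conflict
    return {h: sum(m1[a] * m2[b] for a, b in pairs) / norm
            for h, pairs in combos.items()}

# a weak radar cue and a stronger thermopile cue reinforce each other
radar = {'P': 0.4, 'N': 0.1, 'PN': 0.5}
ir    = {'P': 0.6, 'N': 0.1, 'PN': 0.3}
fused = dempster_combine(radar, ir)
```

Note how the fused belief in P exceeds either sensor's alone, which is the behaviour that accumulating evidence over time in an occupancy grid relies on.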
14

Autonomous Mapping and Exploration of Dynamic Indoor Environments / Autonom kartläggning och utforskning av dynamiska inomhusmiljöer

Fåk, Joel, Wilkinson, Tomas January 2013 (has links)
This thesis describes all the parts needed to build a complete system for autonomous indoor mapping in 3D. The robotic platform used is a two-wheeled Segway operating in a planar environment. This, together with wheel odometers, an Inertial Measurement Unit (IMU), two Microsoft Kinects, and a laptop, comprises the backbone of the system, which can be divided into three parts. The first is localization and mapping, fundamentally a SLAM (simultaneous localization and mapping) algorithm implemented using the registration technique Iterative Closest Point (ICP). Besides being in 3D, the map is also designed to handle the mapping of dynamic scenes, something absent from the standard SLAM design. The second part, planning, is twofold: path planning, finding a path from the current position to a destination, and target planning, determining where to go next given the current state of the map and the robot. The third part comprises the control and collision systems, which, while not the focus of this work, are necessary for a fully autonomous system. Contributions made by this thesis include: the 3D map framework Octomap is extended to handle the mapping of dynamic scenes; a new method for target planning, based on image processing, is presented; and a calibration procedure for the robot is derived that gives a full six-degree-of-freedom pose for each Kinect. Results show that our calibration procedure produces an accurate pose for each Kinect, which is crucial for a functioning system. The dynamic mapping is shown to outperform the standard occupancy grid in fundamental situations that arise when mapping dynamic scenes. Additionally, the results indicate that the target planning algorithm provides a fast and easy way to plan new target destinations. Finally, the entire system's autonomous mapping capabilities are evaluated together, producing promising results, while also highlighting some problems that limit the system's performance, such as the inaccuracy and short range of the Kinects and noise added and reinforced by the multiple subsystems.
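The alignment step that ICP-based registration iterates has a well-known closed-form solution via the SVD (the Kabsch solution). The sketch below is generic, not the thesis code, and assumes point correspondences are already known; a real pipeline would first find nearest neighbours and iterate:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP alignment step (Kabsch/SVD solution).

    Assumes src[i] corresponds to dst[i]. Returns rotation R and
    translation t minimizing ||R @ src_i + t - dst_i|| over all points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# recover a known planar rotation and translation of a small point set
pts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.]])
R, t = icp_step(pts, pts @ Rz.T + np.array([0.5, -0.2, 0.0]))
```

With exact correspondences the transform is recovered in a single step; with estimated correspondences the step is repeated until the alignment converges.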
15

Estimation of Local Map from Radar Data / Skattning av lokal karta från radardata

Moritz, Malte, Pettersson, Anton January 2014 (has links)
Autonomous features in vehicles are already a big part of the automotive domain, and many companies are now looking for ways to make vehicles fully autonomous. Autonomous vehicles need information about the surrounding environment. This information is extracted from exteroceptive sensors, and today vehicles often use laser scanners for this purpose. Laser scanners are expensive and fragile, so it is interesting to investigate whether cheaper radar sensors could be used instead. One big challenge for autonomous vehicles is to use the exteroceptive sensors to extract the position of the vehicle and at the same time build a map of the environment. Simultaneous Localization and Mapping (SLAM) is a well-explored area for laser scanners but much less so for radars. This thesis investigates whether radar sensors on a truck can be used to create a map of the area where the truck drives. The truck was equipped with ego-motion sensors and radars, and the data from them were fused together to obtain a position of the truck and a map of the surrounding environment, i.e. a SLAM algorithm was implemented. The map is represented by an Occupancy Grid Map (OGM), which should contain only static objects. The OGM is updated probabilistically using a binary Bayes filter. To localize the truck with the help of the motion sensors, an Extended Kalman Filter (EKF) is used together with the map and a scan-matching method. All these methods are put together to create the SLAM algorithm. A range-rate filter is used to remove noise and non-static measurements from the radar. The results of this thesis show that it is possible to use radar sensors to create a map of a truck's surroundings. The quality of the map is considered good, and details such as the space between parked trucks, signs, and light posts can be distinguished. It is also shown that methods with low performance on their own can work very well together in the SLAM algorithm. Overall the SLAM algorithm works well, but positioning problems might occur when driving in unexplored areas with few objects. A real-time system has also been implemented, and the map can be viewed while the truck is manoeuvred.
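The binary Bayes filter update for an occupancy grid cell is usually run in log-odds form, so each measurement becomes an addition rather than a product. A minimal per-cell sketch; the probabilities and clamp bounds are illustrative values, not the thesis's calibrated ones:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def update_cell(l, hit, p_occ=0.7, p_free=0.35, l_min=-4.0, l_max=4.0):
    """Binary Bayes filter for one grid cell in log-odds form.

    Clamping the log-odds keeps a cell from saturating, so the map can
    still react when a previously 'static' object (a parked truck, say)
    drives away.
    """
    l += logit(p_occ) if hit else logit(p_free)
    return max(l_min, min(l_max, l))

def probability(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                       # prior 0.5 in log-odds
for _ in range(3):            # three consecutive radar hits on the cell
    l = update_cell(l, hit=True)
p = probability(l)            # belief well above the 0.5 prior
</```

Each hit adds the same constant, so a few consistent detections drive the cell confidently occupied while a single spurious return barely moves it.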
16

Detekce pohyblivých objektů v prostředí mobilního robota / Moving Object Detection in the Environment of Mobile Robot

Dorotovič, Viktor January 2017 (has links)
This work's aim is movement detection in the environment of a robot, that may move itself. A 2D occupancy grid representation is used, containing only the currently visible environment, without filtering in time. Motion detection is based on a grid-based particle filter introduced by Tanzmeister et al. in Grid-based Mapping and Tracking in Dynamic Environments using a Uniform Evidential Environment Representation. The system was implemented in the Robot Operating System, which allows for re-use of modules which the solution is composed of. The KITTI Visual Odometry dataset was chosen as a source~of LiDAR data for experiments, along with ground-truth pose information. Ground segmentation based on Loopy Belief Propagation was used to filter the point clouds. The implemeted motion detector is able to distiguish between static and dynamic vehicles in this dataset. Further tests in a simulated environment have shown some shortcomings in the detection of large continuous moving objects.
17

  • Parking Map Generation and Tracking Using Radar: Adaptive Inverse Sensor Model / Parkeringskartagenerering och spårning med radar

Mahmoud, Mohamed January 2020 (has links)
Generating radar maps with a binary Bayes filter via what is commonly known as an Inverse Sensor Model, which translates sensor measurements into grid-cell occupancy estimates, is a classical problem in several fields. This work focuses on developing an Inverse Sensor Model for parking spaces using a 77 GHz FMCW (Frequency Modulated Continuous Wave) automotive radar that can handle the varying geometrical complexity of a parking environment. There are two main types of Inverse Sensor Model, each with its own assumptions about sensor noise. The first is fixed, similar to a lookup table, and is constructed from a combination of sensor-specific characteristics, experimental data, and empirically determined parameters. The second is learned from ground-truth labeling of the grid-map cells so as to capture the desired Inverse Sensor Model. In this work a new Inverse Sensor Model is proposed that combines the computational advantage of a fixed Inverse Sensor Model with occupancy estimation captured from ground-truth labeling. The occupancy grid mapping problem with binary Bayes filtering is first derived from the well-known SLAM (Simultaneous Localization and Mapping) problem. The Adaptive Inverse Sensor Model is then presented: it uses a fixed occupancy estimate but adapts the estimated occupancy shape based on a statistical analysis of the radar measurement distribution across the acquisition environment. A pre-study of the noise characteristics of the radar used in this work provides a common Inverse Sensor Model as a benchmark. The drawbacks of that benchmark are then addressed, as sub-steps of the Adaptive Inverse Sensor Model, to obtain an optimal grid-map occupancy estimator. Finally, the maps generated by the benchmark and by the Adaptive Inverse Sensor Model are compared, showing that when its assumptions are fulfilled, the Adaptive Inverse Sensor Model offers a more visually appealing map than the benchmark.
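A fixed, lookup-style Inverse Sensor Model of the kind used as a benchmark can be sketched as a function of cell range along one radar beam. The thresholds and probabilities below are illustrative assumptions, not the thesis's calibrated values:

```python
def inverse_sensor_model(cell_range, measured_range, alpha=0.5):
    """Fixed inverse sensor model evaluated for one cell on one beam.

    Cells well before the return are likely free, cells within
    +/- alpha/2 of the return are likely occupied, and cells beyond
    the return carry no information (0.5).
    """
    if cell_range > measured_range + alpha / 2:
        return 0.5                      # behind the return: unknown
    if abs(cell_range - measured_range) <= alpha / 2:
        return 0.7                      # at the return: probably occupied
    return 0.3                          # before the return: probably free

# occupancy probabilities along a beam with a return at 2.0 m,
# sampled every 0.25 m out to 2.75 m
beam = [inverse_sensor_model(r * 0.25, measured_range=2.0) for r in range(12)]
```

An adaptive model, as proposed above, would replace the fixed band around the return with a shape estimated from the measurement statistics rather than a constant `alpha`.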
18

Navigation visuelle de robots mobile dans un environnement d'intérieur. / Visual navigation of mobile robots in indoor environments.

Ghazouani, Haythem 12 December 2012 (has links)
This work concerns the visual functionalities to be embedded in a mobile robot for navigation purposes. More specifically, it relates to methods of dense stereoscopic perception, occupancy-grid environment modeling, and visual object tracking for the autonomous navigation of mobile robots in indoor environments. We consider it important for visual perception methods to be both robust and fast. In previous work, global stereo matching methods are known for their robustness but are rarely suitable for real-time applications, while local methods are better suited to real time but lack precision. This work therefore seeks a compromise between robustness and real-time performance by proposing a semi-local method based on possibility distributions built around a fuzzy formalization of the stereoscopic constraints. We also consider it important for a mobile robot to model its environment well. To fit the model to reality, uncertainty and inaccuracy must be taken into account. This work presents an occupancy-grid environment model based on the inaccuracy of the stereoscopic sensor; the model update relies on credibility values defined for the measurements taken. Finally, perception and environment modeling are not goals in themselves but tools that allow the robot to perform high-level tasks. This work deals with the visual tracking of a moving object as such a high-level task.
19

Local model predictive control for navigation of a wheeled mobile robot using monocular information

Pacheco Valls, Lluís 30 November 2009 (has links)
This thesis draws inspiration from natural agents for the dynamic navigation planning of a two-wheeled differential-drive mobile robot. The perception data are integrated into a local occupancy grid of the robot's surroundings, in which a planar floor model is assumed. Path planning considers the desired local configuration of the robot as well as the most significant vertices of the nearby obstacles. Trajectory tracking is implemented using LMPC (local model predictive control) techniques with prediction horizons of less than one second. Numerous experiments validate the proposed methodology.
20

3D Perception of Outdoor and Dynamic Environment using Laser Scanner / Perception 3D de l'environnement extérieur et dynamique utilisant Laser Scanner

Azim, Asma 17 December 2013 (has links)
With the aim of making driving safer and more convenient, researchers have for decades tried to develop intelligent systems for modern vehicles. Such systems can either drive automatically or monitor a human driver and assist him in navigation, warning of a developing dangerous situation. Contrary to human drivers, these systems are not constrained by many physical and psychological limitations and therefore prove more robust in extreme conditions. A key component of an intelligent vehicle system is reliable perception of the environment. Laser range finders are popular sensors widely used in this context. Classical 2D laser scanners have limitations that are often compensated by adding complementary sensors, including cameras and radars. The recent advent of new sensors such as 3D laser scanners, which perceive the environment at a high spatial resolution, has proven an interesting addition to the field. Although there are well-known methods for perception using 2D laser scanners, approaches using a 3D range scanner are relatively rare in the literature. Most of those that exist either address the problem partially or augment the system with many other sensors; surprisingly, many reduce the dimensionality of the problem by projecting 3D data to 2D and using the well-established 2D perception methods. In contrast, this work addresses the problem of vehicle perception using a single 3D laser scanner. The first contribution of this research is the extension of a generic 3D mapping framework, based on an optimized occupancy grid representation, to solve the simultaneous localization and mapping (SLAM) problem. Using the 3D occupancy grid, we introduce a variance-based elevation map for segmenting the range measurements corresponding to the ground. To correct the vehicle location from odometry, we use a grid-based incremental scan matching method. The resulting SLAM framework forms the basis for the remaining contributions, which constitute the major achievement of this work. After obtaining a good vehicle localization and a reliable map with ground segmentation, we focus on the detection and tracking of moving objects (DATMO). The second contribution of this thesis is a method for discriminating between dynamic objects and the static environment. The presented approach uses motion-based detection and density-based clustering to segment moving objects from the 3D occupancy grid. It does not use object-specific models and thus can detect arbitrary traffic participants. The third contribution is an innovative method for layered classification of the detected objects based on a supervised learning technique, which makes it easier to estimate their positions over time. The final contribution is a method for tracking the detected objects using the Viterbi algorithm to associate new observations with the existing objects in the environment. The proposed framework is verified on datasets acquired from a laser scanner mounted on top of a vehicle moving in different environments, including urban, highway, and pedestrian-zone scenarios. The promising results show the applicability of the proposed system for simultaneous localization and mapping with detection, classification, and tracking of moving objects in dynamic outdoor environments using a single 3D laser scanner.
