441

Range-based Wireless Sensor Network Localization for Planetary Rovers

Svensson, August January 2020 (has links)
Obstacles faced in planetary surface exploration require innovation in many areas, primarily that of robotics. To be able to study interesting areas that are by current means hard to reach, such as steep slopes, ravines, caves and lava tubes, the surface vehicles of today need to be modified or augmented. One augmentation with such a goal is PHALANX (Projectile Hordes for Advanced Long-term and Networked eXploration), a prototype system being developed at the NASA Ames Research Center. PHALANX uses remote deployment of expendable sensor nodes from a lander or rover vehicle. This enables in-situ measurements in hard-to-reach areas with reduced risk to the rover. The deployed sensor nodes are equipped with capabilities to transmit data wirelessly back to the rover and to form a network with the rover and other nodes.

Knowledge of the location of deployed sensor nodes and the momentary location of the rover is greatly desired. PHALANX can be of aid in this aspect as well. With the addition of inter-node and rover-to-node range measurements, a range-based network SLAM (Simultaneous Localization and Mapping) system can be implemented for the rover to use while it is driving within the network. The resulting SLAM system in PHALANX shares characteristics with others in the SLAM literature, but with some additions that make it unique. One crucial addition is that the rover itself deploys the nodes. Another is the ability for the rover to more accurately localize deployed nodes by external sensing, such as by utilizing the rover cameras.

In this thesis, the SLAM of PHALANX is studied by means of computer simulation. The simulation software is created using real mission values and values resulting from testing of the PHALANX prototype hardware. An overview of issues that a SLAM solution has to face as present in the literature is given in the context of the PHALANX SLAM system, such as poor connectivity and highly collinear placements of nodes. The system performance and sensitivities are then investigated for the described issues, using predicted typical PHALANX application scenarios.

The results are presented as errors in the estimated positions of the sensor nodes and in the estimated position of the rover. I find that there are relative sensitivities to the investigated parameters, but that in general SLAM in PHALANX is fairly insensitive. This gives mission planners and operators greater flexibility to prioritize other aspects important to the mission at hand. The simulation software developed in this thesis work also has the potential to be expanded on as a tool for mission planners to prepare for specific mission scenarios using PHALANX.
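The core of range-based node localization as described above can be posed as a least-squares problem over range residuals. The following is a minimal sketch of that idea, not the PHALANX implementation: it estimates one node's 2-D position from ranges to known anchors by gradient descent; the function name and parameters are illustrative assumptions.

```python
import math

def trilaterate(anchors, ranges, guess=(0.0, 0.0), iters=500, step=0.1):
    """Estimate a 2-D position from ranges to known anchors by
    gradient descent on the sum of squared range residuals."""
    x, y = guess
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), r in zip(anchors, ranges):
            d = math.hypot(x - ax, y - ay) or 1e-9  # guard against d = 0
            e = d - r                   # range residual
            gx += e * (x - ax) / d      # gradient of 0.5 * e**2
            gy += e * (y - ay) / d
        x -= step * gx
        y -= step * gy
    return x, y
```

With three non-collinear anchors and consistent ranges, the estimate converges to the true position; highly collinear anchor placements, as noted in the abstract, make this problem ill-conditioned.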
442

Robust Localization of Research Concept Vehicle (RCV) in Large Scale Environment

Raghuram, Anchit January 2018 (has links)
Autonomous vehicles in the recent era are robust vehicles that can drive themselves without human involvement. Using sensors and Simultaneous Localization and Mapping (SLAM) algorithms, a vehicle builds an understanding of its environment while driving; laser scanners (Velodyne), an IMU and GPS collect the data that form the foundation for locating the vehicle in an unknown environment. Various methods for increasing the efficiency of registration and optimization have been studied and tested over the years, but implementations based on the NDT (Normal Distributions Transform) library for mapping and localization have been found to be faster and more accurate than conventional methods. The objective of this thesis is to ascertain a robust method of pose estimation for the vehicle by combining data from the laser sensor with data from the IMU and the GPS receiver on the vehicle. The initial position estimate is obtained by generating a 3D map using the Normal Distributions Transform, then estimating the position with the NDT localization algorithm and the GPS data collected by driving the vehicle in an external environment. The results presented explain and verify the stated hypothesis and compare the implemented localization algorithm with the GPS receiver data available on the vehicle while driving.
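NDT registration, as used in the entry above, scores scan points against per-cell Gaussians fitted to the map. A minimal sketch of that scoring step, assuming diagonal covariances and nearest-mean cell lookup for simplicity (not the NDT library's actual implementation):

```python
import math

def ndt_cell_score(point, mean, cov_diag):
    """Score of a scan point under one NDT cell's Gaussian
    (diagonal covariance for simplicity)."""
    s = 0.0
    for p, m, v in zip(point, mean, cov_diag):
        s += (p - m) ** 2 / v
    return math.exp(-0.5 * s)

def ndt_score(points, cells):
    """Total NDT registration score: each point is scored against
    the Gaussian of its cell (here: the cell with the nearest mean).
    Pose search maximizes this score over candidate transforms."""
    total = 0.0
    for pt in points:
        mean, cov = min(cells,
                        key=lambda c: sum((a - b) ** 2
                                          for a, b in zip(pt, c[0])))
        total += ndt_cell_score(pt, mean, cov)
    return total
```

Localization then amounts to searching over candidate poses for the transform that maximizes this score.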
443

Redefining Visual SLAM for Construction Robots: Addressing Dynamic Features and Semantic Composition for Robust Performance

Liu Yang (16642902) 07 August 2023 (has links)
This research is motivated by the potential of autonomous mobile robots (AMRs) to enhance safety, productivity, and efficiency in the construction industry. The dynamic and complex nature of construction sites presents significant challenges to AMRs, particularly in localization and mapping: the process by which an AMR determines its own position in the environment while creating a map of the surrounding area. These capabilities are crucial for autonomous navigation and task execution but are inadequately addressed by existing solutions, which rely primarily on visual Simultaneous Localization and Mapping (SLAM) methods. These methods are often ineffective on construction sites because of their underlying assumption of a static environment, leading to unreliable outcomes. There is therefore a pressing need to enhance the applicability of AMRs in construction by addressing the limitations of current localization and mapping methods in dynamic environments, thereby empowering AMRs to function more effectively and fully realize their potential in the construction industry.

The overarching goal of this research is to develop a novel visual SLAM framework that can detect and segment diverse dynamic objects in construction environments, effectively interpret the semantic structure of the environment, and efficiently integrate these capabilities into a unified system that provides an improved SLAM solution for dynamic, complex, and unstructured environments. The rationale is that such a SLAM system could effectively address the dynamic nature of construction sites, significantly improving the efficiency and accuracy of robot localization and mapping in the construction working environment.

Toward this goal, three specific objectives have been formulated. The first objective is to develop a novel methodology for comprehensive dynamic object segmentation that can support visual SLAM within highly variable construction environments. The method integrates class-agnostic objectness masks and motion cues into video object segmentation, significantly improving the identification and segmentation of dynamic objects on construction sites. These dynamic objects present a significant challenge to the reliable operation of AMRs; by accurately identifying and segmenting them, the accuracy and reliability of SLAM-based localization is expected to improve greatly. The approach consists of a four-stage method for dynamic object segmentation: objectness mask generation, motion saliency estimation, fusion of objectness masks and motion saliency, and bi-directional propagation of the fused mask. Experimental results show that the proposed method improves dynamic object segmentation by up to 6.4% over state-of-the-art methods and achieves the lowest localization errors when integrated into a visual SLAM system on a public dataset.

The second objective focuses on developing a flexible, cost-effective method for semantic segmentation of construction images of structural elements. This method harnesses image-level labels and Building Information Modeling (BIM) object data to replace traditional, labor-intensive pixel-level annotations. The hypothesis is that by fusing image-level labels with BIM-derived object information, segmentation quality competitive with pixel-level annotations can be achieved while drastically reducing the associated cost and labor. The research method involves initializing object locations, extracting object information, and incorporating location priors. Extensive experiments indicate that the proposed method, using simple image-level labels, achieves results competitive with full pixel-level supervision while completely removing the need for laborious and expensive pixel-level annotations when adapting networks to unseen environments.

The third objective aims at an efficient integration of dynamic object segmentation and semantic interpretation within a unified visual SLAM framework. It is proposed that more efficient dynamic object segmentation with adaptively selected frames, combined with a semantic floorplan from an as-built BIM, would speed up the removal of dynamic objects and enhance localization while reducing the frequency of scene segmentation. The technical approach rests on two major modifications to the classic visual SLAM pipeline: adaptive dynamic object segmentation and semantic-based feature reliability updates. The resulting framework seamlessly integrates dynamic object segmentation and semantic interpretation into visual SLAM. Experiments demonstrate that the proposed framework achieves competitive performance over the testing scenarios, with processing time almost halved compared to counterpart dynamic SLAM algorithms.

In conclusion, this research contributes significantly to the adoption of AMRs in construction by tailoring a visual SLAM framework specifically for dynamic construction sites. Through the integration of dynamic object segmentation and semantic interpretation, it enhances localization accuracy, mapping efficiency, and overall SLAM performance. With broader applications of visual SLAM such as site inspection in dangerous zones, progress monitoring, and material transportation, the study promises to advance AMR capabilities, marking a significant step toward a new era in construction automation.
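The fusion stage described in the first objective, combining objectness masks with motion saliency, can be sketched pixelwise. This is an illustrative simplification (the thesis's actual fusion and bi-directional propagation are more involved); the function name and threshold are assumptions:

```python
def fuse_masks(objectness, motion_saliency, tau=0.5):
    """Label a pixel as dynamic when it belongs to an object-like
    region AND its motion saliency exceeds a threshold tau.
    objectness: 2-D 0/1 mask; motion_saliency: 2-D floats in [0, 1]."""
    return [[bool(o) and m > tau
             for o, m in zip(orow, mrow)]
            for orow, mrow in zip(objectness, motion_saliency)]
```

Pixels labeled dynamic by such a fused mask are then excluded from feature matching, which is what protects the SLAM front end from moving workers and equipment.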
444

Rolling shutter in feature-based Visual-SLAM : Robustness through rectification in a wearable and monocular context

Norée Palm, Caspar January 2023 (has links)
This thesis analyzes the impact of, and implements compensation for, rolling-shutter distortions in the state-of-the-art feature-based visual SLAM system ORB-SLAM3. The compensation method rectifies the detected features, and the evaluation was conducted on the "Rolling-Shutter Visual-Inertial Odometry Dataset" from TUM, which comprises ten sequences recorded in a single room with side-by-side synchronized global-shutter and rolling-shutter cameras.  Without the implemented rectification algorithms, the performance of ORB-SLAM3 on rolling-shutter data decreased substantially in both accuracy and robustness. The global-shutter camera achieved centimeter or even sub-centimeter accuracy, while the rolling-shutter camera's error could reach the decimeter range in the more challenging sequences. Moreover, some individual executions with the rolling-shutter camera could not track the trajectory effectively, indicating degraded robustness. The effects of rolling shutter in inertial ORB-SLAM3 were even more pronounced, with higher trajectory errors and outright tracking failure in some sequences. This was the case even though using inertial measurements with the global-shutter camera yielded better accuracy and robustness than the non-inertial case.  The rectification algorithms implemented in this thesis yielded significant accuracy gains, up to a 7x relative improvement in the non-inertial case, returning trajectory errors to the centimeter scale from the decimeter scale for the more challenging sequences. For the inertial case, the rectification scheme was even more crucial: it improved trajectory accuracy beyond the non-inertial case for the less challenging sequences and made tracking possible for the more challenging ones.
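Under a constant-velocity assumption, rectifying a rolling-shutter feature amounts to shifting it to where it would have been imaged at a common reference row. A minimal sketch of that idea (an illustrative simplification, not the thesis's implementation; parameter names are assumptions):

```python
def rectify_feature(u, v, vel_uv, line_delay, v_ref=0.0):
    """Shift a feature detected at pixel (u, v) to where it would have
    been imaged at reference row v_ref, assuming constant image-plane
    velocity vel_uv (pixels/s) over the rolling-shutter readout.
    line_delay is the readout time per image row in seconds."""
    dt = (v - v_ref) * line_delay   # readout-time offset of this row
    du, dv = vel_uv
    return u - du * dt, v - dv * dt
```

Applying such a correction to all detected features before matching is what lets a global-shutter-style pipeline like ORB-SLAM3 operate on rolling-shutter images.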
445

Visual simultaneous localization and mapping in a noisy static environment

Makhubela, J. K. 03 1900 (has links)
M. Tech. (Department of Information and Communication Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology / Simultaneous Localization and Mapping (SLAM) has seen tremendous interest among the research community in recent years due to its ability to make a robot truly independent in navigation. Visual Simultaneous Localization and Mapping (VSLAM) is when an autonomous mobile robot is equipped with a vision sensor, such as a monocular, stereo-vision, omnidirectional or Red Green Blue Depth (RGB-D) camera, to localize and map an unknown environment. The purpose of this research is to address the problem of environmental noise, such as light intensity in a static environment, which has rendered VSLAM systems ineffective. In this study, we introduce a light-filtering algorithm into the VSLAM method to reduce the amount of noise and improve the robustness of the system in a static environment, together with the Extended Kalman Filter (EKF) algorithm for localization and mapping and the A* algorithm for navigation. Experiments are executed in simulation. Experimental results show detection of 60% of the landmarks or land-features within the simulation environment and a root-mean-square error (RMSE) of 0.13 m, which is minimal compared with other SLAM systems from the literature. The inclusion of the light-filtering algorithm has enabled the VSLAM system to navigate in an obscure environment.
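The EKF correction step at the core of such a localization-and-mapping pipeline can be sketched in scalar form. This is a generic textbook sketch, not the thesis's implementation, assuming a linearized scalar measurement model:

```python
def ekf_update(x, P, z, h, H, R):
    """Scalar EKF measurement update.
    x, P: prior state estimate and its variance
    z:    measurement; h: predicted measurement h(x)
    H:    measurement Jacobian; R: measurement-noise variance"""
    y = z - h           # innovation
    S = H * P * H + R   # innovation variance
    K = P * H / S       # Kalman gain
    x_new = x + K * y
    P_new = (1.0 - K * H) * P
    return x_new, P_new
```

In a VSLAM context, `z` would be a landmark observation from the (light-filtered) camera image, and the same update generalizes to vectors and matrices.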
446

Efficient Estimation for Small Multi-Rotor Air Vehicles Operating in Unknown, Indoor Environments

Macdonald, John Charles 07 December 2012 (has links) (PDF)
In this dissertation we present advances in developing an autonomous air vehicle capable of navigating through unknown, indoor environments. The problem imposes stringent limits on the computational power available onboard the vehicle, but the environment necessitates using 3D sensors such as stereo or RGB-D cameras whose data requires significant processing. We address the problem by proposing and developing key elements of a relative navigation scheme that moves as many processing tasks as possible out of the time-critical functions needed to maintain flight. We present in Chapter 2 analysis and results for an improved multirotor helicopter state estimator. The filter generates more accurate estimates by using an improved dynamic model for the vehicle and by properly accounting for the correlations that exist in the uncertainty during state propagation. As a result, the filter can rely more heavily on frequent and easy to process measurements from gyroscopes and accelerometers, making it more robust to error in the processing intensive information received from the exteroceptive sensors. In Chapter 3 we present BERT, a novel approach to map optimization. The goal of map optimization is to produce an accurate global map of the environment by refining the relative pose transformation estimates generated by the real-time navigation system. We develop BERT to jointly optimize the global poses and relative transformations. BERT exploits properties of independence and conditional independence to allow new information to efficiently flow through the network of transformations. We show that BERT achieves the same final solution as a leading iterative optimization algorithm. However, BERT delivers noticeably better intermediate results for the relative transformation estimates. The improved intermediate results, along with more readily available covariance estimates, make BERT especially applicable to our problem where computational resources are limited. 
We conclude in Chapter 4 with analysis and results that extend BERT beyond the simple example of Chapter 3. We identify important structure in the network of transformations and address challenges arising in more general map optimization problems. We demonstrate results from several variations of the algorithm and conclude the dissertation with a roadmap for future work.
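Map optimization as described above refines a network of relative pose transformations into globally consistent poses. As a minimal illustration of the underlying bookkeeping (this is plain transform chaining, not the BERT algorithm itself), relative 2-D transforms can be composed into global poses like this:

```python
import math

def compose(pose, delta):
    """Chain a relative transform (dx, dy, dth), expressed in the
    pose's own frame, onto a global 2-D pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def global_poses(start, deltas):
    """Accumulate odometry-style relative transforms into global poses."""
    poses = [start]
    for d in deltas:
        poses.append(compose(poses[-1], d))
    return poses
```

Optimization methods such as BERT then adjust these relative transformations jointly so that the chained global poses best satisfy all constraints, rather than simply accumulating them as done here.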
447

Globally Consistent Map Generation in GPS-Degraded Environments

Nyholm, Paul William 01 May 2015 (has links) (PDF)
Heavy reliance on GPS is preventing unmanned air systems (UAS) from being fully integrated for many of their numerous applications. In the absence of GPS, GPS-reliant UAS have difficulty estimating vehicle states, resulting in vehicle failures. Additionally, naively using erroneous measurements when GPS is available can result in significant state inaccuracies. We present a simultaneous localization and mapping (SLAM) solution to GPS-degraded navigation that allows vehicle state estimation and control independent of global information. Optionally, a global map can be constructed from odometry measurements and updated with GPS measurements while maintaining robustness against outliers.

We detail a relative navigation SLAM framework that distinguishes a relative front end and a global back end. It decouples the front-end flight-critical processes, such as state estimation and control, from back-end global map construction and optimization. Components of the front end function relative to a locally-established coordinate frame, completely independent of global state information. The approach maintains state estimation continuity in the absence of GPS measurements or when there are jumps in the global state, such as after map optimization. A global graph-based SLAM back end complements the relative front end by constructing and refining a global map using odometry measurements provided by the front end.

Unlike typical approaches that use GPS in the front end to estimate global states, our back end uses a virtual zero and virtual constraint to allow intermittent GPS measurements to be applied directly to the map. Methods are presented to reduce the scale of GPS-induced costs and refine the map's initial orientation prior to optimization, both of which facilitate convergence to a globally consistent map. The approach uses a state-of-the-art robust least-squares optimization algorithm called dynamic covariance scaling (DCS) to identify and reject outlying GPS measurements and loop closures. We demonstrate our system's ability to generate globally consistent and aligned maps in GPS-degraded environments through simulation, hand-carried, and flight-test results.
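Dynamic covariance scaling down-weights constraints whose residuals are implausibly large. A minimal sketch of the per-constraint scaling factor as commonly stated in the robust-SLAM literature (s = min(1, 2Φ / (Φ + χ²)); the parameter name `phi` is the usual free parameter, not something specific to this dissertation):

```python
def dcs_scale(chi2, phi=1.0):
    """Dynamic covariance scaling factor for one constraint.
    chi2 is the constraint's squared Mahalanobis error. Inliers
    (small chi2) keep full weight s = 1; gross outliers are
    down-weighted smoothly instead of being hard-rejected."""
    return min(1.0, 2.0 * phi / (phi + chi2))
```

In a least-squares back end, each constraint's information matrix is multiplied by s², so outlying GPS measurements and false loop closures contribute little to the optimized map.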
448

Analysis and Definition of the BAT-ME (BATonomous Moon cave Explorer) Mission

Muresan, Alexandru Camil January 2019 (has links)
Humanity has always wanted to explore the world we live in and answer different questions about our universe. After the International Space Station ends its service, one possible next step could be a Moon outpost: a convenient location for research, astronaut training and technological development that would enable long-duration space missions. This location could be inside one of the presumed lava tubes thought to be present under the surface, but such a tube would first need to be inspected, possibly by a machine capable of capturing and relaying a map to a team on Earth.

This report summarizes past and future Moon base missions, considering feasible outpost scenarios from space companies and agencies together with their projected crewed budgets. Potential mission profiles, objectives, requirements and constraints of the BATonomous Moon cave Explorer (BAT-ME) mission are discussed and defined. Vehicle and mission concepts are addressed, comparing and presenting possible propulsion and locomotion approaches inside the lava tube.

The Inkonova "Batonomous™" system is capable of providing Simultaneous Localization and Mapping (SLAM) and relaying the created maps, and it can easily be integrated on any kind of vehicle that would function in a real-life scenario. Although the system is not fully developed, it is assessed from a technical perspective, and appropriate changes for a viable transition of the system to the space and lunar environment are devised. The transition of the system from the Batonomous™ state to the state required by BAT-ME is presented from the requirement, hardware, software, electrical and operational points of view.

The mission is divided into operational phases with key goals in mind. Two different vehicles are presented and designed at a high engineering level. A risk analysis and management system is devised to understand the possible impact of failures of different parts on the mission outcome.
449

Calibrated uncertainty estimation for SLAM

Bansal, Dishank 04 1900 (has links)
The focus of this Master's thesis is the analysis of uncertainty calibration for Simultaneous Localization and Mapping (SLAM) using neural-network-based measurement models. SLAM is a fundamental problem in robotics and computer vision, with numerous applications ranging from self-driving cars to augmented reality. At its core, SLAM involves estimating the pose (i.e., position and orientation) of a robot or camera as it moves through an unknown environment while simultaneously constructing a map of the surroundings. Visual SLAM, which uses images as input, is a commonly used SLAM framework. However, traditional visual SLAM methods rely on handcrafted features and can be vulnerable to challenges such as poor lighting and occlusion. Deep learning has emerged as a more scalable and robust approach, with convolutional neural networks (CNNs) becoming the de facto perception system in robotics.

To integrate CNN-based methods with SLAM systems, it is necessary to estimate the uncertainty or noise in the perception measurements. Bayesian deep learning provides various methods for estimating uncertainty in neural networks, including ensembles, distributions over network parameters, and adding variance heads for direct uncertainty prediction. However, it is also essential to ensure that these uncertainty estimates are well calibrated, i.e., that they accurately reflect the error in the prediction.

In this Master's thesis, we address this challenge by developing a SLAM system that incorporates a neural network as the measurement model together with calibrated uncertainty estimates. We show that this system performs better than approaches that use the traditional uncertainty-estimation method, in which uncertainty estimates are simply treated as hyperparameters tuned manually. Our results demonstrate the importance of accurately accounting for uncertainty in the SLAM problem, particularly when using a neural network as the measurement model, in order to achieve reliable and robust localization and mapping.
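A basic way to check whether Gaussian uncertainty estimates are calibrated, in the sense used above, is to compare the empirical coverage of predictive intervals with the nominal coverage. A minimal sketch of that check (a generic diagnostic, not this thesis's calibration procedure):

```python
def interval_coverage(errors, sigmas, k=1.0):
    """Empirical coverage of +/- k*sigma predictive intervals.
    errors: prediction errors; sigmas: predicted standard deviations.
    For well-calibrated Gaussian uncertainties, roughly 68% of errors
    should fall within one predicted standard deviation (k = 1)."""
    inside = sum(abs(e) <= k * s for e, s in zip(errors, sigmas))
    return inside / len(errors)
```

Coverage well below the nominal value indicates overconfident uncertainty estimates (intervals too narrow); coverage well above it indicates underconfidence.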
450

GPS/Optical/Inertial Integration for 3D Navigation and Mapping Using Multi-copter Platforms

Dill, Evan T. 24 August 2015 (has links)
No description available.
