1

Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm

Wisely Babu, Benzun 30 July 2018 (has links)
In this dissertation, we have focused on conflicts that occur due to disagreeing motions in multi-modal localization algorithms. In spite of the recent achievements in robust localization by means of multi-sensor fusion, these algorithms are not applicable to all environments. This is primarily attributed to the following fundamental assumptions: (i) the environment is predominantly stationary, (ii) only ego-motion of the sensor platform exists, and (iii) multiple sensors are always in agreement with each other regarding the observed motion. Recently, studies have shown how to relax the static-environment assumption using outlier rejection techniques and dynamic object segmentation. Additionally, to handle non-ego-motion, approaches that extend the localization algorithm to multi-body tracking have been studied. However, no attention has been given to the conditions where multiple sensors contradict each other with regard to the motions observed. Vision-based localization has become an attractive approach for both indoor and outdoor applications due to the large information bandwidth provided by images and the reduced cost of cameras. In order to improve robustness and overcome the limitations of vision, an Inertial Measurement Unit (IMU) may be used. Even though visual-inertial localization has better accuracy and improved robustness due to the complementary nature of the camera and IMU sensors, it is affected by disagreements in motion observations. We term such dynamic situations environments with motion conflict, because they are caused when multiple different but self-consistent motions are observed by different sensors. Tightly coupled visual-inertial fusion approaches that disregard such challenging situations exhibit drift that can lead to catastrophic errors. We have provided a probabilistic model for motion conflict. Additionally, a novel algorithm to detect and resolve motion conflicts is presented. Our method to detect motion conflicts is based on per-frame positional estimate discrepancies and per-landmark reprojection errors. Motion conflicts were resolved by eliminating inconsistent IMU and landmark measurements. Finally, a Motion Conflict-aware Visual-Inertial Odometry (MC-VIO) algorithm that combines both detection and resolution of motion conflict was implemented. Both quantitative and qualitative evaluations of MC-VIO were performed on visually and inertially challenging datasets. Experimental results indicated that the MC-VIO algorithm reduced the absolute trajectory error by 70% and the relative pose error by 34% in scenes with motion conflict, in comparison to the reference VIO algorithm. Motion conflict detection and resolution enables the application of visual-inertial localization algorithms to real dynamic environments. This paves the way for articulated object tracking in robotics. It may also find numerous applications in active long-term augmented reality.
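The detection-and-resolution step described above can be illustrated with a minimal sketch; the function names, thresholds, and the simple drop-the-measurement policy below are illustrative assumptions, not the dissertation's actual implementation:

```python
import numpy as np

def detect_motion_conflict(p_visual, p_inertial, reproj_errors,
                           pos_thresh=0.05, reproj_thresh=2.0):
    """Flag a frame as conflicting when camera-only and IMU-only position
    estimates disagree, and collect landmarks with large reprojection error.

    p_visual, p_inertial : (3,) position estimates for the current frame [m]
    reproj_errors        : (N,) per-landmark reprojection errors [px]
    """
    frame_conflict = np.linalg.norm(p_visual - p_inertial) > pos_thresh
    conflicting_landmarks = np.flatnonzero(reproj_errors > reproj_thresh)
    return frame_conflict, conflicting_landmarks

def resolve_motion_conflict(imu_factors, landmark_factors,
                            frame_conflict, conflicting_landmarks):
    """Resolve a conflict by dropping the measurements that disagree with the
    dominant ego-motion hypothesis before the next optimization step."""
    if frame_conflict:
        imu_factors = []  # discard the IMU terms for this frame
    keep = set(range(len(landmark_factors))) - set(conflicting_landmarks.tolist())
    landmark_factors = [landmark_factors[i] for i in sorted(keep)]
    return imu_factors, landmark_factors
```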
2

Ovládání robotického ramene s využitím rozšířené reality a tabletu / Control of Robot Manipulator Using Augmented Reality and Tablet

Pristaš, Martin January 2018 (has links)
The aim of this thesis is to create an experimental application for manipulating virtual objects in augmented reality, using a tablet to control a robotic arm. Several methods for manipulating virtual objects are implemented, covering translation, rotation, and scale changes. These methods are tested with several users and compared in terms of usability. The application can send changes in a virtual object's position to the PR2 robot arm and simulate the manipulation of virtual objects in augmented reality.
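As a hedged illustration of how translation, rotation, and scale changes from tablet gestures might be composed into a single object transform (the function and its parameterization are assumptions for illustration, not taken from the thesis):

```python
import numpy as np

def trs_matrix(translation, yaw_rad, scale):
    """Compose a 4x4 homogeneous transform: translate * rotate-about-z * uniform scale.
    One simple way a tablet gesture (drag, two-finger twist, pinch) could update
    a virtual object's pose before it is sent on to the robot arm."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4); T[:3, 3] = translation
    R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]
    S = np.diag([scale, scale, scale, 1.0])
    return T @ R @ S
```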
3

Robustness of State-of-the-Art Visual Odometry and SLAM Systems / Robusthet hos moderna Visual Odometry och SLAM system

Mannila, Cassandra January 2023 (has links)
Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are hot topics in Computer Vision today. These technologies have various applications, including robotics, autonomous driving, and virtual reality. They may also be valuable in studying human behavior and navigation through head-mounted visual systems. A potential complication for SLAM and VIO systems is visual degradation such as motion blur. This thesis evaluates the robustness to motion blur of two open-source state-of-the-art VIO and SLAM systems, namely Delayed Marginalization Visual-Inertial Odometry (DM-VIO) and ORB-SLAM3. No real-world benchmark datasets with varying amounts of motion blur exist today. Instead, a semi-synthetic dataset was created by applying a dynamic trajectory-based motion blurring technique to an existing dataset, TUM VI. The systems were evaluated in two sensor configurations, Monocular and Monocular-Inertial, using the Root Mean Square (RMS) of the Absolute Trajectory Error (ATE). Based on the findings, DM-VIO is highly influenced by the visual input, and its performance decreases substantially as motion blur increases, regardless of the sensor configuration. In the Monocular setup, performance declines significantly, going from centimeter to decimeter precision; the Monocular-Inertial configuration improves it slightly. ORB-SLAM3 is unaffected by motion blur, performing at centimeter precision, and there is no significant difference between the sensor configurations. Nevertheless, a stochastic behavior can be noted in ORB-SLAM3 that can cause some sequences to deviate from this. In total, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this thesis. The code used in this thesis is available on GitHub at https://github.com/cmannila, along with forked repositories of DM-VIO and ORB-SLAM3. / Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are of great interest in Computer Vision. These systems have a variety of applications such as robotics, self-driving cars, and Virtual Reality (VR). A further potential application is to integrate SLAM/VIO into head-mounted systems, such as glasses, in order to study the wearer's behavior and navigation. A complication for SLAM and VIO could be visual degradation in the visual system, such as motion blur. This thesis attempts to evaluate the robustness to motion blur of two available state-of-the-art systems, DM-VIO (Delayed Marginalization Visual-Inertial Odometry) and ORB-SLAM3. Today there are no available datasets that specifically contain varying amounts of motion blur. Therefore, a semi-synthetic dataset was created based on an existing one, TUM VI. This was done with a dynamic rendering of motion blur along a known trajectory obtained from the dataset; with this technique, different amounts of exposure time could be simulated. DM-VIO and ORB-SLAM3 were evaluated with two sensor configurations, Monocular (one camera) and Monocular-Inertial (one camera with an Inertial Measurement Unit). The objective measure used to compare the systems was the Root Mean Square of the Absolute Trajectory Error in meters. The results of this work show that DM-VIO is highly dependent on the visual signal, and performance decreases considerably as motion blur increases, regardless of sensor configuration. When only one camera (Monocular) is used, performance drops from centimeter to decimeter precision. ORB-SLAM3 is not affected by motion blur and performs with centimeter precision for all sequences. Nor can any significant difference between the sensor configurations be demonstrated. Despite this, a stochastic behavior in ORB-SLAM3 can be noted, which may have caused some sequences to deviate. Overall, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this work. The code used in this work is available on GitHub at https://github.com/cmannila together with forked repositories of DM-VIO and ORB-SLAM3.
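The evaluation metric used above is standard; below is a minimal sketch of how RMS ATE could be computed over time-associated positions, assuming the trajectories have already been aligned (a full evaluation would first apply an SE(3)/Sim(3) alignment step):

```python
import numpy as np

def ate_rmse(gt_positions, est_positions):
    """Root Mean Square of the Absolute Trajectory Error over associated
    position pairs (arrays of shape (T, 3)), assuming both trajectories are
    expressed in the same frame after alignment."""
    errors = np.linalg.norm(gt_positions - est_positions, axis=1)
    return np.sqrt(np.mean(errors ** 2))
```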
4

Visual-Inertial Odometry for Autonomous Ground Vehicles

Burusa, Akshay Kumar January 2017 (has links)
Monocular cameras are prominently used for estimating the motion of Unmanned Aerial Vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where the Global Navigation Satellite System (GNSS) is unreliable, such as open-pit mining environments. However, most monocular camera-based approaches suffer from ambiguous scale information. Ground vehicles pose a greater difficulty due to high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible by the fusion of visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of the scale is sensitive to several factors, including the initialization error. An accurate estimation of scale allows an accurate estimation of pose. This facilitates the localization of ground vehicles in the absence of GNSS, providing a reliable fall-back option. / Monocular cameras are often used for motion estimation of unmanned aerial vehicles. With the increased interest in autonomous vehicles, the use of monocular cameras in ground vehicles has also increased. This is particularly advantageous in situations where satellite navigation (Global Navigation Satellite System, GNSS) is unreliable, for example in open-pit mines. Most systems that use monocular cameras have problems estimating scale. This estimation becomes even harder because of a vehicle's higher speeds and faster movements. The purpose of this thesis is to estimate the scale from the image data of a monocular camera by complementing it with data from inertial sensors. It is shown that simultaneous estimation of position and scale for a vehicle is possible through the fusion of image and inertial sensor data using an Extended Kalman Filter (EKF). The convergence of the estimate depends on several factors, including initialization errors. An accurate estimate of the scale also enables an accurate estimate of the position. This enables the localization of vehicles in the absence of GNSS and thus provides increased redundancy.
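A toy illustration of the scale-in-the-state idea follows (a 1-D example with hypothetical names; the thesis's actual EKF state and measurement models are richer than this sketch):

```python
import numpy as np

def ekf_scale_update(x, P, z_vo, r_vo):
    """One EKF measurement update for a toy state x = [p, lam]:
    p   -- metric position, propagated from IMU data in the predict step
    lam -- unknown monocular scale factor
    The scale-ambiguous visual odometry measurement is modeled as
    z_vo = p / lam + noise, with measurement variance r_vo."""
    p, lam = x
    h = p / lam                                  # predicted measurement
    H = np.array([[1.0 / lam, -p / lam ** 2]])   # Jacobian of h w.r.t. [p, lam]
    S = H @ P @ H.T + r_vo                       # innovation covariance (1x1)
    K = P @ H.T / S                              # Kalman gain (2x1)
    x = x + (K * (z_vo - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```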
5

From robotics to healthcare: toward clinically-relevant 3-D human pose tracking for lower limb mobility assessments

Mitjans i Coma, Marc 11 September 2024 (has links)
With an increase in age comes an increase in the risk of frailty and mobility decline, which can lead to dangerous falls and can even be a cause of mortality. Despite these serious consequences, healthcare systems remain reactive, highlighting the need for technologies to predict functional mobility decline. In this thesis, we present an end-to-end autonomous functional mobility assessment system that seeks to bridge the gap between robotics research and clinical rehabilitation practices. Unlike many fully integrated black-box models, our approach emphasizes the need for a system that is both reliable as well as transparent to facilitate its endorsement and adoption by healthcare professionals and patients. Our proposed system is characterized by the sensor fusion of multimodal data using an optimization framework known as factor graphs. This method, widely used in robotics, enables us to obtain visually interpretable 3-D estimations of the human body in recorded footage. These representations are then used to implement autonomous versions of standardized assessments employed by physical therapists for measuring lower-limb mobility, using a combination of custom neural networks and explainable models. To improve the accuracy of the estimations, we investigate the application of the Koopman operator framework to learn linear representations of human dynamics: We leverage these outputs as prior information to enhance the temporal consistency across entire movement sequences. Furthermore, inspired by the inherent stability of natural human movement, we propose ways to impose stability constraints in the dynamics during the training of linear Koopman models. In this light, we propose a sufficient condition for the stability of discrete-time linear systems that can be represented as a set of convex constraints. Additionally, we demonstrate how it can be seamlessly integrated into larger-scale gradient descent optimization methods. Lastly, we report the performance of our human pose detection and autonomous mobility assessment systems by evaluating them on outcome mobility datasets collected from controlled laboratory settings and unconstrained real-life home environments. While we acknowledge that further research is still needed, the study results indicate that the system can demonstrate promising performance in assessing mobility in home environments. These findings underscore the significant potential of this and similar technologies to revolutionize physical therapy practices.
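One well-known way to obtain a convex, sufficient stability condition for a discrete-time linear model and to enforce it inside gradient-based training is to bound the spectral norm of the transition matrix; the sketch below shows that idea. It illustrates the general approach only and is not necessarily the specific condition derived in the thesis:

```python
import numpy as np

def project_to_spectral_ball(A, rho=0.999):
    """Project A onto {A : sigma_max(A) <= rho}. Because sigma_max(A) < 1
    implies the spectral radius is < 1, this is a conservative but convex
    sufficient condition for discrete-time stability; the projection can be
    applied after each gradient step when fitting a linear (e.g., Koopman) model."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.minimum(s, rho)) @ Vt
```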
6

Dynamics-Enabled Localization of UAVs using Unscented Kalman Filter

Omotuyi, Oyindamola January 2021 (has links)
No description available.
7

Benchmarking Visual-Inertial Odometry Filter-based Methods for Vehicles

Zahid, Muhammad January 2021 (has links)
Autonomous navigation has the opportunity to make roads safer and to help perform search and rescue missions by reducing human error. Odometry methods are essential for autonomous navigation because they estimate how the robot moves based on the available sensors. This thesis aims to compare and evaluate the Cubature Kalman Filter (CKF) based approach for visual-inertial odometry (VIO) against traditional Extended Kalman Filter (EKF) based methods, on criteria such as the accuracy of the results. VIO methods use camera and IMU sensors for the predictions. The Multi-State Constraint Kalman Filter (MSCKF) was used as the base VIO approach to evaluate the underlying filter, EKF versus CKF, while keeping the surrounding conditions, such as the visual tracking pipeline, IMU model, and measurement model, constant. The evaluation metrics of absolute trajectory error (ATE) and relative error (RE) were used after tuning the filters on the EuRoC and KAIST datasets. It is shown that, based on the existing implementation, the filters have no statistically significant difference in performance when predicting motion estimates, despite the fact that the absolute trajectory error of position is lower for the EKF estimate. It is further shown that as the length of the trajectory increases, the estimation error of both filters grows without bound. Within the visual-inertial framework of the MSCKF, the CKF, which does not linearize the system, works equally as well as the well-established EKF and has the potential to perform better with more accurate nonlinear system and measurement models. / Autonomous navigation has the potential to make roads safer and to help carry out rescue missions by reducing human error. Odometry methods are important for enabling autonomous navigation because they estimate how the robot moves based on the available sensors. This thesis aims to evaluate the Cubature Kalman Filter (CKF) for visual-inertial odometry (VIO) and compare it with the traditional Extended Kalman Filter (EKF) with respect to, among other things, accuracy. VIO methods use a camera and an IMU sensor for the estimates. The Multi-State Constraint Kalman Filter (MSCKF) was used as the base VIO method to evaluate the filter algorithms EKF and CKF, while the VIO-specific parts, such as the IMU model and measurement model, could remain the same. The evaluation was based on absolute trajectory error (ATE) and relative error (RE) on the EuRoC and KAIST datasets. It is shown that, based on the existing implementation, the filters show no statistically significant difference in performance when predicting the motion, even though the absolute trajectory error of the position is lower for the EKF estimate. It is further shown that as the length of the trajectory increases, the estimation error of both filters grows without bound. Within the visual-inertial framework of the MSCKF, the CKF filter, which does not linearize the system, works just as well as the well-established EKF filter and has the potential to perform better with more accurate nonlinear system and measurement models.
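The key difference between the two filters is how they propagate uncertainty through the nonlinear models: the EKF linearizes, while the CKF evaluates the model at a deterministic set of cubature points. A minimal sketch of generating those points with the standard spherical-radial cubature rule (variable names are illustrative):

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2n equally weighted cubature points of the CKF:
    mean +/- sqrt(n) times each column of the Cholesky factor of cov."""
    n = mean.size
    S = np.linalg.cholesky(cov)
    offsets = np.sqrt(n) * np.hstack([S, -S])     # shape (n, 2n)
    points = (mean[:, None] + offsets).T          # shape (2n, n)
    weights = np.full(2 * n, 1.0 / (2 * n))
    return points, weights
```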
8

An Observability-Driven System Concept for Monocular-Inertial Egomotion and Landmark Position Determination

Markgraf, Marcel 25 February 2019 (has links)
In this dissertation, a novel alternative system concept for monocular-inertial egomotion and landmark position determination is introduced. It is mainly motivated by an in-depth analysis of the observability and consistency of the classic simultaneous localization and mapping (SLAM) approach, which is based on a world-centric model of an agent and its environment. Within the novel system concept, a body-centric agent and environment model, a pseudo-world-centric motion propagation, and closed-form initialization procedures are introduced. This approach allows for combining the advantageous observability properties of body-centric modeling with the advantageous motion propagation properties of world-centric modeling. A consistency-focused and simulation-based evaluation demonstrates the capabilities as well as the limitations of the proposed concept. / In this dissertation, a novel alternative system concept for monocular-inertial egomotion and landmark position determination is presented. This system concept is largely motivated by a detailed analysis of the observability and consistency properties of classic Simultaneous Localization and Mapping (SLAM), which is based on a world-centric model of an agent and its environment. Within the new system concept, a body-centric model of the agent and its environment, a pseudo-world-centric motion propagation, and closed-form initialization procedures are introduced. This approach makes it possible to combine the favorable observability properties of body-centric modeling with the favorable propagation properties of world-centric modeling. Finally, both the capabilities and the limitations of this approach are demonstrated by means of simulations, with a strong focus on estimation consistency.
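A small sketch of what body-centric modeling means in practice: a landmark kept in the agent's body frame must be re-expressed whenever the agent moves. The function and its parameterization below are illustrative assumptions, not the dissertation's formulation:

```python
import numpy as np

def propagate_body_centric_landmark(l_body, R_delta, t_delta):
    """Re-express a landmark stored in the body frame after the agent moves by
    (R_delta, t_delta), where the new body pose in the old body frame satisfies
    x_old = R_delta @ x_new + t_delta. The landmark is fixed in the world, so its
    body-frame coordinates transform with the inverse motion."""
    return R_delta.T @ (l_body - t_delta)
```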
