61

Design of Mobility Cyber Range and Vision-Based Adversarial Attacks on Camera Sensors in Autonomous Vehicles

Ramayee, Harish Asokan January 2021 (has links)
No description available.
62

A Self-policing Smart Parking Solution

Dalkic, Yurdaer, Deknache, Hadi January 2019 (has links)
With the exponential growth in the number of vehicles on our streets, finding an unoccupied parking spot is often a problem today and will become even more of one in the future. Smart parking solutions have proved to be a helpful approach to facilitate the localization of unoccupied parking spots. In many smart parking solutions, sensors are used to determine whether a parking spot is vacant. Sensors can determine the status of parking lots with high accuracy, but they are not ideal from a scalability point of view, since installing and maintaining each sensor is not cost-effective. In recent years, vision-based solutions have received more consideration in smart parking, since cameras can easily be installed to cover a large parking area. Furthermore, cameras can support more advanced functions, such as checking in at a parking spot and reporting whether a vehicle is parked unlawfully. In this thesis, we developed a dynamic vision-based smart parking prototype that detects vacant parking spots and illegally parked vehicles.
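
The abstract does not state which detection method the prototype uses, but a minimal sketch of one common camera-based occupancy check (reference-frame differencing over annotated spot regions; the file names, spot boxes, and thresholds below are hypothetical, not from the thesis) looks like this:

    import cv2
    import numpy as np

    # Hypothetical spot annotations: one bounding box (x, y, w, h) per parking spot.
    SPOTS = [(50, 120, 80, 160), (140, 120, 80, 160), (230, 120, 80, 160)]
    OCCUPANCY_THRESHOLD = 0.25  # fraction of "changed" pixels that flags a spot as taken

    reference = cv2.cvtColor(cv2.imread("empty_lot.jpg"), cv2.COLOR_BGR2GRAY)
    frame = cv2.cvtColor(cv2.imread("current_frame.jpg"), cv2.COLOR_BGR2GRAY)

    # Pixels that differ strongly from the empty-lot reference suggest a parked vehicle.
    diff = cv2.absdiff(reference, frame)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

    for i, (x, y, w, h) in enumerate(SPOTS):
        roi = mask[y:y + h, x:x + w]
        changed = np.count_nonzero(roi) / roi.size
        status = "occupied" if changed > OCCUPANCY_THRESHOLD else "vacant"
        print(f"spot {i}: {status} ({changed:.0%} changed pixels)")

The same per-region logic extends to flagging illegal parking by annotating no-parking zones as additional regions.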
63

Bearing-Only Cooperative-Localization and Path-Planning of Ground and Aerial Robots

Sharma, Rajnikant 16 November 2011 (has links) (PDF)
In this dissertation, we focus on two fundamental problems related to the navigation of ground robots and small Unmanned Aerial Vehicles (UAVs): cooperative localization and path planning. The theme running through all of the work is the use of bearing-only sensors, with a focus on monocular video cameras mounted on ground robots and UAVs. To begin with, we derive the conditions for the complete observability of the bearing-only cooperative localization problem. The key element of this analysis is the Relative Position Measurement Graph (RPMG). The nodes of an RPMG represent vehicle states and the edges represent bearing measurements between nodes. We show that graph-theoretic properties like connectivity and the existence of a path between two nodes can be used to explain the observability of the system. We obtain the maximum rank of the observability matrix without global information and derive conditions under which that maximum rank can be achieved. Furthermore, we show that for complete observability, every node in the graph must have a path to at least two different landmarks of known location. Complete observability can also be obtained without landmarks if the RPMG is connected and at least one of the robots has a sensor which can measure its global pose, for example a GPS receiver. We validate these conditions with simulation and experimental results; a quick graph check of the landmark condition is sketched below. Theoretical conditions to attain complete observability in a localization system are an important step towards reliable and efficient design of localization and path-planning algorithms. With such conditions, a designer does not need to resort to exhaustive simulations and/or experimentation to verify whether a given selection of a control strategy, topology of the sensor network, and sensor measurements meets the observability requirements of the system. In turn, this decreases the time, cost, and effort required to design localization algorithms. We use these observability conditions to develop a technique, for camera-equipped UAVs, to cooperatively geo-localize a ground target in urban terrain. We show that the bearing-only cooperative geo-localization technique overcomes the limitation of requiring a low-flying UAV to maintain line-of-sight while flying high enough to maintain GPS lock. We design a distributed path-planning algorithm using receding-horizon control that improves the localization accuracy of the target and of all of the UAVs while satisfying the observability conditions. Next, we use the observability analysis to explicitly design an active local path-planning algorithm for UAVs. The algorithm minimizes the uncertainties in the time-to-collision (TTC) and bearing estimates while simultaneously avoiding obstacles. Using observability analysis, we show that maximizing observability and avoiding collisions are complementary tasks. We provide sufficient conditions on the environment that maximize the chances of the UAV avoiding obstacles and reaching the goal. Finally, we develop a reactive path planner for UAVs using sliding-mode control that does not require range to the obstacle; it uses only the bearing to the obstacle to avoid cylindrical obstacles and to follow straight and curved walls. The reactive guidance strategy is fast, computationally inexpensive, and guarantees collision avoidance.
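
The stated landmark condition lends itself to a direct graph test. A minimal sketch (assuming networkx; the graph, robot names, and landmark names below are hypothetical, not from the dissertation):

    import networkx as nx

    # Hypothetical RPMG: nodes are vehicles (R1..R3) and known landmarks (L1, L2);
    # edges are bearing measurements between nodes.
    G = nx.Graph()
    G.add_edges_from([("R1", "R2"), ("R2", "R3"), ("R1", "L1"), ("R3", "L2")])
    landmarks = {"L1", "L2"}

    def fully_observable(graph, robots, landmarks):
        """Check the dissertation's condition: every robot node must have a
        path to at least two different landmarks of known location."""
        for r in robots:
            reachable = {lm for lm in landmarks if nx.has_path(graph, r, lm)}
            if len(reachable) < 2:
                return False
        return True

    print(fully_observable(G, ["R1", "R2", "R3"], landmarks))  # True for this graph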
64

Study of the Seismic Response of Unanchored Equipment and Contents in Fixed-Base and Base-Isolated Buildings

Nikfar, Farzad January 2016 (has links)
Immediate occupancy and functionality of critical facilities including hospitals, emergency operations centers, communications centers, and police and fire stations are of utmost importance immediately after a damaging earthquake, as these facilities must continue to provide fundamental health, emergency, and security services in the aftermath of an extreme event. Although recent earthquakes have proven the acceptable performance of the structural system in such buildings when designed according to recent seismic design codes, in many cases damage to the nonstructural components and systems was the main cause of disruption to their functionality. Seismic isolation has proven to be an effective technique to protect building structures from damaging earthquakes. It has been the method of choice for critical facilities, including hospitals in Japan and the United States, in recent years. Seismic isolation appears to be an ideal solution for protecting the nonstructural components as well. While this claim was made three decades ago, the supporting research for freestanding (unanchored) equipment and contents (EC) is fairly new. With the focus on freestanding EC, this study investigates the seismic performance of sliding and wheel/caster-supported EC in fixed-base and base-isolated buildings. The study adopts a comparative approach to provide a better understanding of the advantages and disadvantages of using each structural system. The seismic response of sliding EC is investigated analytically in the first part of the thesis, while the response of EC supported on wheels/casters is examined through shake table experiments on two pieces of hospital equipment. The study finds base isolation to be generally effective in reducing seismic demands on freestanding EC, but it also exposes certain situations where isolation in fact increases demands on EC. Increasing the frictional resistance for sliding EC, or locking the wheels/casters in the case of wheel/caster-supported EC, is highly recommended for EC in base-isolated buildings to prevent excessive displacement demands. Furthermore, the study suggests several design probability functions that can be used by practicing engineers to estimate the peak seismic demands on sliding and wheel/caster-supported EC in fixed-base and base-isolated buildings. / Dissertation / Doctor of Philosophy (PhD)
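
As a brief aside on the mechanics behind that recommendation (standard rigid-block sliding theory; the friction value below is illustrative, not taken from the thesis): a freestanding item of mass m on a floor with friction coefficient mu begins to slide when the inertial demand exceeds the friction capacity,

    m * |a_floor| > mu * m * g   <=>   |a_floor| > mu * g,

so with mu = 0.3 sliding initiates once the floor acceleration exceeds about 0.3 g ≈ 2.9 m/s². Increasing the frictional resistance, or locking the casters so the item must slide rather than roll, raises this threshold and thereby limits displacement demands.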
65

Dynamics and controls for an omnidirectional robot

Henning, Timothy Paul January 2003 (has links)
No description available.
66

Stability of a Vision Based Platooning System

Köling, Ann, Kjellberg, Kristina January 2021 (has links)
The current development of autonomous vehicles allows several new applications to form and evolve. One of these is platooning, where several vehicles drive closely together with automatic car following. The method of getting information about the other vehicles in a platoon can vary; one such method is using visual information from a camera. Having a camera on board an autonomous vehicle has further potential, for example for recognition of objects in the vehicle's surroundings. This bachelor thesis uses small RC vehicles to test an example of a vision-based platooning system. The system is then evaluated using a step response, from which the stability of the system is analyzed. Additionally, a previously developed communication-based platooning system, until now tested only in simulation, was tested in the same way and its stability compared. The main conclusion of this thesis is that it is feasible to use a camera, an ArUco marker, and an Optimal Velocity Relative Velocity model to achieve a vision-based platoon on a small set of RC vehicles. / Bachelor's degree project in electrical engineering, 2021, KTH, Stockholm
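
For readers unfamiliar with the model named in the conclusion, one common form of an Optimal Velocity Relative Velocity (OVRV) car-following law drives the follower from its spacing error and relative speed. A minimal simulation sketch (gains, headway, and speeds are illustrative, not the thesis's values):

    import numpy as np

    # OVRV with a constant-time-gap spacing policy.
    K1, K2 = 0.5, 0.9       # feedback gains (illustrative)
    H, D0 = 0.8, 0.3        # desired time gap [s] and standstill distance [m]
    DT, STEPS = 0.05, 400

    x = np.array([2.0, 0.0])   # positions: leader, follower [m]
    v = np.array([1.0, 0.0])   # speeds [m/s]; leader held constant here

    for _ in range(STEPS):
        spacing = x[0] - x[1]  # measured via camera + ArUco marker in the vision-based setup
        accel = K1 * (spacing - D0 - H * v[1]) + K2 * (v[0] - v[1])
        v += np.array([0.0, accel]) * DT
        x += v * DT

    print(f"final spacing: {x[0] - x[1]:.2f} m (target {D0 + H * v[1]:.2f} m)")

A step in the leader's speed applied to this loop yields exactly the kind of step response the thesis uses to assess stability.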
67

TOWARDS IMPROVING TELETACTION IN TELEOPERATION TASKS USING VISION-BASED TACTILE SENSORS

Oscar Jia Jun Yu (18391263) 01 May 2024 (has links)
<p dir="ltr">Teletaction, the transmission of tactile feedback or touch, is a crucial aspect in the</p><p dir="ltr">field of teleoperation. High-quality teletaction feedback allows users to remotely manipulate</p><p dir="ltr">objects and increase the quality of the human-machine interface between the operator and</p><p dir="ltr">the robot, making complex manipulation tasks possible. Advances in the field of teletaction</p><p dir="ltr">for teleoperation however, have yet to make full use of the high-resolution 3D data provided</p><p dir="ltr">by modern vision-based tactile sensors. Existing solutions for teletaction lack in one or more</p><p dir="ltr">areas of form or function, such as fidelity or hardware footprint. In this thesis, we showcase</p><p dir="ltr">our research into a low-cost teletaction device for teleoperation that can utilize the real-time</p><p dir="ltr">high-resolution tactile information from vision-based tactile sensors, through both physical</p><p dir="ltr">3D surface reconstruction and shear displacement. We present our device, the Feelit, which</p><p dir="ltr">uses a combination of a pin-based shape display and compliant mechanisms to accomplish</p><p dir="ltr">this task. The pin-based shape display utilizes an array of 24 servomotors with miniature</p><p dir="ltr">Bowden cables, giving the device a resolution of 6x4 pins in a 15x10 mm display footprint.</p><p dir="ltr">Each pin can actuate up to 3 mm in 200 ms, while providing 80 N of force and 3 um of</p><p dir="ltr">depth resolution. Shear displacement and rotation is achieved using a compliant mechanism</p><p dir="ltr">design, allowing a minimum of 1 mm displacement laterally and 10 degrees of rotation. This</p><p dir="ltr">real-time 3D tactile reconstruction is achieved with the use of a vision-based tactile sensor,</p><p dir="ltr">the GelSight, along with an algorithm that samples the depth data and marker tracking to</p><p dir="ltr">generate actuator commands. With our device we perform a series of experiments including</p><p dir="ltr">shape recognition and relative weight identification, showing that our device has the potential</p><p dir="ltr">to expand teletaction capabilities in the teleoperation space.</p>
68

Navigation autonome par imagerie de terrain pour l'exploration planétaire / Autonomous vision-based terrain-relative navigation for planetary exploration

Simard Bilodeau, Vincent January 2015 (has links)
Abstract: The interest of the world's major space agencies in vision sensors for their mission designs has been increasing over the years. Indeed, cameras offer an efficient solution to address the ever-increasing requirements on performance. In addition, these sensors are multipurpose, lightweight, proven, and low-cost. Several researchers in vision sensing for space applications currently focus on navigation systems for autonomous pin-point planetary landing and for sample-return missions to small bodies. In fact, without a Global Positioning System (GPS) or radio beacons around celestial bodies, high-accuracy navigation around them is a complex task. Most navigation systems are based only on accurate initialization of the states and on the integration of acceleration and angular-rate measurements from an Inertial Measurement Unit (IMU). This strategy can track sudden motions of short duration very accurately, but the estimates diverge over time and normally lead to large landing errors. In order to improve navigation accuracy, many authors have proposed fusing the IMU measurements with vision measurements using state estimators, such as Kalman filters. The first proposed vision-based navigation approach relies on feature tracking between sequences of images taken in real time during orbiting and/or landing operations. In that case, image features are image pixels that have a high probability of being recognized between images taken from different camera locations. By detecting and tracking these features through a sequence of images, the relative motion of the spacecraft can be determined. This technique, referred to as Terrain-Relative Relative Navigation (TRRN), relies on relatively simple, robust, and well-developed image-processing techniques, and allows the relative motion (velocity) of the spacecraft to be determined. Although this technology has been demonstrated with space-qualified hardware, its gain in accuracy remains limited, since the spacecraft's absolute position is not observable from the vision measurements. The vision-based navigation techniques currently studied consist in identifying features and mapping them into an on-board cartographic database indexed by an absolute coordinate system, thereby providing absolute position determination. This technique, referred to as Terrain-Relative Absolute Navigation (TRAN), relies on very complex Image Processing Software (IPS) with an obvious lack of robustness. In fact, this software often depends on the spacecraft attitude and position; it is sensitive to illumination conditions (the elevation and azimuth of the Sun when the geo-referenced database is built must be similar to those present during the mission); it is greatly influenced by image noise; and it hardly manages the multiple varieties of terrain seen during the same mission (the spacecraft can fly over plains as well as mountainous regions, and the images may contain old craters with noisy rims as well as young craters with clean rims, and so on). To date, no real-time hardware-in-the-loop experiment has been conducted to demonstrate the applicability of this technology to space missions. The main objective of the current study is to develop autonomous vision-based navigation algorithms that provide absolute position and surface-relative velocity during the proximity operations of a planetary mission (orbiting phase and landing phase), using a combined approach of TRRN and TRAN technologies.
The contributions of the study are: (1) reference mission definition, (2) advancements in the TRAN theory (image processing as well as state estimation), and (3) practical implementation of vision-based navigation.
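
As an illustration of the TRRN idea described above (feature tracking between consecutive frames to recover relative motion), a minimal sketch using standard OpenCV routines; the camera intrinsics and file names are hypothetical, and note that the translation is recovered only up to scale, which matches the abstract's point that absolute position stays unobservable:

    import cv2
    import numpy as np

    # Hypothetical camera intrinsics; real values come from calibration.
    K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

    prev = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)

    # Detect trackable pixels, then track them into the next frame (KLT).
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
    good0, good1 = p0[status == 1], p1[status == 1]

    # Relative camera motion from the tracked features.
    E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)
    print("rotation:\n", R, "\nunit translation:", t.ravel())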
69

Vers le vol à voile longue distance pour drones autonomes / Towards Vision-Based Autonomous Cross-Country Soaring for UAVs

Stolle, Martin Tobias 03 April 2017 (has links)
Small fixed-wing Unmanned Aerial Vehicles (UAVs) provide utility to the research, military, and industrial sectors at comparably reasonable cost, but still suffer from both limited operational range and limited payload capacity. Thermal soaring flight offers UAVs a significant potential to reduce energy consumption. However, without remote sensing of updrafts, a glider UAV can only benefit from an updraft when encountering it by chance. In this thesis, a new framework for autonomous cross-country soaring is elaborated, enabling a glider UAV to visually localize sub-cumulus thermal updrafts and to efficiently gain energy from them. Relying on the Unscented Kalman Filter, a monocular vision-based method is established for remotely estimating sub-cumulus updraft parameters. Its capability to provide convergent and consistent state estimates is assessed with Monte Carlo simulations. Model uncertainties, image-processing noise, and poor observer trajectories can degrade the estimated updraft parameters. Therefore, a second focus of this thesis is the design of a robust probabilistic path planner for map-based autonomous cross-country soaring. The proposed path planner balances flight time against outlanding risk by taking the estimation uncertainties into account in the decision-making process. The suggested updraft estimation and path-planning algorithms are jointly assessed in a 6-degrees-of-freedom simulator, highlighting significant performance improvements with respect to state-of-the-art approaches in autonomous cross-country soaring, while it is also shown that the path planner is implementable on a low-cost computer platform.
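
The abstract does not give the updraft parameterization; a common choice in the soaring literature is a Gaussian vertical-velocity bell, sketched below (all parameter values are illustrative, and this is not necessarily the thesis's model). An estimator such as the thesis's UKF would refine these parameters by comparing the predicted vertical wind against the glider's measured climb rate:

    import numpy as np

    def updraft_w(pos_xy, center_xy, w0, radius):
        """Gaussian thermal model: vertical wind speed at a horizontal position.
        w0 is the core strength [m/s], radius the characteristic width [m]."""
        r2 = np.sum((np.asarray(pos_xy) - np.asarray(center_xy)) ** 2, axis=-1)
        return w0 * np.exp(-r2 / radius**2)

    # Three probe positions: two near the assumed core, one far away.
    true_params = dict(center_xy=(120.0, -40.0), w0=3.0, radius=80.0)
    samples = np.array([(100.0, -30.0), (150.0, -60.0), (300.0, 0.0)])
    print(updraft_w(samples, **true_params).round(2))   # strong, strong, weak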
70

Utilisation of photometric moments in visual servoing / Utilisation de moments photométriques en asservissement visuel

Bakthavatchalam, Manikandan 17 March 2015 (has links)
This thesis is concerned with visual servoing, a feedback control technique for controlling camera-equipped actuated systems such as robots. For visual servoing, it is essential to synthesize visual information from the camera image in the form of visual features and to establish the relationship between their variations and the spatial motion of the camera. The earliest visual features depend on the extraction and visual tracking of geometric primitives like points and straight lines in the image. It was shown that visual tracking and image-processing procedures are a bottleneck to the expansion of visual servoing methods. That is why the image intensity distribution has also been used directly as a visual feature.
Finally, visual features based on image moments allowed the design of decoupled control laws, but they are restricted by the availability of a well-segmented region or a discrete set of points in the scene. This work proposes the strategy of capturing the image intensities not directly, but in the form of moments computed over the whole image plane. These global features are termed photometric moments. Theoretical developments are made to derive the analytical model of the interaction matrix of the photometric moments. Photometric moments enable visual servoing on complex scenes without visual tracking or image-matching procedures, as long as there is no severe violation of the zero-border assumption (ZBA). A practical issue encountered in such dense VS methods is the appearance and disappearance of portions of the scene during the visual servoing. Such unmodelled effects strongly violate the ZBA and can disturb the control, in the worst case resulting in complete failure to converge. To handle this important practical problem, an improved modelling scheme for the moments that allows the inclusion of spatial weights is proposed. Spatial weighting functions with a specific structure are then exploited such that an analytical model of the interaction matrix can be obtained as a simple function of the newly formulated moments. A part of this work provides an additional contribution towards the problem of simultaneous control of rotational motions around the image axes. The approach is based on designing the visual feature such that the visual servoing is optimal with respect to specific criteria; a few selection criteria based on the interaction matrix are proposed. This contribution opens interesting possibilities and finds immediate application in the selection of visual features in image-moments-based VS.
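
For concreteness, the photometric moments described above are ordinary image moments computed on the intensity distribution over the whole image plane, m_pq = sum over pixels of x^p * y^q * I(x, y). A minimal sketch, including a spatial weighting function of the kind the thesis introduces (the Gaussian window below is an assumed example, not the thesis's specific weight):

    import numpy as np

    def photometric_moment(img, p, q, weight=None):
        """m_pq = sum of x^p * y^q * I(x, y) over all pixels, optionally
        multiplied by a spatial weighting function w(x, y)."""
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w].astype(float)
        integrand = (x ** p) * (y ** q) * img
        if weight is not None:
            integrand *= weight(x, y)
        return integrand.sum()

    img = np.random.rand(240, 320)                      # stand-in for a grayscale frame
    m00 = photometric_moment(img, 0, 0)
    cx = photometric_moment(img, 1, 0) / m00            # photometric centroid, x
    cy = photometric_moment(img, 0, 1) / m00            # photometric centroid, y
    print(f"centroid: ({cx:.1f}, {cy:.1f})")

    # Example spatial weight: a smooth window that fades toward the image border,
    # mitigating the appearance/disappearance of scene portions at the edges.
    gauss = lambda x, y: np.exp(-(((x - 160) / 120) ** 2 + ((y - 120) / 90) ** 2))
    m00_w = photometric_moment(img, 0, 0, weight=gauss)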
