71

Advanced and natural interaction system for motion-impaired users

Manresa Yee, Cristina Suemay 30 September 2009 (has links)
Human-computer interaction is an important field that seeks better and more comfortable systems for communication between humans and machines. Vision-based interfaces can offer a more natural and appealing way of communicating; moreover, they can support the e-accessibility component of e-inclusion. The aim is to develop a usable system, that is, one that end users find effective, efficient and satisfactory. The research's main contribution is SINA, a hands-free interface for motion-impaired users based on computer vision techniques. The interface does not require the user to move the upper limbs, as only nose motion is used. Besides the technical aspects, the user's satisfaction when using an interface is a critical issue. The approach we have adopted is to integrate usability evaluation at relevant points of the software development.
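As a rough illustration of how a nose-driven pointer of this kind can work (this is not the SINA implementation), the sketch below tracks a single point with Lucas-Kanade optical flow in OpenCV and maps its displacement to a virtual cursor. The initial nose position at the image centre, the gain and the use of the default webcam are assumptions.

```python
# A minimal sketch of a hands-free pointer driven by nose motion: one point is tracked
# with Lucas-Kanade optical flow and its displacement moves a virtual cursor.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # default webcam (assumed available)
ok, prev = cap.read()
if not ok:
    raise SystemExit("no camera available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

h, w = prev_gray.shape
# Assume the nose starts near the image centre; SINA detects it automatically.
point = np.array([[[w / 2.0, h / 2.0]]], dtype=np.float32)
cursor = np.array([w / 2.0, h / 2.0])          # virtual cursor position
gain = 2.0                                     # cursor pixels per pixel of nose motion

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_point, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
    if status[0][0] == 1:
        delta = (new_point - point).reshape(2)
        cursor += gain * delta                 # a real system would move the OS pointer here
        point, prev_gray = new_point, gray
    cv2.circle(frame, tuple(int(v) for v in point[0, 0]), 5, (0, 255, 0), -1)
    cv2.imshow("nose tracker", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```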
72

Stochastically optimized monocular vision-based navigation and guidance

Watanabe, Yoko 07 December 2007 (has links)
The objective of this thesis is to design a relative navigation and guidance system for unmanned aerial vehicles (UAVs) for vision-based control applications. Vision-based navigation, guidance and control has been one of the most active research topics in UAV automation, not least because birds and insects in nature use vision as their exclusive sensor for object detection and navigation. In particular, this thesis studies monocular vision-based navigation and guidance. Since 2-D vision-based measurements are nonlinear with respect to the 3-D relative states, an extended Kalman filter (EKF) is applied in the navigation system design. The EKF-based navigation system is integrated with a real-time image processing algorithm and is tested in simulations and flight tests. The first closed-loop vision-based formation flight has been achieved. In addition, vision-based 3-D terrain recovery was performed in simulations. A vision-based obstacle avoidance problem is specifically addressed in this thesis: a navigation and guidance system is designed for a UAV to track waypoints while avoiding unforeseen stationary obstacles using vision information. A 3-D collision criterion is established using a collision-cone approach. A minimum-effort guidance (MEG) law is applied in the guidance design, and it is shown that the control effort can be reduced by using MEG instead of a conventional guidance law. The system is evaluated in a 6-DoF flight simulation and in a flight test. For monocular vision-based control problems, vision-based estimation performance depends strongly on the relative motion of the vehicle with respect to the target. This thesis therefore derives an optimal guidance law that achieves a given mission under the condition that EKF-based relative navigation is used. A stochastic optimization problem is formulated to minimize an expected cost that includes the guidance error and the control effort. A suboptimal guidance law is derived based on the idea of one-step-ahead (OSA) optimization. Simulation results show that the suggested guidance law significantly improves the guidance performance. Furthermore, the OSA optimization is generalized to n-step-ahead optimization for arbitrary n, and its optimality and computational cost are investigated.
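The core mechanism described above, an EKF whose measurement is a nonlinear bearing angle to the target, can be sketched in a few lines. The 2-D state, the constant-velocity motion model and the noise levels below are illustrative assumptions, not the filter tuned in the thesis.

```python
# Hedged sketch of bearing-only EKF relative navigation: the bearing measurement is a
# nonlinear function of the relative state, so it is linearized with a Jacobian.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],                 # constant-velocity relative motion model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = 1e-3 * np.eye(4)                          # process noise (assumed)
R = np.array([[np.deg2rad(1.0) ** 2]])        # 1 deg bearing noise (assumed)

x = np.array([50.0, 20.0, -2.0, 0.0])         # [rel_x, rel_y, rel_vx, rel_vy]
P = np.diag([100.0, 100.0, 4.0, 4.0])

def ekf_step(x, P, bearing_meas):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Measurement model: bearing = atan2(rel_y, rel_x)
    px, py = x[0], x[1]
    h = np.arctan2(py, px)
    r2 = px ** 2 + py ** 2
    H = np.array([[-py / r2, px / r2, 0.0, 0.0]])   # Jacobian of atan2
    # Update with an angle-wrapped innovation
    y = np.array([np.arctan2(np.sin(bearing_meas - h), np.cos(bearing_meas - h))])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One illustrative update with a noisy bearing measurement
x, P = ekf_step(x, P, np.arctan2(20.0, 50.0) + np.random.normal(0, np.deg2rad(1.0)))
print(x)
```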
73

[en] A COMPUTER VISION APPLICATION FOR HAND-GESTURES HUMAN COMPUTER INTERACTION / [pt] UMA APLICAÇÃO DE VISÃO COMPUTACIONAL QUE UTILIZA GESTOS DA MÃO PARA INTERAGIR COM O COMPUTADOR

MICHEL ALAIN QUINTANA TRUYENQUE 15 June 2005 (has links)
Computer vision can be used to capture gestures and create more intuitive and faster devices for interacting with computers. Current commercial gesture-based interaction devices rely on expensive equipment (tracking devices, gloves, special cameras, etc.) and special environments, which makes their dissemination to the general public difficult. This work presents a study on the feasibility of using web cameras as interaction devices based on hand gestures. In our study, we consider that the hand is bare, that is, it carries no mechanical, magnetic or optical device. We also consider that the environment where the interaction takes place has the characteristics of a normal workplace, that is, without special lights or backgrounds. To evaluate the feasibility of such an interaction mechanism, we developed several prototypes in which hand gestures and finger positions are used to simulate some mouse and keyboard functions, such as selecting states and objects and defining directions and positions. Based on these prototypes, we present some conclusions and suggestions for future work.
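A minimal sketch of the kind of pipeline such webcam prototypes can build on is shown below: skin-colour thresholding in HSV, selection of the largest contour, and convexity defects as a crude finger count. The colour range, blob-size and defect-depth cut-offs are assumptions that need tuning per camera and lighting; this is not the prototype described in the thesis.

```python
# Rough hand detection from a webcam: skin-colour mask, largest contour, finger count.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
lower_skin = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV skin range
upper_skin = np.array([25, 255, 255], dtype=np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_skin, upper_skin)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        if cv2.contourArea(hand) > 5000:                # ignore small blobs
            hull = cv2.convexHull(hand, returnPoints=False)
            defects = cv2.convexityDefects(hand, hull)
            fingers = 0
            if defects is not None:
                # deep defects roughly correspond to the gaps between extended fingers
                fingers = int(np.sum(defects[:, 0, 3] > 10000)) + 1
            cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)
            cv2.putText(frame, f"fingers ~ {fingers}", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```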
74

Amélioration de performance de la navigation basée vision pour la robotique autonome : une approche par couplage vision/commande / Performance improvement of vision-based navigation for autonomous robotics: a vision and control coupling approach

Roggeman, Hélène 13 December 2017 (has links)
The aim of this thesis is to perform various autonomous navigation missions with mobile robots in indoor, cluttered environments. Perception of the environment is provided by an embedded stereo rig, and a visual odometry algorithm computes the robot's localization. However, when the quality of the scene perceived by the cameras is poor, the visual localization cannot be computed with high precision. Two solutions are proposed to tackle this problem. The first is fusing data from multiple sensors to obtain a robust localization estimate. The second is predicting the future scene quality in order to adapt the robot's trajectory and ensure that the localization remains accurate. In both cases, the control loop is based on model predictive control, which makes it possible to consider the different mission objectives simultaneously: waypoint navigation, exploration and obstacle avoidance. A second problem studied is waypoint navigation with avoidance of mobile obstacles using only visual information. The mobile obstacles are detected in the images, and their positions and velocities are estimated in order to predict their future trajectories and account for them in the control strategy. Numerous experiments were carried out in real conditions and demonstrated the effectiveness of the proposed solutions.
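As an illustration of the "scene quality" idea, the snippet below computes a simple proxy score (the fraction of a corner budget actually found in a frame) that a predictive planner could evaluate along candidate trajectories and use to penalize feature-poor views. This proxy and its parameters are assumptions for illustration, not the quality measure developed in the thesis.

```python
# A simple proxy for scene quality for visual odometry: how many strong corners a
# frame offers, relative to a fixed budget.
import cv2

def scene_quality(gray, max_corners=300, quality_level=0.01, min_distance=10):
    """Return a score in [0, 1]: fraction of the corner budget actually found."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=quality_level,
                                      minDistance=min_distance)
    n = 0 if corners is None else len(corners)
    return n / max_corners

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(f"scene quality score: {scene_quality(gray):.2f}")
```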
75

Etude photométrique des lunes glacées de Jupiter / Photometric study of Jupiter's moons

Belgacem, Ines 15 November 2019 (has links)
Jupiter's icy moons are of great interest in the search for habitability in our Solar System. All three probably harbor a liquid water ocean underneath their icy crusts. Their surfaces present different stages of evolution: Callisto's is the oldest and heavily cratered, Ganymede's shows a combination of dark cratered terrain and younger bright plains, and Europa's is the youngest, with signs of recent and perhaps current activity. This work focuses on photometry, i.e. the study of the light scattered by a surface as a function of the illumination and observation geometry. Photometric studies give insight into the physical state and microtexture of the surface (compaction, internal structure, grain shape, roughness, transparency, etc.). Good photometric knowledge is also of crucial importance for correcting datasets in any mapping or spectroscopic study, as well as for the missions of the coming decade, NASA's Europa Clipper and ESA's JUpiter ICy moons Explorer. Two pieces of information are necessary to conduct a photometric study: reflectance data and geometric information (illumination and viewing conditions). For the former, we used and calibrated images from past space missions (Voyager, New Horizons and Galileo). For the latter, we developed tools to correct the metadata of these images (e.g. spacecraft position and orientation) and derive precise geometric information. Moreover, we developed a Bayesian inversion tool to estimate Hapke's photometric parameters on regions of Europa, Ganymede and Callisto, estimating all parameters on the entire dataset at once. Finally, we discuss the possible links between the photometric parameters, the surface microtexture and endogenic/exogenic processes.
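To make the inversion idea concrete, the toy example below fits a photometric model to synthetic reflectance data collected over many illumination/viewing geometries with a bounded least-squares solver. A simple Lommel-Seeliger/Lambert ("lunar-Lambert") mix stands in for the full Hapke model and the Bayesian machinery used in the thesis; the parameter values and noise level are invented for illustration.

```python
# Toy photometric inversion: recover model parameters from synthetic reflectance data.
import numpy as np
from scipy.optimize import least_squares

def lunar_lambert(params, mu0, mu):
    """r = A * [ 2*L*mu0/(mu0+mu) + (1-L)*mu0 ]  (Lommel-Seeliger / Lambert mix)."""
    A, L = params
    return A * (2.0 * L * mu0 / (mu0 + mu) + (1.0 - L) * mu0)

rng = np.random.default_rng(0)
inc = np.deg2rad(rng.uniform(0, 70, 500))      # incidence angles
emi = np.deg2rad(rng.uniform(0, 70, 500))      # emission angles
mu0, mu = np.cos(inc), np.cos(emi)

true_params = (0.45, 0.7)                      # "ground truth" albedo and mixing factor
r_obs = lunar_lambert(true_params, mu0, mu) + rng.normal(0, 0.005, mu0.size)

fit = least_squares(lambda p: lunar_lambert(p, mu0, mu) - r_obs,
                    x0=[0.3, 0.5], bounds=([0, 0], [1, 1]))
print("estimated (A, L):", fit.x)              # should recover roughly (0.45, 0.7)
```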
76

Three Enabling Technologies for Vision-Based, Forest-Fire Perimeter Surveillance Using Multiple Unmanned Aerial Systems

Holt, Ryan S. 21 June 2007 (has links) (PDF)
The ability to gather and process information about the condition of forest fires is essential to cost-effective, safe, and efficient firefighting. Advances in sensor and autopilot technology have made miniature unmanned aerial systems (UASs) an important tool for acquiring this information. This thesis addresses some of the challenges faced when employing UASs for forest-fire perimeter surveillance, namely perimeter tracking, cooperative perimeter surveillance, and path planning. Solutions to the first two issues are presented, and a method for understanding path planning within the context of a forest-fire environment is demonstrated. Both simulation and hardware results are provided for each solution.
77

Vision-Based Guidance for Air-to-Air Tracking and Rendezvous of Unmanned Aircraft Systems

Nichols, Joseph Walter 13 August 2013 (has links) (PDF)
This dissertation develops the visual pursuit method for air-to-air tracking and rendezvous of unmanned aircraft systems (UAS). It also develops vector-field and proportional-integral methods for controlling UAS flight in formation with other aircraft. Visual pursuit is a nonlinear guidance method that takes vision-based line-of-sight angles as inputs and produces pitch-rate, bank-angle and airspeed commands for the autopilot. The method is shown to be convergent about the center of the camera image frame and to be stable in the sense of Lyapunov. In the lateral direction, the guidance method is optimized to balance the pursuit heading against the prevailing wind and the location of the target on the image plane, improving tracking performance in high winds and reducing bank-angle effort. In both simulation and flight experiments, visual pursuit is shown to provide effective flight guidance in strong winds. It is also shown to be effective in guiding the seeker during aerial docking with a towed aerial drogue; flight trials demonstrated the ability to guide to within a few meters of the drogue. Further research developed a method to improve docking performance by artificially increasing the length of the line-of-sight vector at close range to the target to prevent flight-control saturation. This improvement was shown to be effective in providing guidance during aerial docking simulations. An analysis of the visual pursuit method using the method of adjoints evaluates the effects of airspeed, closing velocity, system time constant, sensor delay and target motion on docking performance. A method for predicting docking accuracy is developed and shown to be useful for both small and large unmanned aircraft systems.
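A bare-bones sketch of the mapping at the heart of such a guidance law is given below: image-plane line-of-sight angles are turned into bank-angle and pitch-rate commands that drive the target toward the centre of the frame. The proportional gains, the saturation limits and the constant-airspeed choice are assumptions; the wind-balancing optimization and Lyapunov analysis of the dissertation are not reproduced here.

```python
# Hedged sketch of visual-pursuit-style guidance from line-of-sight angles.
import numpy as np

def visual_pursuit_commands(az, el, airspeed,
                            k_bank=2.0, k_pitch=1.0,
                            max_bank=np.deg2rad(45), max_q=np.deg2rad(20)):
    """
    az, el  : target azimuth/elevation in the camera frame [rad]
              (positive az = target right of centre, positive el = target above centre)
    returns : (bank-angle command [rad], pitch-rate command [rad/s], airspeed command [m/s])
    """
    bank_cmd = np.clip(k_bank * az, -max_bank, max_bank)   # roll toward the target
    q_cmd = np.clip(k_pitch * el, -max_q, max_q)           # pitch toward the target
    speed_cmd = airspeed                                    # hold speed in this sketch
    return bank_cmd, q_cmd, speed_cmd

# Target seen 10 deg right of and 3 deg below the image centre:
print(visual_pursuit_commands(np.deg2rad(10), np.deg2rad(-3), airspeed=18.0))
```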
78

Coordinate-Free Spacecraft Formation Control with Global Shape Convergence under Vision-Based Sensing

Mirzaeedodangeh, Omid January 2023 (has links)
Formation control in multi-agent systems represents a groundbreaking intersection of several research fields, with many emerging applications across technologies. Space exploration in particular can benefit significantly from formation control, which facilitates a wide range of functions, from astronomical observation and climate monitoring to telecommunications and on-orbit servicing and assembly. In this thesis, we present a novel 3D formation control scheme for directed graphs in a leader-follower configuration, achieving (almost) global convergence to the desired shape. Specifically, we introduce three controlled variables representing bispherical coordinates that uniquely describe the formation in 3D. Acyclic triangulated directed graphs (a class of minimally acyclic persistent graphs) are used to model the inter-agent sensing topology, while the agents' dynamics are governed by the single-integrator model and by a second-order nonlinear model representing spacecraft formation flight. The analysis demonstrates that the proposed decentralized robust formation controller, which uses prescribed performance control, ensures (almost) global asymptotic stability while avoiding potential shape ambiguities in the final formation. Furthermore, the control laws can be implemented in arbitrarily oriented local coordinate frames of the follower agents using only low-cost onboard vision sensors, making them suitable for practical applications. Formation maneuvering and collision avoidance among agents, which play crucial roles in the safety of space operations, are also addressed. Finally, we validate our formation control approach through simulation studies.
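For intuition only, the sketch below simulates a generic displacement-based leader-follower law for single-integrator agents on a small directed sensing graph: each follower steers toward a desired offset from the agents it senses. This is a deliberately simpler stand-in, not the bispherical-coordinate, prescribed-performance controller of the thesis, and the graph, offsets and gain are invented.

```python
# Simplified leader-follower formation keeping with single-integrator agents.
import numpy as np

n = 4
dt = 0.05
# Directed sensing: follower i senses the agents listed in neighbors[i] (leader = 0).
neighbors = {1: [0], 2: [0, 1], 3: [1, 2]}
# Desired displacements d[i][j] = desired position of i relative to j (assumed, consistent).
d = {1: {0: np.array([-1.0, 0.0, 0.0])},
     2: {0: np.array([-1.0, -1.0, 0.0]), 1: np.array([0.0, -1.0, 0.0])},
     3: {1: np.array([-1.0, -1.0, 0.0]), 2: np.array([-1.0, 0.0, 0.0])}}

x = np.random.randn(n, 3)            # random initial positions
k = 1.5                              # control gain

for _ in range(2000):
    u = np.zeros_like(x)
    u[0] = np.zeros(3)                               # the leader holds position in this sketch
    for i, js in neighbors.items():
        for j in js:
            u[i] += k * ((x[j] + d[i][j]) - x[i])    # move toward desired offset from j
    x += dt * u

print("final positions relative to the leader:\n", x - x[0])
```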
79

Autonomous Navigation in Partially-Known Environment using Nano Drones with AI-based Obstacle Avoidance : A Vision-based Reactive Planning Approach for Autonomous Navigation of Nano Drones / Autonom Navigering i Delvis Kända Miljöer med Hjälp av Nanodrönare med AI-baserat Undvikande av Hinder : En Synbaserad Reaktiv Planeringsmetod för Autonom Navigering av Nanodrönare

Sartori, Mattia January 2023 (has links)
The adoption of small-size Unmanned Aerial Vehicles (UAVs) in the commercial and professional sectors is growing rapidly. The miniaturisation of sensors and processors, advances in connected edge intelligence and the exponential interest in Artificial Intelligence (AI) are boosting the adoption of autonomous nano-size drones in the Internet of Things (IoT) ecosystem. However, achieving safe autonomous navigation and high-level tasks like exploration and surveillance with these tiny platforms is extremely challenging due to their limited resources. Lightweight and reliable solutions to this challenge are the subject of ongoing research. This work focuses on enabling the autonomous flight of a pocket-size, 30-gram platform called Crazyflie in a partially known environment. We implement a modular pipeline for the safe navigation of the nano drone between waypoints. In particular, we propose an AI-aided, vision-based reactive planning method for obstacle avoidance. We deal with the constraints of the nano drone by splitting the navigation task into two parts: a deep-learning-based object detector runs on external hardware while the planning algorithm executes onboard. For the reactive approach, we take inspiration from existing sensor-based navigation solutions and obtain a novel obstacle-avoidance method that does not rely on distance information. We also analyse the communication aspect and the latencies involved in edge offloading, and we share insights into the fine-tuning of an SSD MobileNet V2 object detector on a custom dataset of low-resolution, grayscale images acquired with the drone. The results show the ability to command the drone at ∼ 8 FPS and a model performance reaching a COCO mAP of 60.8. Field experiments demonstrate the feasibility of the solution, with the drone flying at a top speed of 1 m/s while steering away from an obstacle placed at an unknown position and reaching the target destination. Additionally, we study the impact of a parameter determining the strength of the avoidance action and its influence on total path length, traversal time and task completion. The outcome demonstrates that the communication delay and the model performance are compatible with the requirements of the real-time navigation task, with a successful obstacle-avoidance rate reaching 100% in the best case. By exploiting the modularity of the proposed pipeline, future work could improve the individual parts and aim at a fully onboard implementation of the navigation task, pushing the boundaries of autonomous exploration with nano drones.
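A hedged sketch of what a distance-free reactive rule of this kind can look like is given below: an off-board detector returns bounding boxes, and boxes near the image centre bias the yaw-rate command away from them, with a strength parameter playing the role of the avoidance gain studied above. The image size, thresholds and gains are invented for illustration; this is not the thesis' algorithm.

```python
# Distance-free reactive avoidance: steer away from detections near the image centre,
# using apparent box width as a proxy for closeness (no range information).
import numpy as np

IMG_W, IMG_H = 324, 244        # low-resolution grayscale camera (assumed)

def avoidance_yaw_rate(boxes, strength=1.0, max_yaw_rate=1.0):
    """
    boxes : list of (x_min, y_min, x_max, y_max) detections in pixels
    returns a yaw-rate command [rad/s]; positive = turn left.
    """
    cmd = 0.0
    for (x0, y0, x1, y1) in boxes:
        cx = 0.5 * (x0 + x1)                       # box centre, pixels
        width_frac = (x1 - x0) / IMG_W             # larger box -> stronger reaction
        offset = (cx - IMG_W / 2) / (IMG_W / 2)    # -1 (far left) .. +1 (far right)
        if abs(offset) < 0.6:                      # obstacle roughly ahead
            direction = 1.0 if offset >= 0 else -1.0   # obstacle on the right -> turn left
            cmd += direction * strength * width_frac
    return float(np.clip(cmd, -max_yaw_rate, max_yaw_rate))

# One obstacle slightly right of centre and fairly large in the image:
print(avoidance_yaw_rate([(180, 60, 280, 200)], strength=1.2))
```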
80

Vision-based control and landing of micro aerial vehicles / Visionsbaserad styrning och landning av drönare

Karlsson, Christoffer January 2019 (has links)
This bachelor's thesis presents a vision-based control system for the quadrotor aerial vehicle Crazyflie 2.0, developed by Bitcraze AB. The main goal of the thesis is to design and implement an off-board control system based on visual input, in order to control the position and orientation of the vehicle with respect to a single fiducial marker. By integrating a camera and a wireless video transmitter onto the MAV platform, we are able to achieve autonomous navigation and landing in relatively close proximity to the dedicated target location. The control system was developed in the programming language Python, and all processing of the vision data takes place on an off-board computer. This thesis describes the methods used to develop and implement the control system, and a number of experiments were carried out in order to determine the performance of the overall vision control system. With the proposed method of using fiducial markers to calculate the control demands for the quadrotor, we are able to achieve autonomous targeted landing within a radius of 10 centimetres of the target location.
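A minimal sketch of marker-relative control in this spirit is shown below: an ArUco fiducial is detected in the camera image, and its pixel offset and apparent size are turned into proportional velocity setpoints. It assumes opencv-contrib-python 4.7 or newer for the ArucoDetector API; the dictionary, gains, target size and setpoint interface are illustrative and not those of the thesis.

```python
# Marker-relative proportional control from an ArUco detection (sketch only).
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

K_XY, K_Z = 0.002, 0.5          # proportional gains (assumed)
TARGET_SIZE_FRAC = 0.25         # desired marker width as a fraction of image width

def marker_velocity_setpoint(frame):
    """Return (vx, vy, vz) body-frame velocity setpoints, or None if no marker is seen."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    c = corners[0].reshape(4, 2)                    # first detected marker
    cx, cy = c.mean(axis=0)
    h, w = gray.shape
    err_x = cx - w / 2.0                            # lateral pixel error
    err_y = cy - h / 2.0                            # vertical pixel error
    size_frac = np.linalg.norm(c[0] - c[1]) / w     # apparent marker width
    vy = -K_XY * err_x                              # slide sideways to centre the marker
    vz = -K_XY * err_y                              # climb/descend to centre the marker
    vx = K_Z * (TARGET_SIZE_FRAC - size_frac)       # approach until the marker looks big enough
    return float(vx), float(vy), float(vz)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print(marker_velocity_setpoint(frame))
```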
