31 |
Omnidirectional Robot / Omnidirektionell Robot. Hedvall, Axel; Rydén, Filip. January 2021 (has links)
Robots are being used more and more in today’s society. These robots need to be mobile and have a good understanding of their surroundings. This bachelor’s thesis in mechatronics investigates how a mobile robot can be constructed and how it can best map its surroundings. The robot was built with three omni wheels to allow it to move freely in the plane and stepper motors to provide accurate movement. Ultrasonic sensors were placed around the robot to be used as a tool to determine its surroundings. The brain of the robot was an Arduino UNO, which, with the help of an ESP-01, communicated with a server over Wi-Fi. The server received the data from the ultrasonic sensors and drew a map on a web page. Multiple tests were made to evaluate the different systems. After some tuning, the robot moved smoothly and with high precision. The ultrasonic sensors were also very precise, and the communication between the robot and the server worked very well. All the different systems were combined to make the robot move autonomously. The robot could navigate by itself and avoid obstacles. Although the mapping worked from a technical point of view, the resulting map was hard to read and could be improved. / Robotar är något som används mer och mer i dagens moderna samhälle. Dessa robotar behöver vara mobila och ha en god uppfattning om miljön de befinner sig i. Detta kandidatexamensarbete inom mekatronik ska undersöka hur en mobil robot kan byggas, och hur den kan kartlägga miljön den befinner sig i. Roboten som konstruerades hade tre omnihjul för att kunna röra sig fritt längs markplanet och stegmotorer för precis drift. Ultraljudsensorer placerades runt om roboten för att ge den en uppfattning av omgivningen. Hjärnan i roboten var en Arduino UNO som med hjälp av en ESP-01 kommunicerade över Wi-Fi till en server. Servern tog emot sensordata från roboten och ritade upp det som en karta i en webbläsare. Det utfördes tester för att utvärdera de olika delsystemen. Driften på roboten fungerade utmärkt med god precision efter några iterationer. Ultraljudsensorerna hade också god precision och kommunikationen mellan roboten och servern fungerade mycket bra. De olika delsystemen kombinerades för att ge roboten självkörning. Roboten kunde navigera själv och undvika hinder. Trots att kartan fungerade ur ett tekniskt perspektiv så var den svårtydd och kunde ha förbättrats.
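To illustrate the drive geometry described above, the following sketch maps a desired planar velocity of a three-omni-wheel base to individual wheel speeds. The 120° wheel placement, the wheel and base radii, and the function name are assumptions made for illustration; the thesis does not specify its exact geometry or firmware, so this is a generic sketch rather than the authors' implementation.

```python
import math

def omni3_wheel_speeds(vx, vy, omega, base_radius=0.1, wheel_radius=0.03):
    """Map a body-frame velocity command (vx, vy in m/s, omega in rad/s)
    to angular speeds (rad/s) of three omni wheels mounted 120 degrees apart.
    Geometry and parameter values are illustrative assumptions."""
    wheel_angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # wheel positions around the chassis
    speeds = []
    for a in wheel_angles:
        # Velocity component along each wheel's drive (tangential) direction.
        v_wheel = -math.sin(a) * vx + math.cos(a) * vy + base_radius * omega
        speeds.append(v_wheel / wheel_radius)
    return speeds

# Example: pure sideways motion at 0.2 m/s with no rotation.
print(omni3_wheel_speeds(0.2, 0.0, 0.0))
```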
|
32 |
Omnidirectional Vision for an Autonomous Surface Vehicle. Gong, Xiaojin. 07 February 2009 (has links)
Due to their wide field of view, omnidirectional cameras have been extensively used in many applications, including surveillance and autonomous navigation. In order to implement a fully autonomous system, one of the essential problems is the construction of an accurate, dynamic environment model. In Computer Vision this is called structure from stereo or motion (SFSM). The work in this dissertation addresses omnidirectional-vision-based SFSM for the navigation of an autonomous surface vehicle (ASV), and implements a vision system capable of locating stationary obstacles and detecting moving objects in real time.
The environments where the ASV navigates are complex and full of noise, so system performance is a primary concern. In this dissertation, we thoroughly investigate the range estimation performance of our omnidirectional vision system with respect to different omnidirectional stereo configurations and various kinds of noise, for instance disturbances in calibration, stereo configuration, and image processing. The result of this performance analysis is very important for our application: it not only impacts the ASV's navigation but also guides the development of our omnidirectional stereo vision system.
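To make the link between measurement noise and range accuracy concrete, the sketch below propagates a disparity error into a range error using the standard stereo triangulation relation Z = f*b/d. The baseline, focal length, and noise values are illustrative assumptions, not the configuration analyzed in the dissertation.

```python
def range_error(depth, baseline, focal_px, disparity_noise_px):
    """First-order range uncertainty for a stereo rig with Z = f*b/d:
    dZ ~ Z^2 / (f*b) * dd.  All parameter values below are illustrative."""
    return depth ** 2 / (focal_px * baseline) * disparity_noise_px

# Example: 0.1 m baseline, 400 px focal length, 0.5 px matching noise.
for z in (2.0, 5.0, 10.0):
    print(f"range {z:4.1f} m -> error ~ {range_error(z, 0.1, 400, 0.5):.2f} m")
```

For a fixed baseline, the range error grows quadratically with distance, which is one reason the choice of stereo configuration matters so much in this kind of performance analysis.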
Another big challenge is dealing with the noisy image data acquired in riverine environments. In our vision system, a four-step image processing procedure is designed: feature detection, feature tracking, motion detection, and outlier rejection. The choice of point-wise features and an outlier-rejection-based method makes motion detection and stationary obstacle detection efficient. Long-run outdoor experiments, conducted in real time, show the effectiveness of the system. / Ph. D.
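A minimal skeleton of such a four-step procedure might look as follows. The use of OpenCV corner features, pyramidal Lucas-Kanade tracking, and a RANSAC-fitted dominant-motion model are assumptions made for illustration; the dissertation works with omnidirectional imagery, whose actual motion model and implementation differ.

```python
import cv2

def process_frame_pair(prev_gray, curr_gray, ransac_thresh=1.0):
    """One iteration of a four-step procedure: detect, track, detect motion,
    reject outliers.  Parameter values and the use of OpenCV are illustrative."""
    # 1. Feature detection: point-wise corner features in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    # 2. Feature tracking: pyramidal Lucas-Kanade optical flow into the current frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1].reshape(-1, 2)
    good_next = nxt[status.ravel() == 1].reshape(-1, 2)
    # 3./4. Motion detection and outlier rejection: fit a dominant (ego-motion)
    # model with RANSAC; features that do not fit are either outliers or
    # candidates for independently moving objects.
    _, inlier_mask = cv2.findHomography(good_prev, good_next, cv2.RANSAC, ransac_thresh)
    static = good_next[inlier_mask.ravel() == 1]
    moving_or_outlier = good_next[inlier_mask.ravel() == 0]
    return static, moving_or_outlier
```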
|
33 |
Design, Simulation, and Experimental Validation of a Novel High-Speed Omnidirectional Underwater Propulsion Mechanism. Njaka, Taylor Dean. 11 January 2021 (has links)
This dissertation explores a novel omnidirectional propulsion mechanism for observation-class underwater vehicles, enabling operation in extreme, hostile, or otherwise high-speed turbulent environments where unprecedented speed and agility are necessary. With a small overall profile, the mechanism consists of two sets of counter-rotating blades operating at frequencies high enough to dampen vibrational effects on onboard sensors. Each rotor is individually powered to allow for roll control via relative motor effort and attached to a swashplate mechanism, providing quick and powerful manipulation of fluid-flow direction in the hull's coordinate frame without the need to track rotor position. The omnidirectional mechanism exploits properties emerging from its continuous counter-rotating blades to generate near-instantaneous forces and moments of considerable magnitude in six degrees of freedom (DOF), and is designed to allow each DOF to be controlled independently by one of six decoupled control parameters. The work presented in this dissertation validates the mechanism through physical small-scale experimentation, confirming near-instantaneous reaction time and aligning with computational fluid dynamic (CFD) results presented for the proposed full-scale implementation. Specifically, it is demonstrated that the mechanism can generate sway thrust at 10-20% of surge thrust capacity in both simulation and physical tests. It is also shown that the magnitude of the forces and moments generated is directly proportional to motor effort and the corresponding commands, on par with theory. Any apparent couplings between different control modes are analyzed and shown to be trivially accounted for, effectively decoupling all six control parameters. The design, principles, and bollard-pull simulation of the proposed full-scale mechanism and vehicle implementation are then thoroughly discussed. Kinematic and hydrodynamic analyses of the hull and surrounding fluid forces during different maneuvers are presented, followed by the mechanical design and kinematic analysis of each subsystem. To estimate the proposed full-scale performance specifications and UUV turbulence rejection, a full six-DOF maneuvering model is constructed from first principles utilizing CFD and regression techniques. This dissertation thoroughly examines the working principles and performance of a novel omnidirectional propulsion mechanism. With the small-scale model and the full-scale simulation and analysis, the work presented successfully demonstrates that the mechanism can generate nearly instantaneous omnidirectional forces underwater in a controlled manner, with application to high-speed agile vehicles in dynamic environments. / Doctor of Philosophy / This dissertation explores a novel omnidirectional propulsion mechanism for observation-class underwater vehicles, enabling operation in extreme, hostile, or otherwise high-speed turbulent environments where unprecedented speed and agility are necessary. The mechanism utilizes independently powered rotors to command near-instantaneous forces and moments in all six degrees of freedom (DOF). The design allows each DOF to be independently controlled by one of six decoupled control parameters. The method for generating lateral thrust through the mechanism was originally verified through computational fluid dynamic (CFD) tests, but the complete novelty of the lateral maneuver calls for physical verification to provide any noteworthy validation.
The work presented in this dissertation validates the mechanism through physical small-scale experimentation, confirming near-instantaneous reaction time and aligning with CFD results presented for the proposed full-scale implementation. Specifically, it is demonstrated that the mechanism can generate sway (side-to-side) thrust at 10-20% of surge (forward/backward) thrust capacity in both simulation and physical tests. It is also shown that the magnitude of the forces and moments generated is directly proportional to motor effort and the corresponding commands, on par with theory. Finally, a full six-DOF model for the underwater vehicle trajectory is constructed utilizing detailed maneuvering techniques to estimate full-scale performance. With the small-scale model and the full-scale simulation and analysis, the work successfully demonstrates that the mechanism can generate nearly instantaneous omnidirectional forces underwater in a controlled manner, with application to high-speed agile vehicles in dynamic environments.
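The decoupling claim, one control parameter per degree of freedom, can be illustrated with a simple allocation step in which each commanded parameter scales an independent force or moment. The gain values and names below are placeholders chosen for illustration (the sway gain is set to roughly 17% of the surge gain, consistent with the 10-20% figure above); they are not the dissertation's identified coefficients.

```python
import numpy as np

# Illustrative gains mapping six normalized control parameters (u in [-1, 1])
# to body-frame forces [N] and moments [N*m].  The diagonal structure reflects
# the claim that each DOF is driven by one decoupled parameter; the numbers
# are placeholders, not measured coefficients.
GAINS = np.diag([120.0, 20.0, 60.0, 4.0, 6.0, 6.0])  # surge, sway, heave, roll, pitch, yaw

def allocate(u):
    """Return the commanded wrench [Fx, Fy, Fz, Mx, My, Mz] for control vector u."""
    u = np.clip(np.asarray(u, dtype=float), -1.0, 1.0)
    return GAINS @ u

# Example: full surge with a small yaw correction.
print(allocate([1.0, 0.0, 0.0, 0.0, 0.0, 0.2]))
```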
|
34 |
6.78 MHz Omnidirectional Wireless Power Transfer System for Portable Devices Application. Feng, Junjie. 11 January 2021 (has links)
Wireless power transfer (WPT) with loosely coupled coils is a promising solution to deliver power to a battery in a variety of applications. Due to its convenience, wireless power transfer technology has become popular in consumer electronics. Thus far, the majority of the coupled coils in these systems have a planar structure, and the magnetic field induced by the transmitter coil points in one direction, meaning that the power transfer capability degrades greatly when there is angle misalignment between the coupled coils.
To improve the charging flexibility, a three-dimensional (3D) coil structure is proposed to transfer energy in different directions. With appropriately modulated currents flowing through each transmitter coil, the magnetic field rotates and covers all directions in 3D space. With an omnidirectional magnetic field, the charging platform can provide energy transfer in any direction; therefore, angle alignment between the transmitter coil and receiver coil is no longer needed.
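The idea of steering the field by modulating the transmitter currents can be sketched as follows: the net field direction at the center of the coil set is the current-weighted sum of the coil axes, so phase-shifted currents make it rotate. The coil axes, drive waveforms, and proportionality are illustrative assumptions, not the actual coil design.

```python
import math

# Unit axes of three orthogonal transmitter coils (illustrative).
COIL_AXES = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

def field_direction(t, f=6.78e6):
    """Direction of the net magnetic field at the center of a 3D coil set when
    the x and y coils are driven 90 degrees out of phase and the z coil is off.
    The field then rotates in the x-y plane; other phase/amplitude choices
    steer it to any direction in 3D space.  Values are illustrative only."""
    currents = [math.cos(2 * math.pi * f * t),                 # x coil
                math.cos(2 * math.pi * f * t - math.pi / 2),   # y coil, 90 degree lag
                0.0]                                           # z coil idle
    # Field is proportional to the current-weighted sum of coil axes.
    return tuple(sum(i * ax[k] for i, ax in zip(currents, COIL_AXES)) for k in range(3))

# Sample a quarter of one 6.78 MHz cycle: the field vector sweeps from +x toward +y.
period = 1 / 6.78e6
for step in range(5):
    print(field_direction(step * period / 16))
```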
Compensation networks are normally used to improve the power transfer capability of a WPT system with loosely coupled coils. The resonant circuits, formed by the loosely coupled coils and external compensation inductors or capacitors, are crucial in the converter design. In a WPT system, the coupling coefficient between the transmitting coil and the receiving coil depends on the receiver's positioning. This variable coupling condition is a major challenge for resonant topology selection. The detailed requirements of the resonant converter in an omnidirectional WPT system are identified as follows: (1) coupling-independent resonant frequency; (2) load-independent output voltage; (3) load-independent transmitter coil current; (4) maximum-efficiency power transfer; (5) soft switching of active devices. An LCCL-LC resonant converter is derived to satisfy all five requirements.
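As a simple illustration of sizing a compensation element so that resonance is set by external components rather than by the variable coupling, the sketch below computes the capacitance that resonates a given inductance at 6.78 MHz. The inductance values are assumptions for illustration; the actual LCCL-LC design procedure involves more constraints than this single equation.

```python
import math

F_OP = 6.78e6  # AirFuel operating frequency [Hz]

def resonant_cap(inductance_h, f=F_OP):
    """Capacitance that resonates a given inductance at frequency f:
    C = 1 / ((2*pi*f)^2 * L).  Inductance values below are illustrative."""
    return 1.0 / ((2 * math.pi * f) ** 2 * inductance_h)

# Example: series compensation of a 1.2 uH external inductor and a 4.7 uH coil.
for name, L in [("external inductor", 1.2e-6), ("transmitter coil", 4.7e-6)]:
    print(f"{name}: L = {L * 1e6:.1f} uH -> C ~ {resonant_cap(L) * 1e12:.0f} pF")
```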
In consumer electronics applications, megahertz (MHz) WPT systems are used to improve the charging spatial freedom, and 6.78 MHz is selected as the operating frequency in the AirFuel standard, a wireless charging standard for commercial electronics. Zero voltage switching (ZVS) operation of the switching devices is essential in reducing the switching loss and the switching-related electromagnetic interference (EMI) in an MHz system; therefore, a comprehensive evaluation of the ZVS condition in an omnidirectional WPT system is performed, and a design methodology for the LCCL-LC converter to achieve ZVS operation is proposed.
A big hurdle for WPT technology is the safety issue related to human exposure to electromagnetic fields (EMF). A double-layer shield structure, consisting of a magnetic layer and a conductive layer, is proposed for the three-dimensional charging setup to reduce the stray magnetic field level. A parametric analysis of the double-layer shield is conducted to improve its attenuation capability.
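One quantity that typically drives such a parametric shield analysis is the skin depth of the conductive layer at the operating frequency; the sketch below evaluates it for two common conductor materials. The material list is an illustrative assumption, not necessarily the dissertation's final choice.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]
F_OP = 6.78e6             # operating frequency [Hz]

def skin_depth(resistivity, rel_permeability=1.0, f=F_OP):
    """Skin depth delta = sqrt(2*rho / (omega * mu)) in meters."""
    omega = 2 * math.pi * f
    return math.sqrt(2 * resistivity / (omega * MU0 * rel_permeability))

# Illustrative materials: resistivity [ohm*m] at room temperature.
for name, rho in [("copper", 1.68e-8), ("aluminum", 2.65e-8)]:
    print(f"{name}: skin depth at 6.78 MHz ~ {skin_depth(rho) * 1e6:.1f} um")
```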
In an omnidirectional WPT system, the energy can be transferred in any direction; however, each receiving device has its preferred field direction based on its positioning and orientation. To focus power transfer towards targeted loads, a smart detection algorithm for identifying the positioning and orientation of receiver devices based on the input power information is presented. The system efficiency is further improved by a maximum efficiency point tracking function. A novel power flow control with a load combination strategy to charge multiple loads simultaneously is explained. The charging speed of the omnidirectional WPT system is greatly improved with the proposed power flow control. / Doctor of Philosophy / Wireless power transfer (WPT) is a promising solution to deliver power to a battery in a variety of applications. Due to its convenience, wireless power transfer technology with loosely coupled coils has become popular in consumer electronics. In such a system, the receiving coil embedded in the receiving device picks up the magnetic field induced by the transmitter coil; therefore, energy is transferred through the magnetic field and contactless charging is achieved. Thus far, the majority of the coupled coils in these systems have a planar structure, and the magnetic field induced by the transmitter coil points in one direction, meaning that the power transfer capability degrades greatly when there is angle misalignment between the coupled coils.
To improve the charging flexibility, a three-dimensional (3D) coil structure is proposed to transfer energy in different directions, that is, in an omnidirectional manner. With an omnidirectional magnetic field, the charging platform can provide energy transfer in any direction; therefore, angle alignment between the transmitter coil and receiver coil is no longer needed.
In a WPT system with loosely coupled coils, the energy transfer capability suffers under weak coupling conditions. To improve the power transfer capability, the concept of electrical resonance between an inductor and a capacitor at the power transfer frequency is adopted. A novel compensation network is proposed to form a resonant tank with the loosely coupled coils and maximize the power transfer at the operating frequency.
For a WPT system with loosely coupled coils, the energy transfer capability is also proportional to the operating frequency. Therefore, megahertz (MHz) WPT systems are used to improve the charging spatial freedom, and 6.78 MHz is selected as the operating frequency in the AirFuel standard, a wireless charging standard for commercial electronics. Zero voltage switching (ZVS) operation of the switching devices is essential in reducing the switching loss and the switching-related electromagnetic interference (EMI) in an MHz system; therefore, a comprehensive evaluation of the ZVS condition in an omnidirectional WPT system is performed.
A big hurdle for WPT technology is the safety concern related to human exposure to electromagnetic fields (EMF). Therefore, a double-layer shield structure is first applied to the three-dimensional charging setup to confine the electromagnetic fields effectively. The stray field level of our charging platform is well below the limit required by regulatory agencies.
Although energy can be transferred in an omnidirectional manner in the proposed charging platform, it should be directed to the target loads to avoid unnecessary energy waste. Therefore, a smart detection method is proposed to detect the receiver coil's orientation and focus the energy transfer in the direction preferred by the receiver. This energy beaming strategy greatly improves the charging speed of the setup.
|
35 |
Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video. Rovelo Ruiz, Gustavo Alberto. 29 July 2015 (has links)
[EN] Human-Computer Interaction is a multidisciplinary research field that combines, amongst others, Computer Science and Psychology. It studies human-computer interfaces from the point of view of both technology and the user experience.
Researchers in this area now have a great opportunity, mostly because the technology required to develop 3D user interfaces for computer applications (e.g. visualization, tracking or portable devices) is more affordable than it was a few years ago.
Augmented Reality and Omni-Directional Video are two promising examples of this type of interface, where the user is able to interact with the application in the three-dimensional space beyond the 2D screen.
The work described in this thesis focuses on the evaluation of interaction aspects in both types of applications. The main goal is to contribute to increasing the knowledge about this new type of interface in order to improve its design. We evaluate how computer interfaces can convey information to the user in Augmented Reality applications by exploiting human multisensory capabilities. Furthermore, we evaluate how the user can give commands to the system using more than one type of input modality, studying gesture-based interaction with Omnidirectional Video.
We describe the experiments we performed, outline the results for each particular scenario and discuss the general implications of our findings. / [ES] El campo de la Interacción Persona-Computadora es un área multidisciplinaria que combina, entre otras a las Ciencias de la Computación y Psicología. Estudia la interacción entre los sistemas computacionales y las personas considerando tanto el desarrollo tecnológico, como la experiencia del usuario.
Los dispositivos necesarios para crear interfaces de usuario 3D son ahora más asequibles que nunca (v.gr. dispositivos de visualización, de seguimiento o móviles) abriendo así un área de oportunidad para los investigadores de esta disciplina. La Realidad Aumentada y el Video Omnidireccional son dos ejemplos de este tipo de interfaces en donde el usuario es capaz de interactuar en el espacio tridimensional más allá de la pantalla de la computadora.
El trabajo presentado en esta tesis se centra en la evaluación de la interacción del usuario con estos dos tipos de aplicaciones. El objetivo principal es contribuir a incrementar la base de conocimiento sobre este tipo de interfaces y así, mejorar su diseño.
En este trabajo investigamos de qué manera se pueden emplear de forma eficiente las interfaces multimodales para proporcionar información relevante en aplicaciones de Realidad Aumentada. Además, evaluamos de qué forma el usuario puede usar interfaces 3D usando más de un tipo de interacción; para ello evaluamos la interacción basada en gestos para Video Omnidireccional.
A lo largo de este documento se describen los experimentos realizados y los resultados obtenidos para cada caso en particular. Se presenta además una discusión general de los resultados. / [CA] El camp de la Interacció Persona-Ordinador és una àrea d'investigació multidisciplinar que combina, entre d'altres, les Ciències de la Informàtica i de la Psicologia. Estudia la interacció entre els sistemes computacionals i les persones considerant tant el desenvolupament tecnològic, com l'experiència de l'usuari.
Els dispositius necessaris per a crear interfícies d'usuari 3D són ara més assequibles que mai (v.gr. dispositius de visualització, de seguiment o mòbils) obrint així una àrea d'oportunitat per als investigadors d'aquesta disciplina. La Realitat Augmentada i el Vídeo Omnidireccional són dos exemples d'aquest tipus d'interfícies on l'usuari és capaç d'interactuar en l'espai tridimensional més enllà de la pantalla de l'ordinador.
El treball presentat en aquesta tesi se centra en l'avaluació de la interacció de l'usuari amb aquests dos tipus d'aplicacions. L'objectiu principal és contribuir a augmentar el coneixement sobre aquest nou tipus d'interfícies i així, millorar el seu disseny. En aquest treball investiguem de quina manera es poden utilitzar de forma eficient les interfícies multimodals per a proporcionar informació rellevant en aplicacions de Realitat Augmentada. A més, avaluem com l'usuari pot utilitzar interfícies 3D utilitzant més d'un tipus d'interacció; per aquesta raó, avaluem la interacció basada en gest per a Vídeo Omnidireccional.
Al llarg d'aquest document es descriuen els experiments realitzats i els resultats obtinguts per a cada cas particular. A més a més, es presenta una discussió general dels resultats. / Rovelo Ruiz, GA. (2015). Multimodal 3D User Interfaces for Augmented Reality and Omni-Directional Video [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53916
|
36 |
Local visual feature based localisation and mapping by mobile robots. Andreasson, Henrik. January 2008 (has links)
This thesis addresses the problems of registration, localisation and simultaneous localisation and mapping (SLAM), relying particularly on local visual features extracted from camera images. These fundamental problems in mobile robot navigation are tightly coupled. Localisation requires a representation of the environment (a map) and registration methods to estimate the pose of the robot relative to the map given the robot’s sensory readings. To create a map, sensor data must be accumulated into a consistent representation and therefore the pose of the robot needs to be estimated, which is again the problem of localisation. The major contributions of this thesis are new methods proposed to address the registration, localisation and SLAM problems, considering two different sensor configurations. The first part of the thesis concerns a sensor configuration consisting of an omni-directional camera and odometry, while the second part assumes a standard camera together with a 3D laser range scanner. The main difference is that the former configuration allows for a very inexpensive set-up and (considering the possibility to include visual odometry) the realisation of purely visual navigation approaches. By contrast, the second configuration was chosen to study the usefulness of colour or intensity information in connection with 3D point clouds (“coloured point clouds”), both for improved 3D resolution (“super resolution”) and approaches to the fundamental problems of navigation that exploit the complementary strengths of visual and range information. Considering the omni-directional camera/odometry setup, the first part introduces a new registration method based on a measure of image similarity. This registration method is then used to develop a localisation method, which is robust to the changes in dynamic environments, and a visual approach to metric SLAM, which does not require position estimation of local image features and thus provides a very efficient approach. The second part, which considers a standard camera together with a 3D laser range scanner, starts with the proposal and evaluation of non-iterative interpolation methods. These methods use colour information from the camera to obtain range information at the resolution of the camera image, or even with sub-pixel accuracy, from the low resolution range information provided by the range scanner. Based on the ability to determine depth values for local visual features, a new registration method is then introduced, which combines the depth of local image features and variance estimates obtained from the 3D laser range scanner to realise a vision-aided 6D registration method, which does not require an initial pose estimate. This is possible because of the discriminative power of the local image features used to determine point correspondences (data association). The vision-aided registration method is further developed into a 6D SLAM approach where the optimisation constraint is based on distances of paired local visual features. Finally, the methods introduced in the second part are combined with a novel adaptive normal distribution transform (NDT) representation of coloured 3D point clouds into a robotic difference detection system.
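The vision-aided 6D registration described above ultimately rests on estimating a rigid transform from paired 3D points once matched local features have known depth. The sketch below shows that core step in its generic closed-form (SVD) formulation, assuming correspondences are already established; it omits the feature variance weighting and the rest of the thesis pipeline.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Estimate rotation R and translation t minimizing ||R*src + t - dst||
    for paired 3D points (N x 3 arrays), via the closed-form SVD solution.
    Assumes correspondences are already established, e.g. from matched
    local visual features with known depth."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Tiny self-check with a known rotation about the z-axis and a known translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pts = np.random.rand(50, 3)
R_est, t_est = rigid_transform_3d(pts, pts @ R_true.T + np.array([0.5, -0.2, 1.0]))
print(np.allclose(R_est, R_true), t_est)
```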
|
37 |
Auto-localização e construção de mapas de ambiente para robôs móveis baseados em visão omnidirecional estéreo. / Simultaneous localization and map building for mobile robots with omnidirectional stereo vision. Oliveira, Paulo Roberto Godoi de. 14 April 2008 (has links)
Este projeto consiste no desenvolvimento de um sistema para auto-localização e construção de mapas de ambiente para robôs móveis em um ambiente estruturado, ou seja, que pode ser descrito através de primitivas geométricas. O mapa é construído a partir da reconstrução de imagens adquiridas por um sistema de visão omnidirecional estéreo baseado em um espelho duplo de perfil hiperbólico. A partir de uma única imagem obtida, utilizando-se algoritmos de visão estéreo, realiza-se a reconstrução tridimensional do ambiente em torno do robô e, assim, obtêm-se as distâncias de objetos presentes no ambiente ao sistema de visão. A partir da correspondência da reconstrução de várias imagens tomadas em diferentes posições cria-se o mapa do ambiente. Além do mapa global do ambiente o sistema também realiza o cálculo da localização do robô no ambiente utilizando informações obtidas na correspondência da reconstrução da seqüência de imagens e a odometria do robô. O sistema de construção de mapas de ambiente e auto-localização do robô é testado em um ambiente virtual e um ambiente real. Os resultados obtidos tanto na construção do mapa global do ambiente, como na localização do robô, mostram que o sistema é capaz de obter informação com a acuracidade necessária para permitir a sua utilização para navegação de robôs móveis. O tempo computacional necessário para reconstruir as imagens, calcular a posição do robô e criar o mapa global do ambiente possibilita que o sistema desenvolvido seja usado em uma aplicação que necessite da geração do mapa global em um intervalo de tempo na ordem de poucos segundos. Ressalta-se que este projeto teve como ponto de partida um projeto de iniciação científica financiado pela FAPESP. Esse trabalho de iniciação científica foi publicado na forma de um trabalho de conclusão de curso (Oliveira, 2005). / This project aims at the development of a system for self-localization and environment map building for mobile robots in a structured environment. The map is built from images acquired by an omnidirectional stereo system with a hyperbolic double-lobed mirror. From a single acquired image, using stereo vision algorithms, the environment around the robot is reconstructed in three dimensions and the distances of objects in the environment from the system are calculated. From the matching of several reconstructed environments obtained from images taken at different positions, the global environment map is created. Besides the global map, the system also calculates the localization of the mobile robot using information obtained from the matching of the sequence of image reconstructions and the robot odometry. The map building and robot localization system is tested in virtual and real environments. The computational time required to make the calculation is of the order of a few seconds. The results obtained both for the map building and for the robot localization show that the system is capable of generating information with enough accuracy to allow it to be used for mobile robot navigation. This project had as its starting point a scientific initiation project supported by FAPESP. The scientific initiation project was published as a graduation work (Oliveira, 2005).
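The single-image stereo relies on the two mirror lobes providing two vertically displaced viewpoints. The sketch below shows the corresponding triangulation from two elevation angles under a simplified single-viewpoint-per-lobe model with made-up geometry; the real calibration of a hyperbolic double-lobed mirror is considerably more involved.

```python
import math

def range_from_two_lobes(elev_upper_deg, elev_lower_deg, baseline=0.05):
    """Horizontal distance to a point seen by both mirror lobes, modelled as two
    viewpoints separated vertically by `baseline` metres on the mirror axis.
    Angles are elevations of the point as seen from each viewpoint.  The
    single-viewpoint-per-lobe model and the numbers are simplifying assumptions."""
    t1 = math.tan(math.radians(elev_upper_deg))   # from the upper viewpoint
    t2 = math.tan(math.radians(elev_lower_deg))   # from the lower viewpoint
    # Same point at horizontal distance d:  z - h1 = d*t1,  z - h2 = d*t2,
    # with h1 - h2 = baseline  =>  d = baseline / (t2 - t1).
    return baseline / (t2 - t1)

# Example: a point on the floor appears 1 degree less steep from the lower viewpoint.
print(range_from_two_lobes(-10.0, -9.0))
```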
|
38 |
Visual odometry: comparing a stereo and a multi-camera approach / Odometria visual: comparando métodos estéreo e multi-câmera. Pereira, Ana Rita. 25 July 2017 (has links)
The purpose of this project is to implement, analyze and compare visual odometry approaches to support the localization task in autonomous vehicles. The stereo visual odometry algorithm Libviso2 is compared with a proposed omnidirectional multi-camera approach. The proposed method consists of performing monocular visual odometry on all cameras individually and selecting the best estimate through a voting scheme involving all cameras. The omnidirectionality of the vision system allows the part of the surroundings richest in features to be used in the relative pose estimation. Experiments are carried out using cameras Bumblebee XB3 and Ladybug 2, fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method leads to some improvements relative to the individual monocular estimates. However, stereo visual odometry provides considerably more accurate results. / O objetivo deste mestrado é implementar, analisar e comparar abordagens de odometria visual, de forma a contribuir para a localização de um veículo autônomo. O algoritmo de odometria visual estéreo Libviso2 é comparado com um método proposto, que usa um sistema multi-câmera omnidirecional. De acordo com este método, odometria visual monocular é calculada para cada câmera individualmente e, seguidamente, a melhor estimativa é selecionada através de um processo de votação que involve todas as câmeras. O fato de o sistema de visão ser omnidirecional faz com que a parte dos arredores mais rica em características possa sempre ser usada para estimar a pose relativa do veículo. Nas experiências são utilizadas as câmeras Bumblebee XB3 e Ladybug 2, fixadas no teto de um veículo. O processo de votação do método multi-câmera omnidirecional proposto apresenta melhorias relativamente às estimativas monoculares individuais. No entanto, a odometria visual estéreo fornece resultados mais precisos.
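A minimal sketch of the voting idea, selecting among the per-camera monocular estimates the one best supported by the others, could look as follows. The agreement metric (angle between translation directions) and the threshold are assumptions made for illustration, not the exact criterion used in this work.

```python
import numpy as np

def vote_best_motion(estimates, angle_thresh_deg=10.0):
    """Given one relative-motion estimate per camera as a translation direction
    (3-vector), return the index of the estimate most consistent with the others.
    The agreement criterion and threshold are illustrative assumptions."""
    dirs = [np.asarray(t, dtype=float) / np.linalg.norm(t) for t in estimates]
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    votes = []
    for di in dirs:
        # A camera votes for an estimate if its own estimate points in a similar direction.
        votes.append(sum(1 for dj in dirs if float(di @ dj) >= cos_thresh))
    best = int(np.argmax(votes))
    return best, votes

# Example: four cameras, one of them (index 2) clearly off.
ests = [(1.0, 0.02, 0.0), (0.98, -0.03, 0.01), (0.2, 0.9, 0.1), (1.0, 0.0, 0.05)]
print(vote_best_motion(ests))
```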
|
39 |
Construção de mapas de ambiente para navegação de robôs móveis com visão omnidirecional estéreo. / Map building for mobile robot navigation with omnidirectional stereo vision. Cláudia Cristina Ghirardello Deccó. 23 April 2004 (has links)
O problema de navegação de robôs móveis tem sido estudado ao longo de vários anos, com o objetivo de se construir um robô com elevado grau de autonomia. O aumento da autonomia de um robô móvel está relacionado com a capacidade de aquisição de informações e com a automatização de tarefas, tal como a construção de mapas de ambiente. Sistemas de visão são amplamente utilizados em tarefas de robôs autônomos devido a grande quantidade de informação contida em uma imagem. Além disso, sensores omnidirecionais catadióptricos permitem ainda a obtenção de informação visual em uma imagem de 360º, dispensando o movimento da câmera em direções de interesse para a tarefa do robô. Mapas de ambiente podem ser construídos para a implementação de estratégias de navegações mais autônomas. Nesse trabalho desenvolveu-se uma metodologia para a construção de mapas para navegação, os quais são a representação da geometria do ambiente. Contém a informação adquirida por um sensor catadióptrico omnidirecional estéreo, construído por uma câmera e um espelho hiperbólico. Para a construção de mapas, os processos de alinhamento, correspondência e integração, são efetuados utilizando-se métricas de diferença angular e de distância entre os pontos. A partir da fusão dos mapas locais cria-se um mapa global do ambiente. O processo aqui desenvolvido para a construção do mapa global permite a adequação de algoritmos de planejamento de trajetória, estimativa de espaço livre e auto-localização, de maneira a obter uma navegação autônoma. / The problem of mobile robot navigation has been studied for many years, aiming at building a robot with a high degree of autonomy. The increase in autonomy of a mobile robot is related to its capacity to acquire information and to automate tasks, such as environment map building. In this respect, vision has been widely used due to the great amount of information contained in an image. Besides that, catadioptric omnidirectional sensors allow visual information to be obtained in a 360° image, eliminating the need to move the camera towards directions of interest for the robot task. Environment maps may be built for the implementation of more autonomous navigation strategies. In this work, a methodology is developed for building maps for robot navigation, which are a representation of the environment geometry. The map contains the information received by a stereo omnidirectional catadioptric sensor built from a camera and a hyperbolic mirror. For the map building, the processes of alignment, registration and integration are performed using metrics of angular difference and distance between points. From the fusion of local maps, a global map of the environment is created. The method developed in this work for global map building can be coupled with path planning, self-localization and free-space estimation algorithms, so that autonomous robot navigation can be achieved.
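The correspondence-and-integration step can be illustrated with a small sketch that merges a new local point map into the global one, pairing points by Euclidean distance and fusing matched pairs by averaging. The threshold and the fusion rule are simplifying assumptions; the thesis uses metrics of both angular difference and distance.

```python
import numpy as np

def integrate_local_map(global_pts, local_pts, pose, dist_thresh=0.05):
    """Merge a local 2D point map (in robot frame) into the global map.
    `pose` = (x, y, theta) of the robot in the global frame.  Points that land
    within `dist_thresh` metres of an existing global point are fused by
    averaging; the rest are appended.  Threshold and fusion rule are illustrative."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    transformed = local_pts @ R.T + np.array([x, y])   # local -> global frame
    merged = list(map(np.asarray, global_pts))
    for p in transformed:
        if merged:
            d = np.linalg.norm(np.vstack(merged) - p, axis=1)
            j = int(np.argmin(d))
            if d[j] < dist_thresh:
                merged[j] = 0.5 * (merged[j] + p)       # fuse corresponding points
                continue
        merged.append(p)
    return np.vstack(merged)

# Example: two overlapping local scans taken 0.5 m apart along x.
scan = np.array([[1.0, 0.0], [1.0, 0.5], [1.0, 1.0]])
g = integrate_local_map(np.empty((0, 2)), scan, (0.0, 0.0, 0.0))
g = integrate_local_map(g, scan - np.array([0.5, 0.0]), (0.5, 0.0, 0.0))
print(g)
```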
|
40 |
Enable the next generation of interactive video streaming / Rendre possible la transmission via l’internet des prochaines générations de vidéos interactives. Corbillon, Xavier. 30 October 2018 (has links)
Les vidéos omnidirectionnelles, également appelées vidéos sphériques ou vidéos 360°, sont des vidéos avec des pixels enregistrés dans toutes les directions de l’espace. Un utilisateur qui regarde un tel contenu avec un Casque de Réalité Virtuelle (CRV) peut sélectionner la partie de la vidéo à afficher, usuellement nommée viewport, en bougeant la tête. Pour se sentir totalement immergé à l’intérieur du contenu, l’utilisateur a besoin de voir au moins 90 viewports par seconde en 4K. Avec les technologies de streaming traditionnelles, fournir une telle qualité nécessiterait un débit de plus de 100 Mbit s−1, ce qui est bien trop élevé. Dans cette thèse, je présente mes contributions pour rendre possible le streaming de vidéos omnidirectionnelles hautement immersives sur l’Internet. On peut distinguer six contributions : une proposition d’architecture de streaming viewport adaptatif réutilisant une partie des technologies existantes ; une extension de cette architecture pour des vidéos à six degrés de liberté ; deux études théoriques des vidéos à qualité spatiale non-homogène ; un logiciel open source de manipulation des vidéos 360° ; et un jeu d’enregistrements de déplacements d’utilisateurs regardant des vidéos 360°. / Omnidirectional videos, also denoted as spherical videos or 360° videos, are videos with pixels recorded from a given viewpoint in every direction of space. A user watching such omnidirectional content with a Head Mounted Display (HMD) can select the portion of the video to display, usually denoted as viewport, by moving her head. To feel highly immersed inside the content, a user needs to see the viewport at 4K resolution and a 90 Hz frame rate. With traditional streaming technologies, providing such quality would require a data rate of more than 100 Mbit s−1, which is far too high compared to the median Internet access bandwidth. In this dissertation, I present my contributions to enable the streaming of highly immersive omnidirectional videos on the Internet. We can distinguish six contributions: a viewport-adaptive streaming architecture proposal reusing a part of existing technologies; an extension of this architecture for videos with six degrees of freedom; two theoretical studies of videos with non-homogeneous spatial quality; open-source software for handling 360° videos; and a dataset of recorded users’ trajectories while watching 360° videos.
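The bandwidth claim can be made concrete with a rough back-of-the-envelope computation: if a 4K viewport covers only a fraction of the sphere, the full panorama needs proportionally more pixels and hence a much higher bitrate. The field-of-view and bits-per-pixel figures below are illustrative assumptions, not the exact numbers behind the dissertation's estimate.

```python
# Rough estimate of the bitrate needed to stream a full 360-degree panorama
# so that any viewport is seen at 4K quality.  All numbers are illustrative.
viewport_pixels = 3840 * 2160          # 4K viewport
viewport_fov_sr = 1.8                  # assumed viewport solid angle [steradians]
full_sphere_sr = 4 * 3.14159           # whole sphere
frame_rate = 90                        # frames per second for comfortable HMD viewing
bits_per_pixel = 0.05                  # assumed compressed bits/pixel (HEVC-class quality)

sphere_pixels = viewport_pixels * full_sphere_sr / viewport_fov_sr
bitrate_bps = sphere_pixels * frame_rate * bits_per_pixel
print(f"~{bitrate_bps / 1e6:.0f} Mbit/s for the full sphere")  # well above 100 Mbit/s
```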
|