1 |
ROBOT NAVIGATION IN CROWDED DYNAMIC SCENES / Xie, Zhanteng (ORCID 0000-0002-5442-1252), 08 1900
Autonomous mobile robots are beginning to provide a variety of delivery services in our daily lives, such as delivering medicines in hospitals, goods in warehouses, and food in restaurants. To realize this vision, robots need to navigate autonomously and efficiently through complex, crowded, and dynamic environments filled with static obstacles, such as tables and chairs, as well as people and/or other robots, and to do so using only the computational resources available onboard a mobile robot. This dissertation advances the state of the art in autonomous navigation by developing learning-based algorithms to model the environment around the robot, predict changes in the environment, and control the robot, all of which can run onboard a mobile robot in real time. Specifically, this dissertation first proposes a set of specialized preprocessed data representations to extract and encode useful high-level information about crowded dynamic environments from raw sensor data (i.e., a short history of lidar data, kinematic data about nearby pedestrians, and a sub-goal that leads the robot towards its final destination). Then, using these combined preprocessed data representations, this dissertation proposes a novel crowd-aware navigation control policy that balances collision avoidance and speed in crowded dynamic scenes, using a velocity obstacle-based reward function to train the robot with deep reinforcement learning techniques. This dissertation then proposes a series of hardware-friendly prediction algorithms, based on variational autoencoder networks, to predict a distribution of possible future states in dynamic scenes by exploiting the kinematics and dynamics of the robot and its surrounding objects.
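A velocity obstacle-based reward of the kind described above can be sketched roughly as follows. This is an illustrative simplification, not the dissertation's actual formulation: the function names, the weights, and the binary collision-cone test are all hypothetical.

```python
import numpy as np

def vo_penalty(robot_vel, robot_pos, ped_pos, ped_vel, radius_sum):
    """Penalize velocities inside a pedestrian's velocity obstacle, i.e.,
    velocities that lead to a collision if both agents keep moving as they are."""
    rel_pos = ped_pos - robot_pos
    rel_vel = robot_vel - ped_vel
    dist = np.linalg.norm(rel_pos)
    if dist <= radius_sum:
        return 1.0  # already in collision
    # half-angle of the collision cone subtended by the combined radii
    half_angle = np.arcsin(radius_sum / dist)
    speed = np.linalg.norm(rel_vel)
    if speed < 1e-6:
        return 0.0  # no relative motion, no predicted collision
    # angle between the relative velocity and the line to the pedestrian
    angle = np.arccos(np.clip(rel_pos @ rel_vel / (dist * speed), -1.0, 1.0))
    return 1.0 if angle < half_angle else 0.0

def reward(robot_pos, robot_vel, goal, pedestrians, radius_sum=0.6,
           w_goal=1.0, w_vo=0.5):
    """Hypothetical crowd-aware reward: progress toward the goal minus a
    penalty for choosing a velocity inside any pedestrian's velocity obstacle."""
    goal_dir = goal - robot_pos
    progress = robot_vel @ goal_dir / (np.linalg.norm(goal_dir) + 1e-6)
    penalty = sum(vo_penalty(robot_vel, robot_pos, p, v, radius_sum)
                  for p, v in pedestrians)
    return w_goal * progress - w_vo * penalty
```

In a deep reinforcement learning setup, a shaped reward of this form lets the policy trade off speed (the progress term) against proactive collision avoidance (the cone penalty) at every control step.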
Furthermore, this dissertation proposes a novel predictive uncertainty-aware navigation framework that improves the safety of existing control policies by incorporating the output of the proposed stochastic environment prediction algorithms into general navigation frameworks. Several collected real-world datasets, together with a series of 3D simulation experiments and hardware experiments, are used to demonstrate the effectiveness of the proposed learning-based prediction and control algorithms. The new algorithms outperform other state-of-the-art algorithms in terms of collision avoidance, robot speed, and prediction accuracy across a range of environments, crowd densities, and robot models. The work included in this dissertation is expected to advance autonomous navigation for modern mobile robots, offer innovative solutions to the open problem of autonomous navigation in crowded dynamic scenes, and make our daily lives more convenient and efficient. / Mechanical Engineering
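One way to picture such a predictive uncertainty-aware safety layer is to sample possible future pedestrian positions from the stochastic predictor (e.g., a variational autoencoder's output distribution) and reject candidate actions whose estimated collision probability is too high. The sketch below is hypothetical; the sampling interface, threshold, and fallback rule are assumptions, not the dissertation's design.

```python
import numpy as np

def collision_probability(predicted_samples, robot_pos, safety_radius=0.5):
    """Monte Carlo estimate of collision probability from K sampled future
    pedestrian positions. predicted_samples has shape (K, 2)."""
    dists = np.linalg.norm(predicted_samples - robot_pos, axis=1)
    return float(np.mean(dists < safety_radius))

def filter_actions(candidate_actions, rollout, predicted_samples, threshold=0.1):
    """Keep candidate actions whose predicted robot position stays below the
    collision-probability threshold; if none qualify, fall back to the least
    risky action. `rollout(a)` maps an action to the robot's predicted position."""
    safe = [a for a in candidate_actions
            if collision_probability(predicted_samples, rollout(a)) < threshold]
    if safe:
        return safe
    return [min(candidate_actions,
                key=lambda a: collision_probability(predicted_samples, rollout(a)))]
```

The appeal of a sampling-based check is that it plugs in front of any existing control policy: the policy proposes actions, and the uncertainty-aware filter vetoes those whose risk, averaged over the predicted distribution of futures, is unacceptable.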
|
2 |
Contributions to optimal and reactive vision-based trajectory generation for a quadrotor UAV / Contributions à la génération de trajectoires optimales et réactives basées vision pour un quadrirotor / Penin, Bryan, 11 December 2018
La vision représente un des plus importants signaux en robotique. Une caméra monoculaire peut fournir de riches informations visuelles à une fréquence raisonnable, pouvant être utilisées pour la commande, l'estimation d'état ou la navigation dans des environnements inconnus par exemple. Il est cependant nécessaire de respecter des contraintes visuelles spécifiques, telles que la visibilité des mesures image et les occultations durant le mouvement, afin de garder certaines cibles visuelles dans le champ de vision. Les quadrirotors sont dotés de capacités de mouvement très réactives du fait de leur structure compacte et de la configuration des moteurs. De plus, la vision par une caméra embarquée (fixe) va subir des rotations dues au sous-actionnement du système. Dans cette thèse, nous voulons bénéficier de l'agilité du quadrirotor pour réaliser plusieurs tâches de navigation basées vision. Nous supposons que l'estimation d'état repose uniquement sur la fusion des capteurs d'une centrale inertielle (IMU) et d'une caméra monoculaire qui fournit des estimations de pose précises. Les contraintes visuelles sont donc critiques et difficiles dans un tel contexte. Dans cette thèse, nous exploitons l'optimisation numérique pour générer des trajectoires faisables satisfaisant un certain nombre de contraintes non linéaires d'état, d'entrées et de visibilité. À l'aide de la platitude différentielle et de la paramétrisation par des B-splines, nous proposons une stratégie de replanification performante, inspirée de la commande prédictive, pour générer des trajectoires lisses et agiles. Enfin, nous présentons un algorithme de planification en temps minimum qui supporte des pertes de visibilité intermittentes afin de naviguer dans des environnements encombrés plus vastes. Cette contribution porte l'incertitude de l'estimation d'état au niveau de la planification pour produire des trajectoires robustes et sûres.
Les développements théoriques discutés dans cette thèse sont corroborés par des simulations et des expériences utilisant un quadrirotor. Les résultats rapportés montrent l'efficacité des techniques proposées. / Vision constitutes one of the most important cues in robotics. A single monocular camera can provide rich visual information at a reasonable rate that can be used as feedback for control, state estimation of mobile robots, or safe navigation in unknown environments, for instance. However, it is necessary to satisfy particular visual constraints on the image, such as visibility and occlusion constraints, during motion in order to keep some visual targets visible. Quadrotors are endowed with very reactive motion capabilities due to their compact structure and motor configuration. Moreover, vision from a (fixed) onboard camera suffers from rotational motion due to the system's underactuation. In this thesis, we want to benefit from the system's agility to perform several vision-based navigation tasks. We assume that state estimation relies solely on sensor fusion of an onboard inertial measurement unit (IMU) and a monocular camera that provides reliable pose estimates. Visual constraints are therefore challenging and critical in this context. In this thesis we exploit numerical optimization to design feasible trajectories satisfying several nonlinear state, input, and visual constraints. With the help of differential flatness and B-spline parametrization, we propose an efficient replanning strategy, inspired by Model Predictive Control, to generate smooth and agile trajectories. Finally, we propose a minimum-time planning algorithm that handles intermittent visibility losses in order to navigate in larger cluttered environments. This contribution brings state-estimation uncertainty into the planning stage to produce robust and safe trajectories.
All the theoretical developments discussed in this thesis are corroborated by simulations and experiments using a quadrotor UAV. The reported results show the effectiveness of the proposed techniques.
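A minimal sketch of the B-spline trajectory parametrization paired with differential flatness might look like the following. The clamped-knot construction, duration, and control points are illustrative, not the thesis's actual planner; the point is that the flat outputs (x, y, z, yaw) and their derivatives determine the quadrotor's full state and inputs, so bounding the spline's derivatives yields dynamically feasible trajectories.

```python
import numpy as np
from scipy.interpolate import BSpline

def clamped_cubic_bspline(ctrl_pts, duration):
    """Build a clamped cubic B-spline over [0, duration] from control points,
    returning callables for position, velocity, and acceleration. Because a
    clamped spline starts at the first control point and ends at the last,
    boundary conditions are easy to enforce; replanning amounts to
    re-optimizing the control points."""
    ctrl_pts = np.asarray(ctrl_pts, dtype=float)  # shape (n, dim)
    n, k = len(ctrl_pts), 3
    # clamped knot vector: degree+1 repeated knots at each end
    interior = np.linspace(0.0, duration, n - k + 1)
    knots = np.concatenate(([0.0] * k, interior, [duration] * k))
    spline = BSpline(knots, ctrl_pts, k)
    return spline, spline.derivative(1), spline.derivative(2)

# usage: evaluate position, velocity, and acceleration along the trajectory
pos, vel, acc = clamped_cubic_bspline(
    [[0, 0], [1, 0], [2, 1], [3, 1], [4, 2]], duration=5.0)
```

In an MPC-style replanning loop, the optimizer would adjust the control points at each cycle while checking the derivative splines against actuator and visibility constraints, since a B-spline's derivatives are themselves B-splines with a lower degree.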
|