211 |
Intelligent velocity control of a bounding quadruped robot. Faragalli, Michele. January 2009.
The Platform for Ambulating Wheels (PAW) is a hybrid quadruped wheeled-legged robot that can bound, gallop, roll and brake at high speeds, and perform inclined turning. In previous work, the PAW's controller used fixed touchdown and liftoff angles to achieve a stable bounding gait, and these angles were predetermined through an extensive trial-and-error process. In this work, an intelligent velocity controller is developed to allow the robot to autonomously find the touchdown and liftoff angles needed to bound at a desired velocity. This enables the robot to track desired velocities between 0.9 and 1.3 m/s, as shown in a Matlab-Adams co-simulation model of bounding. The controller also demonstrates tracking capabilities in the presence of minor terrain changes. To implement this controller on the physical platform, an Extended Kalman Filter (EKF) is developed to estimate the forward velocity of the robot, which is required as a controller input. The EKF combines data from an Inertial Measurement Unit with an estimate of forward velocity found kinematically using measurements from motor encoders and leg potentiometers. The accuracy of the EKF estimate of the forward velocity is validated in simulation and in high-speed camera experiments. Finally, the intelligent controller is implemented and tested on the physical platform, demonstrating adequate velocity tracking for set points between 0.9 m/s and 1.3 m/s, as well as transitions between set points in this range. / Le « Platform for Ambulating Wheels » (PAW) est un robot quadrupède qui possède des roues au bout de ses quatre jambes. Sa combinaison de roues et de jambes lui permet de rouler, d'effectuer des virages en inclinant son corps, de sauter, de bondir et de galloper. Dans les travaux précédents, le robot utilisait des angles fixes, trouvés par essais et erreurs, pour pouvoir bondir à une certaine vitesse. Un contrôleur intelligent capable de trouver les angles de façon autonome afin de suivre une vitesse prédéterminée est développé dans ce mémoire. Premièrement, la performance du contrôleur est évaluée dans une simulation MSC Adams et MATLAB démontrant les capacités à suivre des vitesses entre 0.9 et 1.3 m/s. Le contrôleur démontre une capacité à suivre la vitesse désirée même en présence de changements de terrain mineurs. Ensuite, un filtre de Kalman pour système non linéaire est développé pour estimer la vitesse du robot, un paramètre nécessaire pour introduire le système de contrôle intelligent sur le robot. Les données d'une unité de mesure inertielle et une estimation de la vitesse par des équations cinématiques sont combinées dans le filtre pour estimer plus précisément la vitesse du robot. La précision du filtre est validée en comparant ses résultats contre ceux acquis en simulation et par une caméra à haute vitesse. Finalement, le contrôleur intelligent est évalué sur le robot en utilisant la vitesse estimée par le filtre de Kalman. Les résultats expérimentaux du contrôleur démontrent qu'il est capable de bien suivre des vitesses entre 0.9 et 1.3 m/s.
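The abstract describes the sensor fusion only at a high level. As an illustration of the kind of estimator it describes, the sketch below propagates a single forward-velocity state with the IMU acceleration and corrects it with the kinematically derived velocity. The one-dimensional state and all noise values are assumptions for the example; with a linear model like this the filter reduces to an ordinary Kalman filter, whereas the thesis linearizes the robot's nonlinear model in a full EKF.

```python
class ForwardVelocityFilter:
    """Toy 1-D Kalman-style filter illustrating IMU + kinematic-velocity fusion.
    Process model:  v_k = v_{k-1} + a_imu * dt   (IMU acceleration as input)
    Measurement:    z_k = v_k                    (velocity from encoders/potentiometers)
    All noise parameters are assumptions, not values from the thesis."""

    def __init__(self, q=0.05, r=0.2):
        self.v = 0.0   # forward velocity estimate [m/s]
        self.p = 1.0   # estimate variance
        self.q = q     # process noise: drift of the integrated IMU signal
        self.r = r     # measurement noise: kinematic velocity estimate

    def predict(self, a_imu, dt):
        self.v += a_imu * dt    # propagate the state with the IMU acceleration
        self.p += self.q * dt   # uncertainty grows during prediction

    def update(self, v_kinematic):
        k = self.p / (self.p + self.r)        # Kalman gain
        self.v += k * (v_kinematic - self.v)  # correct with the kinematic estimate
        self.p *= (1.0 - k)
        return self.v

# Hypothetical usage at 100 Hz: f.predict(a_imu, 0.01); v_hat = f.update(v_encoders)
```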
|
212 |
Wide-baseline stereo for three-dimensional urban scenes. Fan, Shu Fei. January 2010.
Like humans, computer vision systems can better infer a scene's 3-D structure by processing its 2-D images taken from multiple viewpoints. While this seems effortless for humans, it is still a challenge for computer vision. Underlying the act of associating the different perspectives is a problem called wide-baseline stereo, which computes the geometric relationship between two overlapping views. Wide-baseline stereo can be problematic when working on images taken of real-life urban environments, due to practical issues such as poor image quality or ambiguity caused by repetitive patterns. We analyze why these factors pose difficulties for current methods and propose principles that can make wide-baseline stereo more effective, in terms of both robustness and accuracy. We treat wide-baseline stereo as a sequence of three sub-problems: feature detection, feature matching, and fundamental matrix estimation. We propose improvements for each of these and test them on real images of 3-D urban scenes. For feature detection, we demonstrate that when we use both image intensity contrast and entropy-based visual saliency, we are better at repeatably extracting features of a 3-D scene. We use intensity contrast as a cue for obtaining initial feature seeds, which are then evaluated and locally adapted according to an entropy-based saliency measure. We select features with high saliency scores. Experimental comparisons against peer feature detectors show that our method detects more regular structures and fewer noisy patterns. As a result, our method detects features with high repeatability, which is conducive to the subsequent feature matching. In the case of feature matching, we show that we can match features more robustly when using both local feature appearance and regional image information. We model global image information with a graph, whose nodes contain local feature appearances and whose edges encode semi-local proximity structure. Working on this graph, we convert t / L'utilisation de plusieurs points de vue d'une scène pour en déterminer la structure tridimensionnelle est un exercice effectué autant par l'humain que par certains systèmes de vision artificielle. Mais alors qu'il ne requiert aucun effort pour l'humain, il représente un défi pour le domaine de la vision par ordinateur. Un problème sous-jacent à celui d'associer plusieurs perspectives d'une scène est celui de la stéréoscopie pour une longue ligne de base, qui consiste à déterminer les relations géométriques entre deux vues qui se chevauchent. La stéréo pour une longue ligne de base (lorsque les deux points de vue sont éloignés) peut être problématique dans un environnement urbain, en raison d'une qualité parfois pauvre des images, et aussi de l'ambiguïté que peuvent soulever des formes répétitives. Cette thèse analyse les raisons pour lesquelles ces facteurs peuvent être problématiques pour les méthodes actuelles et propose des principes qui permettent une stéréo plus efficace, autant au point de vue de la robustesse que de la précision. La stéréo d'images provenant de points de vue éloignés est divisée en trois sous-problèmes : la détection de caractéristiques visuelles, leur appariement, ainsi que l'estimation de la matrice fondamentale. Des améliorations sont proposées pour chaque élément, et des expérimentations sur des données réelles de scènes urbaines sont présentées.
Pour la détection de caractéristiques, il est démontré que lorsque le contraste en intensité des images ainsi que la saillance visuelle basée sur l'entropie sont utilisés, nous obtenons de meilleurs résultats de détection de caractéristiques de scènes tridimensionnelles. Le contraste en intensité est utilisé pour obtenir des points de départ pour les caractéristiques, qui sont ensuite évalués et adaptés localement selon une mesure de saillance basée sur l'entropie. Les caractéristiques ayant obtenu une mesure élevée de sai
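As a rough illustration of the detection idea described above (contrast-based seeds kept or discarded according to an entropy-based saliency measure), the following sketch scores grayscale image patches by their histogram entropy. The window size, histogram binning, and threshold are assumptions for the example, not the detector proposed in the thesis.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of the intensity histogram of an image patch."""
    counts, _ = np.histogram(patch, bins=bins)
    p = counts[counts > 0].astype(float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def salient_features(image, seeds, half_size=12, min_entropy=3.0):
    """Keep only the contrast-based seed points (y, x) whose surrounding
    window has high intensity entropy, used here as a saliency proxy."""
    kept = []
    h, w = image.shape
    for (y, x) in seeds:
        y0, y1 = max(0, y - half_size), min(h, y + half_size)
        x0, x1 = max(0, x - half_size), min(w, x + half_size)
        if patch_entropy(image[y0:y1, x0:x1]) >= min_entropy:
            kept.append((y, x))
    return kept
```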
|
213 |
Attributed Object Maps: Descriptive Object Models as High-level Semantic Features for Mobile Robotics. Delmerico, Jeffrey A. 06 December 2013.
This dissertation presents the concept of a mid-level representation of a mobile robot's environment: localized class-level object models marked up with the object's properties and anchored to a low-level map such as an occupancy grid. An "attributed object map" allows for high-level reasoning and contextualization of robot tasks from semantically meaningful elements of the environment. This approach is compatible with, and complementary to, existing methods of semantic mapping and robotic knowledge representation, but provides an internal model of the world that is both human-intelligible and permits inference about place and task for the robotic agent. This representation provides natural semantic context for many environments (an inventory of objects and structural components along with their locations and descriptions) and would sit at an intermediate level of abstraction between low-level features and semantic maps of the whole environment. High-level robotic tasks such as place categorization, topological mapping, object search, and natural language direction following could be enabled or improved with this internal model.
The proposed system utilizes a bottom-up approach to object modeling that leverages existing work in object detection and description, which are well-developed research areas in computer vision. Our approach integrates many image-based object detections into a coherent and localized model for each object instance, using 3D data from a registered range sensor. By observing an object repeatedly at the frame level, we can construct such models in an online way, in real time. This ensures that the models are robust to the noise of false-positive detections and still extract useful information from partial observations of the object. The detection and modeling steps we present do not rely on prior knowledge of specific object instances, enabling the modeling of objects in unknown environments. The construction of an attributed object map during exploration is possible with minimal assumptions, relying only on knowledge of the object classes that might be contained therein. We present techniques for modeling objects that can be described with parameterized models and whose quantitative attributes can be inferred from those models. In addition, we develop methods for generating non-parametric point cloud models for objects of classes that are better described qualitatively with semantic attributes. In particular, we propose an approach for automatic foreground object segmentation that permits the extraction of the object within a bounding box detection, using only a class-level model of that object's scale.
We employ semantic attribute classifiers from the computer vision literature, using the visual features of each detection to describe the object's properties, including shape, material, and presence or absence of parts. We integrate per-frame attribute values into an aggregated representation that we call an "object attribute descriptor". This method averages the confidence in each attribute classification over time, smoothing the noise in individual observations and reinforcing those attributes that are repeatedly observed. This descriptor provides a compact representation of the model's properties, and offers a way to mark up objects in the environment with descriptions that could be used as semantic features for high-level robot tasks.
We propose and develop a system for detecting, modeling, and describing objects in an unknown environment that uses minimal assumptions and prior knowledge. We demonstrate results in parametric object modeling of stairways, and semantic attribute description of several non-parametric object classes. This system is deployable on a mobile robot equipped with an RGB-D camera, and runs in real time on commodity computing hardware. The object models contained in an attributed object map provide context for other robotic tasks, and offer a mutually human- and robot-intelligible representation of the world that can be created online during robotic exploration of an unknown environment.
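As an illustration of the aggregation step described above, the sketch below keeps a running average of per-frame attribute confidences for one object instance. The attribute names and the threshold are hypothetical, and the exact update rule used in the dissertation is not quoted here.

```python
class ObjectAttributeDescriptor:
    """Aggregates per-frame attribute classifier confidences for one object.
    Repeatedly observed attributes are reinforced, while single noisy
    detections are smoothed out by the incremental averaging."""

    def __init__(self):
        self.mean = {}    # attribute -> mean confidence over its observations
        self.count = {}   # attribute -> number of frames in which it was scored

    def integrate(self, frame_scores):
        """frame_scores: per-frame confidences, e.g. {'wooden': 0.8, 'has_legs': 0.6}
        (hypothetical attribute names, one dict per frame-level detection)."""
        for attr, conf in frame_scores.items():
            n = self.count.get(attr, 0) + 1
            m = self.mean.get(attr, 0.0)
            self.mean[attr] = m + (conf - m) / n   # incremental average
            self.count[attr] = n

    def describe(self, threshold=0.5):
        """Attributes whose time-averaged confidence exceeds the threshold."""
        return {a: c for a, c in self.mean.items() if c >= threshold}
```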
|
214 |
Localization and navigation of a holonomic indoor airship using on-board sensors. Valdmanis, Mikelis. January 2011.
Two approaches to navigation and localization of a holonomic, unmanned, indoor airship capable of 6-degree-of-freedom (DOF) motion using on-board sensors are presented. First, obstacle avoidance and primitive navigation were attempted using a lightweight video camera. Two optical flow algorithms were investigated. Optical flow estimates the motion of the environment relative to the camera by computing temporal and spatial fluctuations of image brightness. Inferences on the nature of the visible environment, such as obstacles, would then be made based on the optical flow field. Results showed that neither algorithm would be adequate for navigation of the airship. Localization of the airship in a restricted state space – three translational DOF and yaw rotation – and a known environment was achieved using an advanced Monte Carlo Localization (MCL) algorithm and a laser range scanner. MCL is a probabilistic algorithm that generates many random estimates, called particles, of potential airship states. During each operational time step, each particle's location is adjusted based on airship motion estimates, and particles are assigned weights by evaluating simulated sensor measurements for the particles' poses against the actual measurements. A new set of particles is drawn from the previous set with probability proportional to the weights. After several time steps the set converges to the true position of the airship. The MCL algorithm achieves global localization, position tracking, and recovery from the "kidnapped robot" problem. Results from off-line processing of airship flight data, using MCL, are presented and the possibilities for on-line implementation are discussed. / Deux approches de navigation et de localisation d'un drone intérieur équipé de capteurs et capable de six degrés de liberté sont présentées. Premièrement, des vols ayant comme simple but d'éviter des obstacles et de naviguer le drone ont été exécutés à l'aide d'une caméra vidéo. Deux algorithmes de flux optique ont été étudiés. Le flux optique estime le déplacement de l'environnement relatif à la caméra en calculant les variations dans la clarté de l'image. Les traits caractéristiques de l'environnement, comme les obstacles, sont alors déterminés en se basant sur le champ de flux optique. Les résultats démontrent que ni l'un ni l'autre des algorithmes n'est adéquat pour naviguer le drone. La localisation du drone dans une représentation d'état restreinte, caractérisée par trois degrés de liberté en translation et par la rotation en lacet, ainsi que dans un environnement connu, a été accomplie en utilisant l'algorithme avancé de Localisation Monte Carlo (MCL) et un télémètre laser. MCL est un algorithme probabiliste qui génère aléatoirement plusieurs estimés, nommés particules, d'états potentiels du drone. À chaque incrément de temps, la position de chaque particule est ajustée selon les déplacements estimés du drone et ces particules sont pondérées en comparant les valeurs simulées du capteur avec les valeurs réelles. Ensuite, un nouvel ensemble de particules est créé à partir du précédent en considérant la pondération des particules. Après plusieurs incréments de temps, l'ensemble converge vers la position réelle du drone. L'algorithme MCL accomplit alors une localisation globale, un suivi de position et une résolution du problème du robot « kidnappé ». L'analyse hors ligne des données de vol du drone avec l'algorithme MCL est présentée et les possibilités d'implémenter cette méthode en ligne sont discutées.
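A minimal sketch of one MCL iteration as described above (predict with the motion estimate, weight by comparing simulated and actual range measurements, resample in proportion to the weights) could look like the following. The motion and sensor models are left as caller-supplied functions because the airship- and laser-specific models from the thesis are not reproduced here.

```python
import numpy as np

def mcl_step(particles, motion_fn, likelihood_fn, measurement):
    """One Monte Carlo Localization update.
    particles:     (N, 4) array of [x, y, z, yaw] pose hypotheses.
    motion_fn:     applies the estimated airship displacement plus noise.
    likelihood_fn: simulates the laser scan at each pose and scores it
                   against the actual measurement, returning one weight
                   per particle (models assumed, not from the thesis)."""
    particles = motion_fn(particles)                 # 1. predict
    w = likelihood_fn(particles, measurement)        # 2. weight
    w = w / w.sum()                                  #    normalize
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return particles[idx]                            # 3. resample
```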
|
215 |
Design, dynamics and control of a fast two-wheeled quasiholonomic robot. Salerno, Alessio. January 2006.
The control of wheeled mobile robots is particularly challenging because of the presence of nonholonomic constraints. Modern two-wheeled mobile robot control is further complicated by the presence of one unstable equilibrium point, which requires a continuous stabilization of the intermediate body by means of sensors. In order to simplify the control of these systems, Quasimoro, a novel two-wheeled mobile robot, is proposed. The control of Quasimoro is simplified by means of its mechanical design. The robot is designed for quasiholonomy, a property that simplifies the control of nonholonomic systems. To further simplify the control, the robot is designed so as to have a stable equilibrium point. / A nonholonomic robotic mechanical system that can be rendered quasiholonomic by control is termed, in this thesis, quasiholonomic. This is the case of Quasimoro. / This work proposes a model-based design methodology for wheeled mobile robots, intended to decrease development costs, under which the prototype is built only when the system requirements are fully met. Following this methodology, the proposed robot is then designed and prototyped. / The conceptual design of the robot is undertaken by means of a detailed analysis of the most suitable drive systems and their layout. The mathematical model of the robot is formulated in the framework of the Lagrange formalism, by resorting to the concept of holonomy matrix, while the controllability analysis is conducted using modern tools from geometric control. / The embodiment design entails the simulation of three virtual prototypes aimed at further simplifying the robot control. To this end, a robot drive system, based on the use of a timing belt transmission and a bicycle wheel, is designed, calibrated and tested. Due to Quasimoro's drive system, the stabilization of the intermediate body, a well-known challenge in two-wheeled mobile robot control, is achieved without the use of additional mechanical stabilizers (such as casters) or sensors (such as gyros). / The intended application of the proposed robot is the augmentation of wheelchair users, a field that tremendously benefits from the cost-effectiveness and control simplification of the system at hand. For purposes of validation, a full-scale proof-of-concept prototype of the robot is realized. Moreover, the robot functionality is demonstrated by means of motion control experiments.
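For readers unfamiliar with the term, the nonholonomic constraint in question is the rolling-without-slipping condition that forbids sideways motion of a two-wheeled platform. The sketch below integrates generic differential-drive kinematics and checks that constraint numerically; it is a textbook illustration, not Quasimoro's actual dynamic model.

```python
import numpy as np

def unicycle_step(state, v, omega, dt):
    """Generic two-wheeled (differential-drive) kinematics.
    state = [x, y, theta]; v is forward speed, omega is yaw rate.
    The motion satisfies the nonholonomic constraint
        xdot*sin(theta) - ydot*cos(theta) = 0   (no lateral slip)."""
    x, y, theta = state
    xdot = v * np.cos(theta)
    ydot = v * np.sin(theta)
    # Constraint check: should be numerically zero at every step.
    assert abs(xdot * np.sin(theta) - ydot * np.cos(theta)) < 1e-9
    return np.array([x + xdot * dt, y + ydot * dt, theta + omega * dt])

# e.g. integrate a short arc: s = np.zeros(3); s = unicycle_step(s, 0.5, 0.2, 0.01)
```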
|
216 |
The parallelotriangular mechanism and the line constraint problem. Karakusevic, Vladimir. January 2012.
The primary objective of this thesis is to introduce a new and complete solution of the direct kinematics of the general parallelotriangular mechanism with six DOF, the so-called spatial double-triangular mechanism. An overview of parallel manipulators is given with an emphasis on their geometric models. A concise discussion is given of the direct and inverse kinematics of both serial and parallel manipulators and of their relation. Furthermore, the direct kinematics solutions for planar and spherical parallelotriangular manipulators are briefly presented. Analytical tools such as the algebra of dual numbers and spatial trigonometry are introduced. These were used to obtain previous solutions to the direct kinematics problem of parallelotriangular manipulators. An introduction to spatial kinematic mapping and the use of quaternion algebra in solving the direct kinematics problem is given, with a view to applying it to double-triangular manipulators. Using spatial trigonometry and spatial mappings, together with the geometric model of the manipulator itself, the direct kinematics of the spatial double-triangular six-DOF manipulator is formulated and its monovariate polynomial solutions are obtained. Finally, a numerical example is presented and possible applications of the spatial double-triangular six-DOF manipulator are suggested, in the form of a flight simulator platform or a six-DOF joint. / L'objectif principal de cette thèse est d'introduire une solution nouvelle et complète de la cinématique directe pour les mécanismes généraux parallelotriangulaires à six degrés de liberté, appelés mécanismes spatiaux en triangles doubles. Premièrement, on observe les types de manipulateurs parallèles existants en soulignant leurs modèles géométriques. Une observation concise est donnée sur la cinématique directe et inverse des manipulateurs sériels et parallèles, avec les relations entre eux. En outre, la solution de la cinématique directe pour les manipulateurs parallelotriangulaires planaires et sphériques est présentée. Ensuite, un groupe d'outils d'analyse est introduit, comme l'algèbre des nombres duaux ainsi que la trigonométrie spatiale, utilisés pour obtenir les solutions antérieures des problèmes de cinématique directe pour les manipulateurs parallelotriangulaires. On introduit les transformations spatiales et l'utilisation de l'algèbre des quaternions pour trouver la solution du problème de cinématique directe, avec un point de vue critique vis-à-vis de l'utilisation de cette algèbre dans le cas des manipulateurs parallelotriangulaires. En utilisant la trigonométrie et les transformations spatiales, la cinématique directe du manipulateur spatial parallelotriangulaire à six degrés de liberté est formulée et on a trouvé la solution fermée de ce problème, en suivant le modèle géométrique du même manipulateur. Finalement, un exemple numérique est présenté avec les applications possibles du manipulateur parallelotriangulaire spatial à six degrés de liberté, comme plateforme de simulateur de vol ou bien comme articulation à six degrés de liberté.
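As background for the quaternion algebra mentioned above, the sketch below shows the standard rotation of a point by a unit quaternion via the product q p q*. It illustrates the algebra only; it is not the kinematic mapping or the monovariate-polynomial formulation derived in the thesis.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def rotate(point, axis, angle):
    """Rotate a 3-D point about a unit axis by 'angle' using q p q*."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    p = np.concatenate([[0.0], np.asarray(point, dtype=float)])
    return quat_mul(quat_mul(q, p), q_conj)[1:]

# e.g. rotate([1, 0, 0], axis=[0, 0, 1], angle=np.pi/2) is approximately [0, 1, 0]
```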
|
217 |
The application of neural networks to object identification and location for intelligent robots. Yu, Min-Hsang. January 1995.
No description available.
|
218 |
Neural networks for automatic arc welding. Li, Ping. January 1995.
No description available.
|
219 |
Electrically driven underwater manipulator for remote operated vehicles. Sarafis, Ilias Thoma. January 1994.
No description available.
|
220 |
The simultaneous processing of mutually supportive sensory and non-sensory plans in direct response to the current environment. Easton, Kerry Louise. January 1996.
No description available.
|