31. MICRO-ROS FOR MOBILE ROBOTICS SYSTEMS. Nguyen, Peter. January 2022.
The complexity of mobile robots increases as more parts are added to the system. Introducing microcontrollers into a mobile robot abstracts and modularises the system architecture, creating a demand for seamless microcontroller integration. The Robot Operating System (ROS) used by ABB's new mobile robot, the mobile YuMi prototype (mYuMi), provides standardised robot software libraries and packages that simplify robotic development. As ABB is porting from ROS1 to ROS2, the ROS2-compatible micro-ROS framework will be incorporated into the system to integrate microcontrollers smoothly into mYuMi. To demonstrate the viability of micro-ROS, this project used tracing and latency measurements with external applications to test the remote communication between mYuMi running ROS2 and microcontrollers running micro-ROS, with three different microcontrollers tested. The communication was evaluated in different scenarios on a test bench, using ping-pong communication to obtain the round-trip time. The test results were reinforced by a live demonstration of micro-ROS in a prototype in which mYuMi controlled a 1D rangefinder and an RC servo motor through two microcontrollers. The results showed that the micro-ROS delay could, in principle, be analysed with external applications, that equivalent micro-ROS functionality should apply to most microcontrollers, and that both the measurements and the prototype displayed the potential of micro-ROS to match ROS2 in terms of delay and stability.
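For illustration, the ping-pong round-trip measurement described above can be sketched on the host (ROS2) side as a minimal rclpy node; the microcontroller running micro-ROS would simply echo each message back. The topic names, message type, and publishing rate below are assumptions made for this example, not the thesis's actual test bench.

```python
# Host-side ROS 2 node (rclpy) that measures ping-pong round-trip time.
# Topic names /micro_ros_ping and /micro_ros_pong, the Int32 payload, and the
# 10 Hz rate are illustrative assumptions for this sketch.
import time

import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32


class PingPongTimer(Node):
    def __init__(self):
        super().__init__('ping_pong_timer')
        self._pub = self.create_publisher(Int32, 'micro_ros_ping', 10)
        self._sub = self.create_subscription(Int32, 'micro_ros_pong', self._on_pong, 10)
        self._timer = self.create_timer(0.1, self._send_ping)  # 10 Hz pings
        self._sent = {}                                        # seq -> send time
        self._seq = 0

    def _send_ping(self):
        msg = Int32()
        msg.data = self._seq
        self._sent[self._seq] = time.monotonic()
        self._pub.publish(msg)
        self._seq += 1

    def _on_pong(self, msg):
        t_send = self._sent.pop(msg.data, None)
        if t_send is None:
            return                                             # unmatched echo
        rtt_ms = (time.monotonic() - t_send) * 1e3
        self.get_logger().info(f'seq {msg.data}: RTT {rtt_ms:.3f} ms')


def main():
    rclpy.init()
    node = PingPongTimer()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```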
32. Driving by Speaking: Natural Language Control of Robotic Wheelchairs. Hecht, Steven A. 16 August 2013.
No description available.
33. Mobile robotic design. Robotic colour and accelerometer sensor. Mills, Euclid Weatley. January 2010.
This thesis investigates sensors used with mobile robots. Firstly, a colour sensor is considered for its ability to detect objects in the three primary colours red, green and blue (RGB). Secondly, an accelerometer is investigated, from which velocity is derived from the raw data using numerical integration. The purpose of designing and developing the sensors was to use them for robotic navigation and collision avoidance. This report presents the results of experiments carried out on the colour sensor and the accelerometer, together with a discussion of the results and some conclusions. It proved feasible to detect colours successfully, but only over a limited distance. The accelerometer proved reliable but has not yet been applied in real time. Both the colour sensor and the accelerometer proved to be inexpensive. Some recommendations are made to improve both sensors.
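As a minimal sketch of the velocity derivation mentioned above, the snippet below applies trapezoidal numerical integration to raw acceleration samples; the sample values, sampling rate, and lack of bias compensation are illustrative assumptions rather than the thesis's implementation.

```python
# Derive velocity from raw accelerometer samples by trapezoidal integration.
# Sample data and rate below are hypothetical.


def integrate_velocity(accel_samples, dt, v0=0.0):
    """Trapezoidal integration of acceleration (m/s^2) sampled every dt seconds."""
    velocities = [v0]
    for a_prev, a_curr in zip(accel_samples, accel_samples[1:]):
        velocities.append(velocities[-1] + 0.5 * (a_prev + a_curr) * dt)
    return velocities


if __name__ == '__main__':
    # Hypothetical raw readings at 100 Hz; a real sensor would also need bias
    # removal and filtering before integration to limit drift.
    samples = [0.0, 0.2, 0.4, 0.4, 0.3, 0.1, 0.0]
    print(integrate_velocity(samples, dt=0.01))
```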
34. Driving By Speaking: Capabilities and Requirements of a Vocal Joystick. Yanick, Anthony Joseph. 19 June 2012.
No description available.
35. Fusion of Laser Range-Finding and Computer Vision Data for Traffic Detection by Autonomous Vehicles. Cacciola, Stephen J. 21 January 2008.
The DARPA Challenges were created in response to a Congressional and Department of Defense (DoD) mandate that one-third of US operational ground combat vehicles be unmanned by the year 2015. The Urban Challenge is the latest competition that tasks industry, academia, and inventors with designing an autonomous vehicle that can safely operate in an urban environment.
A basic and important capability needed in a successful competition vehicle is the ability to detect and classify objects. The most important objects to classify are other vehicles on the road. Navigating traffic, which includes other autonomous vehicles, is critical in the obstacle avoidance and decision making processes. This thesis provides an overview of the algorithms and software designed to detect and locate these vehicles. By combining the individual strengths of laser range-finding and vision processing, the two sensors are able to more accurately detect and locate vehicles than either sensor acting alone.
The range-finding module uses the built-in object detection capabilities of IBEO Alasca laser rangefinders to detect the location, size, and velocity of nearby objects. The Alasca units are designed for automotive use, so they alone are able to identify nearby obstacles as vehicles with a high level of certainty. After some basic filtering, an object detected by the Alasca scanner is given an initial classification based on its location, size, and velocity. The vision module uses the location of these objects, as determined by the range finder, to extract regions of interest from large images through perspective transformation. These regions of the image are then examined for distinct characteristics common to all vehicles, such as tail lights and tires. Checking multiple characteristics helps reduce the number of false-negative detections. Since the entire image is never processed, the image size and resolution can be maximized to ensure the characteristics are as clear as possible. The existence of these characteristics is then used to modify the certainty level from the IBEO and determine whether a given object is a vehicle. / Master of Science
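The fusion logic can be illustrated with a minimal sketch: a laser-derived object receives an initial vehicle certainty from its size and speed, and vision checks on the projected region of interest then raise or lower that certainty. The thresholds, weights, and fields below are hypothetical, not the thesis's actual values or code.

```python
# Illustrative laser/vision fusion sketch with made-up thresholds and weights.
from dataclasses import dataclass


@dataclass
class LaserObject:
    width_m: float      # object extent reported by the laser scanner
    speed_mps: float    # object velocity reported by the laser scanner


def initial_certainty(obj: LaserObject) -> float:
    """Assign a rough prior that the laser-detected object is a vehicle."""
    certainty = 0.3
    if 1.4 <= obj.width_m <= 2.6:      # plausible vehicle width
        certainty += 0.3
    if obj.speed_mps > 1.0:            # moving objects on the road are likely vehicles
        certainty += 0.2
    return certainty


def fuse_with_vision(certainty: float, tail_lights_found: bool, tires_found: bool) -> float:
    """Adjust the laser-based certainty using vision checks on the ROI."""
    for found in (tail_lights_found, tires_found):
        certainty += 0.15 if found else -0.15
    return max(0.0, min(1.0, certainty))


if __name__ == '__main__':
    obj = LaserObject(width_m=1.9, speed_mps=8.0)
    c = fuse_with_vision(initial_certainty(obj), tail_lights_found=True, tires_found=False)
    print(f'vehicle certainty: {c:.2f}')   # classify as a vehicle if above a threshold
```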
36. Biologically Inspired Algorithms for Visual Navigation and Object Perception in Mobile Robotics. Northcutt, Brandon D. January 2016.
There is a large gap between the visual capabilities of biological organisms and those of autonomous robots. Even the simplest flying insects can fly within complex environments, locate food, avoid obstacles and elude predators with seeming ease. This stands in stark contrast to even the most advanced ground-based or flying autonomous robots, which are capable of autonomous navigation only within simple environments and will fail spectacularly if the expected environment is modified even slightly. This dissertation provides a narrative of the author's graduate research into biologically inspired algorithms for visual perception and navigation with autonomous robotics applications. This research led to several novel algorithms and neural network implementations that provide improved visual sensing with exceedingly light computational requirements. A new computationally minimal approach to visual motion detection was developed and demonstrated to provide obstacle avoidance without the need for directional specificity. In addition, a novel method of calculating sparse range estimates to visual object boundaries was demonstrated for localization, navigation and mapping using one-dimensional image arrays. Lastly, an assembly of recurrent inhibitory neural networks was developed to provide multiple concurrent object detection, visual feature binding, and internal neural representation of visual objects. These algorithms are promising avenues for future research and are likely to lead to more general, robust and computationally minimal systems of passive visual sensation for a wide variety of autonomous robotics applications.
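As a generic illustration of computationally light, non-directional motion detection on one-dimensional image arrays (not the dissertation's specific algorithms or networks), the sketch below sums absolute frame differences on each half of the view and steers away from the side with more apparent motion; the image rows are hypothetical values.

```python
# Generic motion-energy sketch on 1-D image rows; purely illustrative.


def motion_energy(prev_row, curr_row):
    """Summed absolute intensity change between two 1-D image rows."""
    return sum(abs(c - p) for p, c in zip(prev_row, curr_row))


def steer_command(prev_row, curr_row):
    """Turn away from the side with higher motion energy (nearer surfaces move faster)."""
    mid = len(curr_row) // 2
    left = motion_energy(prev_row[:mid], curr_row[:mid])
    right = motion_energy(prev_row[mid:], curr_row[mid:])
    if left > right:
        return 'turn_right'
    if right > left:
        return 'turn_left'
    return 'straight'


if __name__ == '__main__':
    # Hypothetical intensity rows; the left half changes more between frames.
    prev_row = [10, 12, 14, 13, 11, 10, 10, 10]
    curr_row = [18, 20, 22, 13, 11, 10, 10, 10]
    print(steer_command(prev_row, curr_row))   # -> 'turn_right'
```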
37. Long term appearance-based mapping with vision and laser. Paul, Rohan. January 2012.
This thesis is about appearance-based topological mapping for mobile robots using vision and laser. Our goal is life-long continual operation in outdoor unstructured workspaces. We present a new probabilistic framework for appearance-based mapping and navigation incorporating spatial and visual appearance. Locations are encoded probabilistically as random graphs possessing latent distributions over visual features and pair-wise Euclidean distances, generating observations modeled as 3D constellations of features observed via noisy range and visual detectors. Multi-modal distributions over inter-feature distances are learnt using non-parametric kernel density estimation. Inference is accelerated by executing a Delaunay tessellation of the observed graph with minimal loss in performance, scaling log-linearly with scene complexity. Next, we demonstrate how a robot can, through introspection and then targeted data retrieval, improve its own place recognition performance. We introduce the idea of a dynamic sampling set, the onboard workspace representation, that adapts with the increasing visual experience of a continually operating robot. Based on a topic-based probabilistic model of images, we use a measure of perplexity to evaluate how well a working set of background images explains the robot's online view of the world. Offline, the robot then searches an external resource for additional background images that bolster its ability to localize in its environment when used next. Finally, we present an online and incremental approach allowing an exploring robot to generate apt and compact summaries of its life experience using canonical images that capture the essence of the robot's visual experience, illustrating both what was ordinary and what was extraordinary. Leveraging probabilistic topic models and an incremental graph clustering technique, we present an algorithm that scales well with time and variation of experience, generating a summary that evolves incrementally with the novelty of data.
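As a minimal sketch of the non-parametric element mentioned above, the snippet below builds a Gaussian kernel density estimate over observed pair-wise inter-feature distances, which can then be queried to score how well a new distance fits a learned place model. The bandwidth and sample distances are illustrative assumptions.

```python
# Gaussian kernel density estimate over pair-wise feature distances (sketch).
import math


def gaussian_kde(samples, bandwidth):
    """Return a function d -> estimated density, built from observed distances."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))

    def density(d):
        return norm * sum(math.exp(-0.5 * ((d - s) / bandwidth) ** 2) for s in samples)

    return density


if __name__ == '__main__':
    # Hypothetical pair-wise distances (metres) between two recurring features,
    # forming a multi-modal (two-cluster) distribution.
    observed = [1.02, 0.98, 1.05, 2.51, 2.48, 2.55]
    pdf = gaussian_kde(observed, bandwidth=0.1)
    for d in (1.0, 1.8, 2.5):
        print(f'density at {d:.1f} m: {pdf(d):.3f}')
```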
38. Srovnání lokalizačních technik / Comparison of Localization Techniques. Skalka, Marek. January 2011.
This work compares localization techniques used in mobile robotics. Localization - determining one's own position within a space - is one of the fundamental challenges of robotics. The introduction is devoted to a detailed description of localization and to the categorization of localization techniques. In subsequent chapters, category by category, various localization techniques and their variants are described and their strengths and weaknesses are compared. The work successively addresses: probabilistic localization techniques, used to process inaccurate sensor measurements and provide a reliable position estimate; relative localization techniques, used to evaluate relative changes in the robot's position; and absolute localization techniques, used to find and estimate the absolute position of the robot in the environment.
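The probabilistic category can be illustrated with a toy one-dimensional discrete Bayes filter that combines a relative motion (predict) step with an absolute measurement (update) step; the four-cell landmark world and the model probabilities below are assumptions made purely for this example.

```python
# Toy 1-D discrete Bayes filter: relative motion prediction + absolute update.


def predict(belief, move_prob=0.8):
    """Shift belief one cell to the right, with some probability of not moving."""
    n = len(belief)
    new_belief = [0.0] * n
    for i, p in enumerate(belief):
        new_belief[(i + 1) % n] += p * move_prob       # moved as commanded
        new_belief[i] += p * (1.0 - move_prob)         # wheel slip: stayed put
    return new_belief


def update(belief, world, measurement, hit_prob=0.9):
    """Weight each cell by how well it explains the absolute measurement."""
    weighted = [p * (hit_prob if world[i] == measurement else 1.0 - hit_prob)
                for i, p in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]


if __name__ == '__main__':
    world = ['door', 'wall', 'door', 'wall']       # hypothetical landmark map
    belief = [0.25, 0.25, 0.25, 0.25]              # uniform prior (unknown start)
    belief = update(belief, world, 'door')         # absolute: a door is observed
    belief = predict(belief)                       # relative: robot moves one cell
    belief = update(belief, world, 'wall')         # absolute: a wall is observed
    print([round(p, 3) for p in belief])
```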
39. Robotic First Aid: Using a mobile robot to localise and visualise points of interest for first aid. Hotze, Wolfgang. January 2016.
Domestic robots developed to support human beings by performing daily tasks such as cleaning should also be able to help in emergencies by finding, analysing, and assisting persons in need of first aid. Here, a robot capable of performing some useful task related to first aid is referred to as a First Aid Mobile Robot (FAMR). One challenge which, to the author's knowledge, has not been solved is how such a FAMR can find a fallen person's pose within an environment, recognising the locations of points of interest for first aid, such as the mouth, nose, chin, chest and hands, on a map. To overcome this challenge, a new approach called AHBL is introduced, based on leveraging a robot's capabilities (multiple sensors and mobility). AHBL comprises four steps: Anomaly detection, Human detection, Body part recognition, and Localisation on a map. It was broken down into four steps for modularity (e.g., a different way of detecting anomalies can be slipped in without changing the other modules) and because it was not clear which step would be hardest to implement. In the evaluation of AHBL, a FAMR developed for this work was able to find the pose of a fallen person (a mannequin) in a known environment with an average success rate of 83%, and an average localisation discrepancy of 1.47 cm between estimated body part locations and ground truth. The presented approach can be adapted for use in other robots and contexts, and can act as a starting point toward designing systems for autonomous robotic first aid.
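The modularity of the four AHBL stages can be sketched as a chain of independent callables, so that any one stage can be replaced without touching the others; the stage implementations and the scene dictionary below are placeholders, not the thesis's actual detectors.

```python
# Structural sketch of a four-stage AHBL pipeline with placeholder stages.


def detect_anomaly(scene):
    """Stage A: flag that something unusual (e.g. a fallen shape) is present."""
    return scene.get('unexpected_object', False)

def detect_human(scene):
    """Stage H: decide whether the anomaly is a human-like body."""
    return scene.get('human_like', False)

def recognise_body_parts(scene):
    """Stage B: return named body parts with positions in the robot frame."""
    return scene.get('body_parts', {})

def localise_on_map(body_parts, robot_pose):
    """Stage L: convert robot-frame part positions into map coordinates."""
    rx, ry = robot_pose
    return {name: (rx + dx, ry + dy) for name, (dx, dy) in body_parts.items()}


def ahbl(scene, robot_pose):
    if not detect_anomaly(scene) or not detect_human(scene):
        return {}
    return localise_on_map(recognise_body_parts(scene), robot_pose)


if __name__ == '__main__':
    # Hypothetical scene description standing in for real sensor processing.
    scene = {'unexpected_object': True, 'human_like': True,
             'body_parts': {'mouth': (0.4, 0.1), 'chest': (0.5, -0.2)}}
    print(ahbl(scene, robot_pose=(2.0, 3.0)))
```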
40. Desenvolvimento de um sistema de planejamento de trajetória para veículos autônomos agrícolas / Development of a path planning system for autonomous agricultural vehicles. Sanches, Rodrigo Marcon. 18 October 2012.
O objetivo deste trabalho é desenvolver um sistema de navegação global para que veículos agrícolas autônomos possam executar missões em campos de cultivo através de um sistema de planejamento de trajetórias. Missões podem ser entendidas como sendo tarefas (p.ex.: de monitoramento, coleta de amostras, etc.) através de pequenas rotas que os veículos devem seguir ao longo de seus trabalhos diários, percorrendo a menor distância possível entre os pontos de origem e destino. O planejamento de trajetória foi dividido em etapas para facilitar o entendimento de cada uma delas. O mapeamento apresentado neste trabalho foi feito em regiões de cultivo de café nos estados de São Paulo e Minas Gerais. Os pontos do mapa foram amostrados utilizando um módulo receptor de sinal GPS (Global Positioning System) ao longo dos caminhos onde é possível a passagem do veículo dentro da plantação. Uma etapa importante para o sucesso deste sistema é a etapa de pré-processamento dos dados. Nesta etapa são inseridas as relações entre os pontos do mapeamento da área. As missões foram pré-definidas de modo a testar o cálculo do caminho de custo mínimo que é realizado através do algoritmo de Dijkstra. A cada ponto da rota é fornecido o ângulo de direção com o qual o veículo deve estar em relação ao Norte geográfico. De acordo com a mudança pretendida do ângulo de direção é proposta uma suavização nesta mudança através da alteração do percurso para um arco de circunferência. Neste caso, o raio de giro é informado. A última etapa consiste em fornecer a velocidade máxima de deslocamento do veículo em função da mudança de direção e velocidade angular máxima do centro de massa do veículo. O sistema proposto neste trabalho foi capaz de determinar o caminho com a menor distância entre dois pontos do mapeamento (coordenadas geográficas) e o calcular da distância entre os pontos. Embora a fórmula utilizada para calcular a distância entre duas coordenadas geográficas considerar o formato da Terra como sendo uma esfera, isto não gerou erro significativo para a aplicação proposta. A suavização proposta possibilitou, em alguns pontos, o aumento da velocidade de deslocamento por fazer a mudança do ângulo de direção de forma menos abrupta. / The objective of this work is to develop a global navigation system so that autonomous agricultural vehicles can perform missions in crop fields through a path planning system. Missions can be understood as tasks (e.g. monitoring, sample collection) carried out over short routes that the vehicles must follow during their daily work, traveling the shortest possible distance between the points of origin and destination. The path planning was divided into steps to make each one easier to understand. The mapping presented in this work was done in coffee-growing regions in the states of São Paulo and Minas Gerais. The map points were sampled using a GPS (Global Positioning System) receiver module along the paths where the vehicle can travel within the plantation. An important step for the success of this system is the data pre-processing step, in which the relations between the mapped points are inserted. The missions were predefined in order to test the minimum-cost path computation performed with Dijkstra's algorithm. At each point of the route, the heading angle the vehicle should have with respect to geographic North is given. Depending on the intended change of heading angle, a smoothing of this change is proposed by replacing the route segment with a circular arc; in this case, the turning radius is reported. The last step is to provide the maximum vehicle speed as a function of the change of direction and the maximum angular speed of the vehicle's center of mass. The system proposed in this work was able to determine the shortest path between two mapped points (geographic coordinates) and to calculate the distance between them. Although the formula used to calculate the distance between two geographic coordinates treats the Earth as a sphere, this did not generate significant error for the proposed application. At some points, the proposed smoothing made it possible to increase the travel speed by making the change of heading angle less abrupt.
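Two of the calculations described above can be sketched directly, under the abstract's stated spherical-Earth assumption: the great-circle (haversine) distance between two GPS waypoints, usable as an edge weight for Dijkstra's algorithm, and the speed limit on a circular smoothing arc given the vehicle's maximum angular speed (v_max = omega_max * r). The coordinates and limits in the example are illustrative values only.

```python
# Great-circle waypoint distance and arc speed limit under a spherical-Earth model.
import math

EARTH_RADIUS_M = 6371000.0   # mean Earth radius, spherical model


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2.0 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def max_speed_on_arc(turn_radius_m, max_angular_speed_rad_s):
    """Speed limit so the vehicle's yaw rate stays within its maximum."""
    return max_angular_speed_rad_s * turn_radius_m


if __name__ == '__main__':
    # Two hypothetical waypoints inside a plantation row.
    d = haversine_m(-21.23000, -45.00000, -21.23010, -45.00005)
    print(f'edge length for Dijkstra: {d:.2f} m')
    print(f'speed limit on a 3 m radius arc: {max_speed_on_arc(3.0, 0.5):.2f} m/s')
```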