11

Visual servo control for a human-following robot

Burke, Michael Glen 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control. Typically, visual servo techniques can be categorised into image-based visual servoing and position-based visual servoing. This thesis discusses each of these approaches, and argues that a position-based visual servo control approach is more suited to human following. A position-based visual servo strategy consists of three distinct phases: target recognition, target pose estimation and controller calculations. The thesis discusses approaches to each of these phases in detail, and presents a complete, functioning system combining these approaches for the purposes of human following. Traditional approaches to human following typically involve a controller that causes platforms to navigate directly towards targets, but this work argues that better following performance can be obtained through the use of a controller that incorporates target orientation information. Although a purely direction-based controller, aiming to minimise both orientation and translation errors, suffers from various limitations, this thesis shows that a hybrid, gain-scheduling combination of two traditional controllers offers better target-following performance than its components. In the case of human following, the inclusion of target orientation information requires that a definition and means of estimating a human's orientation be available. This work presents a human orientation measure and experimental results to show that it is suitable for the purposes of wheeled platform control. Results of human following using the proposed hybrid, gain-scheduling controller incorporating this measure are presented to confirm this.
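The gain-scheduling idea described in this abstract can be sketched as follows. This is a minimal illustration only, not the thesis's actual control law: the two component controllers, the 50/50 orientation bias, and the blend distance are all invented here for the sketch.

```python
import math

def direct_controller(dx, dy):
    """Drive straight at the target: heading from the bearing, speed from the range."""
    return math.atan2(dy, dx), math.hypot(dx, dy)

def orientation_controller(dx, dy, target_theta):
    """Also account for the target's facing direction, so the platform tends
    to fall in behind the person rather than cutting across their path."""
    bearing = math.atan2(dy, dx)
    return 0.5 * bearing + 0.5 * target_theta, math.hypot(dx, dy)

def hybrid_controller(dx, dy, target_theta, blend_dist=2.0):
    """Gain-scheduled blend of the two controllers above: far from the target
    the orientation-aware commands dominate; close in, the direct controller
    takes over.  The scheduling gain w is clipped to [0, 1]."""
    w = min(math.hypot(dx, dy) / blend_dist, 1.0)
    h1, v1 = direct_controller(dx, dy)
    h2, v2 = orientation_controller(dx, dy, target_theta)
    return (1.0 - w) * h1 + w * h2, (1.0 - w) * v1 + w * v2
```

The scheduling variable here is the range to the target; the thesis's actual scheduling signal and controller forms may differ.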
12

Sistema de visão omnidirecional aplicado no controle de robôs móveis. / Omnidirectional vision system applied to mobile robots control.

Grassi Júnior, Valdir 07 May 2002 (has links)
Omnidirectional vision systems capture images with a 360-degree field of view, making them well suited to tasks such as robot navigation, tele-operation and visual servoing. Such a system does not require the camera to be turned toward a particular direction of attention, but it does require non-conventional image processing, since the acquired image is mapped onto a non-linear polar coordinate system. An effective way to obtain an omnidirectional image is the combined use of lenses and mirrors: several convex mirror shapes can be used, with the camera's optical axis aligned with the centre of the mirror. The most commonly used shapes are conic, parabolic, hyperbolic and spherical. In this work an omnidirectional vision system was implemented using a hyperbolic mirror. The system was mounted on a mobile robot and applied to a control task: tracking a moving target in real time while keeping the distance between the robot and the target constant. The task is accomplished by feeding visual information about the target, acquired by the vision system, back to the robot controller in a visual servo control approach.
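The non-conventional processing the abstract mentions usually starts by unwrapping the polar "donut" image into a rectangular panorama. A minimal nearest-neighbour sketch (the mirror geometry, annulus bounds and sampling scheme here are assumptions, not taken from the thesis):

```python
import math

def unwrap_omni(img, cx, cy, r_min, r_max, out_w, out_h):
    """Map an omnidirectional (annular) image to a panoramic strip.

    img is a 2-D list of pixel values; (cx, cy) is the mirror centre in
    image coordinates; r_min/r_max bound the useful annulus.  Each output
    column is one azimuth angle, each row one radius (nearest neighbour).
    """
    pano = [[0] * out_w for _ in range(out_h)]
    for v in range(out_h):
        r = r_min + (r_max - r_min) * v / (out_h - 1)
        for u in range(out_w):
            theta = 2 * math.pi * u / out_w
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= y < len(img) and 0 <= x < len(img[0]):
                pano[v][u] = img[y][x]
    return pano
```

A real implementation would use the hyperbolic mirror's projection model to choose the radial sampling, and interpolate rather than snap to the nearest pixel.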
14

Robotic Single Cell Manipulation for Biological and Clinical Applications

Leung, Clement 14 December 2011 (has links)
Single cell manipulation techniques have important applications in laboratory and clinical procedures such as intracytoplasmic sperm injection (ICSI) and polar body biopsy for preimplantation genetic diagnosis (PGD). Conventionally, the cell manipulation in these procedures has been performed manually, which entails long training hours and stringent skill requirements. Conventional single cell manipulation also suffers from low success rates and poor reproducibility due to human fatigue and skill variation across operators. This research focuses on the integration of computer vision microscopy and control algorithms into a system for the automation of the following single cell manipulation techniques: (1) sperm immobilization, (2) cell aspiration into a micropipette and cell positioning inside a micropipette, and (3) rotational control of cells in three dimensions. These automated techniques eliminate the need for significant human involvement and long training. Through experimental trials on live cells, the automated techniques demonstrated high success rates.
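At its core, vision-guided cell positioning closes a loop between the microscope image and the micromanipulator. A minimal proportional sketch (the gain, tolerance and interface names are invented for illustration; the thesis's actual controller is not specified here):

```python
def servo_position(read_position, actuate, target, k_p=0.5, tol=1.0, max_steps=200):
    """Close the loop between the vision system (read_position, in pixels)
    and the actuator (actuate, commanded displacement) until the tracked
    feature is within tol pixels of target.  Returns True on success."""
    for _ in range(max_steps):
        error = target - read_position()
        if abs(error) <= tol:
            return True
        actuate(k_p * error)   # proportional correction toward the target
    return False
```

In practice read_position would come from template matching or segmentation of the cell in the microscope frame, and actuate would command a motorized stage or pipette axis.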
16

A Behavior Based Robot Control System Architecture For Navigation In Environments With Randomly Allocated Walls

Altuntas, Berrin 01 January 2004 (has links) (PDF)
Integrating knowledge into a robot's control system is the best way to endow the robot with intelligence. The most useful knowledge for a control system whose aim is to visit the landmarks in an environment is environmental knowledge, and the most natural representation of the robot's environment is a map. This study presents a behavior-based robot control system architecture, based on the subsumption and motor schema architectures, that enables the robot to construct a map of the environment using proximity sensors, odometry sensors, a compass and camera images. The knowledge produced by processing the sensor values is stored in the robot's Short Term Memory (STM) or Long Term Memory (LTM), according to how long it must persist. Knowledge stored in the STM acts as a sensor value, while the LTM stores the map of the environment. The map is not a priori information: the robot constructs it as it moves through the environment. With the help of the constructed map, the robot can visit unvisited areas of the environment and localize itself in its internal world. The controller is designed for a real Khepera robot equipped with the required sensors, and was tested on the Webots simulator (version 2.0) under Linux.
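The subsumption half of such an architecture can be sketched in a few lines: behaviors are ordered by priority, and a higher layer that produces a command suppresses everything below it. The behaviors and sensor fields below are hypothetical stand-ins, not the ones from this thesis.

```python
def subsumption_step(behaviors, sensors):
    """Evaluate behaviors from highest to lowest priority; the first one
    that issues a command subsumes (suppresses) all behaviors below it."""
    for behavior in behaviors:
        command = behavior(sensors)
        if command is not None:
            return command
    return "stop"

# Hypothetical behaviors, listed highest priority first.
def avoid_wall(s):
    return "turn" if s["proximity"] < 0.2 else None

def visit_unvisited(s):
    return "forward" if s["unvisited_nearby"] else None

def wander(s):
    return "forward"
```

A motor schema layer would instead sum weighted vector outputs of all active behaviors; the thesis combines elements of both styles.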
17

Uncalibrated robotic visual servo tracking for large residual problems

Munnae, Jomkwun 17 November 2010 (has links)
In visually guided control of a robot, a large residual problem occurs when the robot configuration is not in the neighborhood of the target acquisition configuration. Most existing uncalibrated visual servoing algorithms use quasi-Gauss-Newton methods, which are effective for small residual problems. The solution used in this study switches between a full quasi-Newton method for the large residual case and a quasi-Gauss-Newton method for the small case. Visual servoing that handles large residual problems while tracking a moving target has not previously appeared in the literature. For large residual problems, various Hessian approximations are introduced: an approximation of the entire Hessian matrix, the dynamic BFGS (DBFGS) algorithm, and two distinct approximations of the residual term, the modified BFGS (MBFGS) algorithm and the dynamic full Newton method with BFGS (DFN-BFGS) algorithm. Because the quasi-Gauss-Newton method has the advantage of fast convergence, its step is used once the iterate is sufficiently near the desired solution. A switching algorithm combines the full quasi-Newton and quasi-Gauss-Newton methods: switching occurs when the image error norm falls below a heuristically selected switching criterion. An adaptive forgetting factor, the dynamic adaptive forgetting factor (DAFF), is also presented. The DAFF is a heuristic scheme that determines the forgetting factor value from the image error norm; compared to other existing adaptive forgetting factor schemes, it yields the best performance in both convergence time and RMS error. Simulation results verify the validity of the proposed switching algorithms with the DAFF for large residual problems. The switching MBFGS algorithm with the DAFF significantly improves tracking performance in the presence of noise. This work is the first model-independent, vision-guided control method for large residual problems with the capability to stably track a moving target with a robot.
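The two heuristics at the heart of this abstract, step switching and an error-driven forgetting factor, can be sketched as below. The functional forms and constants are invented for illustration; the thesis's actual DAFF rule and switching criterion are not reproduced here.

```python
import math

def choose_step(image_error_norm, switch_threshold):
    """Near the solution the residual is small and the quasi-Gauss-Newton
    step converges fast; otherwise take the full quasi-Newton step, whose
    Hessian approximation keeps the residual term."""
    return "gauss-newton" if image_error_norm < switch_threshold else "full-newton"

def forgetting_factor(image_error_norm, lam_min=0.6, lam_max=0.99, scale=5.0):
    """DAFF-flavoured heuristic: a large image error means old Jacobian
    data is stale, so forget fast (small lambda); as the error shrinks,
    lambda rises toward lam_max and the update trusts its history more."""
    return lam_min + (lam_max - lam_min) * math.exp(-image_error_norm / scale)
```

Both functions would sit inside the Broyden/BFGS-style Jacobian update loop, re-evaluated at every frame from the current image error norm.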
18

Robot navigation in sensor space

Keeratipranon, Narongdech January 2009 (has links)
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system; the robot navigates using solely information from the bearing sensor space. Most existing robot navigation systems require a ground frame (2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors such as laser range scanners, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. Existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process. Research on animals has revealed how insects exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home, but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition: it takes snapshots and compass headings of some landmarks, and to return home it tries to line up the landmarks exactly as they were before it started wandering. This thesis introduces a navigation method based on reflex actions in sensor space.
The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterised pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like those of other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimised, not the distance moved on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and in experiments on a real robot.
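The reflex action can be simulated in a few lines: descend the squared distance in bearing sensor space by finite differences. The landmark layout, step size and gradient scheme below are assumptions made for this sketch, not the thesis's setup; the ground coordinates appear only to simulate what the bearing sensor would observe.

```python
import math

LANDMARKS = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]  # three landmarks, no obstacles

def bearings(pos):
    """Sensor vector: the bearing from pos to each landmark."""
    x, y = pos
    return [math.atan2(ly - y, lx - x) for lx, ly in LANDMARKS]

def sensor_error(pos, target):
    """Squared distance in bearing sensor space to the target sensor vector."""
    return sum((b - t) ** 2 for b, t in zip(bearings(pos), target))

def reflex_navigate(start, target, step=0.5, eps=1e-5, iters=2000):
    """Reflex action: finite-difference gradient descent on the sensor-space
    error.  The controller never computes the goal's Cartesian coordinates;
    it only compares current bearings against the target bearings."""
    x, y = start
    for _ in range(iters):
        e = sensor_error((x, y), target)
        gx = (sensor_error((x + eps, y), target) - e) / eps
        gy = (sensor_error((x, y + eps), target) - e) / eps
        x, y = x - step * gx, y - step * gy
    return x, y
```

With three landmarks the bearing vector generically pins down a unique ground position, which is why the descent terminates at the intended goal rather than some other point with the same sensor reading.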
