  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

DataMart for an Autonomous Guided Vehicle Using ColdFusion

RAVINDRAN, RAMYA 11 October 2001 (has links)
No description available.
2

Operation Interface for Unmanned Vehicles (original title: Interface de operação para veículos não tripulados)

Ferreira, António Sérgio Borges dos Santos January 2010 (has links)
Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 2010
3

Development of a Global Vision System for a Fleet of Mobile Mini-Robots (original title: Desenvolvimento de um Sistema de Visão Global para uma Frota de Mini-Robôs Móveis)

Aires, Kelson Rômulo Teixeira 28 March 2001 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Navigation based on visual feedback for robots working in a closed environment can be obtained by mounting a camera on each robot (a local vision system). However, this solution requires a camera and local processing capacity for each robot. When possible, a global vision system is a cheaper solution to this problem. In this case, one camera, or a small number of cameras covering the whole workspace, can be shared by the entire team of robots, saving the cost of the many cameras and the associated processing hardware needed in a local vision system. This work presents the implementation and experimental results of a global vision system for mobile mini-robots, using robot soccer as the test platform. The proposed vision system consists of a camera, a frame grabber, and a computer (PC) for image processing. The PC is responsible for the team's motion control, based on the visual feedback, sending commands to the robots through a radio link. So that the system can unequivocally recognize each robot, each one carries a label on its top consisting of two colored circles. Image processing algorithms were developed for the efficient computation, in real time, of the position of every object (robots and ball) and the orientation of each robot. A major problem encountered was labeling the color of each colored point of the image, in real time, under time-varying illumination conditions. To overcome this problem, an automatic camera calibration based on the K-means clustering algorithm was implemented. This method guarantees that similar pixels are clustered around a single color class. The experimental results showed that the position and orientation of each robot can be obtained with a precision of a few millimeters. The updating of position and orientation was attained in real time, analyzing 30 frames per second.
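The abstract names the two key techniques — K-means clustering of pixel colors for automatic camera calibration, and a two-circle label per robot from which position and heading are derived — but gives no implementation details. A minimal illustrative sketch of both ideas (pure Python; the data layout, function names, and the midpoint/atan2 pose convention are assumptions, not the thesis's actual code):

```python
import math

def kmeans_colors(pixels, k, iters=20):
    """Cluster RGB pixels into k color classes with plain K-means.
    Centers are initialized from the first k distinct pixels."""
    centers = []
    for p in pixels:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k),
                    key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(3)))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster goes empty
                centers[i] = tuple(sum(p[d] for p in cl) / len(cl) for d in range(3))
    return centers

def classify(pixel, centers):
    """Label a pixel with the index of its nearest color class."""
    return min(range(len(centers)),
               key=lambda c: sum((pixel[d] - centers[c][d]) ** 2 for d in range(3)))

def robot_pose(front_circle, rear_circle):
    """Position = midpoint of the label's two circle centers;
    heading = angle of the rear-to-front vector (assumed convention)."""
    (xf, yf), (xr, yr) = front_circle, rear_circle
    x, y = (xf + xr) / 2.0, (yf + yr) / 2.0
    theta = math.atan2(yf - yr, xf - xr)
    return x, y, theta
```

For example, a frame containing mostly reddish and mostly bluish pixels yields two well-separated centers, after which any new pixel can be labeled by nearest center; a label with circles at (10, 0) and (0, 0) gives position (5, 0) and heading 0.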
4

Methods and Metrics for Human Control of Multi-Robot Teams

Anderson, Jeffrey D. 15 November 2006 (has links) (PDF)
Human-controlled robots are utilized in many situations, and such use is becoming widespread. This thesis details research that allows a single human to interact with a team of robots performing tasks that require cooperation. The research provides insight into effective interaction design methods and appropriate interface techniques. The use of team-level autonomy is shown to decrease human workload while simultaneously improving individual robot efficiency and robot-team cooperation. An indoor human-robot interaction testbed was developed at the BYU MAGICC Lab to facilitate experimentation. The testbed consists of eight robots equipped with wireless modems, a field on which the robots move, an overhead camera and image processing software that track robot position and heading, a simulator that allows development and testing without using the hardware, and a graphical user interface that enables human control of either simulated or hardware robots. The image processing system was essential for effective robot hardware operation and is described in detail. The system produced accurate robot position and heading information 30 times per second for up to 12 robots, was relatively insensitive to lighting conditions, and was easily reconfigurable. The completed testbed was used to create a game for testing human-robot interaction schemes. The game required a human controlling three robots to find and tag three robot opponents in a maze. Finding an opponent could be accomplished by individual robots, but tagging an opponent required cooperation between at least two robots. The game was played by 11 subjects in five different autonomy modes, ranging from limited robot autonomy to advanced individual autonomy with basic team-level autonomy. Participants were interrupted during the game by a secondary spatial reasoning task which prevented them from interacting with the robots for short periods of time. Robot performance during that interruption provided a measure of both individual and team neglect tolerance. Individual robot neglect tolerance and performance did not directly correspond to those quantities at the team level. The interaction mode with the highest levels of individual and team autonomy was most effective; it minimized game time and human workload and maximized team neglect tolerance.
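The abstract measures neglect tolerance from robot performance while the operator is interrupted, but does not define the metric precisely. A minimal sketch under one common assumed definition — the time a robot sustains performance above a threshold with no operator input, with the team-level value taken as the weakest robot's time (both definitions are assumptions, not the thesis's actual metric):

```python
def neglect_tolerance(perf_samples, dt, threshold):
    """Seconds that performance stays at or above `threshold` once the
    operator stops attending, given samples taken every `dt` seconds."""
    t = 0.0
    for p in perf_samples:
        if p < threshold:
            break
        t += dt
    return t

def team_neglect_tolerance(per_robot_samples, dt, threshold):
    """Team-level variant: the team counts as performing only while
    every robot is still above threshold (one assumed definition)."""
    return min(neglect_tolerance(s, dt, threshold) for s in per_robot_samples)
```

This choice of `min` for the team makes the abstract's observation concrete: one robot with high individual neglect tolerance cannot raise the team's value if a teammate degrades quickly, so individual and team-level quantities need not correspond.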
