1

Řízení čtyřkolového mobilního robotu / 4 Wheel mobile robot control

Deďo, Michal January 2011 (has links)
The purpose of this thesis is to design and implement a control system for a four-wheel mobile robot that will later be used for mapping and localization. Specifically, it covers the design of a drive controller based on Xmega microcontrollers, which also process the sensor signals. Communication with a PC is handled by a Bluetooth module. With the robot's future use in mind, modifications of the mechanical part are designed and carried out. The correctness and functionality of all parts of the robot are verified by executing basic movements.
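The abstract does not specify the wire format used between the PC and the robot over Bluetooth. As an illustration only, here is a minimal Python sketch of how framed drive commands for a differential-drive base might be encoded on the PC side; the frame layout, start byte, and checksum are assumptions invented for this sketch, not the thesis's actual protocol:

```python
import struct
from functools import reduce

FRAME_START = 0xAA  # hypothetical start-of-frame marker


def encode_drive_command(v_left, v_right):
    """Pack left/right wheel speeds (m/s) into a framed message:
    one start byte, two little-endian 32-bit floats, and a
    single-byte XOR checksum over the payload."""
    payload = struct.pack("<ff", v_left, v_right)
    checksum = reduce(lambda a, b: a ^ b, payload, 0)
    return bytes([FRAME_START]) + payload + bytes([checksum])


def decode_drive_command(frame):
    """Inverse of encode_drive_command; raises ValueError if the
    frame is corrupted (bad start byte or checksum mismatch)."""
    if frame[0] != FRAME_START:
        raise ValueError("bad start byte")
    payload, checksum = frame[1:9], frame[9]
    if checksum != reduce(lambda a, b: a ^ b, payload, 0):
        raise ValueError("checksum mismatch")
    return struct.unpack("<ff", payload)
```

A checksum of this kind lets the microcontroller discard frames mangled in transit rather than executing a corrupted speed command.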
2

Techniques for Extracting Contours and Merging Maps

Adluru, Nagesh January 2008 (has links)
Understanding machine vision can certainly improve our understanding of artificial intelligence, as vision is one of the basic intellectual activities of living beings. Since the notion of computation unifies the concept of a machine, computer vision can be understood as an application of modern approaches to achieving artificial intelligence, such as machine learning and cognitive psychology. Computer vision mainly involves processing different types of sensor data, resulting in "perception of machines". Perception of machines plays a very important role in several artificial intelligence applications with sensors. There are numerous practical situations where we acquire sensor data, e.g., from mobile robots, security cameras, and service and recreational robots. Making sense of this sensor data is very important so that we can increase the automation of its use. Tools from image processing, shape analysis, and probabilistic inference, i.e., learning theory, form the artillery of the current generation of computer vision researchers. In my thesis I will address some of the most annoying components of two important open problems, viz. object recognition and autonomous navigation, which remain central to robotic, or in other words computational, intelligence. These problems are concerned with endowing computers with abilities to recognize and navigate similar to those of humans.

Object boundaries are very useful descriptors for recognizing objects. Extracting boundaries from real images has been a notoriously open problem for several decades in the vision community. In the first part I will present novel techniques for extracting object boundaries. The techniques are based on the practically successful, state-of-the-art Bayesian filtering framework, well-founded geometric properties relating boundaries and skeletons, and robust high-level shape analyses.

Acquiring global maps of the environment is crucial for robots to localize and navigate autonomously. Though there has been a lot of progress in achieving autonomous mobility, e.g., in the DARPA grand challenges of 2005 and 2007, the mapping problem itself remains unsolved; solving it is essential for robust autonomy in hard cases like rescue arenas and collaborative exploration. In the second part I will present techniques for merging maps acquired by multiple and single robots. We developed physics-based energy-minimization techniques and also shape-based techniques for scalable merging of maps. Our shape-based techniques combine high-level vision techniques that exploit similarities among maps with strong statistical methods that can handle uncertainties in the Bayesian sense. / Computer and Information Science
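The map-merging objective can be illustrated with a toy sketch: treat each map as a set of occupied grid cells and score a candidate rigid transform by how many of one map's cells it brings onto the other's. This is only a schematic stand-in for the energy-minimization and shape-based methods the abstract describes; the function names and the grid representation are invented for illustration:

```python
import math


def transform_points(points, theta, tx, ty):
    """Apply a 2D rigid transform (rotation theta, then translation
    (tx, ty)) to integer grid cells, rounding back onto the grid."""
    c, s = math.cos(theta), math.sin(theta)
    return [(round(c * x - s * y + tx), round(s * x + c * y + ty))
            for x, y in points]


def overlap_score(occupied_a, occupied_b, theta, tx, ty):
    """Fraction of map B's occupied cells that land on occupied
    cells of map A after transforming B into A's frame.  A merging
    procedure would maximize this score over (theta, tx, ty)."""
    a = set(occupied_a)
    moved = transform_points(occupied_b, theta, tx, ty)
    return sum(p in a for p in moved) / len(moved)
```

A real merger would search (or optimize) over candidate transforms and then fuse cell occupancies probabilistically; the score above is just the agreement term such a search would drive upward.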
3

Scalable online decentralized smoothing and mapping

Cunningham, Alexander G. 22 May 2014 (has links)
Many applications for field robots can benefit from large numbers of robots, especially applications where the objective is for the robots to cover or explore a region. A key enabling technology for robust autonomy in these teams of small and cheap robots is the development of collaborative perception to account for the shortcomings of the small and cheap sensors on the robots. In this dissertation, I present DDF-SAM to address the decentralized data fusion (DDF) inference problem with a smoothing and mapping (SAM) approach to single-robot mapping that is online, scalable and consistent while supporting a variety of sensing modalities. The DDF-SAM approach performs fully decentralized simultaneous localization and mapping in which robots choose a relevant subset of variables from their local map to share with neighbors. Each robot summarizes its local map to yield a density on exactly this chosen set of variables, and then distributes this summarized map to neighboring robots, allowing map information to propagate throughout the network. Each robot fuses summarized maps it receives to yield a map solution with an extended sensor horizon. I introduce two primary variations on DDF-SAM, one that uses a batch nonlinear constrained optimization procedure to combine maps, DDF-SAM 1.0, and one that uses an incremental solving approach for substantially faster performance, DDF-SAM 2.0. I validate these systems using a combination of real-world and simulated experiments. In addition, I evaluate design trade-offs for operations within DDF-SAM, with a focus on efficient approximate map summarization to minimize communication costs.
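Summarizing a local map onto a chosen subset of variables amounts to marginalizing out the rest of a Gaussian density. Under a linear-Gaussian assumption this is a Schur complement in information form; the dense-matrix sketch below shows the operation itself, though it is only illustrative — DDF-SAM operates on sparse factor graphs, not dense matrices, and the function name is invented here:

```python
import numpy as np


def summarize_map(Lambda, eta, keep):
    """Marginalize out every variable not listed in `keep` from a
    Gaussian in information form (information matrix Lambda,
    information vector eta), via the Schur complement.  Returns the
    summarized (Lambda_s, eta_s) defined only over `keep`."""
    n = Lambda.shape[0]
    drop = [i for i in range(n) if i not in keep]
    Laa = Lambda[np.ix_(keep, keep)]   # kept block
    Lab = Lambda[np.ix_(keep, drop)]   # cross terms
    Lbb = Lambda[np.ix_(drop, drop)]   # marginalized block
    Lbb_inv = np.linalg.inv(Lbb)
    Lambda_s = Laa - Lab @ Lbb_inv @ Lab.T
    eta_s = eta[keep] - Lab @ Lbb_inv @ eta[drop]
    return Lambda_s, eta_s
```

The key property, which also makes the operation testable, is that the mean recovered from the summarized density equals the full solution restricted to the kept variables, so neighbors receiving only the summary still agree on the shared variables.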
4

From Human to Robot Grasping

Romero, Javier January 2011 (has links)
Imagine that a robot fetched this thesis for you from a book shelf. How do you think the robot would have been programmed? One possibility is that experienced engineers had written low-level descriptions of all imaginable tasks, including grasping a small book from this particular shelf. A second option would be that the robot tried to learn how to grasp books from your shelf autonomously, resulting in hours of trial-and-error and several books on the floor.

In this thesis, we argue in favor of a third approach, where you teach the robot how to grasp books from your shelf through grasping by demonstration. It is based on the idea of robots learning grasping actions by observing humans performing them. This imposes minimum requirements on the human teacher: no programming knowledge and, in this thesis, no need for special sensory devices. It also maximizes the number of sources from which the robot can learn: any video footage showing a task performed by a human could potentially be used in the learning process. And hopefully it reduces the number of books that end up on the floor.

This document explores the challenges involved in the creation of such a system. First, the robot should be able to understand what the teacher is doing with their hands. That is, it needs to estimate the pose of the teacher's hands by visually observing them, in the absence of markers or any other input devices that could interfere with the demonstration. Second, the robot should translate the human representation, acquired in terms of hand poses, to its own embodiment. Since the kinematics of the robot are potentially very different from the human one, defining a similarity measure applicable to very different bodies becomes a challenge. Third, the execution of the grasp should be continuously monitored, to react to inaccuracies in the robot's perception or changes in the grasping scenario. While visual data can help correct the reaching movement toward the object, tactile data enables accurate adaptation of the grasp itself, thereby adjusting the robot's internal model of the scene to reality. Finally, acquiring compact models of human grasping actions can help both in perceiving human demonstrations more accurately and in executing them in a more human-like manner. Moreover, modeling human grasps can provide us with insights about what makes an artificial hand design anthropomorphic, assisting the design of new robotic manipulators and hand prostheses.

All these modules try to solve particular subproblems of a grasping-by-demonstration system. We hope the research on these subproblems performed in this thesis will both bring us closer to our dream of a learning robot and contribute to the multiple research fields from which these subproblems come. / QC 20111125
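As a toy illustration of the embodiment-mapping problem, one simple scheme is to compare hands only through fingertip positions, which human and robot hands both have regardless of their differing kinematics, and retrieve the stored robot grasp whose demonstrated fingertip pose is nearest to the observed one. The data layout and function names below are invented for this sketch and are not the thesis's actual method:

```python
import math


def fingertip_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding fingertip
    positions (lists of (x, y, z) tuples).  Comparing only
    fingertips sidesteps the mismatch between human and robot
    hand kinematics."""
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / len(pose_a)


def nearest_robot_grasp(human_pose, grasp_library):
    """Pick the stored robot grasp whose recorded human fingertip
    pose is closest to the observed one.  `grasp_library` maps a
    grasp name to (human_fingertips, robot_joint_angles)."""
    name, (fingertips, joints) = min(
        grasp_library.items(),
        key=lambda kv: fingertip_distance(human_pose, kv[1][0]))
    return name, joints
```

A real embodiment mapping would go further, e.g. optimizing the robot's joint angles so its own fingertips reproduce the observed contact points, but nearest-neighbor retrieval over a task-space measure conveys why such a measure must be body-independent.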
