101

Teamwork in a swarm of robots: an experiment in search and retrieval

Nouyan, Shervin 24 September 2008 (has links)
In this thesis, we investigate the problem of path formation and prey retrieval in a swarm of robots. We present two swarm intelligence control mechanisms for distributed robot path formation. In the first, the robots form linear chains. We study three variants of robot chains, which vary in the degree of motion allowed to the chain structure. The second mechanism is called vector field. In this case, the robots form a pattern that globally indicates the direction towards a goal or home location. Both algorithms were designed following the swarm robotics control principles: simplicity of control, locality of sensing and communication, homogeneity and distributedness.

We test each controller on a task that consists of forming a path between two objects, the prey and the nest, and retrieving the prey to the nest. The difficulty of the task is given by four constraints. First, the prey requires concurrent, physical handling by multiple robots to be moved. Second, each robot's perceptual range is small when compared to the distance between the nest and the prey; moreover, perception is unreliable. Third, no robot has any explicit knowledge about the environment beyond its perceptual range. Fourth, communication among robots is unreliable and limited to a small set of simple signals that are locally broadcast.

In simulation experiments we test our controllers under a wide range of conditions, changing the distance between nest and prey, varying the number of robots used, and introducing different obstacle configurations in the environment. Furthermore, we test the controllers for robustness by adding noise to the different sensors, and for fault tolerance by completely removing a sensor or actuator. We validate the chain controller in experiments with up to twelve physical robots. We believe that these experiments are among the most sophisticated examples of self-organisation in robotics to date.
/ Doctorate in Engineering Sciences / info:eu-repo/semantics/nonPublished
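As a rough illustration of the chain idea described above, the sketch below grows a path of robot positions between nest and prey under a limited sensing radius. It is a centralised geometric caricature, not the thesis's distributed controller: `form_chain` and the assumption of globally known coordinates are inventions for illustration only.

```python
import math

def form_chain(nest, prey, n_robots, sense_range):
    """Grow a chain of robot positions from the nest toward the prey,
    placing each new robot one sensing radius beyond the current tip.
    Returns the chain once the prey is within sensing range of the tip,
    or None if the robots run out before the prey is reached."""
    chain = [nest]
    for _ in range(n_robots):
        tip = chain[-1]
        dx, dy = prey[0] - tip[0], prey[1] - tip[1]
        dist = math.hypot(dx, dy)
        if dist <= sense_range:   # prey visible from the tip: path complete
            return chain
        frac = sense_range / dist
        chain.append((tip[0] + dx * frac, tip[1] + dy * frac))
    tip = chain[-1]
    if math.hypot(prey[0] - tip[0], prey[1] - tip[1]) <= sense_range:
        return chain
    return None
```

The constraint the thesis emphasises, that each robot's perceptual range is small relative to the nest-prey distance, shows up here as the failure case: too few robots and no chain can bridge the gap.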
102

AR-Supported Supervision of Conditional Autonomous Robots: Considerations for Pedicle Screw Placement in the Future

Schreiter, Josefine, Schott, Danny, Schwenderling, Lovis, Hansen, Christian, Heinrich, Florian, Joeres, Fabian 16 May 2024 (has links)
Robotic assistance is applied in orthopedic interventions for pedicle screw placement (PSP). While current robots do not act autonomously, they are expected to gain higher autonomy under surgeon supervision in the mid-term. Augmented reality (AR) is a promising means of supporting this supervision and enabling human–robot interaction (HRI). To outline a futuristic scenario for robotic PSP, the current workflow was analyzed through literature review and expert discussion. Based on this, a hypothetical workflow of the intervention was developed, including an analysis of the necessary information exchange between human and robot. A video see-through AR prototype was designed and implemented. A robotic arm with an orthopedic drill mock-up simulated the robotic assistance. The AR prototype included a user interface to enable HRI. The interface provides data to facilitate understanding of the robot's “intentions”, e.g., patient-specific CT images, the current workflow phase, or the next planned robot motion. Two-dimensional and three-dimensional visualizations illustrated patient-specific medical data and the drilling process. The findings of this work contribute a valuable approach to addressing future clinical needs and highlight the importance of AR support for HRI.
103

Demonstration of passive acoustic detection and tracking of unmanned underwater vehicles

Railey, Kristen Elizabeth January 2018 (has links)
Thesis: S.M., Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 93-99). / In terms of national security, the advancement of unmanned underwater vehicle (UUV) technology has transformed UUVs from tools for intelligence, surveillance, and reconnaissance and mine countermeasures to autonomous platforms that can perform complex tasks like tracking submarines, jamming, and smart mining. Today, they play a major role in asymmetric warfare, as UUVs have attributes that are desirable for less-established navies. They are covert, easy to deploy, low-cost, and low-risk to personnel. The concern of protecting against UUVs of malicious intent is that existing defense systems fall short in detecting, tracking, and preventing the vehicles from causing harm. Addressing this gap in technology, this thesis is the first to demonstrate passively detecting and tracking UUVs in realistic environments strictly from the vehicle's self-generated noise. This work contributes the first power spectral density estimate of an underway micro-UUV, field experiments in a pond and river detecting a UUV with energy thresholding and spectral filters, and field experiments in a pond and river tracking a UUV using conventional and adaptive beamforming. The spectral filters resulted in a probability of detection of 96 % and false alarms of 18 % at a distance of 100 m, with boat traffic in a river environment. Tracking the vehicle with adaptive beamforming resulted in a 6.2 ± 5.7° absolute difference in bearing. The principal achievement of this work is to quantify how well a UUV can be covertly tracked with knowledge of its spectral features. 
This work can be implemented into existing passive acoustic surveillance systems and be applied to larger classes of UUVs, which potentially have louder identifying acoustic signatures. / by Kristen Elizabeth Railey. / S.M.
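The energy-thresholding-with-spectral-filter idea used for detection can be sketched as follows. This is a generic narrowband detector, not the thesis's actual pipeline; `detect_in_bands`, the 200 Hz "signature" tone, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def detect_in_bands(signal, fs, bands, threshold):
    """Spectral-filter detector: flag a detection when the summed PSD
    energy inside the given frequency bands exceeds the threshold."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = np.zeros_like(freqs, dtype=bool)
    for lo, hi in bands:
        mask |= (freqs >= lo) & (freqs <= hi)
    return bool(psd[mask].sum() > threshold)

# Toy scenario: a 200 Hz tone stands in for a narrowband propulsion line.
fs = 1024
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 200 * t)
noise = 0.1 * np.random.default_rng(0).standard_normal(fs)
```

Knowing the vehicle's spectral features, as the thesis argues, is what lets the band mask be placed where the signature energy sits rather than thresholding broadband energy.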
104

Robot odour localisation in enclosed and cluttered environments using naïve physics

Kowadlo, Gideon January 2007 (has links)
Odour localisation is the problem of finding the source of an odour or other volatile chemical. It promises many valuable practical and humanitarian applications. Most localisation methods require a robot to reactively track an odour plume along its entire length. This approach is time consuming and may not be possible in a cluttered indoor environment, where airflow tends to form sectors of circulating airflow. Such environments may be encountered in crawl-ways under floors, roof cavities, mines, caves, tree-canopies, air-ducts, sewers or tunnel systems. Operation in these places is important for such applications as search and rescue and locating the sources of toxic chemicals in an industrial setting. This thesis addresses odour localisation in this class of environments. The solution consists of a sense-map-plan-act style control scheme (and low level behaviour based controller) with two main stages. Firstly, the airflow in the environment is modelled using naive physics rules which are encapsulated into an algorithm named a Naive Reasoning Machine. It was used in preference to conventional methods as it is fast, does not require boundary conditions, and most importantly, provides approximate solutions to the degree of accuracy required for the task, with analogical data structures that are readily useful to a reasoning algorithm. Secondly, a reasoning algorithm navigates the robot to specific target locations that are determined with a physical map, the airflow map, and knowledge of odour dispersal. Sensor measurements at the target positions provide information regarding the likelihood that odour was emitted from potential odour source locations. The target positions and their traversal are determined so that all the potential odour source sites are accounted for. The core method provides values corresponding to the confidence that the odour source is located in a given region.
A second search stage exploiting vision is then used to locate the specific location of the odour source within the predicted region. This comprises the second part of a bi-modal, two-stage search, with each stage exploiting complementary sensing modalities. Single hypothesis airflow modelling faces limitations due to the fact that large differences between airflow topologies are predicted for only small variations in a physical map. This is due to uncertainties in the map and approximations in the modelling process. Furthermore, there are uncertainties regarding the flow direction through inlet/outlet ducts. A method is presented for dealing with these uncertainties, by generating multiple airflow hypotheses. As the robot performs odour localisation, airflow in the environment is measured and used to adjust the confidences of the hypotheses using Bayesian inference. The best hypothesis is then selected, which allows the completion of the localisation task. This method improves the robustness of odour localisation in the presence of uncertainties, making it possible where the single hypothesis method would fail. It also demonstrates the potential for integrating naive physics into a statistical framework. Extensive experimental results are presented to support the methods described above.
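The multiple-hypothesis confidence adjustment described above amounts to a standard Bayesian update over discrete hypotheses, which can be sketched as follows. The hypothesis count and likelihood values are invented for illustration; the thesis's airflow hypotheses are far richer objects.

```python
def update_beliefs(priors, likelihoods):
    """One Bayesian step: weight each airflow hypothesis by the
    likelihood of the latest measurement under it, then renormalise."""
    posterior = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three competing airflow hypotheses, initially equally likely.
belief = [1 / 3, 1 / 3, 1 / 3]
# Simulated likelihoods of three airflow readings; hypothesis 1 fits best.
for like in [(0.2, 0.7, 0.1), (0.3, 0.6, 0.1), (0.1, 0.8, 0.1)]:
    belief = update_beliefs(belief, like)
best = belief.index(max(belief))
```

After a few consistent measurements the belief concentrates on one hypothesis, which is then selected to complete the localisation task, as the abstract describes.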
105

Sensor Fusion with Coordinated Mobile Robots / Sensorfusion med koordinerade mobila robotar

Holmberg, Per January 2003 (has links)
Robust localization is a prerequisite for mobile robot autonomy. In many situations the GPS signal is not available and thus an additional localization system is required. A simple approach is to apply localization based on dead reckoning by use of wheel encoders but it results in large estimation errors. With exteroceptive sensors such as a laser range finder natural landmarks in the environment of the robot can be extracted from raw range data. Landmarks are extracted with the Hough transform and a recursive line segment algorithm. By applying data association and Kalman filtering along with process models the landmarks can be used in combination with wheel encoders for estimating the global position of the robot. If several robots can cooperate, better position estimates are to be expected because robots can be seen as mobile landmarks and one robot can supervise the movement of another. The centralized Kalman filter presented in this master thesis systematically treats robots and extracted landmarks such that benefits from several robots are utilized. Experiments in different indoor environments with two different robots show that long distances can be traveled while the positional uncertainty is kept low. The benefit from cooperating robots in the sense of reduced positional uncertainty is also shown in an experiment.

In addition to the localization algorithms, a typical autonomous robot task in the form of change detection is solved. The change detection method, which requires robust localization, is aimed at surveillance use. The implemented algorithm accounts for measurement and positional uncertainty when determining whether something in the environment has changed. Consecutive true changes as well as sporadic false changes are detected in an illustrative experiment.
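A one-dimensional caricature of the encoder-plus-landmark fusion can be sketched as follows; `kf_step`, the noise values, and the single known landmark are illustrative assumptions, not the thesis's full data-association filter.

```python
def kf_step(x, P, u, z, q=0.1, r=0.05, landmark=10.0):
    """One predict/update cycle of a scalar Kalman filter for a robot on
    a line: the odometry step u inflates the variance P by q (dead
    reckoning), and a range reading z to a landmark at a known
    coordinate pulls the estimate back and shrinks P."""
    x, P = x + u, P + q          # predict from odometry
    x_meas = landmark - z        # turn the range reading into a position fix
    K = P / (P + r)              # Kalman gain
    return x + K * (x_meas - x), (1.0 - K) * P

# Odometry over-reads by 10% each step; the landmark range is exact here.
x, P, true = 0.0, 0.0, 0.0
for _ in range(5):
    true += 1.0
    x, P = kf_step(x, P, u=1.1, z=10.0 - true)
```

Dead reckoning alone would drift to 5.5 after five steps; the landmark corrections keep the estimate near the true position of 5.0, which is the effect the abstract attributes to exteroceptive landmarks.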
106

Robot Task Learning from Human Demonstration

Ekvall, Staffan January 2007 (has links)
Today, most robots used in the industry are preprogrammed and require a well-defined and controlled environment. Reprogramming such robots is often a costly process requiring an expert. By enabling robots to learn tasks from human demonstration, robot installation and task reprogramming are simplified. In a longer time perspective, the vision is that robots will move out of factories into our homes and offices. Robots should be able to learn how to set a table or how to fill the dishwasher. Clearly, robot learning mechanisms are required to enable robots to adapt and operate in a dynamic environment, in contrast to the well-defined factory assembly line. This thesis presents contributions in the field of robot task learning. A distinction is made between direct and indirect learning. Using direct learning, the robot learns tasks while being directly controlled by a human, for example in a teleoperative setting. Indirect learning, however, allows the robot to learn tasks by observing a human performing them. A challenging and realistic assumption that is decisive for the indirect learning approach is that the task-relevant objects are not necessarily at the same location at execution time as when the learning took place. Thus, it is not sufficient to learn movement trajectories and absolute coordinates. Different methods are required for a robot that is to learn tasks in a dynamic home or office environment. This thesis presents contributions to several of these enabling technologies. Object detection and recognition are used together with pose estimation in a Programming by Demonstration scenario. The vision system is integrated with a localization module which enables the robot to learn mobile tasks. The robot is able to recognize human grasp types, map human grasps to its own hand and also evaluate suitable grasps before grasping an object.
The robot can learn tasks from a single demonstration, but it also has the ability to adapt and refine its knowledge as more demonstrations are given. Here, the ability to generalize over multiple demonstrations is important and we investigate a method for automatically identifying the underlying constraints of the tasks. The majority of the methods have been implemented on a real, mobile robot, featuring a camera, an arm for manipulation and a parallel-jaw gripper. The experiments were conducted in an everyday environment with real, textured objects of various shape, size and color. / QC 20100706
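One simple way to generalize over multiple demonstrations and surface the underlying task constraints, sketched here under the assumption that low variance across demos signals a constraint (the thesis's actual identification method may differ; `find_constraints` and the data are illustrative), is:

```python
import statistics

def find_constraints(demos, tol=0.05):
    """Mark feature index i as a task constraint when its value barely
    varies across the demonstrations (population std-dev below tol)."""
    return [
        i
        for i in range(len(demos[0]))
        if statistics.pstdev(d[i] for d in demos) < tol
    ]

# Three demos of a placing task: [lateral position, height above table].
# Height is nearly identical in every demo; lateral position is not.
demos = [[0.10, 0.250], [0.42, 0.252], [0.27, 0.249]]
```

Features that the demonstrator reproduces exactly every time are treated as constraints to preserve at execution, while high-variance features are left free, which is why movement trajectories and absolute coordinates alone do not suffice.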
108

Optimal, Multi-Modal Control with Applications in Robotics

Mehta, Tejas R. 04 April 2007 (has links)
The objective of this dissertation is to incorporate the concept of optimality into multi-modal control and apply the theoretical results to obtain successful navigation strategies for autonomous mobile robots. The main idea in multi-modal control is to break up a complex control task into simpler tasks. In particular, a number of control modes are constructed, each with respect to a particular task, and these modes are combined according to some supervisory control logic in order to complete the overall control task. This way of modularizing the control task lends itself particularly well to the control of autonomous mobile robots, as evidenced by the success of behavior-based robotics. Many challenging and interesting research issues arise when employing multi-modal control. This thesis aims to address these issues within an optimal control framework. In particular, the contributions of this dissertation are as follows: We first addressed the problem of inferring global behaviors from a collection of local rules (i.e., feedback control laws). Next, we addressed the issue of adaptively varying the multi-modal control system to further improve performance. Inspired by adaptive multi-modal control, we presented a constructivist framework for the learning-from-example problem. This framework was applied to the DARPA-sponsored Learning Applied to Ground Robots (LAGR) project. Next, we addressed the optimal control of multi-modal systems with infinite-dimensional constraints. These constraints are formulated as multi-modal, multi-dimensional (M3D) systems, where the dimensions of the state and control spaces change between modes to account for the constraints, to ease the computational burdens associated with traditional methods. Finally, we used multi-modal control strategies to develop effective navigation strategies for autonomous mobile robots.
The theoretical results presented in this thesis are verified by conducting simulated experiments using Matlab and actual experiments using the Magellan Pro robot platform and the LAGR robot. In closing, the main strength of multi-modal control lies in breaking up a complex control task into simpler tasks. This divide-and-conquer approach helps modularize the control system. It has the same effect on complex control systems that object-oriented programming has on large-scale computer programs, namely it allows greater simplicity, flexibility, and adaptability.
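A minimal sketch of supervisory switching between control modes follows; the two mode names, the safety radius, and `supervisor` itself are illustrative inventions, not the dissertation's optimal switching logic.

```python
import math

def unit(vx, vy):
    """Normalise a 2-D vector to unit length."""
    n = math.hypot(vx, vy)
    return (vx / n, vy / n)

def supervisor(pos, goal, obstacles, safe_dist=1.0):
    """Supervisory logic for a two-mode controller: switch to 'avoid'
    (head away from the nearest obstacle) when an obstacle is inside
    the safety radius, otherwise 'go-to-goal' (head toward the goal)."""
    nearest = min(
        obstacles,
        key=lambda o: math.hypot(o[0] - pos[0], o[1] - pos[1]),
        default=None,
    )
    if nearest is not None and math.hypot(nearest[0] - pos[0], nearest[1] - pos[1]) < safe_dist:
        return "avoid", unit(pos[0] - nearest[0], pos[1] - nearest[1])
    return "go-to-goal", unit(goal[0] - pos[0], goal[1] - pos[1])
```

Each mode is a simple feedback law; the supervisor's job, as in the abstract, is to decide which local rule is active so that the combination completes the overall task.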
109

Robot Tool Behavior: A Developmental Approach to Autonomous Tool Use

Stoytchev, Alexander 11 June 2007 (has links)
The ability to use tools is one of the hallmarks of intelligence. Tool use is fundamental to human life and has been for at least the last two million years. We use tools to extend our reach, to amplify our physical strength, and to achieve many other tasks. A large number of animals have also been observed to use tools. Despite the widespread use of tools in the animal world, however, studies of autonomous robotic tool use are still rare. This dissertation examines the problem of autonomous tool use in robots from the point of view of developmental robotics. Therefore, the main focus is not on optimizing robotic solutions for specific tool tasks but on designing algorithms and representations that a robot can use to develop tool-using abilities. The dissertation describes a developmental sequence/trajectory that a robot can take in order to learn how to use tools autonomously. The developmental sequence begins with learning a model of the robot's body since the body is the most consistent and predictable part of the environment. Specifically, the robot learns which perceptual features are associated with its own body and which with the environment. Next, the robot can begin to identify certain patterns exhibited by the body itself and to learn a robot body schema model which can also be used to encode goal-oriented behaviors. The robot can also use its body as a well defined reference frame from which the properties of environmental objects can be explored by relating them to the body. Finally, the robot can begin to relate two environmental objects to one another and to learn that certain actions with the first object can affect the second object, i.e., the first object can be used as a tool. 
The main contributions of the dissertation can be broadly summarized as follows: it demonstrates a method for autonomous self-detection in robots; it demonstrates a model for extendable robot body schema which can be used to achieve goal-oriented behaviors, including video-guided behaviors; it demonstrates a behavior-grounded method for learning the affordances of tools which can also be used to solve tool-using tasks.
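The self-detection step can be caricatured as a timing-correlation test between motor commands and observed feature motion; `classify_self`, the delay window, and the match threshold are invented for illustration and are not the dissertation's actual method.

```python
def classify_self(command_times, feature_moves, delay=0.5, min_match=0.8):
    """Efferent/afferent correlation test: a visual feature is labelled
    'self' when the fraction of its observed movements starting within
    `delay` seconds after some motor command reaches `min_match`."""
    labels = {}
    for name, moves in feature_moves.items():
        if not moves:
            labels[name] = False
            continue
        matched = sum(
            any(0.0 <= m - c <= delay for c in command_times) for m in moves
        )
        labels[name] = matched / len(moves) >= min_match
    return labels

commands = [1.0, 3.0, 5.0]          # times of motor commands
features = {
    "arm_marker": [1.2, 3.1, 5.3],  # moves right after every command
    "toy_car":    [0.2, 2.0, 4.1],  # moves on its own schedule
}
```

Features whose motion reliably follows the robot's own commands are attributed to the body, which is the "most consistent and predictable part of the environment" that the developmental sequence starts from.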
110

Worst-case robot navigation in deterministic environments

Mudgal, Apurva 02 December 2009 (has links)
We design and analyze algorithms for the following two robot navigation problems: 1. TARGET SEARCH. Given a robot located at a point s in the plane, how will the robot navigate to a goal t in the presence of unknown obstacles? 2. LOCALIZATION. A robot is "lost" in an environment with a map of its surroundings. How will it find its true location by traveling the minimum distance? Since efficient algorithms for these two problems will make a robot completely autonomous, they have held the interest of both the robotics and computer science communities. Previous work has focused mainly on designing competitive algorithms where the robot's performance is compared to that of an omniscient adversary. For example, a competitive algorithm for target search will compare the distance traveled by the robot with the shortest path from s to t. We analyze these problems from the worst-case perspective, which, in our view, is a more appropriate measure. Our results are: 1. For target search, we analyze an algorithm called Dynamic A* (D*). The robot continuously moves to the goal on the shortest path, which it recomputes on the discovery of obstacles. A variant of this algorithm has been employed in Mars Rover prototypes. We show that D* takes O(n log n) time on planar graphs and also show a comparable bound on arbitrary graphs. Thus, our results show that D* combines the optimistic possibility of reaching the goal very soon while competing with depth-first search within a logarithmic factor. 2. For the localization problem, worst-case analysis compares the performance of the robot with the optimal decision tree over the set of possible locations. No approximation algorithm had previously been known. We give a polylogarithmic approximation algorithm and also show a near-tight lower bound for the grid graphs commonly used in practice. The key idea is to plan travel on a "majority-rule map" which eliminates uncertainty and permits a link to the half-Group Steiner problem.
We also extend the problem to polygonal maps by discretizing the domain using novel geometric techniques.
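The replan-on-discovery behaviour analyzed above can be sketched as follows. Note that this naive version re-runs Dijkstra from scratch after every discovery, whereas the point of D* is incremental plan repair; the grid world and `navigate` are illustrative only.

```python
import heapq

def shortest_path(grid, start, goal):
    """Dijkstra on a 4-connected grid; cells marked 1 are known obstacles."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if d + 1 < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + 1
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (d + 1, (nr, nc)))
    return None

def navigate(true_grid, start, goal):
    """Replanning loop in the spirit of D*: plan on the currently known
    map, step along the plan, and replan whenever the next cell turns
    out to be an obstacle in the true (initially unknown) map."""
    known = [[0] * len(true_grid[0]) for _ in true_grid]
    pos, travelled = start, [start]
    while pos != goal:
        path = shortest_path(known, pos, goal)
        nxt = path[1]
        if true_grid[nxt[0]][nxt[1]] == 1:   # sensed an obstacle: update map
            known[nxt[0]][nxt[1]] = 1
            continue
        pos = nxt
        travelled.append(pos)
    return travelled
```

The robot optimistically assumes free space, commits to the current shortest path, and pays only for the obstacles it actually meets, which is the behaviour whose total cost the worst-case analysis bounds.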
