About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
821

Design and Optimization of a Compass Robot Subject to Stability Constraint

Keshavarzbagheri, Zohreh 2012 August 1900 (has links)
In the first part of this thesis, the design of a compass robot is explored by considering its components and their interactions with each other. Three components (the robot's structure, the gear train and the motor) interact during the design process to achieve better performance, higher stability and lower cost. In addition, the model of the system is refined by including the torque-velocity constraint of the motor. Adding this DC-motor constraint makes the interaction of the different components more complicated, since it affects both the gearing and the walking dynamics. After the design method is established, different actuators (motor, gear and batteries) are selected for a given structure and their performance is compared in terms of cost, efficiency and effect on walking stability. In the second part of the thesis, structural optimization of the compass robot under a stability constraint is investigated. The stability of the compass robot, as a hybrid system, is analyzed via its Poincaré map. Including this stability analysis in the optimization process makes it very complicated; in addition, the objective function has to be evaluated on the converged limit cycle. Different methods are examined to solve this problem, and enforcing limit-cycle convergence proves the best of the existing approaches. Adding a convergence constraint to the optimization not only keeps the stability analysis valid but also lets the optimization evaluate the correct objective function in each iteration. Finally, the optimization process is improved in two steps. The first step is using a predictive model of the stable domain in the optimization, so that the stability of walking need not be checked in each iteration. The Support Vector Domain Description (SVDD) approach, which is applied to establish the stable domain, decreases the optimization time.
The second important step is a computational algorithm which obtains the convergent limit cycle and its fixed point in a short time. This algorithm speeds up the optimization tremendously and allows the search to cover a broader area. Combining the SVDD approach with the Fixed-Point Finder algorithm improves the optimization both in time and in the breadth of the search area.
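The fixed-point search described above can be conveyed with a toy return map (an illustrative Python sketch only; the thesis's actual Poincaré map comes from the walking dynamics, and the map, tolerances and starting point below are assumptions):

```python
def fixed_point_finder(P, x0, tol=1e-10, max_iter=50, eps=1e-6):
    """Newton iteration for a fixed point x* = P(x*) of a return map P.
    A slope |P'(x*)| < 1 at the fixed point indicates a locally
    stable limit cycle."""
    x = x0
    for _ in range(max_iter):
        fx = P(x) - x
        if abs(fx) < tol:
            break
        dP = (P(x + eps) - P(x - eps)) / (2 * eps)  # finite-difference slope
        x -= fx / (dP - 1.0)                        # Newton step on P(x) - x
    dP = (P(x + eps) - P(x - eps)) / (2 * eps)
    return x, abs(dP) < 1.0

# Toy linear return map standing in for the stride-to-stride map;
# its fixed point is x* = 2 and it is contracting (slope 0.5).
P = lambda x: 0.5 * x + 1.0
x_star, stable = fixed_point_finder(P, x0=0.0)
```

Because Newton iteration converges quadratically near the fixed point, a search of this kind avoids simulating many strides until the gait settles, which is the speed-up the abstract refers to.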
822

A component-based layered abstraction model for software portability across autonomous mobile robots

Smith, Robert January 2005 (has links)
Today's autonomous robots come in a variety of shapes and sizes, from all-terrain vehicles clambering over rubble to robots the size of coffee cups zipping about a laboratory. The diversity of these robots is extraordinary; but so is the diversity of the software created to control them, even when the basic tasks many robots undertake are practically the same (such as obstacle detection, tracking, or path planning). It would be beneficial if some reuse of these coded sub-tasks could be achieved. However, most present-day robot software is monolithic, highly specialised and not at all modular, which hinders the reuse and sharing of code between robot platforms. One difficulty is that the hardware details of a robot are usually tightly woven into the high-level controllers. When these details are not decoupled and explicitly encapsulated, the entire code set must be revised if the robot platform changes. An even bigger challenge is that a robot is a context-aware device. Hence, the possible interpretations of the state of the robot and its environment vary along with its context. For example, as robots differ in size and shape, the meaning of concepts such as direction, speed, and distance can change: objects considered far from one robot might seem near to a much larger robot. When designing reusable robot software, these variable interpretations of the environment must be considered. Similarly, so must variations in context-dependent robot instructions; for example, 'move fast' has different meanings depending on the robot's size, environmental context and the task being undertaken. What is needed is a unifying cross-platform software engineering approach for robots that will encourage the development of code that is portable, modular and robust. 
Toward this end, this research presents a complete abstraction model and implementation prototype that contain a suite of techniques to form and manage the robot hardware, platform, and environment abstractions. The system includes the interfaces and software components required for hardware device and operating system abstractions; a 'virtual robot' layer to manage the robot's platform abstractions; and high-level abstraction components that are used to describe the state of the robot and its environment. The prototype is able to support binary code portability and dynamic code extensibility across a range of different robots (demonstrated on eight diverse robot platform configurations). These outcomes significantly ease the burden on robot software developers when deploying a new robot (or even reconfiguring old robots), since high-level binary controllers can be executed unchanged on different robots. Furthermore, since the control code is completely decoupled from the platform information, these concerns can be managed separately, thereby providing a flexible means for managing different configurations of robots. These systems and techniques all improve the robot software design, development, and deployment process.
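The 'virtual robot' idea can be illustrated in miniature (a hypothetical sketch, not the thesis's model; the class names, maximum speeds and interface below are invented for this example): a high-level controller written once against a platform-abstraction interface runs unchanged on differently sized robots, with context-dependent notions such as "fast" resolved by the platform layer.

```python
from abc import ABC, abstractmethod

class VirtualRobot(ABC):
    """Platform-abstraction layer: controllers see only this interface."""
    @abstractmethod
    def drive(self, speed_fraction: float) -> float:
        """Command a speed as a fraction of the platform's maximum;
        returns the concrete speed in m/s actually commanded."""

class TinyRobot(VirtualRobot):
    MAX_SPEED = 0.3  # m/s, hypothetical coffee-cup-sized platform
    def drive(self, speed_fraction):
        return self.MAX_SPEED * speed_fraction

class AllTerrainRobot(VirtualRobot):
    MAX_SPEED = 4.0  # m/s, hypothetical all-terrain platform
    def drive(self, speed_fraction):
        return self.MAX_SPEED * speed_fraction

def move_fast(robot: VirtualRobot) -> float:
    """High-level, platform-independent controller: 'fast' is a
    context-dependent abstraction, resolved by the platform layer."""
    return robot.drive(0.9)

# The same binary controller yields platform-appropriate behaviour.
speeds = [move_fast(TinyRobot()), move_fast(AllTerrainRobot())]
```

The controller never touches hardware details, so swapping the platform class is the only change needed when the robot changes.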
823

Real-Time Multi-Sensor Localisation and Mapping Algorithms for Mobile Robots

Matsumoto, Takeshi, takeshi.matsumoto@flinders.edu.au January 2010 (has links)
A mobile robot system provides a grounded platform for a wide variety of interactive systems to be developed and deployed. The mobility provided by the robot presents unique challenges, as it must observe the state of the surroundings while observing its own state with respect to the environment. The scope of the discipline includes mechanical and hardware issues, which limit and direct the capabilities of the software. The systems integrated into the mobile robot platform include both task-specific and fundamental modules that define the core behaviour of the robot. While the former can sometimes be developed separately and integrated at a later stage, the core modules are often custom designed early on to suit the individual robot system, depending on the configuration of the mechanical components. This thesis covers the issues encountered and the resolutions implemented during the development of a low-cost mobile robot platform using off-the-shelf sensors, with a particular focus on the algorithmic side of the system. The incrementally developed modules target the localisation and mapping aspects by incorporating a number of different sensors, gathering information about the surroundings from different perspectives and combining the measurements, simultaneously or sequentially, so that they disambiguate and support each other. Although there is a heavy focus on image processing techniques, the integration with the other sensors and the characteristics of the platform itself are included in the designs and analyses of the core and interactive modules. A visual odometry technique is implemented for the localisation module, which includes calibration processes, feature tracking, synchronisation between multiple sensors, and short- and long-term landmark identification, to calculate the relative pose of the robot in real time.
The mapping module considers the interpretation and the representation of sensor readings to simplify and hasten the interactions between multiple sensors, while selecting the appropriate attributes and characteristics to construct a multi-attributed model of the environment. The modules that are developed are applied to realistic indoor scenarios, which are taken into consideration in some of the algorithms to enhance the performance through known constraints. As the performance of algorithms depends significantly on the hardware, the environment, and the number of concurrently running sensors and modules, comparisons are made against various implementations that have been developed throughout the project.
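The pose-update step at the heart of a visual-odometry pipeline can be sketched generically (this is not the thesis's algorithm; the matched point sets and the 2D rigid-transform recovery below are assumptions for illustration): matched feature positions before and after motion yield the relative rotation and translation by least squares.

```python
import numpy as np

def relative_pose_2d(prev_pts, curr_pts):
    """Least-squares 2D rigid transform (rotation + translation) between
    two sets of matched feature points, via the SVD of the
    cross-covariance matrix (Kabsch-style alignment)."""
    p = np.asarray(prev_pts, float)
    q = np.asarray(curr_pts, float)
    pc, qc = p.mean(0), q.mean(0)
    H = (p - pc).T @ (q - qc)            # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[1] *= -1
        R = Vt.T @ U.T
    t = qc - R @ pc
    theta = np.arctan2(R[1, 0], R[0, 0]) # recovered heading change
    return theta, t

# Matched corners before/after a 30-degree turn plus a small translation.
ang = np.deg2rad(30)
Rtrue = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
pts = np.array([[0, 0], [1, 0], [0, 1], [2, 2]], float)
moved = pts @ Rtrue.T + np.array([0.5, -0.2])
theta, t = relative_pose_2d(pts, moved)
```

Accumulating these per-frame increments gives the robot's relative pose over time, which is what an odometry module feeds to the mapping layer.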
824

Spatial Language for Mobile Robots: The Formation and Generative Grounding of Toponyms

Ms Ruth Schulz Unknown Date (has links)
No description available.
825

Automated assembly of industrial transformer cores utilising dual cooperating mobile robots bearing a common electromagnetic gripper

Postma, Bradley Theodore, b.postma@cullens.com.au January 2000 (has links)
Automation of the industrial transformer core assembly process is highly desirable. A survey undertaken by the author, however, revealed that due to the high cost of existing fully automated systems, Australian manufacturers producing low to medium transformer volumes continue to maintain a manual construction approach. The conceptual design of a cost-effective automation system for core assembly from pre-cut lamination stacks was consequently undertaken. The major hurdle for automating the existing manual process was identified as the difficulty of reliably handling and accurately positioning the constituent core laminations, which number in the thousands, during transformer core construction. Technical evaluation of the proposed pick-and-place core assembly system, incorporating two mobile robots bearing a common gripper, is presented herein to address these requirements. A unique robotic gripper, having the capability to selectively pick a given number of steel laminations (typically two or three) concurrently from a stack, has the potential to significantly increase productivity. The only available avenue for picking multiple laminations was deemed to be a gripper based on magnetism. Closed-form analytical and finite-element models for an electromagnet-stack system were developed and their force distributions obtained. The theoretical findings were validated by experiment using a specially constructed prototype. Critical parameters for reliably lifting the required number of laminations were identified, and a full-scale electromagnet that overcame the inherent suction forces present in the stack during picking was subsequently developed. A mechanical docking arrangement is envisaged that will ensure precise lamination placement. Owing to the gripper's unwieldy length, however, conventional robots cannot be used for assembling larger cores.
Two wheeled mobile robots (WMRs) compliantly coupled to either end of the gripper could be considered, although a review of the current literature revealed the absence of a suitable controller. Dynamic modelling for a single WMR was therefore undertaken and later expanded upon for the dual-WMR system conceived. Nonlinear adaptive controllers for both WMR systems were developed and subsequently investigated via simulation. Neglecting the systems' dynamics resulted in analogous, simplified kinematic control schemes that were verified experimentally using prototypes. Additional cooperative control laws ensuring the synchronisation of the two robots were also implemented on the prototype system.
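A simplified kinematic control scheme of the kind mentioned above might look as follows for a single WMR (an illustrative unicycle-model sketch with invented gains and time step; not the controllers developed in the thesis):

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """Kinematic (dynamics-neglected) model of a wheeled mobile robot."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def go_to_goal(x, y, theta, gx, gy, k_v=1.0, k_w=3.0):
    """Proportional kinematic control law: drive toward a goal point,
    steering to cancel the heading error."""
    dist = math.hypot(gx - x, gy - y)
    heading_err = math.atan2(gy - y, gx - x) - theta
    # wrap the error into (-pi, pi] so the robot turns the short way
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    return k_v * dist, k_w * heading_err  # (v, omega)

x, y, th = 0.0, 0.0, 0.0
for _ in range(2000):                     # 20 s of simulated motion
    v, w = go_to_goal(x, y, th, 1.0, 1.0)
    x, y, th = unicycle_step(x, y, th, v, w, dt=0.01)
final_dist = math.hypot(1.0 - x, 1.0 - y)
```

Per-robot schemes like this become the building block; the cooperative laws the abstract mentions would additionally synchronise two such controllers so the shared gripper is not stressed.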
826

Vision-based navigation and decentralized control of mobile robots.

Low, May Peng Emily, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
The first part of this thesis documents an experimental investigation into the use of vision for wheeled-robot navigation problems; specifically, using a video camera as a source of feedback to control a wheeled robot toward a static or a moving object in an environment in real time. The wheeled-robot control algorithms depend on information from a vision system and an estimator. The vision system consists of a pan video camera and a visual-gaze algorithm which attempts to find an object of interest and continuously maintain it within the camera's limited field of view. Several vision-based algorithms are presented to recognize simple objects of interest in an environment and to calculate relevant parameters required by the control algorithms. An estimator is designed for state estimation of the motion of an object using visual measurements. The estimator uses noisy measurements of the relative bearing to an object and the object's size on an image plane formed by perspective projection; these measurements can be obtained from the vision system. A set of algorithms has been designed and experimentally investigated using a pan video camera and two wheeled robots in real time in a laboratory setting. Experimental results and discussion are presented on the performance of the vision-based control algorithms, where a wheeled robot successfully approached an object in various motions. The second part of this thesis investigates the coordination problem of flocking in multi-robot systems using concepts from graph theory. New control laws are presented for the flocking motion of groups of mobile robots based on several leaders. Simulation results are provided to illustrate the control laws and their applications.
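The perspective-projection relationship the estimator exploits can be shown with a pinhole-camera sketch (hypothetical focal length and object dimensions; this is not the thesis's estimator, which additionally filters the noisy measurements): an object of known physical width appears with pixel width inversely proportional to its range, and bearing plus range locate it in the robot frame.

```python
import math

def range_from_apparent_size(focal_px, true_width_m, width_px):
    """Pinhole-camera range estimate: pixel width of a known-size object
    scales as focal_length * true_width / range."""
    return focal_px * true_width_m / width_px

def target_position(bearing_rad, range_m):
    """Convert a (bearing, range) pair into robot-frame coordinates."""
    return (range_m * math.cos(bearing_rad), range_m * math.sin(bearing_rad))

# Hypothetical numbers: 500 px focal length, 0.2 m wide object seen 50 px wide.
r = range_from_apparent_size(500.0, 0.2, 50.0)
x, y = target_position(math.radians(30), r)
```

In practice both the bearing and the apparent size are noisy, which is why the thesis feeds them through an estimator rather than using them raw.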
827

Topics in navigation and guidance of wheeled robots

Teimoori Sangani, Hamid, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2009 (has links)
Navigation and guidance of mobile robots towards steady or maneuvering objects (targets) is one of the most important areas of robotics and has attracted a lot of attention in recent decades. However, in most of the existing methods, both the line-of-sight angle (bearing) and the relative distance (range) are assumed to be available to the navigation and guidance algorithms. There is also a relatively large body of research on navigation and guidance with bearings-only measurements. In contrast, only a few results on navigation and guidance towards an unknown target using range-only measurements have been published. Various problems of navigation, guidance, location estimation and target tracking based on range-only measurements often arise in new wireless-network applications. Recent advances in these applications allow us to use inexpensive transponders and receivers for range-only measurements, which provide information in dynamic and noisy environments without the necessity of line-of-sight. To take advantage of these sensors, algorithms must be developed for range-only navigation. The main part of this thesis is concerned with the problem of real-time navigation and guidance of Wheeled Mobile Robots (WMRs) towards an unknown stationary or moving target using range-only measurements. The range can be estimated using signal strength and robust extended Kalman filtering. Several similar algorithms for navigation and guidance, termed Equiangular Navigation and Guidance (ENG) laws, are proposed, and mathematically rigorous proofs of convergence and stability of the proposed guidance laws are given. The experimental investigation into the use of range data for WMR navigation is documented, and results and discussion on the performance of the proposed guidance strategies are presented, where a wheeled robot successfully approaches a stationary target or follows a maneuvering one.
In order to safely navigate and reliably operate in populated environments, ENG is then modified into Augmented-ENG (AENG), which enables the robot to approach a stationary target or follow an unpredictable maneuvering object in an unknown environment, while keeping a safe distance from the target and simultaneously preserving a safety margin from the obstacles. Furthermore, we propose and experimentally investigate a new biologically inspired method for local obstacle avoidance and give a mathematically rigorous proof of the idea. In order for the robot to avoid collision and bypass the en-route obstacles in this method, the angle between the instantaneous moving direction of the robot and a reference point on the surface of the obstacle is kept constant. The proposed idea is combined with the ENG law, which leads to reliable and fast long-range navigation. The performance of both the navigation strategy and the local obstacle-avoidance technique is confirmed with computer simulations and several experiments with ActivMedia Pioneer 3-DX wheeled robots. The second part of the thesis investigates some challenging problems in the area of wheeled-robot navigation. We first address the problem of bearing-only guidance of an autonomous vehicle following a moving target with a smaller minimum turning radius than the follower's, and propose a simple and constructive navigation law. In line with the increasing research on decentralized control laws for groups of mobile autonomous robots, we consider the problems of decentralized navigation of networks of WMRs with limited communication and decentralized stabilization of formations of WMRs. New control laws are presented and simulation results are provided to illustrate the control laws and their applications.
829

Cognitive inspired mapping by an autonomous mobile robot

Wong, Chee Kit January 2008 (has links)
When animals explore a new environment, they do not acquire a precise map of the places visited. In fact, research has shown that learning is a recurring process: over time, new information helps the animal update its perception of the locations it has visited. Yet animals are still able to use this fuzzy and often incomplete representation to find their way home. This process has been termed the cognitive mapping process. The work presented in this thesis uses a mobile robot equipped with sonar sensors to investigate the nature of such a process; specifically, what information is fundamental and prevalent in spatial navigation? Initially, the robot is instructed to compute a "cognitive map" of its environment. Since a robot is not a cognitive agent, it cannot, by definition, compute a cognitive map; hence the robot is used as a test bed for understanding the cognitive mapping process. Yeap's (1988) theory of cognitive mapping forms the foundation for computing the robot's representation of the places it has visited. He argued that a network of local spaces is computed early in the cognitive mapping process, and coined the term Absolute Space Representations (ASRs) for these local spaces. However, computing ASRs is not just a matter of partitioning the environment into smaller local regions: an ASR describes the bounded space one is in, how one could leave that space (its exits), and how the exits serve to link the ASRs into a network that constitutes the cognitive map (see Jefferies (1999)). Like the animal's cognitive map, ASRs are not precise geometrical maps of the environment but rather provide a rough shape or feel of the space the robot is currently in. Once the robot computes its "cognitive map", it is then, like foraging and hoarding animals, instructed to find its way home. To do so, the robot uses two crucial pieces of information: the distance between exits of ASRs and the relative orientation of adjacent ASRs.
A simple animal-like strategy was implemented for the robot to locate home. Results from the experiments demonstrated the robot's ability to determine its location within the visited environment along its journey; this task was performed without the use of an accurate map. From these results and from reviews of findings on cognitive mapping in various animals, we deduce the following: different animals have different sensing capabilities; they live in different environments and therefore face unique challenges; consequently, they have evolved different navigational strategies. However, we believe two crucial pieces of information are inherent in all animals and form the fundamentals of navigation: distance and orientation. Higher-level animals may encode, and may even prefer, richer information to enhance their cognitive maps. Nonetheless, distance and orientation will always be computed as a core process of cognitive mapping. We believe this insight will help future research to better understand the complex nature of cognitive mapping.
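The claim that distance and orientation suffice for homing can be illustrated with a small dead-reckoning sketch (the journey data are hypothetical; this is not the robot's actual ASR-based strategy): integrating (distance, relative turn) pairs between exits yields a single straight-line vector back to the start.

```python
import math

def home_vector(legs):
    """Integrate (distance, relative turn) pairs -- the two pieces of
    information argued to be fundamental -- into a straight-line vector
    back to the starting point."""
    x = y = heading = 0.0
    for distance, turn in legs:
        heading += turn                    # orientation relative to last leg
        x += distance * math.cos(heading)  # accumulate displacement
        y += distance * math.sin(heading)
    return -x, -y, math.hypot(x, y)        # direction home and how far

# Hypothetical journey through three local spaces (ASRs):
# 4 m straight, then 3 m after a 90-degree left turn, then 2 m after another.
legs = [(4.0, 0.0), (3.0, math.radians(90)), (2.0, math.radians(90))]
hx, hy, dist = home_vector(legs)
```

Note that no geometric map is stored, only per-leg distances and turns, which mirrors how the robot homes without an accurate map.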
830

Visual guidance of robot motion

Gu, Lifang January 1996 (has links)
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach whereby a robot can extract and replicate a motion by observing how a human instructor conducts it. In this way, the robot can be taught without any explicit instructions, and the human instructor needs no expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sense with which a human interacts with the surrounding world; therefore two cameras are used to capture the image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented and a new corner-tracking algorithm is proposed based on smooth-motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained, and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and in the similarity between two motions, the task of the second part of our system is to extract motion characteristics and transfer them to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by least-squares fitting, given some data points on the curve.
Therefore a parametric cubic B-spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration, the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, even though the absolute motion trajectories of the robot and the instructor can differ. All the above modules have been integrated, and the results of an experiment with the whole system show that the approach proposed in this thesis can extract motion characteristics and transfer them to a robot. A robot arm has successfully replicated a human arm movement with similar shape characteristics using our approach. In conclusion, such a system collects human skills and intelligence through vision and transfers them to the robot; a robot with such a system can therefore interact with its environment and learn by observation.
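The fitting step can be sketched with a uniform cubic B-spline and ordinary least squares (an illustrative sketch: the knot placement, parameterisation and sample trajectory are assumptions, not the thesis's exact procedure). The collocation matrix of basis weights is built at each sample parameter, and the control points fall out of a linear least-squares solve.

```python
import numpy as np

def bspline_basis_row(u, n_ctrl):
    """Row of uniform cubic B-spline basis weights at parameter
    u in [0, n_ctrl - 3]; only four control points influence each point."""
    n_seg = n_ctrl - 3
    i = min(int(u), n_seg - 1)            # segment index
    t = u - i                             # local parameter in [0, 1]
    row = np.zeros(n_ctrl)
    row[i:i + 4] = [(1 - t) ** 3 / 6,
                    (3 * t ** 3 - 6 * t ** 2 + 4) / 6,
                    (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6,
                    t ** 3 / 6]
    return row

def fit_bspline(points, n_ctrl):
    """Least-squares control points of a uniform cubic B-spline through
    (possibly noisy) trajectory samples."""
    u = np.linspace(0, n_ctrl - 3, len(points))
    A = np.vstack([bspline_basis_row(ui, n_ctrl) for ui in u])
    ctrl, *_ = np.linalg.lstsq(A, np.asarray(points, float), rcond=None)
    return ctrl, A @ ctrl                 # control points, fitted curve

# Hypothetical 2D motion samples along a smooth arc.
s = np.linspace(0, np.pi, 40)
traj = np.column_stack([np.cos(s), np.sin(s)])
ctrl, fitted = fit_bspline(traj, n_ctrl=8)
residual = np.max(np.linalg.norm(fitted - traj, axis=1))
```

Because the curve is linear in its control points, scaling, translating, or rotating the control points transforms the whole trajectory in the same way, which is exactly the property the retargeting step relies on.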
