381

Design and Implementation of an Ionic-Polymer-Metal-Composite Biomimetic Robot

Chang, Yi-Chu 03 October 2013 (has links)
Ionic polymer metal composite (IPMC) is used in various bio-inspired systems, such as fish- and tadpole-like robots swimming in water. The deflection of this smart material results from several internal and external factors, such as water distribution and surface conductivity. IPMC strips with varying surface water concentrations and surface conductivities show different deflection patterns; even without any external excitation, the strips can bend due to non-uniform water distribution. To understand the effects of surface conductivity in an aquatic environment, an IPMC strip with two wires connected to two distinct spots was used to demonstrate the power loss due to surface resistance. Three types of input signal, sawtooth, sinusoidal, and square waves, were used to compare the input and output signals measured at the two spots. Thick (1-mm) IPMC strips were fabricated and employed in this research to sustain and drive the robot with sufficient force. Furthermore, to predict and control the deflection, appropriate mathematical models were developed. The working principle, which involves mobile internal cations carrying water molecules, makes the system difficult to model and simulate. An IPMC strip can be modeled as a cantilever beam with a loading distribution on its surface. Because the loading distribution is non-uniform due to imperfect surface metallic plating, four different kinds of imaginary loading distribution are employed in this model. In addition, a reverse-prediction method is used to identify the transfer function of the IPMC system from the measured deflection and the corresponding input voltage.
Several system-identification structures, such as autoregressive moving average with exogenous input (ARX/ARMAX), output-error (OE), Box-Jenkins (BJ), and prediction-error minimization (PEM) models, are used to model the system according to their specific mathematical principles. A novel linear time-variant (LTV) concept and method is then introduced and applied to simulate an IPMC system. This model differs from the previous linear time-invariant (LTI) models because the internal environment of the IPMC, such as free cations carrying water molecules, may be unsteady; this phenomenon causes each internal part to vary. In addition, the relationship between the thickness of IPMC strips and their deflection can be obtained with this concept. Finally, based on the experimental results above, an aquatic walking robot (102 mm × 80 mm × 43 mm, 39 g) with six 2-degree-of-freedom (2-DOF) legs has been designed and implemented. It walked in water at a speed of 0.5 mm/s, with an average power consumption of 8 W per leg. Each leg has a thigh and a shank to generate 2-DOF motions, and each set of three legs walked together as a tripod to maintain stability during operation.
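The system-identification step described in this abstract, fitting a discrete input/output model to measured voltage and deflection data, can be sketched as a least-squares ARX fit. This is a generic illustration on invented synthetic data, not the thesis's code; `fit_arx` and the first-order test system are assumptions made for the example:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model:
    y[k] = sum_i a_i * y[k-i] + sum_j b_j * u[k-j]."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        # regressor: most recent outputs and inputs first
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

# Synthetic first-order "deflection" response to a voltage-like input
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]

a, b = fit_arx(u, y, na=1, nb=1)
print(a, b)  # recovers coefficients near 0.9 and 0.5
```

The same regression structure extends to the ARMAX, OE, and BJ model classes mentioned above by changing how the noise term enters the predictor.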
382

Automatic coordination and deployment of multi-robot systems

Smith, Brian Stephen 31 March 2009 (has links)
We present automatic tools for configuring and deploying multi-robot networks of decentralized, mobile robots. These methods are tailored to the decentralized nature of the multi-robot network and the limited information available to each robot. We present methods for determining if user-defined network tasks are feasible or infeasible for the network, considering the limited range of its sensors. To this end, we define rigid and persistent feasibility and present necessary and sufficient conditions (along with corresponding algorithms) for determining the feasibility of arbitrary, user-defined deployments. Control laws for moving multi-robot networks in acyclic, persistent formations are defined. We also present novel Embedded Graph Grammar Systems (EGGs) for coordinating and deploying the network. These methods exploit graph representations of the network, as well as graph-based rules that dictate how robots coordinate their control. Automatic systems are defined that allow the robots to assemble arbitrary, user-defined formations without any reliance on localization. Further, this system is augmented to deploy these formations at the user-defined, global location in the environment, despite limited localization of the network. The culmination of this research is an intuitive software program with a Graphical User Interface (GUI) and a satellite image map which allows users to enter the desired locations of sensors. The automatic tools presented here automatically configure an actual multi-robot network to deploy and execute user-defined network tasks.
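The formation-assembly idea without localization can be sketched with a simple decentralized displacement-based control law, in which each robot uses only relative positions of its graph neighbours. This is a generic illustration, not the thesis's Embedded Graph Grammar machinery; the function and data names are invented:

```python
import numpy as np

def formation_step(x, desired, adj, gain=0.5):
    """One decentralized update: each robot i moves to reduce the error
    between its actual and desired displacements to its neighbours."""
    x_new = x.copy()
    for i, nbrs in enumerate(adj):
        u = np.zeros(2)
        for j in nbrs:
            # relative-position error; needs no global localization
            u += (x[j] - x[i]) - (desired[j] - desired[i])
        x_new[i] = x[i] + gain * u
    return x_new

# Three robots asked to form a triangle (only relative positions matter)
desired = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
adj = [[1, 2], [0, 2], [0, 1]]          # complete sensing graph
x = np.array([[0.3, 0.2], [0.1, -0.4], [0.9, 0.5]])
for _ in range(100):
    x = formation_step(x, desired, adj)
print(x[1] - x[0], x[2] - x[0])          # converges to the desired displacements
```

Convergence here relies on the sensing graph being connected and the gain being small enough; the rigidity and persistence conditions in the abstract address exactly when such relative-information deployments are feasible.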
383

An investigation of hybrid maps for mobile robots

Buschka, Pär January 2005 (has links)
Autonomous robots typically rely on internal representations of the environment, or maps, to plan and execute their tasks. Several types of maps have been proposed in the literature, and there is general consensus that different types have different advantages and limitations, and that each type is more suited to certain tasks and less to others. For these reasons, it is becoming common wisdom in the field of mobile robotics to use hybrid maps that integrate several representations, usually of different types. Hybrid maps provide scalability and multiple views, allowing, for instance, the combination of robot-centered and human-centered representations. There is, however, little understanding of the general principles that can be used to combine different maps into a hybrid one, and to make it something more than the sum of its parts. There is no systematic analysis of the different ways in which different maps can be combined and made to cooperate. This makes it difficult to evaluate and compare different systems, and prevents us from gaining a clear understanding of how a hybrid map can be designed or improved. The investigation presented in this thesis aims to help fill this foundational gap and to reach a clearer understanding of the nature of hybrid maps. To aid this investigation, we develop two tools: the first is a conceptual tool, an analytical framework in which the main ingredients of a hybrid map are described; the second is an empirical tool, a new hybrid map that allows us to experimentally verify our claims and hypotheses.
While these tools are themselves important contributions of this thesis, our investigation has resulted in the following additional outcomes:
• A set of concepts that allow us to better understand the structure and operation of hybrid maps, and that help us to design them, compare them, identify their problems, and possibly improve them;
• The identification of the notion of synergy as the fundamental way in which component maps inside a hybrid map cooperate.
To assess the significance of these outcomes, we make and validate the following claims:
1. Our framework allows us to classify and describe existing maps in a uniform way. This claim is validated constructively by making a thorough classification of the hybrid maps reported in the literature.
2. Our framework also allows us to enhance an existing hybrid map by identifying spots for improvement. This claim is verified experimentally by modifying an existing map and evaluating its performance against the original one.
3. The notion of synergy plays an important role in hybrid maps. This claim is verified experimentally by testing the performance of a hybrid map with and without synergy.
384

Cooperative and intelligent control of multi-robot systems using machine learning

Wang, Ying 05 1900 (has links)
This thesis investigates cooperative and intelligent control of autonomous multi-robot systems in a dynamic, unstructured and unknown environment and makes significant original contributions with regard to self-deterministic learning for robot cooperation, evolutionary optimization of robotic actions, improvement of system robustness, vision-based object tracking, and real-time performance. A distributed multi-robot architecture is developed which will facilitate operation of a cooperative multi-robot system in a dynamic and unknown environment in a self-improving, robust, and real-time manner. It is a fully distributed and hierarchical architecture with three levels. By combining several popular AI, soft computing, and control techniques such as learning, planning, reactive paradigm, optimization, and hybrid control, the developed architecture is expected to facilitate effective autonomous operation of cooperative multi-robot systems in a dynamically changing, unknown, and unstructured environment. A machine learning technique is incorporated into the developed multi-robot system for self-deterministic and self-improving cooperation and coping with uncertainties in the environment. A modified Q-learning algorithm termed Sequential Q-learning with Kalman Filtering (SQKF) is developed in the thesis, which can provide fast multi-robot learning. By arranging the robots to learn according to a predefined sequence, modeling the effect of the actions of other robots in the work environment as Gaussian white noise and estimating this noise online with a Kalman filter, the SQKF algorithm seeks to solve several key problems in multi-robot learning. As a part of low-level sensing and control in the proposed multi-robot architecture, a fast computer vision algorithm for color-blob tracking is developed to track multiple moving objects in the environment. 
By removing the brightness and saturation information in an image and filtering unrelated information based on statistical features and domain knowledge, the algorithm solves the problems of uneven illumination in the environment and improves real-time performance.
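The hue-only idea described here, discarding brightness and saturation so that a colour threshold survives uneven illumination, can be sketched as follows. This is a minimal illustration with an invented test image, not the thesis's algorithm; `track_blob` is an assumed name:

```python
import numpy as np

def track_blob(rgb, h_lo, h_hi):
    """Centroid of pixels whose hue lies in [h_lo, h_hi].
    Brightness and saturation are discarded, so the same hue threshold
    works under uneven illumination; achromatic (grey) pixels carry no
    hue and are excluded."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    chromatic = mx > mn
    d = np.where(chromatic, mx - mn, 1.0)    # avoid divide-by-zero on grey
    h = np.where(mx == r, (g - b) / d,
        np.where(mx == g, 2.0 + (b - r) / d, 4.0 + (r - g) / d))
    h = (h / 6.0) % 1.0                       # hue in [0, 1)
    mask = chromatic & (h >= h_lo) & (h <= h_hi)
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean()) if len(xs) else None

# A dark red square on a bright grey background: hue still isolates it
img = np.full((40, 40, 3), 0.8)
img[10:20, 25:35] = [0.3, 0.05, 0.05]
print(track_blob(img, 0.0, 0.05))  # centroid near (29.5, 14.5)
```

The statistical and domain-knowledge filtering mentioned in the abstract would then prune the masked pixels further before computing blob centroids.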
385

Modeling a Real Time Operating System Using SpecC

Nukala, Akilesh Unknown Date (has links)
In today's digital world, people's desire for electronic goods that ease their lives at work and leisure is increasing the complexity of the products of the embedded systems industry; examples include MP3 players for listening to music and cell phones for communicating with people. The gap between the hardware and software parts of embedded systems is being reduced by the use of System Level Design Languages (SLDLs) that can model both hardware and software simultaneously. One such SLDL is SpecC. In this thesis, a SpecC model of a Real Time Operating System (RTOS) is constructed, and it is shown how RTOS features can be incorporated into a SpecC model. The model is used to develop an application in which a robot avoids obstacles to reach its destination. The RTOS model operates similarly to the actual RTOS in the robot. The application includes a testbench model for the robot, with features such as interrupts, sonar sensors, and wheel pulses, so that its operation closely resembles that of the actual robot. The sensor model is programmed to generate values from the four sensor receivers, mimicking the behaviour of the sensors on the actual robot. The wheel pulses and associated interrupts are also programmed in the model so that they resemble those present on the actual robot.
386

Efficient Solutions to Autonomous Mapping and Navigation Problems

Williams, Stefan Bernard January 2002 (has links)
This thesis deals with the Simultaneous Localisation and Mapping algorithm as it pertains to the deployment of mobile systems in unknown environments. Simultaneous Localisation and Mapping (SLAM), as defined in this thesis, is the process of concurrently building up a map of the environment and using this map to obtain improved estimates of the location of the vehicle. In essence, the vehicle relies on its ability to extract useful navigation information from the data returned by its sensors. The vehicle typically starts at an unknown location with no a priori knowledge of landmark locations. From relative observations of landmarks, it simultaneously computes an estimate of vehicle location and an estimate of landmark locations. While continuing in motion, the vehicle builds a complete map of landmarks and uses these to provide continuous estimates of the vehicle location. The potential of this type of navigation system for autonomous systems operating in unknown environments is enormous. One significant obstacle on the road to the implementation and deployment of large-scale SLAM algorithms is the computational effort required to maintain the correlation information between features in the map and between the features and the vehicle. Performing the update of the covariance matrix is O(n³) for a straightforward implementation of the Kalman Filter. In the case of the SLAM algorithm, this complexity can be reduced to O(n²) given the sparse nature of typical observations. Even so, this implies that the computational effort will grow with the square of the number of features maintained in the map. For maps containing more than a few tens of features, this computational burden will quickly make the update intractable, especially if the observation rates are high. An effective map-management technique is therefore required in order to help manage this complexity.
The major contributions of this thesis arise from the formulation of a new approach to the mapping of terrain features that provides improved computational efficiency in the SLAM algorithm. Rather than incorporating every observation directly into the global map of the environment, the Constrained Local Submap Filter (CLSF) relies on creating an independent, local submap of the features in the immediate vicinity of the vehicle. This local submap is then periodically fused into the global map of the environment. This representation is shown to reduce the computational complexity of maintaining the global map estimates as well as improving the data association process by allowing the association decisions to be deferred until an improved local picture of the environment is available. This approach also lends itself well to three natural extensions to the representation that are also outlined in the thesis. These include the prospect of deploying multi-vehicle SLAM, the Constrained Relative Submap Filter and a novel feature initialisation technique. Results of this work are presented both in simulation and using real data collected during deployment of a submersible vehicle equipped with scanning sonar.
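The complexity argument in this abstract refers to the standard EKF covariance update, which can be sketched as follows. This is a generic illustration of that update, not the Constrained Local Submap Filter; the toy map dimensions and numbers are invented:

```python
import numpy as np

def ekf_update(P, H, R):
    """Covariance update P <- (I - K H) P for one observation.
    With a sparse H (only a few features observed), forming P @ H.T is
    cheap, but the correction K @ (H @ P) still touches every entry of
    P, so each update costs O(n^2) in the map size n."""
    S = H @ P @ H.T + R                   # innovation covariance (m x m)
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain (n x m)
    return P - K @ H @ P

# A 2D vehicle state plus four 2D features: state dimension n = 10
n = 10
P = np.eye(n)
P[0, 2] = P[2, 0] = 0.5                   # vehicle-feature correlation
H = np.zeros((2, n))
H[:, 2:4] = np.eye(2)                     # observe the first feature only
R = np.eye(2) * 0.1
P_new = ekf_update(P, H, R)
print(P_new[2, 2], P_new[0, 0])           # both variances shrink
```

Note that the correlated vehicle entry improves even though only the feature was observed; it is exactly this dense correlation structure that the local-submap approach defers fusing into the global map.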
387

Feature-based stereo vision on a mobile platform

Huynh, Du Quan January 1994 (has links)
It is commonly known that stereopsis is the primary way for humans to perceive depth. Although, with one eye, we can still interact very well with our environment and do highly skillful tasks by using other visual cues such as occlusion and motion, the resultant effect of the absence of stereopsis is that the relative depth information between objects is essentially lost (Frisby, 1979). While humans fuse the images seen by the left and right eyes in a seemingly easy way, the major problem that needs to be solved in all binocular stereo systems of machine vision, the correspondence of features, is not trivial. In this thesis, line segments and corners are chosen as the features to be matched because they typically occur at object boundaries, surface discontinuities, and across surface markings. Polygonal regions are also selected since they are known to be well-configured and are, very often, associated with salient structures in the image. The use of these high-level features, although helping to diminish matching ambiguities, does not completely resolve the matching problem when the scene contains repetitive structures. The spatial relationships between feature matching pairs enforced in the stereo matching process, as proposed in this thesis, are found to provide even stronger support for correct feature matching pairs and, as a result, incorrect matching pairs can be largely eliminated. Obtaining global and salient 3D structures has been an important prerequisite for environmental modelling and understanding. While research on postprocessing the 3D information obtained from stereo has been attempted (Ayache and Faugeras, 1991), the strategy presented in this thesis for retrieving salient 3D descriptions is to propagate the prominent information extracted from the 2D images to the 3D scene.
Thus, the matching of two prominent 2D polygonal regions yields a prominent 3D region, and the inter-relation between two 2D region matching pairs is passed on and taken as a relationship between two 3D regions. Humans, when observing and interacting with the environment, do not confine themselves to the observation and analysis of a single image. Similarly, stereopsis can be vastly improved with the introduction of additional stereo image pairs. Eye, head, and body movements provide essential mobility for an active change of viewpoints, the disocclusion of occluded objects, the avoidance of obstacles, and the performance of any necessary tasks at hand. This thesis presents a mobile stereo vision system that has its eye movements provided by a binocular head support and stepper motors, and its body movements provided by a mobile platform, the Labmate. With a viewer-centred coordinate system proposed in this thesis, the computation of the 3D information observed at each individual viewpoint, the merging of the 3D information at consecutive viewpoints for environmental reconstruction, and strategies for movement control are discussed in detail.
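Once a feature pair is matched, recovering its depth in a rectified binocular setup follows the standard disparity relation. This is a textbook sketch, not the thesis's line/corner matcher; the focal length and baseline values are invented:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Rectified binocular stereo: depth Z = f * B / d, where the
    disparity d = x_left - x_right is the horizontal shift of a
    matched feature between the two images."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / d

# A corner matched at column 320 (left image) and 300 (right image),
# with an assumed focal length of 800 px and a 0.12 m baseline:
print(depth_from_disparity(320, 300, 800, 0.12))  # 4.8 (metres)
```

The relation also makes the correspondence problem's stakes concrete: a match that is wrong by even a few pixels of disparity shifts the recovered depth substantially, which is why the thesis enforces spatial relationships between matching pairs.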
388

Surface modelling and surface following for robots equipped with range sensors

Pudney, Christopher John January 1994 (has links)
The construction of surface models from sensor data is an important part of perceptive robotics. When the sensor data are obtained from fixed sensors, the problem of occlusion arises. To overcome occlusion, sensors may be mounted on a robot that moves the sensors over the surface. In this thesis the sensors are single-point range finders. The range finders provide a set of sensor points, that is, the surface points detected by the sensors. The sets of sensor points obtained during the robot's motion are used to construct a surface model. The surface model is used in turn in the computation of the robot's motion, so surface modelling is performed on-line, that is, the surface model is constructed incrementally from the sensor points as they are obtained. A planar polyhedral surface model is used that is amenable to incremental surface modelling. The surface model consists of a set of model segments, where a neighbour relation allows model segments to share edges. Also, sets of adjacent shared edges may form corner vertices. Techniques are presented for incrementally updating the surface model using sets of sensor points. Various model segment operations are employed to do this: model segments may be merged, fissures in model segment perimeters are filled, and shared edges and corner vertices may be formed. Details of these model segment operations are presented. The robot's control point is moved over the surface model at a fixed distance. This keeps the sensors around the control point within sensing range of the surface, and keeps the control point from colliding with the surface. The remainder of the robot body is kept from colliding with the surface by using redundant degrees-of-freedom. The goal of surface modelling and surface following is to model as much of the surface as possible. The incomplete parts of the surface model (non-shared edges) indicate where sections of surface that have not been exposed to the robot's sensors lie.
The direction of the robot's motion is chosen such that the robot's control point is directed to non-shared edges, and then over the unexposed surface near the edge. These techniques have been implemented, and results are presented for a variety of simulated robots combined with real range sensor data.
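Holding the control point at a fixed distance from the modelled surface can be sketched by fitting a local plane to nearby sensor points and offsetting along its normal. This is a minimal illustration on invented data; the thesis's planar-segment machinery is considerably more elaborate, and `standoff_point` is an assumed name:

```python
import numpy as np

def standoff_point(sensor_pts, distance):
    """Fit a plane to local sensor points (total least squares via SVD)
    and return a control point held at `distance` along the plane
    normal, away from the surface."""
    c = sensor_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(sensor_pts - c)
    normal = vt[-1]                        # direction of least variance
    if normal[2] < 0:                      # orient the normal upward
        normal = -normal
    return c + distance * normal

# Sensor points sampled on the plane z = 0; hold the control point
# 0.05 m above the fitted surface patch
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
print(standoff_point(pts, 0.05))           # near [0.5, 0.5, 0.05]
```

Recomputing this standoff as new sensor points arrive is what keeps the sensors in range while the control point tracks the surface.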
389

The application of multidimensional scaling to a robotic vision model of space perception /

Chuang, Ming-Chuen. January 1988 (has links)
Thesis (Ph.D.)--Tufts University, 1988. / Submitted to the Dept. of Engineering Design. Includes bibliographical references. Access restricted to members of the Tufts University community. Also available via the World Wide Web.
390

Vision based leader-follower formation control for mobile robots

Sequeira, Gerard, January 2007 (has links) (PDF)
Thesis (M.S.)--University of Missouri--Rolla, 2007. / Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed February 13, 2008). Includes bibliographical references (p. 39-41).
