About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Search Methods for Mobile Manipulator Performance Measurement

Amoako-Frimpong, Samuel 10 August 2018 (has links)
Mobile manipulators are a potential solution to the increasing need for additional flexibility and mobility in industrial robotics applications. However, they tend to lack the accuracy and precision achieved by fixed manipulators, especially in scenarios where both the manipulator and the autonomous vehicle move simultaneously. This thesis analyzes the problem of dynamically evaluating the positioning error of mobile manipulators. In particular, it investigates the use of Bayesian methods to predict the position of the end-effector in the presence of uncertainty propagated from the mobile platform. Simulations and real-world experiments are carried out to test the proposed method against a deterministic approach. These experiments are carried out on two mobile manipulators, a proof-of-concept research platform and an industrial mobile manipulator, using ROS and Gazebo. The precision of the mobile manipulator is evaluated through its ability to intercept retroreflective markers using a photoelectric sensor attached to the end-effector. Compared to the deterministic search approach, we observed improved interception capability with comparable search times, thereby enabling effective performance measurement of the mobile manipulator.
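The abstract does not give the exact Bayesian formulation used in the thesis. Purely as an illustrative sketch (the planar kinematic model, link lengths, and noise values below are hypothetical), uncertainty in the mobile base's pose can be propagated through the arm's forward kinematics by Monte Carlo sampling to obtain a distribution over end-effector positions:

```python
import numpy as np

def forward_kinematics(base_pose, joint_angles, link_lengths=(0.4, 0.3)):
    """Hypothetical planar arm on a mobile base: returns end-effector (x, y)."""
    x, y, angle = base_pose
    for q, l in zip(joint_angles, link_lengths):
        angle += q
        x += l * np.cos(angle)
        y += l * np.sin(angle)
    return np.array([x, y])

# Assumed base-pose uncertainty (mean and covariance from the platform's localization).
base_mean = np.array([1.0, 2.0, 0.1])               # x [m], y [m], heading [rad]
base_cov = np.diag([0.02**2, 0.02**2, 0.01**2])
joints = np.array([0.3, -0.5])                       # commanded joint angles [rad]

# Monte Carlo propagation: sample base poses, push each through the kinematics.
samples = np.random.multivariate_normal(base_mean, base_cov, size=5000)
ee = np.array([forward_kinematics(s, joints) for s in samples])

print("predicted end-effector mean:", ee.mean(axis=0))
print("predicted end-effector covariance:\n", np.cov(ee.T))
```

The resulting mean and covariance give a predicted search region for the end-effector, which is the kind of quantity a probabilistic search strategy could exploit.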
12

Simultaneous robot localization and mapping of parameterized spatio-temporal fields using multi-scale adaptive sampling

Mysorewala, Muhammad Faizan. January 2008 (has links)
Thesis (Ph.D.)--University of Texas at Arlington, 2008.
13

A Framework For Learning Scene Independent Edge Detection

Wilbee, Aaron J. 17 June 2015 (has links)
In this work, a framework is introduced for a system that intelligently assigns an edge detection filter to an image based on features extracted from the image. The framework has four parts: the learning stage, image feature extraction, training filter creation, and filter selection training. Two prototype systems of this framework are given. The learning stage for these systems is the Berkeley Segmentation Database coupled with the Baddeley Delta Metric. Feature extraction is performed using a GIST methodology, which extracts color, intensity, and orientation information. The set of image features is used as the input to a single-hidden-layer feedforward neural network trained using backpropagation. The system trains against a set of linear cellular automata filters that are determined to best solve the edge image according to the Baddeley Delta Metric. One system uses cellular automata augmented with a fuzzy rule. The systems are trained and tested against the images from the Berkeley Segmentation Database. The results from the testing indicate that systems built on this framework can perform better than standard methods of edge detection on average across many types of images.
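The abstract gives no implementation details for the selection network. As a hedged sketch of that component only (feature dimensions, filter count, and training data below are invented stand-ins), a single-hidden-layer feedforward network trained with backpropagation to map a GIST-style feature vector to one of a pool of candidate filters could look like this:

```python
import numpy as np

# Hypothetical dimensions: a GIST-style feature vector and a pool of candidate filters.
N_FEATURES, N_HIDDEN, N_FILTERS = 512, 64, 16

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.01, (N_FEATURES, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.01, (N_HIDDEN, N_FILTERS));  b2 = np.zeros(N_FILTERS)

def forward(X):
    h = np.tanh(X @ W1 + b1)                        # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)      # softmax over candidate filters

def train_step(X, y_onehot, lr=0.1):
    """One backpropagation step on a batch of (features, best-filter label) pairs."""
    global W1, b1, W2, b2
    h, p = forward(X)
    d_logits = (p - y_onehot) / len(X)              # gradient of cross-entropy loss
    dW2 = h.T @ d_logits;  db2 = d_logits.sum(axis=0)
    d_h = d_logits @ W2.T * (1 - h**2)              # tanh derivative
    dW1 = X.T @ d_h;       db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

# Toy training loop on random data standing in for (GIST features, best-filter index) pairs.
X = rng.normal(size=(256, N_FEATURES))
y = rng.integers(0, N_FILTERS, size=256)
for _ in range(100):
    train_step(X, np.eye(N_FILTERS)[y])
print("selected filters:", forward(X)[1].argmax(axis=1)[:10])
```

In the framework described above, the training labels would come from whichever cellular automata filter best solves each training image according to the Baddeley Delta Metric, rather than from random data as in this toy loop.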
14

Exploring the application of haptic feedback guidance for port crane modernization

Ganji, Vinay G. 04 May 2013 (has links)
In this thesis, the author presents a feasibility study of methods to modernize port crane systems through the application of haptic (force) feedback assistive technology, which assists the crane operator in the container handling process. The assistive technology provides motion guidance to the operator that could help increase the safety and productivity of the system. Haptic feedback has been successful in applications such as gaming and simulators, and has proven quite effective in alerting the user or operator. This study implements haptic feedback as an assistive mechanism through a force-feedback joystick used by the operator to control the motion of a scaled port crane system. The haptic feedback system has been integrated to work with the visual feedback system as part of this study. The visual feedback system shares the information needed to trigger the haptic (force) feedback display on the joystick. The force feedback displayed on the joystick is modeled on Hooke's law of spring force. Together, the force feedback and the visual feedback form a motion guidance system. The integrated system has been implemented and tested on a lab-scale testbed of a gantry crane. For experimental purposes, this concept has been tested on a PC-based Windows platform and also on a portable single-board Linux-based computer, the Beagleboard. The results from test runs on both platforms (the PC and the ARM-based Beagleboard) are reported in this study.
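As a hedged illustration only (the stiffness gain, force limit, and variable names are hypothetical, not taken from the thesis), a Hooke's-law guidance force of this kind is typically computed as a spring force pulling the joystick toward the deflection suggested by the guidance system:

```python
# Minimal sketch of Hooke's-law force feedback for a guidance joystick.
# All constants and names are illustrative, not from the thesis.

SPRING_K = 2.5        # spring stiffness [N per unit deflection], tuned empirically
FORCE_LIMIT = 5.0     # saturate the command at the joystick's rated force [N]

def guidance_force(stick_pos, guide_pos):
    """Force pulling the stick toward the position suggested by the visual guidance."""
    force = -SPRING_K * (stick_pos - guide_pos)       # F = -k * x (Hooke's law)
    return max(-FORCE_LIMIT, min(FORCE_LIMIT, force))

# Example: the operator holds the stick at 0.8 while the guidance suggests 0.2.
print(guidance_force(0.8, 0.2))   # -> -1.5 N, nudging the stick back toward 0.2
```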
15

Surface classification via unmanned aerial vehicles gripper finger deflection

Van Hoosear, Christopher A. 17 January 2014 (has links)
The purpose of this thesis is to ascertain the feasibility of using strain gauges attached to an Unmanned Aerial Vehicle (UAV) gripper to determine, upon impact, the hardness of a landing site. We design and fabricate a four-finger gripper that uses a rotary component to convert the rotational motion of a servo to the linear motion of the finger assemblies. We functionally test a gripper prototype made from rapid-prototype material. We conduct three experiments to test the gripper's functionality. The first experiment tests the gripper's ability to grasp, lift, and release a centered payload; the gripper performed with overall success rates of 91%, 100%, and 87%, respectively. The second experiment tests the gripper's ability to self-align, lift, and release the payload; the gripper performed with overall success rates of 99%, 100%, and 96%, respectively. The third experiment tests the functional durability of the gripper, and it performed without error for 5000 open/close cycles.
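The abstract does not describe how the strain readings are mapped to surface hardness. One hypothetical way to illustrate the idea (the thresholds and data below are invented, not from the thesis) is to compare the peak finger deflection recorded at touchdown against calibrated bounds:

```python
# Hypothetical sketch: classify landing-surface hardness from peak strain-gauge
# readings on the gripper fingers. Thresholds are invented for illustration only.

SOFT_MAX = 150.0      # peak microstrain below which the surface is treated as soft
HARD_MIN = 600.0      # peak microstrain above which the surface is treated as hard

def classify_surface(strain_samples):
    """strain_samples: per-finger peak microstrain recorded during touchdown."""
    peak = max(strain_samples)
    if peak < SOFT_MAX:
        return "soft"
    if peak > HARD_MIN:
        return "hard"
    return "intermediate"

print(classify_surface([90.0, 110.0, 75.0, 102.0]))    # -> "soft"
print(classify_surface([640.0, 710.0, 655.0, 690.0]))  # -> "hard"
```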
16

Mobile Robot Homing Control Based on Odor Sensing

Craver, Matthew David 24 March 2015 (has links)
No description available.
17

Implementation of a sound source localization, tracking, and separation algorithm on a DSP for a mobile robot.

Briere, Simon. Unknown Date (has links)
Thesis (M.Sc.A.)--Université de Sherbrooke (Canada), 2007. / Title from title screen (viewed February 1, 2007). In ProQuest Dissertations and Theses. Also published in print.
18

Smooth feedback planning

Lindemann, Stephen R. January 2008 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2008. / Source: Dissertation Abstracts International, Volume: 69-11, Section: B, page: 7038. Adviser: Mark W. Spong. Includes bibliographical references (leaves 140-157). Available on microfilm from ProQuest Information and Learning.
19

Semantic based learning of syntax in an autonomous robot

McClain, Matthew R. January 2006 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006. / Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6625. Adviser: Stephen Levinson. Includes bibliographical references (leaves 71-77). Available on microfilm from ProQuest Information and Learning.
20

360° View Camera Based Visual Assistive Technology for Contextual Scene Information

Ali, Mazin 21 October 2017 (has links)
In this research project, a system is proposed to aid the visually impaired by providing partial contextual information about the surroundings using a 360° view camera combined with deep learning. The system uses a 360° view camera with a mobile device to capture surrounding scene information and provide contextual information to the user in the form of audio. The system could also be used for other applications, such as logo detection, which visually impaired users could use for shopping assistance.

The scene information from the spherical camera feed is classified by identifying objects that contain contextual information about the scene. This is achieved using convolutional neural networks (CNNs) for classification, leveraging transfer learning with the pre-trained VGG-19 network. There are two challenges related to this work: a classification challenge and a segmentation challenge. As an initial prototype, we have experimented with general classes such as restaurants, coffee shops, and street signs. We have achieved a 92.8% classification accuracy in this research project.
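The abstract does not specify the training configuration. As a rough sketch of VGG-19 transfer learning (the class count, input size, and hyperparameters below are assumptions, not from the thesis), one common pattern freezes the pre-trained convolutional base and trains only a small classification head:

```python
import tensorflow as tf

NUM_CLASSES = 3  # e.g. restaurant, coffee shop, street sign (assumed, not from the thesis)

# Pre-trained VGG-19 convolutional base, frozen so only the new head is trained.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets supplied by the user
```

Freezing the base reuses the ImageNet features for scene-level cues while the new head learns the task-specific classes, which is the usual motivation for transfer learning with a small labeled dataset.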
