121 |
3D Visualization and Interactive Image Manipulation for Surgical Planning in Robot-assisted SurgeryMaddah, Mohammadreza 30 August 2018 (has links)
No description available.
|
122 |
A New Variable Stiffness Series Elastic Actuator for the Next Generation Collaborative RobotSharma, Manoj Kumar 22 June 2020 (has links)
No description available.
|
123 |
Differential Drive Wheeled Robot Trajectory TrackingZhao, Yizhou January 2023 (has links)
This thesis presents an approach for building a trajectory-tracking framework for autonomous robots operating at low speeds in controlled spaces. A modularized robot framework allows easy replacement of hardware and software, making it a tool for validating trajectory-tracking algorithms under controlled laboratory conditions.
An introduction to existing trajectory-tracking methods is presented. These advanced trajectory-control methods and studies aim to improve tracking performance in different environments.
This research uses ROS as the middleware connecting the actuators and computing units. A commercially available global position measurement tool, an ultra-wideband (UWB) system, was selected as the primary localization sensor. A Raspberry Pi and an Arduino Uno handle high-level and low-level control, respectively; separating the control units supports the modular design of the framework. A robust control approach is also introduced to reject disturbances from uneven terrain, improving the framework's ability to drive arbitrary robot chassis on different testing grounds. Each stage of development includes both offline tests and online live-control tests.
The trajectory-tracking controller requires a robot kinematic model and a tracking-control program to achieve well-controlled behaviour. A custom trajectory-control program was written and used in the tests. A digital simulation and a physical robot were built to validate the algorithm and the designed framework. The framework is intended to accommodate other researchers' developments and can serve as a testing platform for their autonomous-driving algorithms or additional sensors. By replacing the control algorithm in the existing trajectory-tracking robotic framework, this universal autonomous platform may help validate such algorithms' performance in field experiments. / Thesis / Master of Applied Science (MASc) / This thesis contains five chapters.
Chapter 1 provides the background for this research topic, covering the key components, methods, and tools for trajectory tracking.
Chapter 2 reviews existing methods and examines the tools, equipment, and hardware setups for trajectory tracking in simulation and in physical experiments. Experiments referenced from other studies inform the current trajectory-tracking development work, and the review covers kinematic models for different robot layouts, which shaped the final design of the field-experiment robot.
Chapter 3 presents the design process for a controller based on the final field-experiment robot, covering the steps and considerations involved in building the control system for a trajectory-tracking robot from scratch.
Chapter 4 presents the simulation and field-experiment results, including an error analysis and a justification of repeatability.
Chapter 5 summarizes the research and development contributions, the primary findings, and remaining concerns about the identified problems.
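The kinematic model at the heart of such a tracking controller is, for a differential-drive robot, the standard unicycle model. A minimal sketch follows (in Python rather than the thesis's ROS/Raspberry Pi/Arduino stack; the wheel radius and track width used in the example are hypothetical):

```python
import math

def diff_drive_step(x, y, theta, v, omega, dt):
    """Integrate the unicycle kinematic model over one time step.

    v is the forward speed, omega the yaw rate; returns the new pose.
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

def wheel_speeds(v, omega, wheel_radius, track_width):
    """Inverse kinematics: body velocities -> (left, right) wheel angular speeds."""
    w_left = (v - omega * track_width / 2.0) / wheel_radius
    w_right = (v + omega * track_width / 2.0) / wheel_radius
    return w_left, w_right
```

In a framework of this kind, a tracking controller would compute (v, omega) from the pose error against the reference trajectory and pass the resulting wheel speeds to the low-level controller.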
|
124 |
Open Loop Compliance Model of a 6 DOF Revolute Manipulator to Improve Accuracy Under LoadAbbott, Mark William 26 April 2002 (has links)
Robotic accuracy has long been limited by the compliance of the manipulator. Whether from links under bending loads, backlash in gear trains, or stretching of belts, the resulting compliance causes a loss of accuracy at the end-effector. Previous research has investigated the accuracy of ideally stiff manipulators from many points of view; however, an overall compliant modeling technique has not been formulated in the literature. This thesis presents a general technique for developing a compliant model of a six-degree-of-freedom manipulator, with the intent of reducing end-effector error for precision manufacturing.
Experimental and theoretical work was performed on an American Robot Merlin six-degree-of-freedom robot. The solution technique assumes each link of the manipulator is subject to stiffnesses in three directions: in the direction of motion, laterally, and torsionally. Each of the three stiffnesses is assumed constant but unknown. Three experimental regimes were established, each covering a successively larger region of the workspace, and 243 data samples were taken within each regime: twenty-seven data points under nine known loads for each of the first two regimes, and nine locations under twenty-seven loads in the third. An OPTOTRAK 3020 non-contact distance-measuring system gathered data from twelve sensors for each trial, and the results were transformed into three displacements and three rotations of the end-effector. A regression algorithm solved for the unknown stiffnesses of the compliant model based on the measured experimental deflections.
Results show that for loads between zero and 445 N, the predicted deflection of the end-effector is within fifteen percent of the experimental results for most data points. Furthermore, for loads between zero and 111 N (the stated lift capacity of the manipulator), the model predicts end-point position with an error of less than half a millimeter for all tested points.
This research provides a technique to quantify the compliance of a general manipulator and develops a model that can be implemented with open-loop position control with known compliance. / Master of Science
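The regression step can be illustrated with a simple least-squares fit. The sketch below is only illustrative: the load and deflection values are invented, and the thesis's actual regression fits three stiffnesses per link from full six-degree-of-freedom deflection data, not a single scalar compliance.

```python
import numpy as np

# Hypothetical data: loads (N) applied at the end-effector and the
# measured deflections (mm) in one compliance direction.
loads = np.array([0.0, 111.0, 222.0, 333.0, 445.0])
deflections = np.array([0.0, 0.55, 1.12, 1.66, 2.23])

# Linear compliance model: deflection = c * load, where c = 1/k.
# Solve for the compliance c in a least-squares sense.
A = loads.reshape(-1, 1)
c, residuals, rank, _ = np.linalg.lstsq(A, deflections, rcond=None)
stiffness = 1.0 / c[0]  # stiffness k in N/mm
```

With the made-up data above the fitted stiffness comes out near 200 N/mm; the same machinery extends to a multi-column design matrix when several stiffnesses act on each measured deflection.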
|
125 |
Kinematic Enveloping Grasp Planning Method for Robotic Dexterous Hands and Three-Dimensional ObjectsSalimi, Shahram 12 1900 (has links)
Three-dimensional (3D) enveloping grasps for dexterous robotic hands possess several advantages over other types of grasps. However, their innate characteristics, such as the several degrees of freedom of the dexterous hand, the complexity of analyzing the 3D geometry of the object to be grasped, and the difficulty of detecting the 3D contact points between the object and the hand, make planning them automatically a very challenging problem. This thesis describes a new method for kinematic 3D enveloping grasp planning for a three-fingered dexterous hand. The required inputs are the geometric models of the object and hand and the kinematic model of the hand; the outputs are the position and orientation of the palm and the angular joint positions of the fingers. The method introduces a new way of processing the 3D object: instead of considering the object as a whole, a series of 2D slices (vertical and horizontal) of the object are used to define its geometry. This method is considerably simpler than other methods of object modeling, and its parameters can be easily set up. A new idea for grading the object's 3D grasp search domain is proposed. The grading system analyzes the curvature pattern and thickness of the object and grades object regions according to their suitability for grasping. The proposed method eliminates most of the ungraspable areas of the object from the grasp search domain at the early stages of the search, which improves the overall efficiency of the search for a grasp. For the dexterous hand, a new finger model is proposed: each finger is modeled by three articulated line segments representing the top, centre, and bottom of the finger. This model has the significant benefit that it is efficient and does not need the exact coordinates of the 3D contact point between the finger and the object to analyze the feasibility of the grasp. The new grasp planning method was implemented as a 4300-line MATLAB program.
The program has been run successfully with several 3D objects. These results are documented. / Thesis / Master of Applied Science (MASc)
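The slicing idea, describing the 3D object by a series of 2D cross-sections rather than as a whole, can be sketched as follows. This is an illustrative Python sketch, not the thesis's MATLAB implementation; it partitions a point-sampled object into horizontal slices by z-coordinate (vertical slices would partition by x or y in the same way):

```python
import numpy as np

def horizontal_slices(points, n_slices):
    """Partition a 3D point set into horizontal 2D slices by z-coordinate.

    points: (n, 3) array. Returns a list of (m_i, 2) arrays of (x, y)
    points, one per slice, ordered bottom to top.
    """
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    slices = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if i == n_slices - 1:
            mask = (z >= lo) & (z <= hi)  # include top boundary in last slice
        else:
            mask = (z >= lo) & (z < hi)
        slices.append(points[mask][:, :2])
    return slices
```

Each 2D slice can then be analyzed independently, e.g. for the curvature pattern and thickness that the grading system uses to score regions for graspability.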
|
126 |
Autonomous Navigation, Perception and Probabilistic Fire Location for an Intelligent Firefighting RobotKim, Jong Hwan 09 October 2014 (has links)
Firefighting robots are actively being researched to reduce firefighter injuries and deaths and to increase firefighters' effectiveness in performing tasks. It has proven difficult to develop firefighting robots that autonomously locate a fire inside a structure when the fire is not in the robot's direct field of view. The sensors commonly used on robots cannot function properly in smoke-filled fire environments with high temperatures and zero visibility. Existing obstacle-avoidance methods are also limited in calculating safe trajectories and solving the local-minimum problem while avoiding obstacles in real time in cluttered, dynamic environments. In addition, research on characterizing fire environments so as to give firefighting robots headings that ultimately lead them to the fire is incomplete.
For use on intelligent firefighting robots, this research developed a real-time local obstacle-avoidance method, local dynamic goal-based fire location, appropriate feature selection for fire-environment assessment, and probabilistic classification of fire, smoke, and their thermal reflections. The real-time local obstacle-avoidance method, called the weighted vector method, perceives the local environment through vectors, identifies a suitable avoidance mode by applying a decision tree, uses weighting functions to select the necessary vectors, and geometrically computes a safe heading. The method also solves local obstacle-avoidance problems by integrating global and local goals to reach the final goal. To locate a fire outside the robot's field of view, a local dynamic goal-based 'Seek-and-Find' fire algorithm was developed by fusing long-wave infrared camera images, ultraviolet radiation sensor readings, and lidar data. The weighted vector method was applied to avoid complex static and unexpected dynamic obstacles while moving toward the fire. The algorithm was successfully validated on a firefighting robot that autonomously navigated to find a fire outside its field of view.
An improved 'Seek-and-Find' fire algorithm was developed using Bayesian classifiers to identify fire features in thermal images. This algorithm discriminates fire and smoke from thermal reflections and other hot objects, allowing a more robust heading to be predicted for the robot. To develop it, motion and texture features that accurately separate fire and smoke from their reflections were analyzed and selected using multi-objective genetic-algorithm optimization. As a result, the mean and variance of intensity, entropy, and inverse difference moment, drawn from the first- and second-order statistical texture features, were chosen to probabilistically classify fire, smoke, their thermal reflections, and other hot objects simultaneously. Classification accuracy was measured at 93.2% on a test dataset not included in the original training dataset, and the precision, recall, F-measure, and G-measure for classifying fire and smoke on the same dataset were 93.5-99.9%. / Ph. D.
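The first-order statistical texture features named above (mean and variance of intensity, entropy) can be computed directly from a patch's intensity histogram. A hedged Python/NumPy sketch, assuming 8-bit intensities; the thesis additionally uses second-order (co-occurrence) features such as inverse difference moment, which are omitted here:

```python
import numpy as np

def first_order_features(patch):
    """First-order statistical texture features of an image patch.

    Returns the mean and variance of intensity plus the Shannon entropy
    of the normalised intensity histogram - features of the kind used to
    separate fire/smoke from thermal reflections.
    """
    patch = np.asarray(patch, dtype=float)
    mean = patch.mean()
    var = patch.var()
    hist, _ = np.histogram(patch, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well defined
    entropy = -np.sum(p * np.log2(p))
    return mean, var, entropy
```

A flat patch gives zero variance and zero entropy, while a patch split evenly between two intensity levels gives exactly one bit of entropy; such feature vectors would then feed the Bayesian classifier.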
|
127 |
Competitive Algorithms and System for Multi-Robot Exploration of Unknown EnvironmentsPremkumar, Aravind Preshant 08 September 2017 (has links)
We present an algorithm to explore an orthogonal polygon using a team of p robots. This algorithm combines ideas from information-theoretic exploration algorithms and computational-geometry-based exploration algorithms. It builds on a single-robot polygon exploration algorithm and a tree exploration algorithm. We show that the exploration time of our algorithm is competitive (as a function of p) with respect to the offline optimal exploration algorithm. We discuss how this strategy can be adapted to real-world settings to deal with noisy sensors. In addition to theoretical analysis, we investigate the performance of our algorithm through simulations for multiple robots and experiments with a single robot. / Master of Science / In applications such as disaster recovery, the layout of the environment is generally unknown, so the environment must be explored in order to perform search and rescue effectively. Exploration of unknown environments using a single robot is a well-studied problem. We present an algorithm that performs the task with a team of p robots for the specific case of orthogonal polygons, i.e. polygonal environments where each side is aligned with either the X or the Y axis. The algorithm builds on a single-robot polygon exploration algorithm and a tree exploration algorithm. We show that the exploration time of our algorithm is competitive (as a function of p) with respect to the optimal offline algorithm. We then optimize the information gain of the path followed by the robots by allowing local detours that decrease the entropy in the map.
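One recurring ingredient of multi-robot tree exploration is dividing the team of p robots among unexplored branches. The following is a hypothetical allocation heuristic for illustration only (a largest-remainder split proportional to estimated branch size), not the thesis's actual rule:

```python
def allocate_robots(p, branch_sizes):
    """Split p robots among branches proportionally to estimated branch size.

    Uses the largest-remainder method: each branch gets the floor of its
    proportional share, and leftover robots go to the branches with the
    largest fractional remainders.
    """
    total = sum(branch_sizes)
    # Floor of the proportional share for each branch.
    alloc = [(p * s) // total for s in branch_sizes]
    # Hand out remaining robots by descending fractional remainder.
    remainders = [(p * s) % total for s in branch_sizes]
    for k in sorted(range(len(branch_sizes)), key=lambda k: -remainders[k]):
        if sum(alloc) == p:
            break
        alloc[k] += 1
    return alloc
```

For example, five robots over three equal branches split 2/2/1; re-running the split as branch estimates are updated is one simple way to rebalance the team during exploration.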
|
128 |
State estimation of a hexapod robot using a proprioceptive sensory system / Estelle LubbeLubbe, Estelle January 2014 (has links)
The Defence, Peace, Safety and Security (DPSS) competency area within the Council for Scientific and Industrial Research (CSIR) has identified the need for a robot that can operate in almost any land-based environment. Legged robots, especially hexapod (six-legged) robots, present a wide variety of advantages in this environment and are identified as a feasible solution.
The biggest advantage of legged robots over wheeled mobile robots, and the main reason for their development, is their ability to navigate uneven, unstructured terrain. However, due to the complicated control algorithms a legged robot needs, most of the literature focuses only on navigation in even or relatively even terrain. This is seen as the main limitation on the development of legged-robot applications. For navigation in unstructured terrain, the postural controllers of legged robots need fast and precise knowledge of the state of the robot they are regulating; the speed and accuracy of a legged robot's state estimation are therefore very important.
Even though state estimation for mobile robots has been studied thoroughly, limited research is available on state estimation for legged robots. Unlike wheeled locomotion, legged locomotion makes use of intermittent ground contacts, so stability is a main concern when navigating unstructured terrain. To control the stability of a legged robot, six-degrees-of-freedom information is needed about the base of the robot platform, and this information must be estimated from measurements by the robot's sensory system.
A robot's sensory system usually consists of multiple on-board sensory devices. However, legged robots have limited payload capacities, so the number of sensory devices on a legged-robot platform should be kept to a minimum. Furthermore, exteroceptive sensory devices commonly used in state estimation, such as a GPS or cameras, are not suitable when navigating unstructured and unknown terrain. The control and localisation of a legged robot should therefore depend only on proprioceptive sensors. This identifies the need for a reliable state estimation framework, relying only on proprioceptive information, for a low-cost, commonly available hexapod robot; such a framework will accelerate control-algorithm development.
This study addresses that need. Common proprioceptive sensors are integrated on a commercial low-cost hexapod robot to create the robot platform used in this study. A state estimation framework for legged robots is used to develop a state estimation methodology for the hexapod platform. A kinematic model is derived and verified for the platform, and measurement models are derived to address possible errors and noise in sensor measurements. The state estimation methodology uses an extended Kalman filter to fuse the robot's kinematics with measurements from an IMU. The required state estimation equations are derived and implemented in Matlab®.
The state estimation methodology is then tested in multiple experiments on the robot platform. In these experiments the platform captures sensory data with a purpose-built data acquisition method while being tracked by a Vicon motion-capture system. The sensor data are then fed into the state estimation equations in Matlab® and the results are compared to the ground-truth outputs of the Vicon system. The results show very accurate estimation of the robot's state and therefore validate the state estimation methodology and this study. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2015
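The extended Kalman filter at the core of such a methodology alternates a predict step (propagating the state through the process model) and an update step (correcting it with a measurement). A minimal generic sketch in Python/NumPy follows; the thesis's implementation is in Matlab® and its models fuse leg kinematics with IMU readings, which are left abstract here:

```python
import numpy as np

class SimpleEKF:
    """Minimal extended Kalman filter skeleton (illustrative only).

    f and h are the process and measurement models; F and H are their
    Jacobians evaluated at the current estimate. Q and R are the process
    and measurement noise covariances.
    """
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h, H):
        y = z - h(self.x)                    # innovation
        S = H @ self.P @ H.T + self.R        # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

Each filter cycle would run predict with the kinematic model and update with the IMU-derived measurement; with linear models this skeleton reduces to an ordinary Kalman filter.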
|
130 |
Developing robots that impact human-robot trust in emergency evacuationsRobinette, Paul 07 January 2016 (has links)
High-risk, time-critical situations require trust for humans to interact with other agents, even agents they have never interacted with before. In the near future, robots will perform tasks to help people in such situations, so a robot must understand why a person makes a trust decision in order to aid that person effectively. High casualty rates in several emergency evacuations motivate our use of this scenario as an example of a high-risk, time-critical situation. Emergency guidance robots can be stored inside buildings and then activated to search for victims and guide evacuees to safety. In this dissertation, we determined the conditions under which evacuees would be likely to trust a robot in an emergency evacuation.
We began by examining reports of real-world evacuations and considering how guidance robots can best help. We performed two simulations of evacuations and learned that robots could be helpful as long as at least 30% of evacuees trusted their guidance instructions. We then developed several methods for a robot to communicate directional information to evacuees. After performing three rounds of evaluation using virtually, remotely and physically present robots, we concluded that robots should communicate directional information by gesturing with two arms. Next, we studied the effect of situational risk and the robot's previous performance on a participant's decision to use the robot during an interaction. We found that higher risk scenarios caused participants to align their self-reported trust with their decisions in a trust situation. We also discovered that trust in a robot drops after a single error when interaction occurs in a virtual environment. After an exploratory study in trust repair, we have learned that a robot can repair broken trust during the emergency by apologizing for its prior mistake or giving additional information relevant to the situation. Apologizing immediately after the error had no effect.
Robots have the potential to save lives in emergency scenarios, but could have an equally disastrous effect if participants overtrust them. To explore this concept, we created a virtual environment of an office as well as a real-world simulation of an emergency evacuation. In both, participants interacted with a robot during a non-emergency phase to experience its behavior and then chose whether to follow the robot’s instructions during an emergency phase or not. In the virtual environment, the emergency was communicated through text, but in the real-world simulation, artificial smoke and fire alarms were used to increase the urgency of the situation. In our virtual environment, we confirmed our previous results that prior robot behavior affected whether participants would trust the robot or not. To our surprise, all participants followed the robot in the real-world simulation of an emergency, despite half observing the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes. Even when the robot pointed to a dark room with no discernible exit the majority of people did not choose to exit the way they entered.
The conclusions of this dissertation are based on the results of fifteen experiments with a total of 2,168 participants (2,071 participants in virtual or remote studies conducted over the internet and 97 participants in physical studies on campus). We have found that most human evacuees will trust an emergency guidance robot that uses understandable information conveyance modalities and exhibits efficient guidance behavior in an evacuation scenario. In interactions with a virtual robot, this trust can be lost because of a single error made by the robot, but a similar effect was not found with real-world robots. This dissertation presents data indicating that victims in emergency situations may overtrust a robot, even when they have recently witnessed the robot malfunction. This work thus demonstrates concerns which are important to both the HRI and rescue robot communities.
|