
Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System

Bdiwi, Mohamad, 12 August 2014
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, much as a human does. Such tasks require combining several kinds of sensors in order to obtain full information about the work environment. From the point of view of control, however, more sensors mean more possible structures for the control system. Vision and force sensors are the most common external sensors in robot systems, and the literature accordingly offers numerous control algorithms and structures for vision/force robot control, e.g. shared and traded control. The open questions in integrating vision/force robot control can be summarized as follows:
• Which subspaces should be vision, position, or force controlled?
• When should the controller switch from one control mode to another?
• How can it be ensured that the visual information is reliable enough to be used?
• How should the most appropriate vision/force control structure be chosen?
In many previous works, a single vision/force control structure, pre-defined by the programmer, is used throughout a given task. If the task is modified or changed, it becomes complicated for the user to describe the task and to select the most appropriate vision/force control, especially if the user is inexperienced. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor serves merely as a position estimator) or for obstacle avoidance, so much of the information the sensors could provide to help the robot perform the task autonomously is lost. In our opinion, the lack of a method for selecting the most appropriate vision/force control and the weak utilization of the information the sensors can provide impose important limits that prevent the robot from being versatile, autonomous, dependable, and user-friendly. The scope of this thesis is therefore to increase autonomy, versatility, dependability, and user-friendliness in areas of robotics that require vision/force integration. More concretely:
1. Autonomy: an automatic decision system defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best vision/force control structure depending on the surrounding environment and a priori knowledge (see the sketch after this abstract).
2. Versatility: relevant scenarios are prepared for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: the robot should depend on its own sensors more than on reprogramming and human intervention; in other words, the robot system should use all the information that the vision and force sensors can provide, not only about the target object but also through feature extraction over the whole scene.
4. User-friendliness: a high-level description of the task, the object, and the sensor configuration is designed that is suitable even for inexperienced users.
If these properties are achieved, the proposed robot system can:
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Decide automatically on the most appropriate combination of vision/force feedback for every task and react immediately, from one control cycle to the next, to unforeseen events.
• Benefit from all the advantages of the different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce human intervention or reprogramming during task execution.
• Facilitate the task description and the entry of a priori knowledge by the user, even if he/she is inexperienced.
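The automatic decision step described under point 1 can be pictured as a per-subspace mode selector. The following Python sketch is purely illustrative and is not the algorithm developed in the thesis; the class names, fields, and the 2 N contact threshold are invented assumptions. It only shows the idea of assigning vision, position, or force control to each Cartesian axis in every control cycle, based on feature visibility, expected contact, and the measured force.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    VISION = "vision"      # axis driven by visual servoing
    POSITION = "position"  # axis driven by plain position control
    FORCE = "force"        # axis driven by force control after contact


@dataclass
class AxisState:
    """Hypothetical per-axis inputs to the decision step (invented names)."""
    contact_expected: bool   # a priori knowledge: should this axis make contact?
    measured_force: float    # current force along the axis in newtons
    feature_visible: bool    # can the camera still track the relevant feature?


def select_mode(axis: AxisState, contact_threshold: float = 2.0) -> Mode:
    """Choose one control mode for a single axis in the current control cycle."""
    # Contact dominates: once the expected contact force is present,
    # this axis switches to force control.
    if axis.contact_expected and axis.measured_force > contact_threshold:
        return Mode.FORCE
    # Otherwise prefer visual servoing while the feature is reliably visible.
    if axis.feature_visible:
        return Mode.VISION
    # Fall back to position control when neither sensor is usable on this axis.
    return Mode.POSITION


# Example: approach phase along z with the object still in the camera's view.
z_axis = AxisState(contact_expected=True, measured_force=0.3, feature_visible=True)
print(select_mode(z_axis))  # Mode.VISION until the contact force exceeds 2 N
```

Re-evaluating such a selection in every control cycle is what would allow an immediate reaction to unforeseen events such as early or lost contact; the thesis itself combines this kind of decision with full vision/force control structures rather than a single threshold rule.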
