  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Task-based Robotic Grasp Planning

Lin, Yun 13 November 2014
A grasp should be selected intelligently to fulfill different stability properties and manipulative requirements. Currently, most grasping approaches consider only pick-and-place tasks, without any physical interaction with other objects or the environment; such tasks are common in industrial settings with limited uncertainty. When robots move into our daily-living environment and perform a broad range of tasks in unstructured surroundings, all sorts of physical interactions will occur, resulting in random interactive wrenches (forces and torques) on the tool. In addition, for a tool to perform a required task, certain motions need to occur. We call this the "functional tool motion," which represents the innate function of the tool and the nature of the task. Grasping with a robotic hand gives flexibility in "mounting" the tool onto the robotic arm: a different grasp connects the tool to the arm with a different hand posture, and inverse kinematics then yields a different joint motion of the arm to achieve the same functional tool motion. Thus, the grasp and the functional tool motion together determine the manipulator's motion, as well as the effort needed to achieve it. We therefore establish two objectives that a grasp should serve: it should maintain a firm grip and withstand the interactive wrenches on the tool during the task, and it should enable the manipulator to carry out the task efficiently with little motion effort; we then search for a grasp that optimizes both objectives. For this purpose, two grasp criteria are presented to evaluate a grasp: the task wrench coverage criterion and the task motion effort criterion. These criteria are used as objective functions in the search for the optimal grasp during grasp planning. To reduce the computational complexity of the search in the high-dimensional robotic hand configuration space, we propose a novel grasp synthesis approach that integrates two human grasp strategies - grasp type and thumb placement (position and direction) - into grasp planning. Grasping strategies abstracted from humans should meet two important criteria: they should reflect the demonstrator's intention, and they should be general enough to be used with various robotic hand models. Different abstractions of the human grasp constrain grasp synthesis and narrow down the solutions of grasp generation to different degrees. If a strict constraint is imposed, such as defining all finger contact points on the object, the strategy loses flexibility and is rarely achievable for a robotic hand with a different kinematic model. Thus, the choice of grasp strategies should balance the learned constraints against the flexibility required to accommodate the differences between a human hand and a robotic hand. The human strategies of grasp type and thumb placement strike such a balance while conveying important human intent to the robotic grasp. The proposed approach has been thoroughly evaluated both in simulation and on a real robotic system, for multiple objects that would be encountered in daily living.
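The following sketch is an editorial illustration of how the two criteria named above could be combined into a single grasp-ranking objective. It is a minimal sketch under assumed simplifications: the placeholder criterion functions, the scalar "wrench radius" capability model, and the weighting scheme are not the thesis's actual formulation.

```python
# Minimal sketch: ranking candidate grasps by combining a task wrench coverage
# term and a task motion effort term. The criterion functions below are
# illustrative placeholders, not the formulation used in the thesis.
import numpy as np

def task_wrench_coverage(wrench_radius, task_wrenches):
    """Fraction of sampled task wrenches the grasp can resist.
    The grasp's wrench capability is crudely modeled as a single radius."""
    return float(np.mean(np.linalg.norm(task_wrenches, axis=1) <= wrench_radius))

def task_motion_effort(joint_trajectory):
    """Sum of squared joint displacements along the arm trajectory (lower is better)."""
    return float(np.sum(np.diff(joint_trajectory, axis=0) ** 2))

def rank_grasps(candidates, task_wrenches, w_coverage=1.0, w_effort=0.01):
    """Return the index of the candidate with the best combined score."""
    scores = [
        w_coverage * task_wrench_coverage(g["wrench_radius"], task_wrenches)
        - w_effort * task_motion_effort(g["joint_trajectory"])
        for g in candidates
    ]
    return int(np.argmax(scores))

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
task_wrenches = rng.normal(size=(100, 6))   # sampled interactive wrenches on the tool
candidates = [
    {"wrench_radius": 2.0, "joint_trajectory": rng.normal(size=(50, 7))},
    {"wrench_radius": 3.0, "joint_trajectory": 0.2 * rng.normal(size=(50, 7))},
]
print("best candidate:", rank_grasps(candidates, task_wrenches))
```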
2

Analysis and Modeling of Machine Operation Tasks using Egocentric Vision / エゴセントリックビジョンを用いた機械操作タスクの分析とモデリング

Chen, Longfei 23 September 2020
Kyoto University / 0048 / New system, doctoral course / Doctor of Philosophy (Engineering) / Kō No. 22778 / Kōhaku No. 4777 / 新制||工||1747 (University Library) / Department of Electrical Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Yuichi Nakamura, Professor Koji Koyamada, Professor Ko Nishino / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DGAM
3

Identification systématique et représentation des erreurs humaines dans les modèles de tâches / Systematic identification and representation of human errors in task models

Fahssi, Racim Mehdi 14 December 2018
In user-centered approaches, the techniques, methods, and development processes used aim to know and understand users (analyzing their needs, evaluating the ways they use systems) in order to design and develop usable systems, that is, systems in line with their behavior, skills, and needs. Among the techniques used to ensure usability, task modeling makes it possible to describe users' goals and activities. With task models, human factors specialists can analyze and evaluate the effectiveness of interactive applications. This task analysis and modeling approach has always focused on explicitly representing the user's standard behavior. This is because human errors are not part of users' goals and are therefore excluded from the task descriptions. This error-free view of user activity, widely followed by the Human-Computer Interaction community, is very different from that of the Human Factors community, which since its inception has sought to understand the causes of human error and its impact on performance, as well as on major concerns such as dependability and the reliability of users and their work. The objective of this thesis is to demonstrate that it is possible to systematically describe, in task models, the errors that may occur while users perform their tasks. For this demonstration, we propose a task-model-based approach associated with a human-error description process and supported by a set of tools. This thesis presents the results of applying the proposed approach to an industrial case study in the aeronautics application domain.
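As an editorial illustration only, the sketch below shows one way error annotations could be attached to the nodes of a tree-structured task model so that erroneous behaviour is described alongside the nominal task. The node names, the genotype/phenotype split, and the traversal are assumptions; they do not reproduce the notation or tooling developed in the thesis.

```python
# Minimal sketch: a task model whose nodes carry human-error annotations.
# Names and the error taxonomy are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HumanError:
    genotype: str   # assumed cause, e.g. "slip" or "memory lapse"
    phenotype: str  # observable deviation, e.g. "step omitted"

@dataclass
class Task:
    name: str
    subtasks: List["Task"] = field(default_factory=list)
    errors: List[HumanError] = field(default_factory=list)

def error_prone_tasks(task, path=""):
    """Walk the task tree and report every task with at least one error annotation."""
    here = f"{path}/{task.name}"
    found = [(here, e) for e in task.errors]
    for sub in task.subtasks:
        found.extend(error_prone_tasks(sub, here))
    return found

# Hypothetical fragment of a procedure.
enter_value = Task("enter target value", errors=[HumanError("slip", "digits transposed")])
confirm = Task("confirm entry", errors=[HumanError("memory lapse", "confirmation skipped")])
procedure = Task("set parameter", subtasks=[enter_value, confirm])

for where, err in error_prone_tasks(procedure):
    print(where, "->", err.genotype, ":", err.phenotype)
```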
4

Supporting Requirements Reuse in a User-centric Design Framework through Task Modeling and Critical Parameters

Montabert, Cyril 14 August 2006
Many software systems fail as a direct consequence of errors in requirements analysis. Establishing formal metrics early in the design process, using attributes like critical parameters, enables designers to properly assess software success. While critical parameters alone do not have the potential to drive design, establishing requirements tied to critical parameters helps designers capture design objectives. For the design of interactive systems, the use of scenario-based approaches offers natural user centricity and facilitates knowledge reuse through the generation of claims. Unfortunately, the requirements-analysis phase of scenario-based design does not offer the built-in, explicit techniques needed to capture a system's critical-parameter requirements. Because success depends heavily on user involvement and proper requirements, there is a crucial need for a requirements-analysis technique that bridges the gap between scenarios and critical parameters; better-established requirements will benefit design. By adapting task-modeling techniques to support critical parameters within the requirements-analysis phase of scenario-based design, we provide designers with a systematic technique for capturing requirements in a reusable form that enables and encourages knowledge transfer early in the development process. The research presented concentrates on the domain of notification systems, where previous research efforts led to the identification of three critical parameters. Contributions of this work include the establishment of a structured process for capturing critical-parameter requirements within a user-centric design framework and the introduction of knowledge reuse at the requirements phase. On one hand, adapting task models to capture requirements bridges the gap between scenarios and critical parameters, allowing design to benefit from user involvement and accurate requirements. On the other hand, using task models as a reusable component enables requirements reuse, which benefits design by increasing quality while reducing development costs and time-to-market. / Master of Science
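A minimal sketch, not taken from the thesis: it only illustrates the general idea of recording requirements as critical-parameter profiles attached to tasks so that similar, previously captured requirements can be retrieved and reused. The parameter names follow the interruption/reaction/comprehension framing common in notification-systems research, and the values and matching threshold are illustrative assumptions.

```python
# Minimal sketch: critical-parameter profiles attached to task-level requirements,
# with a naive similarity lookup to support reuse. Values are illustrative.
from dataclasses import dataclass, field

@dataclass
class TaskRequirement:
    task: str
    critical_params: dict = field(default_factory=dict)  # parameter name -> target in [0, 1]

def reusable_matches(library, new_req, tolerance=0.2):
    """Return stored requirements whose critical-parameter profile is close to the new one."""
    def distance(a, b):
        keys = set(a) | set(b)
        return max(abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys)
    return [r for r in library if distance(r.critical_params, new_req.critical_params) <= tolerance]

library = [
    TaskRequirement("monitor build status",
                    {"interruption": 0.2, "reaction": 0.3, "comprehension": 0.8}),
    TaskRequirement("alert on critical failure",
                    {"interruption": 0.9, "reaction": 0.9, "comprehension": 0.4}),
]
new = TaskRequirement("show test coverage trend",
                      {"interruption": 0.1, "reaction": 0.2, "comprehension": 0.9})
print([r.task for r in reusable_matches(library, new)])  # ['monitor build status']
```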
5

Génération de récits à partir de données ambiantes / Generating stories from ambient data

Baez Miranda, Belen 03 December 2018
Stories are a communication tool that allows people to make sense of the world around them. They provide a platform for understanding and sharing culture, knowledge, and identity. A story carries a series of real or imaginary events, provoking a feeling, a reaction, or even triggering an action. For this reason, storytelling has become a subject of interest for fields beyond literature (education, marketing, psychology, etc.) that seek to achieve a particular goal through it (persuading, reflecting, learning, etc.). However, stories remain underdeveloped in computer science. Existing work addresses their analysis and automatic production, but the algorithms and implementations remain constrained to imitating the creative process behind literary texts drawn from textual sources. There are no approaches that automatically produce stories whose (1) source consists of unformatted material captured from real life and whose (2) content projects a perspective that seeks to convey a particular message. Working with raw data is increasingly relevant, as it grows exponentially every day through the use of connected devices. Given this Big Data context, we present an approach for automatically generating stories from ambient data. The objective is to bring out the lived experience of a person from the data produced during a human activity. Any field that works with such raw data could benefit from this work, for example education or health. It is an interdisciplinary effort that draws on natural language processing, narratology, cognitive science, and human-computer interaction. The approach is corpus- and model-based and comprises the formalization of what we call the activity story (récit d'activité) together with an adapted generation process. It consists of four stages: formalizing the activity story, building corpora, constructing models of the activity and of the story, and generating text. Each stage was designed to overcome constraints raised by the scientific questions behind the objective: handling uncertain and incomplete data, producing an abstraction that is valid with respect to the activity, building models through which the reality captured in the data can be transposed into a subjective perspective, and rendering the result in natural language. We used the activity story as a use case, since its practitioners rely on connected devices and need to share their experience. The results obtained are encouraging and suggest directions that open up many research perspectives.
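As an editorial illustration of the four-stage pipeline described above (formalization, corpora, activity and story models, text generation), the toy sketch below reduces it to a few rule-based steps. The event labels, threshold, narrative structure, and templates are assumptions; the approach in the thesis is corpus- and model-based rather than hand-written rules.

```python
# Toy sketch of a raw-data-to-story pipeline. All labels, thresholds and
# templates are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Sample:
    t: float        # seconds since the start of the activity
    speed: float    # m/s reported by a connected device

def abstract_events(samples, moving_threshold=0.5):
    """Abstraction step: turn noisy samples into coarse labelled events."""
    events, current = [], None
    for s in samples:
        label = "moving" if s.speed >= moving_threshold else "resting"
        if label != current:
            events.append((s.t, label))
            current = label
    return events

def build_story_model(events):
    """Modeling step: arrange events into a simple narrative structure."""
    return {"orientation": events[0], "episodes": events[1:-1], "resolution": events[-1]}

def render(model):
    """Generation step: render the model in natural language (template-based here)."""
    t0, label0 = model["orientation"]
    parts = [f"The session started at t={t0:.0f}s ({label0})."]
    parts += [f"At t={t:.0f}s the activity changed to {label}." for t, label in model["episodes"]]
    t1, label1 = model["resolution"]
    parts.append(f"It ended at t={t1:.0f}s, {label1}.")
    return " ".join(parts)

samples = [Sample(0, 0.1), Sample(60, 1.2), Sample(1800, 1.1), Sample(3600, 0.2)]
print(render(build_story_model(abstract_events(samples))))
```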
6

Robot Motion and Task Learning with Error Recovery

Chang, Guoting January 2013
The ability to learn is essential for robots to function and perform services within a dynamic human environment. Robot programming by demonstration facilitates learning through a human teacher without the need to develop new code for each task that the robot performs. In order for learning to be generalizable, the robot needs to be able to grasp the underlying structure of the task being learned. This requires appropriate knowledge abstraction and representation. The goal of this thesis is to develop a learning by imitation system that abstracts knowledge of human demonstrations of a task and represents the abstracted knowledge in a hierarchical framework. The learning by imitation system is capable of performing both action and object recognition based on video stream data at the lower level of the hierarchy, while the sequence of actions and object states observed is reconstructed at the higher level of the hierarchy in order to form a coherent representation of the task. Furthermore, error recovery capabilities are included in the learning by imitation system to improve robustness to unexpected situations during task execution. The first part of the thesis focuses on motion learning to allow the robot both to recognize the actions for task representation at the higher level of the hierarchy and to perform the actions to imitate the task. In order to efficiently learn actions, the actions are segmented into meaningful atomic units called motion primitives. These motion primitives are then modeled using dynamic movement primitives (DMPs), a dynamical system model that can robustly generate motion trajectories to arbitrary goal positions while maintaining the overall shape of the demonstrated motion trajectory. The DMPs also contain weight parameters that reflect the shape of the motion trajectory. These weight parameters are clustered using affinity propagation (AP), an efficient exemplar-based clustering algorithm, in order to determine groups of similar motion primitives and thus perform motion recognition. The combination of DMPs and AP was experimentally verified on two separate motion data sets for its ability to recognize and generate motion primitives. The second part of the thesis outlines how the task representation is created and used for imitating observed tasks. This includes object and object state recognition using simple computer vision techniques as well as the automatic construction of a Petri net (PN) model to describe an observed task. Tasks are composed of a sequence of actions that have specific pre-conditions, i.e. object states required before the action can be performed, and post-conditions, i.e. object states that result from the action. PNs inherently encode the pre-conditions and post-conditions of a particular event, i.e. an action, and can model tasks as a coherent sequence of actions and object states. In addition, PNs are very flexible in modeling a variety of tasks, including tasks that involve both sequential and parallel components. The automatic PN creation process has been tested on both a sequential two-block stacking task and a three-block stacking task involving both sequential and parallel components. The PN provides a meaningful representation of the observed tasks that can be used by a robot to imitate them. Lastly, error recovery capabilities are added to the learning by imitation system in order to allow the robot to readjust the sequence of actions needed during task execution. The error recovery component is able to deal with two types of errors: unexpected but known situations, and unexpected, unknown situations. In the case of unexpected but known situations, the learning system is able to search through the PN to identify the known situation and the actions needed to complete the task. This ability is useful not only for error recovery from known situations, but also for human-robot collaboration, where the human unexpectedly helps to complete part of the task. In the case of situations that are both unexpected and unknown, the robot prompts the human demonstrator to teach it how to recover from the error to a known state. By observing the error recovery procedure and automatically extending the PN with the error recovery information, the situation encountered becomes part of the known situations and the robot is able to autonomously recover from the error in the future. This error recovery approach was tested successfully on errors encountered during the three-block stacking task.
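The motion-recognition step described above (clustering DMP weight vectors with affinity propagation) can be sketched in a few lines with scikit-learn. This is a minimal sketch in which synthetic weight vectors stand in for weights fitted from real demonstrations; it is not the thesis's implementation.

```python
# Minimal sketch: grouping motion primitives by clustering their DMP weight
# vectors with affinity propagation. The weight vectors here are synthetic.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)

# Pretend each demonstrated primitive was encoded as a 20-dimensional DMP
# forcing-term weight vector; build two loose groups of such vectors.
reach_like = rng.normal(loc=0.0, scale=0.3, size=(10, 20))
lift_like = rng.normal(loc=2.0, scale=0.3, size=(10, 20))
weights = np.vstack([reach_like, lift_like])

ap = AffinityPropagation(random_state=0).fit(weights)
print("primitive groups found:", len(ap.cluster_centers_indices_))
print("labels:", ap.labels_)

# A new demonstration is recognised by assigning it to the nearest exemplar.
new_primitive = rng.normal(loc=2.0, scale=0.3, size=(1, 20))
print("recognised as group:", ap.predict(new_primitive)[0])
```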
7

Projeto e implementação de módulo TAOS-Graph da ferramenta iTAOS para análise e modelagem da tarefa. / Design and implementation of the TAOS-Graph module of the iTAOS tool for task analysis and modeling.

MEDEIROS, Francisco Petrônio Alencar de. 27 August 2018
This work presents the design and implementation of the TAOS-Graph module of the iTAOS tool. iTAOS is a graphical tool that implements the TAOS (Task and Action Oriented System) formalism; it accompanies the interface designer during the task description and analysis phase of an interface development process, verifying the completeness and consistency of the representation. TAOS-Graph was developed using MEDITE, a model-driven, task-based methodology for building ergonomic interfaces. The artifacts produced at the end of each stage of the TAOS-Graph development process were the TAOS description of the task, the conceptual specification of the interaction, and the interface code. As the methodology recommends, the iTAOS tool was inspected for conformance with parts 14 (Menus), 16 (Direct manipulation) and 17 (Forms) of the ISO 9241 standard.
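The sketch below is not the TAOS formalism itself; it only illustrates, under assumed names and structure, the kind of completeness check a task-description tool can run: every composite task should decompose into at least one subtask or terminal action.

```python
# Minimal sketch of a completeness check over a hypothetical task-description tree.
def incomplete_tasks(node, path=""):
    """Return the paths of composite tasks that have no decomposition."""
    here = f"{path}/{node['name']}"
    problems = []
    if node.get("kind") == "task" and not node.get("children"):
        problems.append(here)
    for child in node.get("children", []):
        problems.extend(incomplete_tasks(child, here))
    return problems

task_description = {
    "name": "fill in form", "kind": "task", "children": [
        {"name": "enter name", "kind": "action"},
        {"name": "validate", "kind": "task", "children": []},  # left incomplete on purpose
    ],
}
print(incomplete_tasks(task_description))  # ['/fill in form/validate']
```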
8

Personalized Access to Contextual Information by using an Assistant for Query Reformulation / Personnalisation et Adaptation de L’accès à L’information Contextuelle en utilisant un Assistant Intelligent

Asfari, Ounas 19 September 2011
Access to relevant information adapted to the needs and context of the user is a real challenge in Web search, owing to the growth of heterogeneous resources and varied data on the Web. There are always specific needs behind a user query; queries are often ambiguous and short, so they must be handled intelligently to satisfy the user's needs. To improve user query processing, we present a context-based hybrid method for query expansion that automatically generates reformulated queries in order to guide the information retrieval system toward context-based, personalized results that depend on the user profile and the user's context. Here, we consider the user context to be the actual state of the task the user is undertaking when the information retrieval process takes place. State Reformulated Queries (SRQ) are thus generated according to the task states and the user profile, which is constructed by considering concepts related to existing concepts in a domain ontology. Using a task model, we show that it is possible to determine the user's current task automatically. We propose an evaluation procedure based on three factors, covering the expansion terms and the results returned for the reformulated queries, and present an experimental study that quantifies the improvement provided by our system compared to querying a search engine directly without reformulation, and compared to personalized reformulation based on the user profile alone. The preliminary results demonstrate the relevance of our approach in certain contexts.
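A minimal sketch of the state-reformulated-query idea: the initial query is expanded with a few terms drawn from the user's current task state and profile. The state names, term lists, and selection rule are illustrative assumptions, not the SRQ generation method developed in the thesis.

```python
# Minimal sketch: expanding a query with terms tied to the current task state
# and the user profile. Term lists and states are illustrative.
def reformulate(query, task_state, profile, state_terms, max_expansion=3):
    """Append a few expansion terms chosen from the task state and the profile."""
    candidates = state_terms.get(task_state, []) + profile.get("interests", [])
    expansion = [t for t in candidates if t not in query.split()][:max_expansion]
    return query + " " + " ".join(expansion)

state_terms = {
    "booking": ["reservation", "schedule", "fare"],
    "writing_report": ["template", "summary", "figures"],
}
profile = {"interests": ["paris", "museums"]}

print(reformulate("train tickets", "booking", profile, state_terms))
# -> train tickets reservation schedule fare
```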
