1

Intention Recognition in a Strategic Environment

Akridge, Cameron 01 January 2005
This thesis investigates an intelligent system that can infer, in real time, the course of action of a human opponent in a competitive environment. Such an achievement would indicate that machines can not only interpret human behavior as it happens, but also predict the future course of action that a human might take. The thesis first examines several different applications of intention recognition, describes the approach of Template-Based Interpretation (TBI), and details the process of creating an efficient and accurate intention recognition system. The chosen domain is chess, and the system's objective is to discern the opponent's strategy. It uses the board positions and other relevant data of the current state to gain an understanding of the opposition's movement patterns.
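The abstract contains no code; as a rough illustration of the template-matching idea it describes, the following Python sketch scores hand-built strategy templates against features extracted from the current board state. All names, features, and the threshold are hypothetical, not taken from the thesis:

```python
# Minimal sketch of template-based intention recognition (hypothetical
# names; not the thesis's actual implementation). Each template records
# the features a strategy would produce on the board, and its score is
# the fraction of those features present in the current observation.

def match_score(template, features):
    """Fraction of a template's required features seen in the observation."""
    required = template["features"]
    hits = sum(1 for f in required if features.get(f))
    return hits / len(required)

def recognize_intention(templates, features, threshold=0.6):
    """Return the best-matching strategy above a confidence threshold."""
    best = max(templates, key=lambda t: match_score(t, features))
    score = match_score(best, features)
    return (best["strategy"], score) if score >= threshold else (None, score)

# Toy usage: features extracted from the current board position.
templates = [
    {"strategy": "kingside_attack",
     "features": ["rook_lift", "pawn_storm", "king_castled_short"]},
    {"strategy": "queenside_expansion",
     "features": ["minority_attack", "open_c_file", "knight_outpost"]},
]
observed = {"rook_lift": True, "pawn_storm": True, "king_castled_short": False}
print(recognize_intention(templates, observed))  # ('kingside_attack', 0.66...)
```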
2

Intelligent Telerobotic Assistance For Enhancing Manipulation Capabilities Of Persons With Disabilities

Yu, Wentao 11 August 2004
This dissertation addresses the development of a telemanipulation system that uses intelligent mapping from a haptic user interface to a remote manipulator to help maximize the manipulation capabilities of persons with disabilities. This mapping, referred to as an assistance function, is determined on the basis of an environmental model or real-time sensory data and guides the motion of a telerobotic manipulator while performing a given task. Human input is enhanced rather than superseded by the computer. This is particularly useful when the user has a restricted range of movement due to a disability such as muscular dystrophy, a stroke, or any form of pathological tremor. In a telemanipulation system, assistance such as variable position/velocity mapping or virtual fixtures can improve manipulation capability and dexterity. Conventionally, such assistance is based on environment information alone, without knowledge of the user's motion intention. In this dissertation, the user's motion intention is combined with real-time environment information to apply the appropriate assistance. If the current task is following a path, a virtual fixture orthogonal to the path is applied. Similarly, if the task is to align the end-effector with a target, an attractive force field is generated. To recognize the user's motion intention, a Hidden Markov Model (HMM) is developed. The dissertation describes HMM-based skill learning and its application in a motion therapy system in which motion along a labyrinth is controlled using a haptic interface. Two persons with upper-limb disabilities were trained using this virtual therapist. Performance measures before and after the therapy training, including trajectory smoothness, distance ratio, time taken, tremor, and impact forces, are presented. The results demonstrate that the forms of assistance provided reduced execution times and improved performance on the chosen tasks for the disabled individuals. In addition, the results suggest that haptic rendering capabilities, including force feedback, offer particular benefit to motion-impaired users by augmenting their performance on job-related tasks.
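As an illustration of how an HMM can be used to recognize motion intention, the sketch below scores an observation sequence against one hand-specified HMM per candidate intention using the scaled forward algorithm. All probabilities, state structure, and intention names are invented for illustration and are not from the dissertation:

```python
import numpy as np

# Sketch: one HMM per candidate intention; classify an observation
# sequence by the model that assigns it the highest likelihood.

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM).
    pi: (S,) initial probs; A: (S,S) transitions; B: (S,O) emissions."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); loglik = np.log(c); alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum(); loglik += np.log(c); alpha /= c  # rescale to avoid underflow
    return loglik

# Two toy intention models over 3 hidden states and 2 observable
# symbols (0 = moving along the path, 1 = slowing near a target).
pi = np.array([1.0, 0.0, 0.0])
A  = np.array([[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]])
B_follow = np.array([[0.9, 0.1], [0.9, 0.1], [0.8, 0.2]])
B_align  = np.array([[0.6, 0.4], [0.3, 0.7], [0.1, 0.9]])

obs = [0, 0, 1, 1, 1]  # user starts on the path, then slows near a target
scores = {"follow_path": forward_loglik(pi, A, B_follow, obs),
          "align_target": forward_loglik(pi, A, B_align, obs)}
print(max(scores, key=scores.get))  # -> 'align_target'
```

In a full system, one model per intention would be trained from demonstrations, and the recognized intention would select the assistance (virtual fixture or attractive force field) to apply.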
3

Anticipation of Human Movements: Analyzing Human Action and Intention: An Experimental Serious Game Approach

Kurt, Ugur Halis January 2018
What is the difference between intention and action? To start answering this complex question, we created a serious game that allows us to capture a large quantity of experimental data and study human behavior. In the game, users catch flies, presented to the left or to the right of the screen, by dragging the tongue of a frog across a touchscreen monitor. The movement of interest has a predefined starting point (the frog) and necessarily transits through a via-point (a narrow corridor) before it proceeds in the chosen left/right direction. Meanwhile, the game collects data about the movement performed by the player. This work focuses on the analysis of such movements. We try to find criteria that allow us to predict, as early as possible, the direction (left/right) chosen by the player. This is done by analyzing kinematic information (e.g., trajectory and velocity profile). Processing such data with the dynamical movement primitives approach also yields further criteria that support a classification of human movement. Our preliminary results show that, considered individually, participants tend to develop and use stereotypical behaviors that can be used to predict the subject's intention to reach in one direction or the other soon after movement onset.
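A minimal sketch of the early-prediction idea, on synthetic data: the mean lateral velocity over the first few samples after movement onset serves as a single feature for guessing the chosen direction. The trajectory model, feature, and numbers are illustrative assumptions, not the study's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_trajectory(direction, n=50, noise=0.1):
    """Toy touchscreen trace: y advances steadily through the corridor,
    x drifts toward the chosen side (direction: -1 left, +1 right)."""
    t = np.linspace(0, 1, n)
    x = direction * 2 * t + rng.normal(0, noise, n)
    y = 10 * t
    return np.column_stack([x, y])

def predict_direction(traj, k=10):
    """Guess the direction from the first k samples only:
    the sign of the mean lateral velocity."""
    vx = np.diff(traj[:k, 0])
    return 1 if vx.mean() > 0 else -1

trials = [(d, synthetic_trajectory(d)) for d in rng.choice([-1, 1], 200)]
accuracy = np.mean([predict_direction(traj) == d for d, traj in trials])
print(f"early-prediction accuracy on toy data: {accuracy:.2f}")
```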
4

On Advanced Template-based Interpretation As Applied To Intention Recognition In A Strategic Environment

Akridge, Cameron 01 January 2007
An area of study that has received much attention over the past few decades is simulation involving threat assessment in military scenarios. Recently, much research has emerged concerning the recognition of troop movements and formations in non-combat simulations, and there have been efforts toward the detection and assessment of various types of malicious intentions. One such work, by Akridge, addressed strategic intention recognition but fell short on tactics that could not be detected without somehow manipulating the environment. The aim of this thesis is therefore to address the problem of recognizing an opponent's intent in a strategic environment where the system can think ahead in time to see the agent's plan. To approach the problem, a structured form of knowledge called Template-Based Interpretation is borrowed from the work of others and enhanced to reason in a temporally dynamic simulation.
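As a rough sketch of what adding lookahead to template-based interpretation could look like (hypothetical structure, not the thesis's implementation): states are modeled as feature sets, candidate opponent moves add the features they would create, and a template is scored against the current state and every one-move future, so a plan can match before it is fully executed:

```python
# Illustrative only: feature names and move effects are invented.

def score(template, state):
    """Fraction of the template's features present in a state."""
    return len(template["features"] & state) / len(template["features"])

def lookahead_score(template, state, moves):
    """Best match over the current state and every one-move future."""
    futures = [state | effects for effects in moves]
    return max(score(template, s) for s in [state] + futures)

templates = [
    {"strategy": "fork_attack",
     "features": {"knight_centralized", "fork_threat"}},
    {"strategy": "pawn_promotion",
     "features": {"passed_pawn", "pawn_on_7th"}},
]
state = {"knight_centralized", "passed_pawn"}
moves = [{"fork_threat"}, {"pawn_on_6th"}]  # effects of candidate moves

for t in templates:
    print(t["strategy"], lookahead_score(t, state, moves))
# fork_attack reaches 1.0 only by looking one move ahead
```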
5

Advancing Deep Learning-based Driver Intention Recognition: Towards a safe integration framework of high-risk AI systems

Vellenga, Koen January 2024
Progress in artificial intelligence (AI), onboard computation capabilities, and the integration of advanced sensors in cars have facilitated the development of Advanced Driver Assistance Systems (ADAS). These systems aim to continuously minimize human driving errors. An example application of an ADAS is to support a human driver by indicating whether an intended driving maneuver is safe to pursue given the current state of the driving environment. One of the components enabling such an ADAS is recognizing the driver's intentions. Driver intention recognition (DIR) concerns the identification of the driving maneuver a driver aspires to perform in the near future, commonly spanning a few seconds. A challenging aspect of integrating such a system into a car is the ability of the ADAS to handle unseen scenarios. Any AI-based system deployed in an environment where mistakes can cause harm to human beings is considered a high-risk AI system, and upcoming AI regulations require a car manufacturer to motivate the design, the performance-complexity trade-off, and the understanding of potential blind spots of such a system. Therefore, this licentiate thesis focuses on AI-based DIR systems and presents an overview of the current state of the DIR research field. Additionally, experimental results are included that demonstrate the process of empirically motivating and evaluating the design of deep neural networks for DIR. To avoid reliance on sequential Monte Carlo sampling techniques to produce uncertainty estimates, we evaluated a surrogate model that reproduces the uncertainty estimations learned by probabilistic deep-learning models. Lastly, to contextualize the results within the broader scope of safely integrating future high-risk AI-based systems into a car, we propose a foundational conceptual framework. / One of three constituent papers (for the others, see the section List of papers): Vellenga, Koen, H. Joe Steinhauer et al. (2024). "Designing deep neural networks for driver intention recognition". Under submission.
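The surrogate idea can be sketched in a few lines: a probabilistic teacher (here a toy ensemble standing in for the thesis's probabilistic deep-learning models) produces a predictive mean and variance per input, and a single deterministic network learns to reproduce both, so uncertainty is available in one forward pass with no Monte Carlo sampling. Everything below is an illustrative toy, not the thesis's setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier, MLPRegressor

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train = X[:1500], X[1500:], y[:1500]

# "Probabilistic" teacher: an ensemble of differently seeded MLPs whose
# disagreement plays the role of the predictive uncertainty.
ensemble = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                          random_state=s).fit(X_train, y_train)
            for s in range(5)]
probs = np.stack([m.predict_proba(X_train)[:, 1] for m in ensemble])
targets = np.column_stack([probs.mean(axis=0), probs.var(axis=0)])

# Surrogate: one deterministic model regressing [mean, variance].
surrogate = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500,
                         random_state=0).fit(X_train, targets)

mean_var = surrogate.predict(X_test)  # one forward pass, no sampling
print("predicted mean/uncertainty for first test input:", mean_var[0])
```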
6

Autonomous Robotic Escort Incorporating Motion Prediction with Human Intention

Conte, Dean Edward 02 March 2021
This thesis presents a framework for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses accurate path prediction incorporating human intention to position the robot in front of the human while walking. Human intention is inferred from the head pose, an effective and proven implicit indicator of intention, and fused with conventional physics-based motion prediction. The human trajectory is estimated and predicted using a particle filter, because human motion is nonlinear and non-Gaussian, and the robot control action is determined from the predicted human pose, allowing for anticipative autonomous escorting. Experimental analysis shows that incorporating the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with an omnidirectional mobile robotic platform shows escorting up to 50% more accurate than conventional techniques, while achieving a 97% success rate. / Master of Science / This thesis presents a method for a mobile robot to escort a human to their destination successfully and efficiently. The proposed technique uses human intention to predict the walking path, allowing the robot to stay in front of the human while walking. Human intention is inferred from the head direction, an effective and proven indicator of intention, and is combined with conventional motion prediction. The robot motion is then determined from the predicted human position, allowing for anticipative autonomous escorting. Experimental analysis shows that incorporating the proposed human intention model reduces human position prediction error by approximately 35% when turning. Furthermore, experimental validation with a mobile robotic platform shows escorting up to 50% more accurate than conventional techniques, while achieving a 97% success rate. The unique escorting interaction method proposed has applications such as touch-less shopping cart robots, exercise companions, collaborative rescue robots, and sanitary transportation for hospitals.
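A minimal sketch of the fusion idea: a particle filter propagates position and heading under a constant-speed motion model, while the measured head pose nudges each particle's predicted heading, since gaze tends to precede a turn. All parameters and the toy scenario are assumptions for illustration, not the thesis's values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, speed = 500, 0.1, 1.2

particles = np.zeros((N, 3))  # columns: [x, y, heading]
weights = np.ones(N) / N

def predict(particles, head_pose, alpha=0.3):
    """Blend each particle's heading toward the observed head pose,
    then advance at constant speed with process noise."""
    particles[:, 2] += alpha * (head_pose - particles[:, 2])
    particles[:, 2] += rng.normal(0, 0.05, N)
    particles[:, 0] += speed * dt * np.cos(particles[:, 2])
    particles[:, 1] += speed * dt * np.sin(particles[:, 2])

def update(particles, weights, pos_meas, sigma=0.2):
    """Reweight by position likelihood, then resample."""
    d2 = ((particles[:, :2] - pos_meas) ** 2).sum(axis=1)
    weights[:] = np.exp(-d2 / (2 * sigma ** 2)) + 1e-12
    weights /= weights.sum()
    idx = rng.choice(N, N, p=weights)
    particles[:] = particles[idx]
    weights[:] = 1.0 / N

# Toy run: the person walks straight along x, and their head turns
# left one step before the body does at step 16.
for k in range(30):
    head_pose = 0.0 if k < 15 else np.pi / 2
    predict(particles, head_pose)
    pos = np.array([min(k, 16), max(0, k - 16)]) * speed * dt
    update(particles, weights, pos)

print("estimated state [x, y, heading]:", particles.mean(axis=0))
```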
7

Inferring intentions through state representations in cooperative human-robot environments

Schlenoff, Craig 30 June 2014
Humans and robots working safely and seamlessly together in a cooperative environment is one of the future goals of the robotics community. When humans and robots can work together in the same space, a whole class of tasks becomes amenable to automation, ranging from collaborative assembly to parts and material handling to delivery. Proposed standards exist for collaborative human-robot safety, but they focus on limiting the approach distances and contact forces between the human and the robot. These standards rely on reactive processes based only on current sensor readings; they do not consider future states or task-relevant information. A key enabler for human-robot safety in cooperative environments is intention recognition, in which the robot attempts to understand the intention of an agent (the human) by recognizing some or all of the agent's actions in order to predict the human's future actions. We present an approach to inferring the intention of an agent in the environment via the recognition and representation of state information.
This approach to intention recognition differs from many ontology-based approaches in the literature, which primarily focus on activity (as opposed to state) recognition and then use a form of abduction to provide explanations for observations. We infer detailed state relationships from observations using Region Connection Calculus 8 (RCC-8) and then infer the overall state relationships that are true at a given time. Once a sequence of state relationships has been determined, we use a Bayesian approach to associate those states with likely overall intentions and to determine the next action (and associated state) that is likely to occur. We compare the output of the Intention Recognition Algorithm to the results of an experiment in which human subjects attempted to recognize the same intentions in a manufacturing kitting domain. The results show that the Intention Recognition Algorithm, in almost every case, performed as well as, if not better than, a human performing the same activity.
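The Bayesian step can be sketched compactly: once the RCC-8 layer has produced a sequence of symbolic state relations, a recursive Bayes update associates the observed states with candidate intentions. The state names and probabilities below are hypothetical (loosely styled after the kitting domain), not the thesis's model:

```python
# Illustrative recursive Bayes update over two toy intentions, given a
# sequence of symbolic states recognized from RCC-8 relations.

states = ["gripper_near_part", "part_in_gripper", "part_above_kit"]

# P(state | intention), hand-specified for illustration.
likelihood = {
    "fill_kit":  {"gripper_near_part": 0.7, "part_in_gripper": 0.8,
                  "part_above_kit": 0.9},
    "empty_kit": {"gripper_near_part": 0.5, "part_in_gripper": 0.6,
                  "part_above_kit": 0.1},
}
posterior = {"fill_kit": 0.5, "empty_kit": 0.5}  # uniform prior

for s in states:  # states observed over time
    for intent in posterior:
        posterior[intent] *= likelihood[intent][s]
    z = sum(posterior.values())
    posterior = {i: p / z for i, p in posterior.items()}
    print(s, {i: round(p, 3) for i, p in posterior.items()})
# belief shifts sharply toward 'fill_kit' once the part is above the kit
```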
8

Eye Movement Analysis for Activity Recognition in Everyday Situations

Gustafsson, Anton January 2018
The increasing number of smart devices in our everyday environment has created new problems within human-computer interaction, such as how we humans are supposed to interact with these devices efficiently and with ease. Context-aware systems are a possible candidate for solving this problem: if a system could automatically detect people's activities and intentions, it could act accordingly without any explicit input from the user. Eyes have previously been shown to be a rich source of information about a person's cognitive state and current activity, so they could be a viable modality from which to extract such information. In this thesis, we examine the possibility of detecting human activity using a low-cost, home-built monocular eye tracker. An experiment was conducted in which participants performed everyday activities in a kitchen while their eye movements were recorded. After the experiment, the data was annotated, preprocessed, and classified using multilayer perceptron and random forest classifiers. Even though the collected data set was small, the results showed a recognition rate between 30% and 40% depending on the classifier used. This confirms previous work showing that activity recognition from eye movement data is possible, but that achieving high accuracy remains challenging.
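As an illustration of the classification stage, the sketch below feeds hypothetical per-window gaze features (fixation and saccade statistics) to the same two classifier families compared in the thesis; the data is synthetic and the feature set is an assumption, not the thesis's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 300
# Hypothetical window features: mean fixation duration, saccade rate,
# saccade amplitude, blink rate, shifted slightly per activity class.
y = rng.integers(0, 4, n)                     # four kitchen activities
X = rng.normal(0, 1, (n, 4)) + y[:, None] * 0.4

for clf in (MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, f"accuracy: {acc:.2f}")
```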
