21

Animism and Anthropomorphism in Living Spaces : Designing for 'Life' in spatial interactions

Menon, Arjun Rajendran January 2020 (has links)
Integrating animism and anthropomorphism into technology, and into our interactions with that technology, allows for the design of better affordances, easier comprehension, and more intricate interactions between humans and technological artefacts. This study seeks to understand, through qualitative research, the circumstances and contexts under which humans tend to form emotional bonds with non-human entities and ascribe life-like or human-like qualities to them. It also investigates whether animism and anthropomorphism apply to abstract entities such as a space, through ‘constructive design-based research’ and ‘thing-centered design’ methodologies. The investigations yield several general insights that are useful to designers attempting to incorporate animism and anthropomorphism into their work. The prototyping work also produced a prototype space that can serve as a foundation for future research.
22

Interactive concept acquisition for embodied artificial agents

de Greeff, Joachim January 2013 (has links)
An important capacity that is still lacking in intelligent systems such as robots is the ability to use concepts in a human-like manner. Indeed, the use of concepts has been recognised as being fundamental to a wide range of cognitive skills, including classification, reasoning and memory. Intricately intertwined with language, concepts are at the core of human cognition; but despite a large body of research, their functioning is as yet not well understood. Nevertheless, it remains clear that if intelligent systems are to achieve a level of cognition comparable to humans, they will have to possess the ability to deal with the fundamental role that concepts play in cognition. A promising manner in which conceptual knowledge can be acquired by an intelligent system is through ongoing, incremental development. In this view, a system is situated in the world and gradually acquires skills and knowledge through interaction with its social and physical environment. Important in this regard is the notion that cognition is embodied: both the physical body and the environment shape the manner in which cognition, including the learning and use of concepts, operates. By actively partaking in the interaction, an intelligent system can influence its learning experience so as to make it more effective. This work presents experiments which illustrate how these notions of interaction and embodiment can influence the learning process of artificial systems. It shows how an artificial agent can benefit from interactive learning: rather than passively absorbing knowledge, the system actively partakes in its learning experience, yielding improved learning. Next, the influence of embodiment on perception is further explored in a case study concerning colour perception, which results in an alternative explanation for why human colour experience is very similar amongst individuals despite physiological differences. Finally, experiments in which an artificial agent is embodied in a novel robot tailored for human-robot interaction illustrate how active strategies are also beneficial in an HRI setting in which the robot learns from a human teacher.
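To illustrate the interactive-learning idea described in this abstract, the following minimal Python sketch lets an agent choose which unlabelled item to ask a teacher about, querying the item it is least certain of. The concept representation (centroids over labelled examples) and the uncertainty measure are illustrative assumptions, not the model used in the thesis.

import math

# Illustrative sketch of uncertainty-driven concept acquisition (not the thesis's model).
# Each "concept" is represented by the centroid of the examples labelled with it.
class ConceptLearner:
    def __init__(self):
        self.examples = {}          # label -> list of feature vectors

    def learn(self, features, label):
        self.examples.setdefault(label, []).append(features)

    def centroid(self, label):
        pts = self.examples[label]
        return [sum(dim) / len(pts) for dim in zip(*pts)]

    def uncertainty(self, features):
        # A small margin between the two nearest concept centroids means high uncertainty.
        dists = sorted(math.dist(features, self.centroid(lbl)) for lbl in self.examples)
        if len(dists) < 2:
            return float("inf")
        return 1.0 / (dists[1] - dists[0] + 1e-9)

def active_learning_loop(learner, unlabelled, teacher, rounds=10):
    """The agent chooses what to ask about, rather than passively receiving examples."""
    for _ in range(min(rounds, len(unlabelled))):
        query = max(unlabelled, key=learner.uncertainty)   # most uncertain item
        unlabelled.remove(query)
        learner.learn(query, teacher(query))               # teacher supplies the label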
23

Human Robot Interaction for Autonomous Systems in Industrial Environments

Chadalavada, Ravi Teja January 2016 (has links)
The upcoming generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional Automatic Guided Vehicles (AGVs), which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers, causing stress and rendering joint use of the available space inefficient. This work addresses the problem of providing information about a service robot’s intentions to the humans co-populating its environment. The overall goal is to make humans feel safer and more comfortable, even when they are in the close vicinity of the robot. A spatial Augmented Reality (AR) system for robot intention communication was developed by equipping a robotic forklift with an LED projector that projects proxemic information onto the shared floor space, visualizing the robot’s internal state and intentions. The robot’s ability to communicate its intentions was evaluated in realistic situations in which test subjects met the robotic forklift, using a Likert-scale-based evaluation that also included comparisons to human-human intention communication. The results show that adding even simple information, such as the trajectory and the space the robot will occupy in the near future, effectively improves human response to the robot. This kind of synergistic human-robot interaction in a work environment is expected to increase the robot’s acceptability in industry.
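As a rough illustration of the projection step described above, the sketch below maps a planned trajectory from the robot's floor coordinates into projector pixel coordinates through a pre-calibrated homography, so that it could be drawn onto the shared floor space. The homography values and helper names are assumptions for the sake of the example, not the system built in the thesis.

import numpy as np

# Hypothetical 3x3 homography obtained from a one-off floor/projector calibration.
FLOOR_TO_PROJECTOR = np.array([
    [120.0,   0.0, 640.0],
    [  0.0, 120.0, 360.0],
    [  0.0,   0.0,   1.0],
])

def project_trajectory(points_floor):
    """Map (x, y) floor points in metres to projector pixel coordinates."""
    pts = np.hstack([np.asarray(points_floor, dtype=float),
                     np.ones((len(points_floor), 1))])      # homogeneous coordinates
    pix = (FLOOR_TO_PROJECTOR @ pts.T).T
    return pix[:, :2] / pix[:, 2:3]                          # normalise by w

# Example: the next two metres of the planned path, sampled every 0.5 m.
planned_path = [(0.0, 0.5 * i) for i in range(5)]
print(project_trajectory(planned_path))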
24

Human-like Crawling for Humanoid Robots : Gait Evaluation on the NAO robot

Aspernäs, Andreas January 2018 (has links)
Human-robot interaction (HRI) is the study of how we as humans interact and communicate with robots, and one of its subfields works on improving the collaboration between humans and robots. We need robots that are more user-friendly and easier to understand, and a key aspect of this is human-like movement and behaviour. This project targets a specific set of motions, namely locomotion, and tests them on the humanoid NAO robot. A human-like crawling gait was developed for the NAO robot and compared to the built-in walking gait through three kinds of experiments: the first compared the speed of the two gaits, the second estimated their stability, and the third examined how long they can operate, by measuring power consumption and joint temperatures. The results showed that the robot was significantly slower when crawling than when walking, and that when stationary the robot was more stable standing than on all fours. Power consumption remained essentially the same, but the crawling gait ended up with a shorter operational time due to a higher temperature increase in the joints. While the crawling gait has the benefit of a lower profile than the walking gait, and could therefore pass more easily under low-hanging obstacles, it has major issues that need to be addressed before it becomes a viable solution. These are therefore important factors to consider when developing gaits and designing robots, and they motivate further research to try to solve these problems.
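The third experiment's operational-time reasoning can be illustrated with a simple extrapolation: if a joint heats at a roughly constant rate, its remaining operating time is the headroom to the thermal cutoff divided by that rate. All numbers below are made-up placeholders, not measurements from the thesis.

def estimated_operating_time(start_temp_c, temp_after_test_c, test_minutes, cutoff_c):
    """Linear extrapolation of joint heating to a thermal cutoff (a simplification)."""
    rise_per_minute = (temp_after_test_c - start_temp_c) / test_minutes
    if rise_per_minute <= 0:
        return float("inf")                 # joint is not heating up
    return (cutoff_c - start_temp_c) / rise_per_minute

# Placeholder numbers: a joint heating faster while crawling than while walking.
print(estimated_operating_time(35.0, 55.0, 10.0, 75.0))   # crawling -> 20.0 minutes
print(estimated_operating_time(35.0, 45.0, 10.0, 75.0))   # walking  -> 40.0 minutes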
25

Designing an interface for a teleoperated vehicle which uses two cameras for navigation.

Rudqwist, Lucas January 2018 (has links)
The Swedish fire department has wanted a robot that can be sent into situations where it is too dangerous to send in firefighters, and a teleoperated vehicle is being developed for exactly this purpose. This thesis builds on research previously conducted within Human-Robot Interaction and interface design for teleoperated vehicles. In this study, a prototype was developed to simulate the experience of driving a teleoperated vehicle: it visualised the intended operator interface and simulated the operating experience. The development followed a User-Centered Design process and was evaluated with users. After the final evaluation, a design proposal based on previous research and user feedback was presented. The study discusses the issues discovered when designing an interface for a teleoperated vehicle that uses two cameras for maneuvering. One challenge was how to fully utilize the two video feeds and create an interplay between them. The evaluations showed that users could keep better focus with one larger, designated main feed and the second feed placed where it can easily be glanced at. Simplicity, and where to display sensor data, were also shown to be important aspects to consider when trying to lower the mental load on the operator. Further modifications to the vehicle and the interface have to be made to increase the operator's awareness and confidence when maneuvering the vehicle.
26

Robots that say 'no' : acquisition of linguistic behaviour in interaction games with humans

Förster, Frank January 2013 (has links)
Negation is a part of language that humans engage in virtually from the onset of speech. Negation appears at first glance to be harder to grasp than object or action labels, yet this thesis explores how this family of ‘concepts’ could be acquired in a meaningful way by a humanoid robot, based solely on unconstrained dialogue with a human conversation partner. The earliest forms of negation appear to be linked to the affective or motivational state of the speaker. We therefore developed a behavioural architecture which contains a motivational system. This motivational system feeds its state simultaneously to other subsystems for the purpose of symbol grounding, and it also leads to the expression of the robot’s motivational state via a facial display of emotions and motivationally congruent body behaviours. In order to achieve the grounding of negative words, we examine two different mechanisms which provide an alternative to the established grounding via ostension, with or without joint attention. Two large experiments were conducted to test these mechanisms. One of them is so-called negative intent interpretation; the other is a combination of physical and linguistic prohibition. Both mechanisms have been described in the literature on early child language development but have never been used in human-robot interaction for the purpose of symbol grounding. As we show, both mechanisms may operate simultaneously, and we can exclude neither of them as a potential ontogenetic origin of negation.
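A highly simplified sketch of the grounding idea described above: words heard from the human are associated with the robot's concurrent motivational (valence) state, so that words used mostly during prohibition or rejection episodes end up tied to negative affect. The representation and thresholds below are illustrative assumptions, not the behavioural architecture used in the thesis.

from collections import defaultdict

# word -> [count heard during negative-valence episodes, total count heard]
word_stats = defaultdict(lambda: [0, 0])

def hear(utterance, robot_valence):
    """Associate each heard word with the robot's motivational state at that moment."""
    for word in utterance.lower().split():
        neg, total = word_stats[word]
        word_stats[word] = [neg + (1 if robot_valence < 0 else 0), total + 1]

def grounded_as_negative(word, min_count=3, ratio=0.8):
    neg, total = word_stats[word]
    return total >= min_count and neg / total >= ratio

# Toy interaction history: prohibitions co-occur with negative valence.
hear("no no don't touch that", robot_valence=-0.7)
hear("no stop", robot_valence=-0.5)
hear("yes good robot", robot_valence=+0.6)
hear("no not that one", robot_valence=-0.4)
print(grounded_as_negative("no"))    # True under these toy counts
print(grounded_as_negative("good"))  # False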
27

Generating Engagement Behaviors in Human-Robot Interaction

Holroyd, Aaron 26 April 2011 (has links)
Based on a study of the engagement process between humans, I have developed models for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, adjacency pairs and backchannels. I have developed and validated a reusable Robot Operating System (ROS) module that supports engagement between a human and a humanoid robot by generating appropriate connection events. The module implements policies for adding gaze and pointing gestures to referring phrases (including deictic and anaphoric references), performing end-of-turn gazes, responding to human-initiated connection events, and maintaining engagement. The module also provides an abstract interface for receiving information from a collaboration manager using the Behavior Markup Language (BML) and exchanges information with a previously developed engagement recognition module. This thesis also describes a BML realizer that has been developed for use in robotic applications. Instead of the existing fixed-timing algorithms used with virtual agents, this realizer uses an event-driven architecture, based on Petri nets, to ensure that each behavior is synchronized in the presence of unpredictable variability in robot motor systems. The implementation is robot-independent, open source, and uses ROS.
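The event-driven synchronisation idea behind the realizer can be illustrated with a toy Petri-net-style construct: a sync point fires only once every awaited token has arrived, however long each motor action actually takes. This is a minimal sketch with assumed event names, not the realizer's actual ROS/BML interface.

class SyncPoint:
    """Fires a callback once every awaited event has delivered its token."""
    def __init__(self, awaited_events, on_fire):
        self.pending = set(awaited_events)
        self.on_fire = on_fire

    def deliver(self, event):
        self.pending.discard(event)
        if not self.pending:        # all input tokens present -> transition fires
            self.on_fire()

# Example: speech may only start once both the gaze shift and the pointing
# gesture have reported completion, whenever that happens to be.
start_speech = SyncPoint(
    {"gaze:end", "point:stroke_end"},
    on_fire=lambda: print("start speech"),
)

# Events arrive asynchronously from the robot's motor systems (simulated here).
start_speech.deliver("point:stroke_end")
start_speech.deliver("gaze:end")    # second token arrives -> speech starts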
28

Domain Concretization from Examples: Addressing Missing Domain Knowledge via Robust Planning

January 2020 (has links)
Most planning agents assume complete knowledge of the domain, which may not be the case in scenarios where certain domain knowledge is missing. This problem could be due to design flaws or could arise from domain ramifications or qualifications. In such cases, planning algorithms can produce highly undesirable behaviors. Planning with incomplete domain knowledge is more challenging than partial observability in the sense that the planning agent is unaware of the existence of such knowledge, in contrast to it being merely unobservable or partially observable; that is the difference between known unknowns and unknown unknowns. In this thesis, I introduce and formulate this as the problem of Domain Concretization, the inverse of the extensively studied problem of domain abstraction. I present a solution that starts from the incomplete domain model provided to the agent by the designer and uses teacher traces from human users to determine the candidate model set under a minimalistic model assumption. A robust plan is then generated to maximize the probability of success under the set of candidate models. In addition to a standard search formulation in the model space, I propose a sample-based search method and an online version of it to improve search time. The solution is evaluated on various International Planning Competition domains, where incompleteness was introduced by deleting certain predicates from the complete domain model, and is also tested in a robot simulation domain to illustrate its effectiveness in handling incomplete domain knowledge. The results show that the plans generated by the algorithm increase the plan success rate without significantly increasing action cost. / Dissertation/Thesis / Master's Thesis, Computer Science, 2020
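A bare-bones sketch of the robust-planning criterion described above: given a set of candidate concretizations of the incomplete model, choose the plan that succeeds in the largest fraction of them. The simulate-and-count formulation and the toy model representation are illustrative assumptions; the thesis works in a full planning formalism.

def success_rate(plan, candidate_models, simulate):
    """Fraction of candidate models in which executing the plan reaches the goal."""
    wins = sum(1 for model in candidate_models if simulate(plan, model))
    return wins / len(candidate_models)

def most_robust_plan(plans, candidate_models, simulate):
    """Choose the plan with the maximum probability of success over the model set."""
    return max(plans, key=lambda p: success_rate(p, candidate_models, simulate))

# Toy example: each candidate model is a set of 'forbidden' actions; a plan
# succeeds in a model if it avoids every forbidden action in that model.
candidate_models = [{"push"}, {"pull"}, set()]
plans = [["pick", "push", "place"], ["pick", "carry", "place"]]
simulate = lambda plan, model: not (set(plan) & model)
print(most_robust_plan(plans, candidate_models, simulate))   # ['pick', 'carry', 'place']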
29

Mission Specialist Human-Robot Interaction in Micro Unmanned Aerial Systems

Peschel, Joshua Michael August 2012 (has links)
This research investigated the Mission Specialist role in micro unmanned aerial systems (mUAS), informed by human-robot interaction (HRI) and technology findings. It resulted in the design of an interface that increased the individual performance of 26 untrained CBRN (chemical, biological, radiological, nuclear) responders during two field studies, and yielded formative observations for HRI in mUAS. Findings from the HRI literature suggested that a Mission Specialist requires a role-specific interface that shares visual common ground with the Pilot role and allows active control of the unmanned aerial vehicle (UAV) payload camera. Current interaction technology prohibits this, as responders view the same interface as the Pilot and give verbal directions for navigation and payload control. A review of interaction principles resulted in a synthesis of five design guidelines and a system architecture that were used to implement a Mission Specialist interface on an Apple iPad. The Shared Roles Model was used to model the mUAS human-robot team using three formal role descriptions synthesized from the literature (Flight Director, Pilot, and Mission Specialist). The Mission Specialist interface was evaluated through two separate field studies involving 26 CBRN experts who did not have mUAS experience. The studies consisted of 52 mission trials to surveil, evaluate, and capture imagery of a chemical train derailment incident staged at Disaster City. Results from the experimental study showed that when a Mission Specialist was able to actively control the UAV payload camera and verbally coordinate with the Pilot, greater role empowerment (confidence, comfort, and perceived best individual and team performance) was reported by a majority of participants for similar tasks; thus, a role-specific interface is preferred and should be used by untrained responders in mUAS, rather than having them view the same interface as the Pilot. Formative observations made during this research suggested that: i) establishing common ground in mUAS is both verbal and visual; ii) the type of coordination (active or passive) preferred by the Mission Specialist is affected by command-level experience and perceived responsibility for the robot; and iii) a separate Pilot role is necessary regardless of the preferred coordination type. This research is of importance to HRI and CBRN researchers and practitioners, as well as those in the fields of robotics, human-computer interaction, and artificial intelligence, because it found that a human Pilot role is necessary for assistance and understanding, and that there are hidden dependencies in the human-robot team that affect Mission Specialist performance.
30

The role of trust and relationships in human-robot social interaction

Wagner, Alan Richard 10 November 2009 (has links)
Can a robot understand a human's social behavior? Moreover, how should a robot act in response to a human's behavior? If the goals of artificial intelligence are to understand, imitate, and interact with human-level intelligence, then researchers must also explore the social underpinnings of this intellect. Our endeavor is buttressed by work in biology, neuroscience, social psychology and sociology. Initially developed by Kelley and Thibaut, social psychology's interdependence theory serves as a conceptual skeleton for the study of social situations, a computational process of social deliberation, and relationships (Kelley & Thibaut, 1978). We extend and expand their original work to explore the challenge of interaction with an embodied, situated robot. This dissertation investigates the use of outcome matrices as a means of computationally representing a robot's interactions. We develop algorithms that allow a robot to create these outcome matrices from perceptual information and then use them to reason about the characteristics of its interactive partner. This work goes on to introduce algorithms that afford a means of reasoning about a robot's relationships and the trustworthiness of its partners. Overall, this dissertation embodies a general, principled approach to human-robot interaction which yields a novel and scientifically meaningful treatment of topics such as trust and relationships.
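A minimal sketch of the outcome-matrix representation at the heart of this approach: each cell holds the (robot, partner) payoffs for a pair of actions, and a simple deliberation step picks the robot action with the best expected outcome under a belief about the partner's likely action, which is where estimates of trustworthiness would enter. The payoff values and belief distribution are illustrative assumptions.

# Outcome matrix: (robot_action, partner_action) -> (robot_outcome, partner_outcome).
# Payoffs here form a coordination game: joint cooperation pays best, but
# cooperating alone pays nothing (all values are illustrative).
outcome_matrix = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"):    (0, 2),
    ("defect",    "cooperate"): (2, 0),
    ("defect",    "defect"):    (2, 2),
}

def best_robot_action(matrix, partner_belief):
    """Pick the robot action with the highest expected robot outcome, given a
    probability distribution over the partner's next action."""
    robot_actions = {ra for ra, _ in matrix}
    def expected(robot_action):
        return sum(p * matrix[(robot_action, pa)][0]
                   for pa, p in partner_belief.items())
    return max(robot_actions, key=expected)

# A partner judged trustworthy (likely to cooperate) makes cooperation worthwhile.
print(best_robot_action(outcome_matrix, {"cooperate": 0.9, "defect": 0.1}))  # cooperate
print(best_robot_action(outcome_matrix, {"cooperate": 0.2, "defect": 0.8}))  # defect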
