1 |
A framework for characterization and planning of safe, comfortable, and customizable motion of assistive mobile robots. Gulati, Shilpa. 26 October 2011.
Assistive mobile robots, such as intelligent wheelchairs, that can navigate autonomously in response to high level commands from a user can greatly benefit people with cognitive and physical disabilities by increasing their mobility. In this work, we address the problem of safe, comfortable, and customizable motion planning of such assistive mobile robots.
We recognize that for an assistive robot to be acceptable to human users, its motion should be safe and comfortable. Further, different users should be able to customize the motion according to their comfort. We formalize the notion of motion comfort as a discomfort measure that can be minimized to compute comfortable trajectories, and identify several properties that a trajectory must have for the motion to be comfortable. We develop a motion planning framework for planning safe, comfortable, and customizable trajectories in small-scale space. This framework removes the limitations of existing methods for planning motion of a wheeled mobile robot moving on a plane, none of which can compute trajectories with all the properties necessary for comfort.
We formulate a discomfort cost functional as a weighted sum of total travel time, time integral of squared tangential jerk, and time integral of squared normal jerk. We then define the problem of safe and comfortable motion planning as that of minimizing this discomfort such that the trajectories satisfy boundary conditions on configuration and its higher derivatives, avoid obstacles, and satisfy constraints on curvature, speed, and acceleration. This description is transformed into a precise mathematical problem statement using a general nonlinear constrained optimization approach. The main idea is to formulate a well-posed infinite-dimensional optimization problem and use a conforming finite-element discretization to transform it into a finite-dimensional problem for a numerical solution.
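For concreteness, a hedged sketch of such a functional is given below; the symbols J, T, j_t, j_n and the weights w_T, w_t, w_n are illustrative notation chosen here, not the thesis's own.

```latex
% A possible form of the discomfort measure (notation assumed, not taken
% verbatim from the thesis): a weighted sum of the total travel time and
% the time integrals of squared tangential and normal jerk,
\[
  J \;=\; w_T\, T
  \;+\; w_t \int_0^{T} j_t(t)^2 \, dt
  \;+\; w_n \int_0^{T} j_n(t)^2 \, dt ,
\]
% minimized over trajectories that satisfy boundary conditions on the
% configuration and its higher derivatives, avoid obstacles, and respect
% bounds on curvature, speed, and acceleration.
```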
We also outline a method by which a user may customize the motion and present some guidelines for conducting human user studies to validate or refine the discomfort measure presented in this work.
Results show that our framework is capable of reliably planning trajectories that have all the properties necessary for comfort. We believe that our work is an important first step in developing autonomous assistive robots that are acceptable to human users.
|
2 |
Robot-Enhanced ABA Therapy: Exploring Emerging Artificial Intelligence Embedded Systems in Socially Assistive Robots for the Treatment of Autism. Calle Ortiz, Eduardo R. 08 August 2019.
In the last decade, socially assistive robots have been used in therapeutic treatments for individuals diagnosed with Autism Spectrum Disorders (ASDs). Preliminary studies have demonstrated positive results using the Penguin for Autism Behavioral Intervention (PABI), developed by the AIM Lab at WPI, to assist individuals diagnosed with ASDs in Applied Behavior Analysis (ABA) therapy treatments. In recent years, power-efficient embedded AI computing devices have emerged as a powerful technology, reducing the complexity of hardware platforms while supporting parallel models of computation. This new hardware architecture appears to be an important step toward improving socially assistive robots in ABA therapy. In this thesis, we explore the use of a power-efficient embedded AI computing device and pre-trained deep learning models to improve PABI's performance. Five main contributions are made in this work. First, a robot-enhanced ABA therapy framework is designed. Second, a multilayer pattern software architecture for the framework is explored. Third, a multifactorial experiment benchmarks the performance of three popular deep learning frameworks on the AI computing device. Experimental results show that some deep learning frameworks exploit the device's GPU, while others rely on its multicore ARM CPU for their parallel model of computation. Fourth, the robustness of state-of-the-art pre-trained deep learning models for feature extraction is analyzed and contrasted with the approach previously used by PABI. Experimental results indicate that pre-trained deep learning models outperform traditional approaches in some tasks; however, chaining several pre-trained models in one pipeline reduces overall accuracy. Fifth, a patient-tracking algorithm based on an identity-verification approach is developed to improve the autonomy, usability, and interaction of patients with the robot. Experimental results show that the developed algorithm has the potential to perform as well as PABI's previous algorithm, which was based on a deep learning classifier.
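As a rough illustration of the kind of embedding-based identity verification described in the fifth contribution (none of this comes from PABI's code; the embedding function, detections, and threshold are hypothetical placeholders), a tracker can keep the detection whose feature vector best matches the enrolled patient's reference embedding:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def track_patient(detections, reference_embedding, embed_fn, threshold=0.6):
    """Return the detection that best matches the enrolled patient, or None.

    detections:          cropped face images from the current frame
    reference_embedding: embedding of the enrolled patient's face
    embed_fn:            any pre-trained feature extractor (placeholder here)
    threshold:           minimum similarity to accept a match
    """
    best, best_score = None, threshold
    for det in detections:
        score = cosine_similarity(embed_fn(det), reference_embedding)
        if score > best_score:
            best, best_score = det, score
    return best
```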
|
3 |
Pepe: an adaptive robot that helps children with autism to plan and self-manage their day. Cañete Yaque, Raquel. January 2021.
Covid-19 has brought physical and mental challenges for all of us. However, these are even more pronounced for those with psychological conditions such as Autism Spectrum Disorder (ASD). One of the main challenges that parents of children with ASD faced during the pandemic was planning and structuring a daily routine for their kids. The disruption of the routine, together with the difficulty of combining work with the care of the child, resulted in behavioral problems, stress, and anxiety for both parents and children. This project focused on developing an adaptive robot that helps children with autism plan and self-manage their day, with the end goal of becoming more independent. With adaptability, agency, senses, and playfulness at the core of the design, Pepe is meant as a support tool for these children to use along the way. By collecting information on the child's performance, it is able to adapt its behavior to the child's (and parents') needs and desires, and therefore progress with the child. It builds on the principles of Positive Behavioral Support to prevent emotional crises by embracing a long-run negotiation process through which the child gets gradually closer to the end goal of self-autonomy. Designed around the accentuated needs of these children, it combines traditional and computational elements to make the most of the experience. The project included in-depth user research with parents and experts, an interdisciplinary design approach, and a prototyping phase in which a prototype was tested with children with ASD.
|
4 |
DESIGN AND SYSTEM IDENTIFICATION OF A MOBILE PARALLEL ROBOT. Han Lin (18516603). 08 May 2024.
The research presents the structure and a prototype of an innovative parallel robotic structure that uses three mobile bases for actuation and hybrid motion. A system identification was performed to verify the model of the robot.
|
5 |
DESIGN AND MODELING OF A BALLOON ROBOT WITH WHEEL PADDLES FOR AGRICULTURAL USE. Xiaotong Huang (18524037). 09 May 2024.
The research study (Huang et al., 2023) presented the design, analysis, and simulation of an innovative agricultural robot that integrates a helium-balloon buoyancy system with wheeled paddles for navigation, aiming to optimize crop health monitoring. The thesis began with a comprehensive examination of the conceptual design, focusing on the robot's buoyancy mechanism and propulsion system. Detailed motion analysis and kinematic studies underpinned the development of a dynamic model, which was tested through MATLAB simulations. The simulations assessed the unmanned vehicle's operational efficiency, maneuverability, and energy consumption in an agricultural environment. The findings highlighted the robot's potential to surpass traditional agricultural robots in precision and adaptability, mitigating the limitations of ground and aerial alternatives. The thesis concluded with strategic recommendations for future enhancements, emphasizing scalability, payload capacity, and environmental adaptability, thus paving the way for advanced agricultural robotics.
|
6 |
"Ska du ta hand om mig?" : En litteraturstudie om socialt assisterande robotar i demensvården / "Are you going to take care of me?” : A literary review about socially assistive robots in a dementia care settingBirgersson, Fredrik, Ladeby, Ludvig January 2016 (has links)
Bakgrund: Demens är en sjukdom som drabbar en stor del av en alltmer åldrande befolkning i världen. Patienter som lider av demens kan drabbas av olika symtom som t.ex. kommunikationssvårigheter, personlighetsförändringar samt emotionell avtrubbning. Dessa symtom kan vara svåra att behandla och påverkar patienter, anhöriga och vårdpersonal. Tidigare forskning har visat att socialt assisterande robotar kan ha en god inverkan på människor som lider av demenssjukdom. Syfte: Syftet med denna litteraturstudie är att beskriva vilken inverkan på hälsan socialt assisterande robotar har hos patienter som lider av demenssjukdom. Metod: En litteraturstudie grundad på tio vetenskapliga artiklar, som tagits fram utifrån syftet. Resultat: Resultatet presenterades utifrån två kategorier, Psykosocial hälsa och Livskvalitet. Interventioner med socialt assisterande robotar hade en god inverkan på den psykosociala hälsan, medan livskvalitet påverkades med varierat resultat beroende på vilket stadie av demens som den drabbade befinner sig i. Slutsats: Sammanfattningsvis ges ett intryck av att socialt assisterande robotar kan hjälpa vårdpersonal i deras omvårdnadsarbete för patienter med demens, och att dessa robotar har en övervägande god inverkan på demensdrabbades hälsa. Betydelse: Kunskap om vad socialt assisterande robotar har för inverkan på hälsa hos demensdrabbade är angeläget, för att i framtiden kunna implementera omvårdnadsåtgärder med denna typ av robotar. / Background: Dementia is a disease that affects a large part of an increasingly aging population worldwide. Patients suffering from dementia may experience various symptoms such as communication difficulties, personality changes and emotional blunting. These symptoms can be difficult to treat and affect patients, relatives and health professionals. Previous research has shown that socially assistive robots could have a good effect on people suffering from dementia. Aim: The purpose of this study is to describe the impact of socially assistive robots on the health of patients suffering from dementia. Method: A literature study based on ten scientific articles, selected on the basis of the aim. Results: The results are presented in two categories, psychosocial health and quality of life. Interventions with socially assistive robots had a good impact on psychosocial health, while quality of life was affected with varied results depending on the stage of dementia the patient is in. Conclusion: Overall, the impression is that socially assistive robots could help health professionals in their nursing work for patients with dementia, and that these robots have a predominantly good effect on the health of people with dementia. Significance: Knowledge of how socially assistive robots affect the health of people suffering from dementia is important in order to eventually implement nursing interventions with this type of robot.
|
7 |
VISION-LANGUAGE MODEL FOR ROBOT GRASPING. Abhinav Kaushal Keshari (15348490). 01 May 2023.
Robot grasping is emerging as an active area of research in robotics, as interest in human-robot interaction grows worldwide, driven by diverse industrial settings in which tasks and workplaces are shared. Research in this area mainly focuses on the quality of generated grasps for object manipulation. However, despite advancements, existing methods do not consider human-robot collaboration settings in which robots and humans must grasp the same objects concurrently. Therefore, generating robot grasps compatible with human preferences for simultaneously holding an object is necessary to ensure a safe and natural collaboration experience. In this work, we propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps by contextualizing human preference models of object grasping into the robot grasp selection process. We validate our approach against existing state-of-the-art robot grasping methods through simulated and real-robot experiments and user studies. In real-robot experiments, our method achieves about an 88% success rate in producing stable grasps that allow humans to interact with and grasp objects simultaneously in a socially compliant manner. Furthermore, our user study with 10 independent participants indicated that our approach enables a safe, natural, and socially aware human-robot co-grasping experience compared to a standard robot grasping technique.
To facilitate the grasping process, we also introduce a vision-language model that works as a pre-processing system before the grasping action takes place. In most settings, the robot is equipped with sensors that allow it to capture the scene, on which the vision model performs a detection task to identify the visible contents of the environment. The language model is used to program the robot, making it possible to understand and execute the required sequence of tasks. Using the object-detection output, we build a set of object queries from the sensor image and allow the user to provide an input query for the task to be performed. We then compute a similarity score between the user query and the object queries to localize the object that needs attention; once it is identified, we can use a grasping process for the task at hand.
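A minimal sketch of the query-matching step described above, under the assumption of a generic text encoder; the encoder, labels, and scoring are placeholders for illustration, not the thesis's actual model:

```python
import numpy as np

def best_matching_object(detected_labels, user_query, encode_text):
    """Pick the detected object whose label is most similar to the user's query.

    detected_labels: labels produced by the object detector, e.g. ["mug", "knife"]
    user_query:      free-form task request, e.g. "hand me something to drink from"
    encode_text:     any text encoder returning a 1-D vector (placeholder here)
    """
    q = encode_text(user_query)
    q = q / np.linalg.norm(q)
    scores = []
    for label in detected_labels:
        v = encode_text(label)
        scores.append(float(np.dot(q, v / np.linalg.norm(v))))
    best = int(np.argmax(scores))            # object that needs attention
    return detected_labels[best], scores[best]
```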
|
8 |
Optimizing the Dispatch Topology of a 911 Response Drone Network. Charles John D'Onofrio Jr. (19195516). 24 July 2024.
This thesis adapts and applies methodologies for optimizing the sensing topology of a counter-UAS (CUAS) network to the problem of optimizing the geospatial distribution of emergency response drone bases subject to resource limitations, while ensuring alignment with emergency response requirements. The specific context for this work is 911 call incident response.

Drone response time, time on scene, and sensor effectiveness are used as network performance metrics to develop a mission planning algorithm that attempts to maximize network response effectiveness. A composite objective function combines network response effectiveness with customer-defined region weights, which indicate the probability of an incident occurring, to represent the performance of a given geospatial distribution of 911 drone bases. A greedy algorithm iterates on this objective function to optimize the network topology, as sketched below.

Previous work [1] suggests that a heuristic-based approach using a hexagonal network topology centered around suburban/urban focal points is the preferred method for optimizing the dispatch topology of a 911 response drone network. The optimization strategy deployed here demonstrated an 11% improvement on the objective function compared to this heuristic when tested in Tippecanoe County, IN.

Previous work [2] also suggests that, of all drones in the design space compliant with FAA Part 107, a single Vertical Take-off and Landing (VTOL) drone type able to transition into fixed-wing horizontal flight and adhering to specific performance requirements is the preferred drone for the emergency response mission. This thesis uses the optimization strategy deployed here to test this supposition by comparing the performance of a network with access to only this single drone type against a network with access to multiple types of fixed-wing VTOL drones. Findings indicate that the network limited to the single, optimally sized drone type outperforms the network with access to multiple drone types; however, improvements to the greedy algorithm that consider the marginal value of each drone type across diverse mission types may modify this conclusion.
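The following is a hedged sketch of the kind of greedy base-placement loop described above; the objective, region weights, and effectiveness function are illustrative stand-ins for the thesis's composite objective, not its actual implementation:

```python
def greedy_base_placement(candidate_sites, regions, region_weight, effectiveness, n_bases):
    """Greedily choose drone-base sites to maximize weighted response effectiveness.

    candidate_sites: list of possible base locations
    regions:         list of demand regions (e.g. grid cells over the county)
    region_weight:   dict region -> probability of an incident occurring there
    effectiveness:   function (site, region) -> response effectiveness in [0, 1]
    n_bases:         number of bases the resource budget allows
    """
    chosen = []
    for _ in range(n_bases):
        def objective(extra_site):
            sites = chosen + [extra_site]
            # each region is credited with its best covering base
            return sum(region_weight[r] * max(effectiveness(s, r) for s in sites)
                       for r in regions)
        best_site = max((s for s in candidate_sites if s not in chosen), key=objective)
        chosen.append(best_site)
    return chosen
```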
|
9 |
Deep Learning basierte Sprachinteraktion für Social Assistive Robots. Guhr, Oliver. 25 September 2024.
In dieser Dissertation wurde ein Voice User Interface (VUI) für Socially Assistive Robots (SAR) konzipiert und entwickelt, mit dem Ziel, eine sprachbasierte Interaktion in Pflegeanwendungen zu ermöglichen. Diese Arbeit schließt eine Forschungslücke, indem sie ein VUI entwickelt, das mit der natürlichen deutschen Alltagssprache operiert. Der Fokus lag auf der Nutzung von Fortschritten im Bereich der Deep Learning-basierten Sprachverarbeitung, um die Anforderungen der Robotik und der Nutzergruppen zu erfüllen. Es wurden zwei zentrale Forschungsfragen behandelt: die Ermittlung der Anforderungen an ein VUI für SARs in der Pflege sowie die Konzeption und Implementierung eines solchen VUI. Die Arbeit erörtert die spezifischen Anforderungen der Robotik und der Nutzenden an ein VUI. Des Weiteren wurden die geplanten Einsatzszenarien und Nutzergruppen des entwickelten VUI, einschließlich dessen Anwendung in der Demenztherapie und in Pflegewohnungen, detailliert beschrieben. Im Hauptteil der Arbeit wurde das konzipierte VUI vorgestellt, das sich durch seine Offline-Fähigkeit und die Integration externer Sensoren und Aktoren des Roboters in das VUI auszeichnet. Die Arbeit behandelt auch die zentralen Bausteine für die Implementierung des VUI, darunter Spracherkennung, Verarbeitung transkribierter Texte, Sentiment-Analyse und Textsegmentierung. Das entwickelte Dialogmanagement-Modell sowie die Evaluierung aktueller Sprachsynthesesysteme wurden ebenfalls diskutiert. In einer Nutzerstudie wurde die Anwendbarkeit des VUI ohne spezifische Schulung getestet, mit dem Ergebnis, dass die Teilnehmenden 93% der Aufgaben erfolgreich lösen konnten. Zukünftige Forschungs- und Entwicklungsaktivitäten umfassen Langzeit-Evaluationen des VUI in der Robotik und die Entwicklung eines digitalen Assistenten. Die Integration von LLMs und multimodalen Modellen in VUIs stellt einen weiteren wichtigen Forschungsschwerpunkt dar, ebenso wie die Effizienzsteigerung von Deep Learning-Modellen für mobile Roboter.
Zusammenfassung
Abstract
1 Einleitung
1.1 Motivation
1.2 Problemstellung
1.2.1 Robotik in der Pflege
1.2.2 Ambient Assisted Living
1.3 Zielsetzung und Forschungsfragen
1.4 Aufbau der Arbeit
2 Grundlagen
2.1 Socially Assistive Robotics
2.2 Voice User Interfaces
2.3 Deep Learning zur Sprachverarbeitung
3 Konzeption
3.1 Anforderungen älterer Menschen an VUIs
3.2 Anforderungen der Robotik an VUIs
3.3 Anwendungskontext
3.3.1 Robotergestützte MAKS-Therapie
3.3.2 AAL-Wohnung
3.4 Nutzeranalyse
3.5 Entwicklungsziele für das VUI
4 Systemarchitektur
4.1 Architekturentscheidungen
4.2 Komponenten des VUI
4.2.1 Systemkontext
4.2.2 Allgemeines Komponentenmodell
4.2.3 Detailliertes Komponentenmodell
4.3 Modulare erweiterbare Interaktion
4.3.1 Interaktion durch Skills
4.3.2 Schnittstellenmodell der Skills
4.3.3 Implementierungsmodell der Skills
5 Spracherkennung
5.1 Vom gesprochenen Wort zum Text
5.2 Voice Activity Detection
5.3 Automatic Speech Recognition
5.3.1 Evaluation
5.3.2 Optimierung der Modelle für CPU-Inferenz
6 Sprachverarbeitung
6.1 Vom Text zur Intention
6.2 Intent-Erkennung und Named Entity Recognition
6.3 Segmentierung von Aussagen
6.3.1 Datensatz
6.3.2 Modelle
6.3.3 Training
6.3.4 Ablationsstudien
6.3.5 Ergebnisse
6.4 Textbasierte Sentimentanalyse
6.4.1 Daten
6.4.2 Modelle
6.4.3 Ergebnisse
7 Dialogmanagement
7.1 Von der Intention zur Aktion
7.2 Verfahren für das Dialogmanagement
7.3 Ein modulares Dialogmanagement
7.4 Sprachsynthese
8 Evaluation des Frameworks
8.1 Aufbau und Zielsetzung
8.2 Testdesign und Methodologie
8.3 Teilnehmende der Studie
8.4 Analyse und Interpretation der Ergebnisse
8.5 Einschränkungen der Studie
8.6 Fazit
9 Zusammenfassung und Ausblick
9.1 Zusammenfassung
9.2 Ausblick / In this dissertation, a Voice User Interface (VUI) for Socially Assistive Robots (SAR) was designed and developed, with the aim of enabling voice-based interaction in care applications. This work fills a research gap by developing a VUI that operates with natural, everyday German language. The focus was on utilising advances in the field of deep learning-based speech and language processing to fulfil the requirements of robotics and of the user groups. Two central research questions were addressed: determining the requirements for a VUI for SARs in care applications, and the design and implementation of such a VUI. The work discusses the specific requirements that robotics and users place on a VUI. Furthermore, the planned application scenarios and user groups of the developed VUI, including its application in dementia therapy and in care homes, are described in detail. In the main part of the thesis, the designed VUI is presented, which is characterised by its offline capability and the integration of the robot's external sensors and actuators into the VUI. The thesis also covers the central building blocks for the implementation of the VUI, including speech recognition, processing of transcribed texts, sentiment analysis and text segmentation. The dialogue management model developed and the evaluation of current speech synthesis systems are also discussed. In a user study, the applicability of the VUI was tested: without specific training, the participants were able to successfully solve 93% of the tasks. Future research and development activities include long-term evaluations of the VUI in robotics and the development of a digital assistant. The integration of LLMs and multimodal models into VUIs is another important research focus, as is increasing the efficiency of deep learning models for mobile robots.
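As an illustration of how the building blocks named above fit together, the sketch below shows one pass through an offline voice pipeline of this kind; every component name is a placeholder chosen here, not an interface from the dissertation's code:

```python
def handle_utterance(audio_frame, vad, asr, segment, nlu, sentiment, dialog, tts):
    """One pass through a VAD -> ASR -> segmentation -> NLU -> dialog -> TTS pipeline.

    Each argument is a stand-in for a concrete component (voice activity
    detection, speech recognition, sentence segmentation, intent/entity
    recognition, sentiment analysis, dialog management, speech synthesis).
    """
    if not vad.is_speech(audio_frame):
        return None
    text = asr.transcribe(audio_frame)          # speech -> text
    for sentence in segment(text):              # restore sentence boundaries
        intent, entities = nlu(sentence)        # text -> intention
        mood = sentiment(sentence)              # optional affective signal
        reply, robot_action = dialog.step(intent, entities, mood)
        if robot_action is not None:
            robot_action.execute()              # drive the robot's actuators
        if reply:
            tts.speak(reply)                    # text -> speech
```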
|
10 |
A Learning-based Control Architecture for Socially Assistive Robots Providing Cognitive Interventions. Chan, Jeanie. 05 December 2011.
Due to the world’s rapidly growing elderly population, dementia is becoming increasingly prevalent. This poses considerable health, social, and economic concerns as it impacts individuals, families and healthcare systems. Current research has shown that cognitive interventions may slow the decline of or improve brain functioning in older adults. This research investigates the use of intelligent socially assistive robots to engage individuals in person-centered cognitively stimulating activities. Specifically, in this thesis, a novel learning-based control architecture is developed to enable socially assistive robots to act as social motivators during an activity. A hierarchical reinforcement learning approach is used in the architecture so that the robot can learn appropriate assistive behaviours based on activity structure and personalize an interaction based on the individual’s behaviour and user state. Experiments show that the control architecture is effective in determining the robot’s optimal assistive behaviours for a memory game interaction and a meal assistance scenario.
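A toy sketch of the kind of value-based behaviour selection such a controller might use is given below; the thesis itself uses a hierarchical reinforcement learning formulation, and the states, behaviours, and rewards here are purely illustrative assumptions:

```python
import random
from collections import defaultdict

class AssistiveBehaviourLearner:
    """Tabular Q-learning over (activity step, user state) -> assistive behaviour.

    A minimal stand-in for the learning-based control architecture described
    above, not a reproduction of it.
    """
    def __init__(self, behaviours, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # Q[(state, behaviour)] -> value
        self.behaviours = behaviours
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:   # occasionally explore
            return random.choice(self.behaviours)
        return max(self.behaviours, key=lambda b: self.q[(state, b)])

    def update(self, state, behaviour, reward, next_state):
        best_next = max(self.q[(next_state, b)] for b in self.behaviours)
        td_target = reward + self.gamma * best_next
        self.q[(state, behaviour)] += self.alpha * (td_target - self.q[(state, behaviour)])
```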
|