281 |
Contributions to decisional human-robot interaction : towards collaborative robot companions / Contribution à l'interaction décisionelle homme-robot : Vers des robots compagnons collaboratifs / Ali, Muhammad, 11 July 2012 (has links)
Human-Robot Interaction is entering an interesting phase in which the relationship with a robot is envisioned more as a companionship with the human partner than as a mere master-slave relationship. For this to become a reality, the robot needs to understand human behavior and not only react appropriately but also be socially proactive. A companion robot will also need to collaborate with the human in daily life, and will require a reasoning mechanism to manage the collaboration and to handle the uncertainty in the human's intention to engage and collaborate.
In this work, we identify key elements of such interaction in the context of a collaborative activity, with special focus on how humans successfully collaborate to achieve a joint action. We show the application of these elements in a robotic system to enrich the social decision-making aspect of its human-robot interaction. In this respect, we provide a contribution to managing the robot's high-level goals and proactive behavior, and a description of a coactivity decision model for collaborative human-robot tasks. An HRI user study also demonstrates the importance of timing verbal communication during proactive human-robot joint action.
|
282 |
Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System / Bdiwi, Mohamad, 10 June 2014 (has links)
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, much as a human would. Such tasks require combining different kinds of sensors in order to obtain full information about the work environment. From the point of view of control, however, more sensors mean more possibilities for the structure of the control system. As shown previously, vision and force sensors are the most common external sensors in robot systems, and the literature offers numerous control algorithms and structures for vision/force robot control, e.g. shared control, traded control, etc. The open problems in the integration of vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision, position, or force controlled?
• When should the controller switch from one control mode to another?
• How to ensure that the visual information can be used reliably?
• How to define the most appropriate vision/force control structure?
In many previous works, a single vision/force control structure, pre-defined by the programmer, is used throughout a specified task. If the task is modified or changed, it becomes complicated for the user to describe the task and to define the most appropriate vision/force robot control, especially if the user is inexperienced. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor is typically used as a position estimator) or merely to avoid obstacles. As a result, much of the useful sensor information that could help the robot perform the task autonomously is lost.
In our opinion, the lack of a principled way to define the most appropriate vision/force robot control, together with this underuse of the information the sensors could provide, imposes important limits that prevent the robot from being versatile, autonomous, dependable, and user-friendly. The scope of this thesis is therefore to help increase autonomy, versatility, dependability, and user-friendliness in those areas of robotics that require vision/force integration. More concretely:
1. Autonomy: in the form of an automatic decision system that defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best vision/force control structure depending on the surrounding environment and a priori knowledge.
2. Versatility: by preparing relevant scenarios for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: in the sense that the robot should rely on its own sensors rather than on reprogramming and human intervention. In other words, the robot system should use all the available information that the vision and force sensors can provide, not only about the target object but also about the features of the whole scene.
4. User-friendliness: by designing a high-level description of the task, the object, and the sensor configuration that is suitable even for inexperienced users.
If these properties are achieved to a reasonable degree, the proposed robot system can:
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Decide automatically on the most appropriate combination of vision/force feedback for every task, and react immediately, from one control cycle to the next, to changes caused by unforeseen events.
• Benefit from all the advantages of different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce human intervention and reprogramming during the execution of the task.
• Facilitate the task description and the entry of a priori knowledge for the user, even if he/she is inexperienced.
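As a rough illustration of the kind of automatic decision system described above, one control cycle might select among control modes based on visual reliability and contact state. All names and thresholds below are invented for illustration; the thesis's actual decision algorithms are more elaborate.

```python
# Hypothetical sketch of an automatic vision/force mode selector.
# Names and thresholds are illustrative, not taken from the thesis.

def select_control_mode(vision_confidence, contact_force,
                        force_threshold=2.0, vision_threshold=0.7):
    """Choose a control mode for the current cycle.

    Returns 'vision', 'force', 'shared', or 'position' depending on whether
    reliable visual information is available and whether the end effector
    is in contact with the environment.
    """
    in_contact = contact_force > force_threshold       # contact detected by force sensor
    vision_ok = vision_confidence >= vision_threshold  # visual features reliably tracked

    if in_contact and vision_ok:
        return "shared"   # use both: e.g. vision in the plane, force along the contact normal
    if in_contact:
        return "force"    # vision unreliable: rely on force feedback alone
    if vision_ok:
        return "vision"   # free-space motion guided by visual servoing
    return "position"     # fall back to pre-planned position control
```

Re-evaluating such a selector every control cycle is one way to "react immediately to changes from one control cycle to another," as the bullet list above requires.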
|
283 |
Toward Building A Social Robot With An Emotion-based Internal Control / Marpaung, Andreas, 01 January 2004 (has links)
In this thesis, we aim at modeling some aspects of the functional role of emotions on an autonomous embodied agent. We begin by describing our robotic prototype, Cherry--a robot with the task of being a tour guide and an office assistant for the Computer Science Department at the University of Central Florida. Cherry did not have a formal emotion representation of internal states, but did have the ability to express emotions through her multimodal interface. The thesis presents the results of a survey we performed via our social informatics approach, where we found that: (1) the idea of having emotions in a robot was warmly accepted by Cherry's users, and (2) the intended users were pleased with our initial interface design and functionalities. Guided by these results, we transferred our previous code to a human-height and more robust robot--Petra, the PeopleBot--where we began to build a formal emotion mechanism and representation for internal states to correspond to the external expressions of Cherry's interface. We describe our overall three-layered architecture and propose the design of the sensory-motor level (the first layer of the three-layered architecture), inspired by the Multilevel Process Theory of Emotion on the one hand and hybrid robotic architectures on the other. The sensory-motor level receives and processes incoming stimuli with fuzzy logic and produces emotion-like states without any further willful planning or learning. We discuss how Petra has been equipped with sonar and vision for obstacle avoidance, as well as vision for face recognition, which are used when she roams around the hallway to engage in social interactions with humans. We hope that the sensory-motor level in Petra can serve as a foundation for further work in modeling the three-layered architecture of the Emotion State Generator.
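The sensory-motor level described in the abstract maps raw stimuli to emotion-like states via fuzzy logic, with no planning or learning. A toy sketch of that idea follows; the membership shapes, stimulus names, and emotion labels are assumptions for illustration, not the thesis's actual rule base.

```python
# Toy fuzzy-logic mapping from sonar/vision stimuli to an emotion-like state.
# All labels and membership functions are illustrative assumptions.

def membership_low(x):
    # full membership at or below 0.3, falling linearly to 0 at 0.5
    return max(0.0, min(1.0, (0.5 - x) / 0.2))

def membership_high(x):
    # 0 at or below 0.5, rising linearly to full membership at 0.7
    return max(0.0, min(1.0, (x - 0.5) / 0.2))

def emotion_state(obstacle_proximity, face_detected):
    """Pick the emotion-like label with the strongest fuzzy activation."""
    fear = membership_high(obstacle_proximity)   # close obstacles -> 'fear'
    interest = 1.0 if face_detected else 0.0     # a detected face -> 'interest'
    calm = membership_low(obstacle_proximity) * (0.0 if face_detected else 1.0)
    scores = {"fear": fear, "interest": interest, "calm": calm}
    return max(scores, key=scores.get)
```

The point of the fuzzy memberships is that graded stimuli (e.g. an obstacle slowly approaching) produce graded activations rather than brittle on/off switches.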
|
284 |
Safe Reinforcement Learning for Social Human-Robot Interaction : Shielding for Appropriate Backchanneling Behavior / Säker förstärkningsinlärning för social människa-robotinteraktion : Avskärmning för lämplig uppbackningsbeteende / Akif, Mohamed, January 2023 (has links)
Achieving appropriate and natural backchanneling behavior in social robots remains a challenge in Human-Robot Interaction (HRI). This thesis addresses the issue by applying methods from Safe Reinforcement Learning, in particular shielding, to improve social robots' backchanneling behavior. The aim of the study is to develop and implement a safety shield that guarantees appropriate backchanneling. To achieve this, a Recurrent Neural Network (RNN) is trained on a human-human conversational dataset. Two agents are built: one uses a random algorithm to backchannel, and the other adds shields on top of that algorithm. The two agents are tested on recorded human audio and later evaluated in a between-subject user study with 41 participants. The results did not show statistical significance between the two conditions at the chosen significance level of α < 0.05. However, we observe that the agent with the shield showed better listening behavior, produced more appropriate backchanneling, and missed fewer backchanneling opportunities than the agent without it. This could indicate that shields have a positive impact on the robot's behavior. We discuss potential explanations for why we did not obtain statistical significance and shed light on the potential for further exploration.
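The shielding idea described in the abstract, a filter that blocks an agent's proposed actions when they would violate appropriateness rules, can be sketched as follows. The specific rules used here (no backchannel while the user is speaking, and a minimum gap between backchannels) are illustrative assumptions, not the thesis's actual shield.

```python
import random

# Minimal sketch of a safety shield wrapped around a random backchanneling
# agent, in the spirit of shielding from safe RL. Rules are illustrative.

class RandomBackchannelAgent:
    def __init__(self, rate=0.3, seed=0):
        self.rate = rate
        self.rng = random.Random(seed)

    def act(self):
        # Propose a backchannel ("mm-hm") with fixed probability each step.
        return self.rng.random() < self.rate

class Shield:
    def __init__(self, min_gap=4):
        self.min_gap = min_gap       # minimum steps between backchannels
        self.since_last = min_gap    # start "ready" to backchannel

    def filter(self, proposed, user_is_speaking):
        """Pass the proposed action through only if it is appropriate."""
        self.since_last += 1
        if proposed and not user_is_speaking and self.since_last >= self.min_gap:
            self.since_last = 0
            return True
        return False   # block inappropriate backchannels
```

The shield guarantees the safety property by construction: whatever the underlying agent proposes, a backchannel can only ever be emitted when the rules allow it.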
|
285 |
Towards Automatic Generation of Personality-Adapted Speech and Emotions for a Conversational Companion Robot / Mot Automatisk Generering av Personlighets Anpassade Tal och Känslor för en Samtalskunnig Sällskaps Robot / Galatolo, Alessio, January 2022 (has links)
Previous work in Human-Robot Interaction has demonstrated the potential benefits of designing highly anthropomorphic robots. This includes physical appearance, but also whether they can express emotions, behave in a congruent manner, etc. This work explores the creation of a robot that can express a given personality consistently throughout a dialogue while also manifesting congruent emotional expressions. Personality defines many aspects of a person's character and can influence how one speaks, behaves, reacts to events, etc. Here, we focus our attention on language alone and on how it changes depending on one particular personality trait, extraversion. To this end, we tested different language models to automate the process of generating language according to a particular personality. We also compared large language models such as GPT-3 to smaller ones, to analyse how size correlates with performance on this task. We initially evaluated these methods through a fairly small user study in order to confirm the correct manipulation of personality in a text-only context. Results suggest that personality manipulation, and how well it is understood, depends heavily on the context of a dialogue, with a more 'personal' dialogue being more successful in manifesting personality. The performance of GPT-3 is also comparable to that of smaller, specifically trained models, with the main difference lying in the perceived fluency of the generations. We then conducted a follow-up study in which we used a robot capable of showing different facial expressions to manifest different emotions, the Furhat robot. We integrated the generations from our language models into the robot, together with an emotion classification method used to guide its facial expressions.
Whilst the output of our models did trigger different emotional expressions, resulting in robots that differed both in their language and in their nonverbal behaviour, the resulting perception of these robots' personality only approached significance (p ∼ 0.08). In this study, GPT-3 performed very similarly to much smaller models, with the difference in fluency also much smaller than before. We did not see any particular change in the perception of the robots in terms of likeability or uncanniness.
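The follow-up study's pipeline, an emotion classifier whose output guides the robot's facial expressions, can be sketched roughly as below. The label set, gesture names, and confidence threshold are invented for illustration; Furhat's real SDK exposes its own gesture API.

```python
# Hypothetical routing of emotion-classifier scores to a facial gesture.
# Labels and gesture names are invented, not the Furhat SDK's actual API.

EMOTION_TO_GESTURE = {
    "joy": "BigSmile",
    "sadness": "ExpressSad",
    "anger": "BrowFrown",
    "surprise": "Surprise",
}

def gesture_for(emotion_scores, neutral="Neutral", min_confidence=0.5):
    """Pick the gesture for the top-scoring emotion, falling back to a
    neutral face when the classifier is unsure or the label is unmapped."""
    if not emotion_scores:
        return neutral
    label, score = max(emotion_scores.items(), key=lambda kv: kv[1])
    if score < min_confidence or label not in EMOTION_TO_GESTURE:
        return neutral
    return EMOTION_TO_GESTURE[label]
```

Falling back to a neutral expression on low confidence is one plausible way to keep the generated text and the face congruent, which is the congruence the study aimed for.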
|
286 |
Sociala robotar i klassrummet : Designförslag för läsaktiviteter med Bokbotten / Social robots in the classroom : Design proposals for reading activities with the BookBot / Vallin, Alva, January 2024 (has links)
Reading plays a crucial role in the development of children and adolescents, yet reading motivation among students in Sweden has declined in recent years. Social robots have the potential to spark interest in reading, but research is still in its early stages. This study aims to explore and develop design proposals for reading activities using the BookBot.
The overall goal is to examine how reading activities with the BookBot can stimulate middle school students' interest and motivation for reading. To achieve the study's objective, the Design Research Methodology (DRM) was employed. DRM is an iterative method comprising three phases - Descriptive Study I, Prescriptive Study, and Descriptive Study II - that integrate research, design, and evaluation to generate insights and solutions. In Descriptive Study I, recorded material from previous prototype testing was analyzed to identify effective and problematic aspects of the interaction between students and the BookBot. In the Prescriptive Study, design proposals were developed based on the insights from Descriptive Study I and a qualitative content analysis of expert interviews. Finally, the design proposals were evaluated by an expert group to identify weaknesses, challenges, and interesting aspects. The results indicate that the BookBot has the potential to create a safe and engaging learning environment that fosters motivation for reading. This effect is achieved by supporting three key student needs identified in Self-Determination Theory: autonomy, competence, and relatedness. Based on the insights provided by the study, the following design guidelines are proposed: enhanced autonomy, collaboration with teachers, conceptual learning, relatedness and inclusion, and fostering a sense of competence.
|
287 |
Preferences for Mental Capacities in Robots : Investigating Preferences for Mental Capacities in Robots Across Different Application Domains / Nääs, Hilda, January 2024 (has links)
This study investigates whether preferences for mental capacities in robots vary across different application domains and identifies influential factors, both in individuals' characteristics and in attributes specific to each robot domain. Employing a between-subject design, participants (N=271) completed a survey collecting both quantitative and qualitative data on preferences for 12 mental capacities across six robot types, each situated in a specific application domain (medicine, defense, household, social, education, customer service). Half of the mental capacities align with each of the two dimensions (experience and agency) in the two-dimensional model of mind (Gray et al., 2007; McMurtrie, 2023). Key findings reveal a general preference for high agency ability and low experience ability across all application domains. Exceptions were a preference for lower agency ability in the cleaning robot and for higher experience ability in the companion robot. Qualitative analysis indicates a desire for objective and logical robots that function without emotions of their own while demonstrating empathy for human emotions. Additionally, gender and educational background emerged as factors influencing the preference for lower experience abilities in robots. While previous research has mainly focused on the attribution of mental capacities to technical agents, this study provides insights into human preferences and the factors affecting them. These insights can guide responsible and ethics-driven development and design of robot technology within the field of human-robot interaction.
|
288 |
Timing multimodal turn-taking in human-robot cooperative activity / Chao, Crystal, 27 May 2016 (has links)
Turn-taking is a fundamental process that governs social interaction. When humans interact, they naturally take initiative and relinquish control to each other using verbal and nonverbal behavior in a coordinated manner. In contrast, existing approaches for controlling a robot's social behavior do not explicitly model turn-taking, resulting in interaction breakdowns that confuse or frustrate the human and detract from the dyad's cooperative goals. They also lack generality, relying on scripted behavior control that must be designed for each new domain. This thesis seeks to enable robots to cooperate fluently with humans by automatically controlling the timing of multimodal turn-taking. Based on our empirical studies of interaction phenomena, we develop a computational turn-taking model that accounts for multimodal information flow and resource usage in interaction. This model is implemented within a novel behavior generation architecture called CADENCE, the Control Architecture for the Dynamics of Embodied Natural Coordination and Engagement, that controls a robot's speech, gesture, gaze, and manipulation. CADENCE controls turn-taking using a timed Petri net (TPN) representation that integrates resource exchange, interruptible modality execution, and modeling of the human user. We demonstrate progressive developments of CADENCE through multiple domains of autonomous interaction encompassing situated dialogue and collaborative manipulation. We also iteratively evaluate improvements in the system using quantitative metrics of task success, fluency, and balance of control.
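The abstract's timed Petri net (TPN) models turn-taking as token flow through places and transitions, with the speaking floor treated as a shared resource. A minimal untimed sketch of that floor-as-resource idea follows; the place and transition names are invented, and CADENCE's actual TPN is far richer (timing, modalities, a human model).

```python
# Toy Petri net illustrating resource exchange in turn-taking.
# Place/transition names are illustrative, not CADENCE's actual model.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        return True

# The speaking floor is a single shared token: only one party can hold it.
net = PetriNet({"floor_free": 1, "robot_waiting": 1, "human_waiting": 1})
net.add_transition("robot_takes_floor",
                   {"floor_free": 1, "robot_waiting": 1}, {"robot_speaking": 1})
net.add_transition("robot_yields_floor",
                   {"robot_speaking": 1}, {"floor_free": 1, "robot_waiting": 1})
```

Because firing `robot_takes_floor` consumes the floor token, the net structurally prevents both parties from holding the floor at once; a timed variant would additionally attach durations to transitions to model when the robot should yield.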
|
289 |
Адаптивне бихевиористичке стратегије у интеракцији између човека и машине у контексту медицинске терапије / Adaptivne biheviorističke strategije u interakciji između čoveka i mašine u kontekstu medicinske terapije / Adaptive Behavioural Strategies in Human-Robot Interaction in the Context of Medical Therapy / Tasevski, Jovica, 10 September 2018 (has links)
This doctoral dissertation considers selected aspects of the research problem of specification, design, and implementation of conversational robots as assistive tools in therapy for children with cerebral palsy. The dissertation makes the following contributions: (i) It proposes a general architecture for conversational agents that allows for flexible integration of software modules implementing different functionalities. (ii) It introduces and implements an adaptive behavioural strategy that is applied by the robot in interaction with children. (iii) The proposed dialogue strategy is applied and evaluated in interaction between children and the robot MARKO, in realistic therapeutic settings. (iv) Finally, the dissertation proposes an approach to automatic detection of critical changes in human-machine interaction, based on the notion of normalized interactional entropy.
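Normalized interactional entropy, mentioned in contribution (iv), can be illustrated by normalizing the Shannon entropy of dialogue events in a window to [0, 1]. The windowing scheme and change threshold below are assumptions for illustration, not the dissertation's exact formulation.

```python
import math
from collections import Counter

# Sketch of normalized entropy over a window of dialogue events, and a
# simple detector for sharp changes between consecutive windows.
# Windowing and threshold are illustrative assumptions.

def normalized_entropy(events):
    """Shannon entropy of the event distribution, normalized to [0, 1]
    by the maximum entropy log2(k) for k distinct event types."""
    counts = Counter(events)
    total = len(events)
    k = len(counts)
    if k <= 1:
        return 0.0   # a single repeated event carries no uncertainty
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(k)

def critical_change(prev_window, curr_window, threshold=0.4):
    """Flag a critical change when entropy shifts sharply between windows."""
    return abs(normalized_entropy(curr_window) - normalized_entropy(prev_window)) > threshold
```

The normalization makes windows with different numbers of distinct event types comparable, which is what lets a fixed threshold serve as a change detector.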
|
290 |
Brain-computer interfaces for inducing brain plasticity and motor learning: implications for brain-injury rehabilitation / Babalola, Karolyn Olatubosun, 08 July 2011 (has links)
The goal of this investigation was to explore the efficacy of implementing a rehabilitation robot controlled by a noninvasive brain-computer interface (BCI) to influence brain plasticity and facilitate motor learning. The motivation of this project stemmed from the need to address the population of stroke survivors who have few or no options for therapy.
A stroke occurs every 40 seconds in the United States and it is the leading cause of long-term disability [1-3]. In a country where the elderly population is growing at an astounding rate, one in six persons above the age of 55 is at risk of having a stroke. Internationally, the rates of strokes and stroke-induced disabilities are comparable to those of the United States [1, 4-6]. Approximately half of all stroke survivors suffer from immediate unilateral paralysis or weakness, 30-60% of which never regain function [1, 6-9]. Many individuals who survive stroke will be forced to seek institutional care or long-term assistance.
Clinicians have typically implemented stroke rehabilitative treatment using active training techniques such as constraint-induced movement therapy (CIMT) and robotic therapy [10-12]. Such techniques restore motor activity by forcing the movement of weakened limbs. This active engagement of the weakened limb stimulates neural pathways and activates the motor cortex, thus inducing brain plasticity and motor learning. Several studies have demonstrated that active training does in fact affect the way the brain restores itself and leads to faster rehabilitation [10, 13-15]. In addition, studies involving mental practice, another form of rehabilitation, have shown that mental imagery directly stimulates the brain but is not effective unless implemented as a supplement to active training [16, 17]. Only stroke survivors retaining residual motor ability are able to undergo active rehabilitative training; the current selection of therapies has overlooked the significant population of stroke survivors suffering from severe control loss or complete paralysis [6, 10].
A BCI is a system or device that detects minute changes in brain signals to facilitate communication or control. In this investigation, the BCI was implemented through an electroencephalograph (EEG) device. EEG devices detect electrical brain signals, transmitted through the scalp, that correspond to imagined motor activity. Within the BCI, a linear transformation algorithm converted EEG spectral features into control commands for an upper-limb rehabilitative robot, thus implementing a closed-loop feedback-control training system. The concept of the BCI-robot system implemented in this investigation may provide an alternative to current therapies by demonstrating the results of bypassing motor activity and using brain signals to facilitate robotic therapy.
In this study, 24 able-bodied volunteers were divided into two groups: one group trained to use sensorimotor rhythms (SMRs), produced by imagining motor activity, to control the movement of a robot, while the other group performed the 'guided-imagery' task of watching the robot move without controlling it. This investigation looked for contrasts between the two groups showing that the training involved in controlling the BCI-robot system had an effect on brain plasticity and motor learning.
To analyze brain plasticity and motor learning, EEG data corresponding to imagined arm movement and motor learning were acquired before, during, and after training. Features extracted from the EEG data consisted of frequencies in the 5-35 Hz range, which produced amplitude fluctuations that were measurably significant during reaching. Motor learning data consisted of arm displacement measures (error) produced during a motor adaptation task performed daily by all subjects.
The results of the brain plasticity analysis showed persistent reductions in beta activity for subjects in the BCI group. The analysis also showed that subjects in the non-BCI group had significant reductions in mu activity; however, these results were likely due to the fact that different EEG caps were used in each stage of the study. These results were promising but require further investigation.
The motor learning data showed that the BCI group outperformed the non-BCI group in all measures of motor learning. These findings were significant because this was the first time a BCI had been applied to a motor learning protocol, and they suggested that the BCI influenced the speed at which subjects adapted to a motor learning task. Additional findings suggested that BCI subjects in the 40-and-over age group had greater decreases in error after the learning phase of the motor assessment. These findings suggest that BCI could have positive long-term effects on individuals who are more likely to suffer a stroke, and could possibly be beneficial for chronic stroke patients.
In addition to exploring the effects of BCI training on brain plasticity and motor learning, this investigation sought to determine whether the EEG features produced during guided imagery could differentiate between reaching directions. While the analysis presented in this project produced classification accuracies no greater than ~77%, it formed the basis for future studies that would incorporate different pattern recognition techniques.
The results of this study show the potential for developing new rehabilitation therapies and motor learning protocols that incorporate BCI.
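The feature pipeline described above, band-limited EEG spectral power fed through a linear transformation to produce a robot command, can be sketched as follows. The direct-DFT band-power computation, the default band edges, and the weights are illustrative assumptions, not the thesis's implementation.

```python
import math

# Sketch of the abstract's pipeline: spectral power in the 5-35 Hz band of
# an EEG window, linearly mapped to a robot velocity command.
# Band edges, DFT realization, and weights are illustrative assumptions.

def band_power(samples, fs, f_lo=5.0, f_hi=35.0):
    """Power of the signal in [f_lo, f_hi] Hz via a direct DFT over the
    in-band bins (O(n^2), fine for short windows)."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
            im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
            power += (re * re + im * im) / (n * n)
    return power

def robot_command(features, weights, bias=0.0):
    """Linear transformation of spectral features into a velocity command."""
    return bias + sum(w * f for w, f in zip(weights, features))
```

A 10 Hz oscillation (inside the band) yields substantial band power while a 50 Hz one (outside it) yields essentially none, which is the selectivity the 5-35 Hz feature range relies on; the weights of `robot_command` would be fit per subject.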
|