1 |
Cognitive control, driving assistance and Human-Machine Cooperation: maintaining an acceptable and safe trajectory. Deroo, Mathieu. 06 June 2012
This thesis employs the Human-Machine Cooperation framework to study the relationship between symbolic (interpretive) processing of the driving context and subsymbolic processing of the intervention of driving-assistance devices that act directly on the steering wheel. Four experimental studies on a driving simulator are presented. The first two focus on the drivers' ability to remain in full control of the vehicle when the assistance intervenes occasionally on the steering wheel, in case of imminent risk of lane departure (haptic priming). The results show that only the drivers' reaction times are fully determined at the sensorimotor level, with symbolic processes intervening at a very early stage to inhibit or modulate the motor response. The last two studies focus on Human-Machine Cooperation when the intervention of the device on the steering wheel is continuous (shared control). The degree of sharing between the two agents and the drivers' medium-term adaptation to the assistance were studied. The results show that the intervention of automation can be effectively integrated into drivers' sensorimotor control loops, provided that this integration respects certain precautions, notably a fine calibration of the degree and timing of the intervention in those loops.
|
2 |
Contributions to shared control architectures for advanced telemanipulation. Abi-Farraj, Firas. 18 December 2018
While full autonomy in unknown environments is still out of reach, shared-control architectures in which a human and an autonomous controller work together toward a common objective can be a pragmatic "middle ground". This thesis tackles several issues in shared-control architectures for grasping and sorting applications. In particular, the work is framed within the H2020 RoMaNS project, whose goal is to automate the sorting and segregation of nuclear waste by developing shared-control architectures that allow a human operator to easily manipulate the objects of interest. The thesis proposes several shared-control architectures for dual-arm manipulation, with different operator/autonomy balances depending on the task at hand. While most existing approaches provide only an instantaneous interface, we also propose architectures that automatically account for the pre-grasp and post-grasp trajectories, allowing the operator to focus solely on the task at hand (e.g., grasping). The thesis also proposes a shared-control architecture for a force-controlled humanoid robot in which the user is informed about the stability of the humanoid through haptic feedback. A new balancing algorithm allowing optimal control of the humanoid under high interaction forces is also proposed.
|
3 |
Deep Reinforcement Learning for Haptic Shared Control in Unknown Tasks. Franklin Cardenoso Fernandez. 19 November 2020
Recent years have shown a growing interest in the use of haptic shared control (HSC) in teleoperated systems. In HSC, the application of virtual guiding forces decreases the user's control effort and improves execution time in various tasks, presenting a good alternative to direct teleoperation. Despite its good performance, HSC opens a new gap: how to design the guiding forces. The real challenge therefore lies in developing controllers that provide virtual guiding forces able to deal with new situations that appear while a task is being performed. This work addresses this challenge by designing a controller based on the Deep Deterministic Policy Gradient (DDPG) algorithm to provide the assistance, and a convolutional neural network (CNN) to perform task detection. The agent learns to minimize the time it takes the human to execute the desired task, while simultaneously minimizing the human's resistance to the provided feedback. This resistance provides the learning algorithm with information about the direction the human is trying to follow, in this case in a pick-and-place task. Diverse results demonstrate the successful application of the proposed approach, which learns custom policies for each user who was asked to test the system. It exhibits stable convergence and helps the user complete the task in the fewest possible steps.
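The reward structure described above, which penalizes both execution time and the user's resistance to the guiding forces, can be sketched as follows. This is a hypothetical formulation: the function name, the dot-product resistance measure and the weight `beta` are assumptions for illustration, not details taken from the thesis.

```python
def hsc_reward(step_cost: float, user_force, guide_force, beta: float = 0.5) -> float:
    """Per-step reward for an agent that generates haptic guiding forces.

    step_cost   -- constant penalty per time step (encourages fast completion)
    user_force  -- 2-D force the user applies on the haptic device
    guide_force -- 2-D guiding force the agent currently outputs
    beta        -- weight of the resistance penalty (assumed value)
    """
    # Resistance: component of the user's force opposing the guidance.
    dot = user_force[0] * guide_force[0] + user_force[1] * guide_force[1]
    resistance = max(0.0, -dot)  # positive only when the user pushes against the guide
    return -step_cost - beta * resistance

# A user pushing along the guide incurs only the time penalty...
aligned = hsc_reward(0.1, (1.0, 0.0), (1.0, 0.0))
# ...while a user pushing against the guide is penalized more heavily.
opposed = hsc_reward(0.1, (-1.0, 0.0), (1.0, 0.0))
```

Maximizing this reward pushes the policy both to finish quickly and to shape guiding forces the user does not fight against.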
|
4 |
Functional architecture for automated vehicles trajectory planning in complex environments. González Bautista, David. 03 April 2017
Developments in the Intelligent Transportation Systems (ITS) field show promising results in increasing passenger comfort and safety while decreasing energy consumption, emissions and travel time. In road transportation, the appearance of automated vehicles is significantly aiding drivers by reducing some tedious driving-associated tasks. However, there is still a long way to go in the transition from automated vehicles (i.e. vehicles with some automated features) to autonomous vehicles on public roads (i.e. fully autonomous driving), especially from the motion-planning point of view. With this in mind, this PhD thesis proposes a generic modular architecture for automated-vehicle motion planning. It implements and improves curve-interpolation techniques from the motion-planning literature by including comfort as the main design parameter, addressing complex environments such as turns, intersections and roundabouts. The architecture generates suitable trajectories that consider the perception system's measurement uncertainty, the vehicle's physical limits, the road layout and traffic rules. If future collision states are detected, the proposed approach is able to change the current trajectory in real time and avoid the obstacle ahead, keeping lateral accelerations at a minimum to preserve passenger comfort. The approach is tested in simulated and real urban environments, including turns and two-lane roundabouts of different radii. Static and dynamic obstacles are considered in order to face and interact with other road actors, avoiding collisions when they are detected. The functional architecture is also tested in shared-control and arbitration applications, focusing on keeping the driver in the control loop while adding the system's supervision to the driver's knowledge and skills in the driving task. The control-sharing advanced driver assistance system (ADAS) proceeds in two steps: 1) risk assessment of the situation at hand, based on the optimal trajectory and driving boundaries identified by the motion-planning architecture; and 2) control sharing via haptic signals sent to the driver through the steering wheel. The approach demonstrates the modularity of the functional architecture, as it proposes a general solution for some of today's unsolved challenges in the automated driving field.
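The comfort criterion above (keeping lateral accelerations at a minimum) amounts to bounding the lateral acceleration v²·κ along the planned curve. A minimal sketch, assuming a comfort threshold of 1.5 m/s²; the function and its parameters are illustrative, not taken from the thesis:

```python
import math

def comfort_speed_limit(curvature: float, a_lat_max: float = 1.5) -> float:
    """Maximum speed on a curve so that the lateral acceleration
    a = v^2 * |curvature| stays below the comfort threshold a_lat_max
    (m/s^2, an assumed value)."""
    if abs(curvature) < 1e-9:          # straight segment: no lateral limit
        return float("inf")
    return math.sqrt(a_lat_max / abs(curvature))

# A roundabout of radius 12.5 m has curvature 1/12.5 = 0.08 1/m:
v_limit = comfort_speed_limit(0.08)    # about 4.33 m/s (roughly 15.6 km/h)
```

A planner can apply such a per-point speed bound along an interpolated curve to obtain a comfort-respecting velocity profile.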
|
5 |
Shared control of hydraulic manipulators to decrease cycle time. Enes, Aaron R. 25 August 2010
This thesis presents a technique termed Blended Shared Control, whereby a human operator's commands are merged with the commands of an electronic agent in real time to control a manipulator. A four degree-of-freedom hydraulic excavator is used as an application example, and two types of models are presented: a fully dynamic model incorporating the actuator and linkage systems, suitable for high-fidelity user studies, and a reduced-order velocity-constrained kinematic model amenable to real-time optimization. Intended operator tasks are estimated with a recursive algorithm; the task is optimized in real time; and a command perturbation is computed which, when summed with the operator command, results in a lower task completion time. Experimental results compare Blended Shared Control to other types of controllers, including manual control and haptic feedback. Trials indicate that Blended Shared Control decreases task completion time when compared to manual operation.
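The command-blending step described above can be illustrated with a minimal sketch. The clamping safeguard and all names are assumptions for illustration; in the thesis the perturbation comes from a real-time optimization of the estimated task.

```python
def blend_command(operator_cmd, perturbation, max_ratio=0.3):
    """Blended Shared Control: sum the operator's command with a bounded
    perturbation computed by the electronic agent.

    max_ratio caps the perturbation relative to the operator command so the
    operator remains the dominant authority (an assumed safeguard).
    """
    blended = []
    for u, d in zip(operator_cmd, perturbation):
        limit = max_ratio * abs(u)          # per-axis bound on the assistance
        d = max(-limit, min(limit, d))      # clamp the agent's contribution
        blended.append(u + d)
    return blended

# Operator commands two actuator velocities; the agent nudges each axis.
cmd = blend_command([1.0, -0.5], [0.5, 0.1])
```

Note that a zero operator command yields a zero bound, so the agent cannot move an axis the operator is not actively commanding.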
|
6 |
Brain-Computer Interface Control of an Anthropomorphic Robotic Arm. Clanton, Samuel T. 21 July 2011
This thesis describes a brain-computer interface (BCI) system that was developed to allow direct cortical control of 7 active degrees of freedom in a robotic arm. Two monkeys with chronic microelectrode implants in their motor cortices were able to use the arm to complete an oriented grasping task under brain control. This BCI system was created as a clinical prototype to exhibit (1) simultaneous decoding of cortical signals for control of the 3-D translation, 3-D rotation, and 1-D finger aperture of a robotic arm and hand, (2) methods for constructing cortical signal decoding models based on only observation of a moving robot, (3) a generalized method for training subjects to use complex BCI prosthetic robots using a novel form of operator-machine shared control, and (4) integrated kinematic and force control of a brain-controlled prosthetic robot through a novel impedance-based robot controller. This dissertation describes each of these features individually, how their integration enriched BCI control, and results from the monkeys operating the resulting system.
|
7 |
Cognitive Interactive Robot Learning. Fonooni, Benjamin. January 2014
Building general-purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today's task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditionally preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce them under any circumstances, at the right time and in an appropriate way, requires a good understanding of all the challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged to learn new skills correctly. The most remarkable ones are the ability to direct attention to important aspects of a demonstration and the ability to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for learning from demonstration that mainly focus on understanding the tutor's intentions and on recognizing which elements of a demonstration need the robot's attention. An architecture containing the cognitive functions required for learning and reproducing high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. Another major contribution of this thesis is a set of methods for resolving ambiguities in demonstrations where the tutor's intentions are not clearly expressed and several demonstrations are required to infer intentions correctly. The proposed solution is inspired by human memory models and priming mechanisms that give the robot clues that increase the probability of inferring intentions correctly. In addition to robot learning, the developed techniques are applied to a shared-control system based on visual-servoing-guided behaviors and priming mechanisms. The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of the intentions behind the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them is most applicable are discussed.
|
8 |
Characterizing assistive shared control through vision-based and human-aware designs for wheelchair navigation assistance. Karakkat Narayanan, Vishnu. 23 November 2016
Earliest records of a wheeled chair used to transport a person with a disability date back to 6th-century China. With the exception of the collapsible X-frame wheelchair invented in 1933, 1400 years of scientific evolution have not radically changed the initial wheelchair design. Meanwhile, advancements in computing and the development of artificial intelligence since the mid-1980s have inevitably led to research on intelligent wheelchairs. Rather than focusing on improving the underlying design, the core objective of making a wheelchair intelligent is to make it more accessible. Even though the invention of powered wheelchairs has partially mitigated users' dependence on other people for their daily routines, some disabilities that affect limb movement or motor and visual coordination make it impossible to operate a common powered wheelchair. Accessibility can thus also be thought of as the idea that the wheelchair adapts to the user's malady so that he or she is able to utilize its assistive capabilities. While it is certain that intelligent robots are poised to address a growing number of issues in the service and medical-care industries, it is important to resolve how humans and users interact with robots in order to accomplish common objectives. Particularly in the assistive intelligent-wheelchair domain, preserving the user's sense of autonomy is required, as individual agency is essential for physical and social well-being. This work thus aims to globally characterize the idea of assistive shared control, with particular attention to two issues within the intelligent assistive-wheelchair domain: vision-based assistance and human-aware navigation. Recognizing the fundamental tasks that a wheelchair user may have to execute in indoor environments, we design a low-cost vision-based assistance framework for corridor navigation. The framework provides progressive assistance for the tasks of safe corridor following and doorway passing, and is evaluated on a robotized off-the-shelf wheelchair. From the proposed plug-and-play design, we infer an adaptive formulation for sharing control between user and robot. Furthermore, keeping in mind that wheelchairs are assistive devices that operate in human environments, it is important to consider human-awareness within wheelchair mobility. We leverage spatial social conventions from anthropology to model wheelchair navigation in human environments. Moreover, we propose a motion strategy that can be embedded on a social robot (such as an intelligent wheelchair) and allows it to equitably approach and join a group of humans in interaction. Based on the lessons learnt from the proposed designs for wheelchair mobility assistance, we finally mathematically formalize adaptive shared control for assistive motion planning. In closing, we demonstrate this formalism by designing a general framework for assistive wheelchair navigation in human environments.
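An adaptive formulation for sharing control between user and robot is commonly written, in the shared-control literature, as a linear blend whose weight depends on an assessed risk. A minimal sketch of that general idea, not necessarily the thesis's exact formalism:

```python
def shared_control(user_cmd, robot_cmd, risk):
    """Adaptive linear blending of user and robot velocity commands.

    user_cmd, robot_cmd -- (linear, angular) velocity pairs
    risk in [0, 1]: 0 = the user's command is safe as-is,
                    1 = full corrective assistance is needed.
    The weight alpha shifts authority toward the robot as risk grows.
    (Illustrative only; names and the risk model are assumptions.)
    """
    alpha = max(0.0, min(1.0, risk))
    v = (1 - alpha) * user_cmd[0] + alpha * robot_cmd[0]   # linear velocity
    w = (1 - alpha) * user_cmd[1] + alpha * robot_cmd[1]   # angular velocity
    return v, w

# With zero risk the user's command passes through unchanged.
v, w = shared_control((0.8, 0.2), (0.5, -0.4), risk=0.0)
```

Progressive assistance, as in the corridor-following and doorway-passing tasks above, corresponds to raising `alpha` smoothly rather than switching control abruptly.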
|
9 |
Evaluating the Impact of an Online English Language Tool's Ability to Improve Users' Speaking Proficiency under Learner- and Shared-control Conditions. January 2015
This study aims to uncover whether English Central, an online English-as-a-Second-Language tool, improves speaking proficiency for undergraduate students with developing English skills. Eighty-three advanced English language learners from the American English and Culture Program at Arizona State University were randomly assigned to one of three conditions: use of English Central under learner control, use under shared control, or no treatment. The two treatment groups were assigned approximately 14.7 hours of online instruction. The relative impact of the three conditions was assessed with two measures. First, the Pearson Versant Test (www.versanttest.com), a well-established English-as-a-second-language speaking test, was administered to all participants as a pre- and post-test measure. Second, students were given a post-treatment questionnaire measuring their motivation toward online instruction in general and English Central specifically. Since a significant teacher effect was found, the teachers involved in the study were also interviewed to ascertain their attitudes toward English Central as a homework tool. Learner outcomes differed significantly between the shared and learner conditions, and student motivation was predictive of learning outcomes. Subjects in the shared condition outperformed those in the learner condition. Those in the shared condition also scored higher than the no-treatment condition, although this result did not reach statistical significance. Results of the follow-up teacher survey revealed that while a teacher's view of the tool (positive or negative) was not a predictor of student success, the teacher's presentation of the tool may have a significant impact on student learning outcomes. Doctoral Dissertation, Educational Technology, 2015.
|
10 |
Semi-autonomous navigation system for assistive mobile robotics based on head movements and voice commands. Maciel, Guilherme Marins. 22 January 2018
O campo da robótica assistiva lida com as tecnologias voltadas para pessoas com deficiências físicas. Algumas dessas pessoas possuem tetraplegia e não podem movimentar os membros inferiores e superiores, mas, dependendo do grau da lesão, podem acionar as musculaturas do ombro e do pescoço. Visando este grupo, o presente trabalho propõe uma arquitetura de sistema semiautônomo para cadeiras de rodas inteligentes (CRIs) baseada na plataforma Robotic Operating System (ROS), embarcada em uma Raspberry Pi 3, com ênfase em baixo custo e sem necessidade de localização global. É apresentada uma proposta de interface homem-máquina baseada em comandos de voz, na qual uma central de controle utiliza uma Rede de Petri para configurar e administrar o sistema. As transições são disparadas por um conjunto de frases e, depois de cada evento, o usuário recebe uma frase de retorno através de alto-falantes. Para o sistema de navegação é utilizado um controle compartilhado de velocidade, em que se utiliza uma Unidade de Medição Inercial (IMU) para reconhecer o padrão de movimento desejado através de medições da inclinação da cabeça do usuário. Paralelamente, um algoritmo inteligente emprega uma câmera de profundidade para reconhecer os obstáculos nos arredores e atuar em conjunto no controle, de modo a aumentar a manobrabilidade e a segurança. Uma outra técnica proposta é uma metodologia de travessia autônoma de passagens estreitas: nesta técnica, as duas extremidades de um acesso delgado são reconhecidas por meio da câmera de profundidade e um controlador não-linear realiza a tarefa. Nos resultados, as metodologias propostas foram analisadas através de modelos de CRIs em Matlab e na plataforma de simulação Gazebo. Para testes práticos, foi utilizado um robô diferencial Pioneer-P3DX. Os resultados exibidos nesta dissertação se revelaram promissores, evidenciando a aplicabilidade do sistema.
/ The field of assistive robotics deals with technologies aimed at people with physical impairments. Some of these people have quadriplegia and cannot move the lower and upper limbs but, depending on the degree of injury, can still activate the shoulder and neck musculature. Aiming at this group, the present work proposes a semiautonomous system architecture for smart wheelchairs based on the Robotic Operating System (ROS) platform, embedded in a Raspberry Pi 3, with an emphasis on low cost and no need for global localization. A human-machine interface based on voice commands is proposed, in which a control center uses a Petri net to configure and administer the system. Transitions are triggered by a set of phrases, and after each event the user receives a feedback phrase through speakers. For the navigation system, shared velocity control is used: an Inertial Measurement Unit (IMU) recognizes the desired movement pattern from measurements of the user's head inclination, while, in parallel, an intelligent algorithm employs a depth camera to recognize surrounding obstacles and contribute to the control, increasing maneuverability and safety. Another proposed technique is a methodology for autonomously crossing narrow passages: the depth camera recognizes the two borders of the narrow access and a non-linear controller performs the task. The proposed methodologies were analyzed using simulated wheelchair models in Matlab and on the Gazebo simulation platform. For practical tests, a Pioneer-P3DX differential robot was used. The results presented in this dissertation are promising, evidencing the applicability of the system.
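The shared velocity control described in the abstract can be sketched as a blend of a head-tilt command with an obstacle-based speed limit. Everything below is a hypothetical illustration: the tilt-to-speed mapping, the dead zone, and the distance thresholds (`safe_dist`, `stop_dist`) are assumptions for the sketch, not the dissertation's actual control law.

```python
def shared_velocity(pitch_deg, roll_deg, min_obstacle_dist_m,
                    v_max=0.8, w_max=1.0, safe_dist=1.5, stop_dist=0.4):
    """Blend the user's head-tilt command with an obstacle-based speed limit.

    Pitch controls forward speed, roll controls turn rate (illustrative
    mapping). Returns (linear velocity m/s, angular velocity rad/s).
    """
    def deadzone(x, th=5.0):
        # Small head motions (e.g. looking around) should not move the chair
        return 0.0 if abs(x) < th else x

    # User's share of control: tilt angle mapped linearly onto speed limits
    v_user = max(-1.0, min(1.0, deadzone(pitch_deg) / 30.0)) * v_max
    w_user = max(-1.0, min(1.0, deadzone(roll_deg) / 30.0)) * w_max

    # Autonomy's share: scale forward speed down as obstacles get close
    if min_obstacle_dist_m <= stop_dist:
        scale = 0.0                      # too close: block forward motion
    elif min_obstacle_dist_m >= safe_dist:
        scale = 1.0                      # clear path: full user authority
    else:
        scale = (min_obstacle_dist_m - stop_dist) / (safe_dist - stop_dist)

    return v_user * scale, w_user

# Head tilted 20 degrees forward, obstacle 0.95 m ahead: speed is halved
v, w = shared_velocity(pitch_deg=20.0, roll_deg=0.0, min_obstacle_dist_m=0.95)
print(v, w)
```

In a ROS-based system like the one described, the returned pair would typically be published as a velocity command to the wheelchair's base controller; here it is simply printed.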
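The phrase-triggered voice interface can be illustrated with a minimal state machine whose transitions fire on command phrases and return a feedback phrase, loosely mimicking the fire-on-event behavior of the Petri net described above (a full Petri net would track token markings across places rather than a single state). The states and phrases below are hypothetical, not the thesis vocabulary.

```python
# (current state, spoken phrase) -> (next state, feedback phrase)
TRANSITIONS = {
    ("idle",       "start navigation"): ("navigating", "navigation started"),
    ("navigating", "stop"):             ("idle",       "navigation stopped"),
    ("navigating", "cross doorway"):    ("crossing",   "crossing narrow passage"),
    ("crossing",   "done"):             ("navigating", "passage crossed"),
}

def fire(state, phrase):
    """Fire the transition enabled by `phrase`, if any; return (state, feedback)."""
    key = (state, phrase.lower().strip())
    if key in TRANSITIONS:
        return TRANSITIONS[key]
    # No enabled transition: stay put and tell the user via the speakers
    return state, "command not recognized"

state = "idle"
for phrase in ["start navigation", "cross doorway", "done", "stop"]:
    state, feedback = fire(state, phrase)
    print(phrase, "->", state, "|", feedback)
```

Because transitions are keyed on the current state, a phrase like "done" is simply rejected while idle, which mirrors how a Petri net transition only fires when its input places are marked.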
|