1

A HUB-CI MODEL FOR NETWORKED TELEROBOTICS IN COLLABORATIVE MONITORING OF AGRICULTURAL GREENHOUSES

Ashwin Sasidharan Nair (6589922) 15 May 2019
Networked telerobots are operated by humans through remote interactions and have found applications in unstructured environments such as outer space, underwater, telesurgery, and manufacturing. In precision agricultural robotics, target monitoring, recognition, and detection is a complex task requiring expertise, and it is therefore performed more efficiently by collaborative human-robot systems. A HUB is an online portal, a platform to create and share scientific and advanced computing tools. HUB-CI is a similar tool developed by the PRISM Center at Purdue University to enable cyber-augmented collaborative interactions over cyber-supported complex systems. Unlike previous HUBs, HUB-CI enables both physical and virtual collaboration between several groups of human users along with relevant cyber-physical agents. This research, sponsored in part by the Binational Agricultural Research and Development Fund (BARD), implements the HUB-CI model to improve the Collaborative Intelligence (CI) of an agricultural telerobotic system for early detection of anomalies in pepper plants grown in greenhouses. Specific CI tools developed for this purpose include: (1) spectral image segmentation for detecting and mapping anomalies in growing pepper plants; (2) workflow/task administration protocols for managing and coordinating interactions between the software, hardware, and human agents engaged in monitoring and detection, so that detection reliably leads to precise, responsive mitigation. These CI tools aim to minimize the interaction conflicts and errors that would otherwise impede detection effectiveness and thereby reduce crop quality. Simulated experiments show that planned and optimized collaborative interactions with HUB-CI (as opposed to ad-hoc interactions) yield significantly fewer errors and better detection, improving system efficiency by 210% to 255%.
The anomaly detection method was tested on the available spectral image data, using the number of anomalous pixels for healthy plants and for plants under stress; one-way ANOVA tests showed statistically significant differences between the plant-health classifications (p-value ≈ 0). The system thus improves productivity by leveraging collaboration- and learning-based tools for precise monitoring of healthy pepper plant growth in greenhouses.
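The statistical test described above can be sketched in a few lines. This is a hedged illustration only: the pixel counts below are invented for the example, not the thesis data, and the function is a textbook one-way ANOVA F statistic. A large F corresponds to the near-zero p-value the abstract reports.

```python
# Illustrative sketch of the abstract's ANOVA on anomalous-pixel counts.
# The data here are made up; only the technique matches the abstract.
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k, n = len(groups), len(all_values)
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

healthy = [12, 9, 15, 11, 8]       # hypothetical anomalous-pixel counts
stressed = [85, 92, 78, 101, 88]   # hypothetical anomalous-pixel counts
f_stat = one_way_anova_f([healthy, stressed])
print(f"F = {f_stat:.1f}")  # large F -> reject the null, classes differ
```

In practice one would pass the F statistic through the F-distribution CDF (e.g. `scipy.stats.f_oneway` does both steps) to obtain the p-value.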
2

BI-DIRECTIONAL COACHING THROUGH SPARSE HUMAN-ROBOT INTERACTIONS

Mythra Varun Balakuntala Srinivasa Mur (16377864) 15 June 2023
Robots have become increasingly common in sectors such as manufacturing, healthcare, and service industries. With the growing demand for automation and the expectation of interactive and assistive capabilities, robots must learn to adapt to unpredictable environments as humans can. This necessitates learning methods that effectively enable robots to collaborate with humans, learn from them, and provide guidance. Human experts commonly teach their collaborators to perform tasks via a few demonstrations, often followed by episodes of coaching that refine the trainee's performance during practice. Adopting a similar interaction-driven approach to teaching robots is highly intuitive and enables task experts to teach robots directly. Learning from Demonstration (LfD) is a popular method for robots to learn tasks by observing human demonstrations. However, for contact-rich tasks such as cleaning, cutting, or writing, LfD alone is insufficient to achieve good performance. Further, LfD methods are designed to achieve the observed goals while ignoring actions that would maximize efficiency. By contrast, we recognize that jointly leveraging the human social-learning strategies of practice and coaching enables learning tasks with improved performance and efficacy. To address these deficiencies of learning from demonstration, we propose a Coaching by Demonstration (CbD) framework that integrates LfD-based practice with sparse coaching interactions from a human expert.

The LfD-based practice in CbD was implemented as an end-to-end off-policy reinforcement learning (RL) agent, with the action space and rewards inferred from the demonstration. By modeling the reward as a similarity network trained on expert demonstrations, we eliminate the need for task-specific engineered rewards. Representation learning was leveraged to create a novel state feature that captures the interaction markers necessary for performing contact-rich skills. This LfD-based practice was combined with coaching, in which the human expert can improve or correct the objectives through a series of interactions. The dynamics of interaction in coaching are formalized as a partially observable Markov decision process: the robot aims to learn the true objectives by observing corrective feedback from the human expert. We provide an approximate solution by reducing this to a policy-parameter update using the KL divergence between the RL policy and a Gaussian approximation based on coaching. The proposed framework was evaluated on a dataset of 10 contact-rich tasks from the assembly (peg insertion), service (cleaning, writing, peeling), and medical (cricothyroidotomy, sonography) domains. Compared to behavioral cloning and reinforcement learning baselines, CbD demonstrates improved performance and efficiency.

During the learning process, the demonstrations and coaching feedback imbue the robot with expert knowledge of the task. To leverage this expertise, we develop a reverse coaching model in which the robot draws on knowledge from demonstrations and coaching corrections to provide guided feedback that improves human trainees' performance. Providing feedback adapted to an individual trainee's "style" is vital to coaching. To this end, we propose representing style as objectives in the task null space. Unsupervised clustering of the null-space trajectories using Gaussian mixture models allows the robot to learn different styles of executing the same skill. Given the coaching corrections and a database of style clusters, a style-conditioned RL agent was developed to provide feedback to human trainees by coaching their execution using virtual fixtures.
The reverse coaching model was evaluated on two tasks, a simulated incision and obstacle avoidance, through a haptic teleoperation interface. The model improves human trainees' accuracy and completion time compared to a baseline without corrective feedback. Thus, by taking advantage of different human social-learning strategies, human-robot collaboration can be realized in human-centric environments.
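The KL-divergence-based coaching update described above can be illustrated under strong simplifying assumptions: reduce the RL policy and the coaching correction to one-dimensional Gaussians, use the closed-form KL between them, and take a gradient step on the policy mean. The function names, the scalar parameterization, and the step size are all hypothetical; the thesis operates on full policy parameters, not a single mean.

```python
import math

def kl_gaussian(mu0, sigma0, mu1, sigma1):
    """Closed-form KL( N(mu0, sigma0^2) || N(mu1, sigma1^2) )."""
    return (math.log(sigma1 / sigma0)
            + (sigma0 ** 2 + (mu0 - mu1) ** 2) / (2 * sigma1 ** 2)
            - 0.5)

def coaching_update(policy_mu, coach_mu, coach_sigma, step=0.5):
    """Hypothetical sketch: one gradient step on the policy mean that
    decreases KL(policy || coaching Gaussian)."""
    grad = (policy_mu - coach_mu) / coach_sigma ** 2  # d KL / d policy_mu
    return policy_mu - step * grad

# Identical Gaussians have zero divergence; an update moves the
# policy mean toward the coach's correction.
assert abs(kl_gaussian(0.0, 1.0, 0.0, 1.0)) < 1e-12
new_mu = coaching_update(policy_mu=0.0, coach_mu=1.0, coach_sigma=1.0)
```

A smaller coaching variance (a more confident correction) yields a larger gradient, so confident corrections pull the policy harder, which is the intuition behind weighting the update by the Gaussian approximation.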
