11.
Offline Reinforcement Learning from Imperfect Human Guidance. Zhang, Guoxi. 24 July 2023.
Kyoto University / New-system doctoral program / Doctor of Informatics / Degree No. Kō 24856 / Jōhaku No. 838 / Call number: 新制||情||140 (University Library) / Kyoto University Graduate School of Informatics, Department of Intelligence Science and Technology / (Chief examiner) Professor Kashima, Hisashi; Professor Kawahara, Tatsuya; Professor Morimoto, Jun / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
12.
Human-in-the-loop Machine Learning: Algorithms and Applications. Liang, Jiongqian. 25 September 2018.
No description available.
13.
Human Computer Interaction Design for Assisted Bridge Inspections via Augmented Reality. Smith, Alan Glynn. 03 June 2024.
To address some of the challenges associated with aging bridge infrastructure, this dissertation explores the development and evaluation of a novel tool for bridge inspections that leverages Augmented Reality (AR) and computer vision (CV) technologies to facilitate measurements. Named the Wearable Inspection Report Management System (WIRMS), the system supports various data entry methods and an adaptable automation workflow for defect measurements, showcasing AR's potential to improve bridge inspection efficiency and accuracy. Within this context, the work's main research goal is to understand the difference in performance between traditional field data collection methods (i.e., pen and paper) and automated methods such as spoken data entry and CV-based structural defect measurements. In the case of CV assistance, emphasis was placed on human-computer interaction (HCI) to understand whether partial, collaborative automation could address some of the limitations of fully automated inspection methods. The project began with comprehensive data collection through interviews, surveys, and observations at bridge sites, which informed the creation of a Virtual Reality (VR) prototype. An initial user study tested the feasibility of using voice commands for data entry in the AR environment but found it impractical. A second user study focused on optimizing interaction methods for virtual concrete crack measurements by testing different degrees of automated CV assistance. As part of this effort, major technical contributions were made to back-end technologies and CV algorithms to improve human-machine collaboration and ensure the accuracy of measurements. Results were mixed: larger degrees of automation produced significant reductions in inspection time and perceived workload, but also significant increases in measurement error. The latter result is strongly associated with the limited field robustness of CV methods, which can underperform when conditions are not ideal. In general, hybrid techniques that allow the user to correct CV results were seen as the most favorable. Field validations with bridge inspectors showed promising potential for practical field implementation, though further refinement is needed for broader deployment. Overall, the research establishes a viable path for making AR a central component of future inspection practices, including digital data collection, automation, data analytics, and other technologies currently in development. / Doctor of Philosophy / This dissertation investigates the development of an innovative tool designed to transform bridge inspections using Augmented Reality (AR) technology, incorporating advanced computer vision (CV) techniques to assist with measurements. The project began with thorough data collection, including interviews and observational studies at bridge sites, which directly influenced the tool's design. A prototype was initially created in a Virtual Reality (VR) environment to refine the functionalities needed for AR application. The resulting AR system supports various interactive methods for documenting and measuring bridge defects, showcasing how AR can streamline and enhance traditional bridge inspection processes. However, challenges remain, particularly in accurately measuring certain types of defects, indicating that some traditional tools are still necessary.
Despite these challenges, early tests with bridge inspectors have been promising, suggesting that AR could significantly improve the efficiency and accuracy of bridge inspections. The research demonstrates a clear path forward for further development, with the potential to revolutionize how bridge inspections are conducted.
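As an illustration of the kind of CV-based defect measurement discussed above, the sketch below estimates the maximum width of a concrete crack in a segmented grayscale patch with OpenCV. The thresholding pipeline, the helper name, and the mm-per-pixel calibration are illustrative assumptions, not the WIRMS implementation:

```python
# Sketch of a CV-based crack-width measurement of the kind WIRMS automates.
# Assumptions: an 8-bit grayscale patch and a known mm-per-pixel calibration;
# neither comes from the dissertation itself.
import cv2
import numpy as np

def max_crack_width_mm(gray_patch: np.ndarray, mm_per_px: float) -> float:
    """Estimate the maximum crack width in a grayscale image patch."""
    # Cracks are darker than the surrounding concrete: adaptive threshold, inverted.
    binary = cv2.adaptiveThreshold(
        gray_patch, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 31, 10)
    # Remove speckle noise so isolated pixels do not inflate the estimate.
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Distance transform: each crack pixel's distance to the nearest edge.
    # Twice the maximum distance approximates the widest crack cross-section.
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 3)
    return float(2.0 * dist.max() * mm_per_px)
```

A hybrid workflow of the kind the study favors would present this estimate to the inspector for correction whenever lighting or surface conditions degrade the segmentation.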
14.
Anthropomimetic Control Synthesis: Adaptive Vehicle Traction Control. Kirchner, William. 02 May 2012.
Human expert drivers have the unique ability to build complex perceptive models using correlated sensory inputs and outputs. In the case of longitudinal vehicle traction, this work shows a direct correlation between longitudinal acceleration and throttle input in a controlled laboratory environment. In fact, human experts can control a vehicle at or near the performance limits, with respect to vehicle traction, without direct knowledge of the vehicle states: speed, slip, or tractive force. Traditional algorithms such as PID, full state feedback, and even sliding mode control have been very successful at handling low-level tasks where the physics of the dynamic system are known and stationary. The ability to learn and adapt to changing environmental conditions, as well as to develop perceptive models based on stimulus-response data, provides expert human drivers with significant advantages. When it comes to bandwidth, accuracy, and repeatability, automatic control systems have clear advantages over humans; however, most high-performance control systems lack many of the unique abilities of a human expert. The underlying motivation for this work is that there are advantages to framing the traction control problem in a manner that more closely resembles how a human expert drives a vehicle. The fundamental idea is the belief that humans have a unique ability to adapt to uncertain environments that are both temporally and spatially varying. In this work, a novel approach to traction control is developed using an anthropomimetic control synthesis strategy. The proposed anthropomimetic traction control algorithm operates on the same correlated input signals that a human expert driver would use in order to maximize traction. A gradient ascent approach is at the heart of the proposed anthropomimetic control algorithm, and a real-time implementation is described using linear operator techniques, even though the tire-ground interface is highly non-linear. Performance of the proposed anthropomimetic traction control algorithm is demonstrated using both a longitudinal traction case study and a combined-mode traction case study, in which longitudinal and lateral accelerations are maximized simultaneously. The approach presented in this research should be considered a first step in the development of a truly anthropomimetic solution, where an advanced control algorithm has been designed to be responsive to the same limited input signals that a human expert would rely on, with the objective of maximizing traction. This work establishes the foundation for a general framework for an anthropomimetic control algorithm that is capable of learning and adapting to an uncertain, time-varying environment. The algorithms developed in this work are well suited for efficient real-time control in ground vehicles in a variety of applications, from driver-assist technology to fully autonomous operation. / Ph. D.
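The abstract does not give the controller's equations, but a minimal extremum-seeking sketch in its spirit follows: step the throttle up the measured acceleration-vs-throttle curve using only the signals a human driver has. The class name, gains, and finite-difference gradient estimate are illustrative assumptions, not the dissertation's formulation:

```python
import numpy as np

class GradientAscentTraction:
    """Climb the measured acceleration-vs-throttle curve (extremum seeking)."""
    def __init__(self, gain: float = 0.05, eps: float = 1e-4):
        self.gain = gain      # ascent step size (assumed)
        self.eps = eps        # minimum throttle change for a gradient estimate
        self.prev_throttle = None
        self.prev_accel = None

    def update(self, throttle: float, accel: float) -> float:
        """Return the next throttle command from the measured acceleration."""
        grad = 0.0
        if self.prev_throttle is not None:
            d_u = throttle - self.prev_throttle
            if abs(d_u) > self.eps:
                # Finite-difference estimate of d(accel)/d(throttle); it
                # approaches zero near the traction peak, so steps shrink there.
                grad = (accel - self.prev_accel) / d_u
        self.prev_throttle, self.prev_accel = throttle, accel
        # tanh bounds noisy gradient estimates so one bad sample cannot
        # command a throttle jump; output is clipped to [0, 1].
        return float(np.clip(throttle + self.gain * np.tanh(grad), 0.0, 1.0))
```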
15.
Human-AI Sensemaking with Semantic Interaction and Deep Learning. Bian, Yali. 07 March 2022.
Human-AI interaction can improve overall performance, exceeding the performance that either humans or AI could achieve separately, thus producing a whole greater than the sum of the parts. Visual analytics enables collaboration between humans and AI through interactive visual interfaces. Semantic interaction is a design methodology to enhance visual analytics systems for sensemaking tasks. It is widely applied for sensemaking in high-stakes domains such as intelligence analysis and academic research. However, existing semantic interaction systems support collaboration between humans and traditional machine learning models only; they do not apply state-of-the-art deep learning techniques.
The contribution of this work is the effective integration of deep neural networks into visual analytics systems with semantic interaction. More specifically, I explore how to redesign the semantic interaction pipeline to enable collaboration between human and deep learning models for sensemaking tasks. First, I validate that semantic interaction systems with pre-trained deep learning support sensemaking better than existing semantic interaction systems with traditional machine learning. Second, I integrate interactive deep learning into the semantic interaction pipeline to enhance its inference ability in capturing analysts' precise intents, thereby promoting sensemaking. Third, I add semantic explanation into the pipeline to interpret the interactively steered deep learning model. With a clear understanding of the deep learning (DL) model, analysts can make better decisions. Finally, I present a neural design of the semantic interaction pipeline to further boost collaboration between humans and deep learning for sensemaking. / Doctor of Philosophy / Human-AI interaction can harness the separate strengths of human and machine intelligence to accomplish tasks neither can solve alone. Analysts are good at making high-level hypotheses and reasoning from their domain knowledge. AI models are better at data computation based on low-level input features. Successful human-AI interactions can perform real-world, high-stakes tasks, such as issuing medical diagnoses, making credit assessments, and determining cases of discrimination. Semantic interaction is a visual methodology providing intuitive communications between analysts and traditional machine learning models. It is commonly utilized to enhance visual analytics systems for sensemaking tasks, such as intelligence analysis and scientific research.
The contribution of this work is to explore how to use semantic interaction to achieve collaboration between humans and state-of-the-art deep learning models for complex sensemaking tasks. To do this, I first evaluate the straightforward solution of integrating the pre-trained deep learning model into the traditional semantic interaction pipeline. Results show that the deep learning representation matches human cognition better than hand-engineered features via semantic interaction. Next, I look at methods for supporting semantic interaction systems with interactive and interpretable deep learning. The new pipeline provides effective communication between human and deep learning models. Interactive deep learning enables the system to better capture users' intents. Interpretable deep learning lets users develop a clear understanding of the models. Finally, I improve the pipeline to better support collaboration using a neural design. I hope this work can contribute to future designs for human-in-the-loop analysis with deep learning and visual analytics techniques.
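As a rough sketch of the kind of interactive steering the pipeline performs, the snippet below fine-tunes a small projection head over frozen pre-trained embeddings so that model distances match an analyst-adjusted 2D layout. The architecture and distance-matching loss are illustrative assumptions, not the dissertation's exact design:

```python
# Minimal sketch of one semantic-interaction steering step, assuming frozen
# pretrained embeddings and a learnable linear projection head.
import torch
import torch.nn as nn

def steer_projection(embeddings: torch.Tensor,  # (n, d) frozen DL features
                     layout: torch.Tensor,      # (n, 2) analyst-adjusted positions
                     steps: int = 200) -> nn.Module:
    head = nn.Linear(embeddings.shape[1], 2)    # learnable projection head
    opt = torch.optim.Adam(head.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        pred = head(embeddings)
        # Match pairwise distances, so only the relative structure the
        # analyst expressed (what is near what) constrains the model.
        loss = ((torch.cdist(pred, pred) - torch.cdist(layout, layout)) ** 2).mean()
        loss.backward()
        opt.step()
    return head
```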
16.
Human-Machine Alignment for Context Recognition in the Wild. Bontempelli, Andrea. 30 April 2024.
For AI systems like personal assistants to provide guidance and suggestions to an end user, they must understand, at any moment in time, the personal context that the user is in. The context – where the user is, what she is doing, and with whom – allows the machine to represent the world in the user's terms. The context must be inferred from a stream of sensor readings generated by smart wearables such as smartphones and smartwatches, and the labels are acquired from the user directly. To perform robust context prediction in this real-world scenario, the machine must handle the egocentric nature of the context, adapt to the changing world and user, and maintain a bidirectional interaction with the user to ensure the user-machine alignment of world representations. To this end, the machine must learn incrementally on the input stream of sensor readings and user supervision. In this work, we: (i) introduce interactive classification in the wild and present knowledge drift (KD), a special form of concept drift that occurs as the world and the user change; (ii) develop simple and robust ML methods to tackle these scenarios; (iii) showcase the advantages of each of these methods in empirical evaluations on controlled synthetic and real-world data sets; (iv) design a flexible and modular architecture that combines the methods above to support context recognition in the wild; (v) present an evaluation with real users in a concrete social science use case.
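A minimal sketch of the incremental, interactive learning loop this abstract describes is given below, assuming scikit-learn's SGDClassifier as the online learner and a simple windowed error rate as a stand-in drift signal; the thresholds and class names are illustrative, not the thesis's actual method:

```python
# Incremental context recognition on a labeled sensor stream with a crude
# drift check: when the recent error rate jumps, flag it so the system can
# ask the user to confirm or relabel contexts.
from collections import deque
import numpy as np
from sklearn.linear_model import SGDClassifier

class ContextLearner:
    def __init__(self, classes, window=50, drift_threshold=0.5):
        self.clf = SGDClassifier(loss="log_loss")
        self.classes = np.array(classes)       # e.g. ["home", "work", "gym"]
        self.errors = deque(maxlen=window)     # sliding window of mistakes
        self.drift_threshold = drift_threshold
        self._initialized = False

    def observe(self, x: np.ndarray, label: str) -> bool:
        """Consume one labeled sensor vector; return True if drift is suspected."""
        x = x.reshape(1, -1)
        if self._initialized:
            self.errors.append(self.clf.predict(x)[0] != label)
        self.clf.partial_fit(x, [label], classes=self.classes)
        self._initialized = True
        drifted = (len(self.errors) == self.errors.maxlen
                   and np.mean(self.errors) > self.drift_threshold)
        if drifted:
            self.errors.clear()  # point to query the user about changed contexts
        return drifted
```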
17.
Involving humans in the self-adaptive system loop: A Literature Review. Simakina, Katarina; Wang, Zejian. January 2024.
Self-adaptive systems (SAS) are a vital area of study with wide-ranging applications across various domains. These systems are designed to autonomously adjust their behavior in response to environmental changes or internal state shifts. However, fully autonomous systems face challenges in maintaining control and ensuring reliability, especially in high-stakes settings. Many studies have highlighted the importance of human involvement in SAS, pointing out that human oversight can significantly enhance system performance and reliability. Despite these findings, a comprehensive literature review on this topic has been lacking. This thesis explores the critical role of human involvement in SAS and investigates how integrating human roles can enhance system performance and reliability. It does so by addressing why SAS require human involvement, identifying the most effective roles and processes for human participation, and outlining optimal integration methods. The findings indicate that human input is crucial for monitoring, decision-making, and executing system adaptations, particularly in complex and unpredictable scenarios. This integration improves system adaptability, usability, and overall efficiency. The results suggest that balancing automation with human oversight can significantly benefit autonomous systems, ensuring they align with human strategic goals and operational standards.
18.
Control barrier function-enabled human-in-the-loop control for multi-robot systems: Centralized and distributed approaches. Nan Fernandez-Ayala, Victor. January 2022.
Autonomous multi-robot systems have found many real-world applications in factory settings, rescue tasks, and light shows. Despite these successful applications, the multi-robot system is usually pre-programmed with limited flexibility for online adaptation. A human-in-the-loop feature would provide additional flexibility, such as handling unexpected situations, detecting and correcting bad behaviours, and supporting automated decision making. In addition, it would also allow for an extra level of cooperation between the robots and the human that facilitates certain real-world tasks, for example in the agricultural sector. Control barrier functions (CBFs), as a convenient modular-design tool, are the main technique explored. CBFs have seen a lot of development in recent years, and extending them to the field of multi-robot systems is still new. In particular, creating an original distributed approach, instead of a centralized one, is one of the key topics of this Master's thesis project. In this thesis work, several multi-robot coordination protocols and safety constraints are identified, and these constraints are enforced using a control barrier function-enabled mixer module. This module takes in the commands from both the planner and the human operator, prioritizing the commands from the human operator as long as the safety constraints are not violated. Otherwise, the mixer module filters the commands and sends out a safe alternative. The underlying multi-robot tasks are expected to be achieved whenever feasible. Simulations in ROS, Python, and MATLAB environments are developed to experimentally assess the safety and optimality of the system, and experiments with real robots in a lab are performed to show the applicability of the algorithm. Finally, a distributed approach to the mixer module has been developed, based on previous research and extended to allow for more versatility. This is of key importance since it allows each robot to compute its own controller based on local information, making the multi-robot system both more robust and more flexible for deployment in real-world applications.
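A minimal sketch of the mixer's core computation follows: pass the prioritized command through unchanged when it is safe, otherwise solve a small quadratic program for the closest safe input. Single-integrator dynamics, static neighbors, and a pairwise collision-avoidance barrier are simplifying assumptions; the thesis handles more general constraints and a distributed formulation:

```python
# CBF-QP safety filter for one robot, assuming x_dot = u (single integrator)
# and pairwise collision avoidance h = ||x - x_j||^2 - d_min^2.
import numpy as np
import cvxpy as cp

def safe_mix(u_des: np.ndarray, x: np.ndarray, neighbors: list,
             d_min: float = 0.5, alpha: float = 1.0) -> np.ndarray:
    """Filter one robot's 2D velocity command against pairwise CBF constraints."""
    u = cp.Variable(2)
    constraints = []
    for x_j in neighbors:                    # positions of nearby robots
        diff = x - x_j
        h = float(diff @ diff) - d_min ** 2  # h(x) = ||x - x_j||^2 - d_min^2
        # CBF condition for x_dot = u with a static neighbor:
        # dh/dt = 2 * diff^T u >= -alpha * h
        constraints.append(2 * diff @ u >= -alpha * h)
    # Stay as close as possible to the prioritized command.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_des)), constraints)
    prob.solve()
    return u.value if u.value is not None else np.zeros(2)  # stop if infeasible
```

Here u_des stands for whichever command currently has priority (the human operator's when one is given, otherwise the planner's), matching the prioritization described above.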
19.
Human Intention Recognition Based Assisted Telerobotic Grasping of Objects in an Unstructured Environment. Khokar, Karan Hariharan. 01 January 2013.
In this dissertation work, a methodology is proposed to enable a robot to identify an object to be grasped, and its intended grasp configuration, while a human teleoperates the robot towards the desired object. Based on the detected object and grasp configuration, the human is assisted in the teleoperation task. The environment is unstructured and consists of a number of objects, each with various possible grasp configurations. The identification of the object and the grasp configuration is carried out in real time, by recognizing the intention of the human motion. Simultaneously, the human user is assisted in preshaping over the desired grasp configuration. This is done by scaling the components of the remote arm end-effector motion that lead to the desired grasp configuration and simultaneously attenuating the components in perpendicular directions, as sketched below. The complete process occurs while manipulating the master device and without having to interact with another interface.
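The scaling-and-attenuation step just described amounts to a vector decomposition of the commanded end-effector velocity; a minimal sketch, with illustrative gain values, follows:

```python
# Scale the velocity component along the inferred goal direction and
# attenuate the perpendicular component; k_along and k_perp are assumptions.
import numpy as np

def assist(v_cmd: np.ndarray, goal_dir: np.ndarray,
           k_along: float = 1.5, k_perp: float = 0.3) -> np.ndarray:
    """Scale the commanded velocity toward the inferred grasp configuration."""
    d = goal_dir / np.linalg.norm(goal_dir)  # unit vector toward the grasp pose
    v_along = (v_cmd @ d) * d                # component leading to the goal
    v_perp = v_cmd - v_along                 # perpendicular (off-goal) component
    return k_along * v_along + k_perp * v_perp
```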
Intention recognition from motion is carried out using Hidden Markov Model (HMM) theory. First, the objects are classified based on their shapes. Then, the grasp configurations are preselected for each object class. The selection of grasp configurations is based on human knowledge of robust grasps for the various shapes. Next, an HMM for each object class is trained by having a skilled teleoperator perform repeated preshape trials over each grasp configuration of the object class under consideration. The grasp configurations are modeled as the states of each HMM, whereas the projections of translation and orientation vectors, over each reference vector, are modeled as observations. The reference vectors are the ideal translation and rotation trajectories that lead the remote arm end-effector towards a grasp configuration. During an actual grasping task performed by a novice or a skilled user, the trained model is used to detect their intention. The output probability of the HMM associated with each object in the environment is computed as the user teleoperates towards the desired object. The object associated with the HMM that has the highest output probability is taken as the desired object. The most likely Viterbi state sequence of the selected HMM gives the desired grasp configuration. Since an HMM is associated with every object, objects can be shuffled around, added, or removed from the environment without the need to retrain the models. In other words, the HMM for each object class needs to be trained only once by a skilled teleoperator.
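A minimal sketch of this recognizer follows, using the hmmlearn library as a stand-in for the dissertation's HMM implementation: one Gaussian HMM per object class, trained on skilled preshape demonstrations, then scored online on the partial observation sequence. The feature extraction (projections onto reference trajectories) is abstracted into the observation vectors:

```python
# One HMM per object class; grasp configurations are the hidden states.
import numpy as np
from hmmlearn import hmm

def train_object_hmm(demos, n_grasp_configs: int):
    """demos: list of (T_i, d) observation arrays from skilled preshape trials."""
    X = np.concatenate(demos)
    lengths = [len(d) for d in demos]
    model = hmm.GaussianHMM(n_components=n_grasp_configs, covariance_type="diag")
    model.fit(X, lengths)  # states correspond to grasp configurations
    return model

def recognize(models: dict, obs_so_far: np.ndarray):
    """Return (intended object, intended grasp configuration index)."""
    # Output log-likelihood of each object's HMM on the partial trajectory.
    scores = {name: m.score(obs_so_far) for name, m in models.items()}
    best = max(scores, key=scores.get)
    # Current state on the most likely Viterbi path = grasp configuration.
    states = models[best].predict(obs_so_far)
    return best, int(states[-1])
```

Because each object carries its own model, adding, removing, or shuffling objects only changes the dictionary passed to recognize, mirroring the retraining-free property claimed above.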
The intention recognition algorithm was validated by having novice users, as well as the skilled teleoperator, grasp objects with different grasp configurations from a dishwasher rack. Each object had various possible grasp configurations. The proposed algorithm was able to successfully detect the operator's intention and identify the object and the grasp configuration of interest. This methodology of grasping was also compared with an unassisted mode and a maximum-projection mode. In the unassisted mode, the operator teleoperated the arm without any assistance or intention recognition. In the maximum-projection mode, the maximum projection of the motion vectors was used to determine the intended object and the grasp configuration of interest. Six healthy individuals and one wheelchair-bound individual each executed twelve pick-and-place trials in the intention-based assisted mode and the unassisted mode. In these trials, they picked up utensils from the dishwasher and laid them on a table located next to it. The relative positions and orientations of the utensils were changed at the end of every third trial. It was observed that the subjects were able to pick and place the objects 51% faster and with fewer movements using the proposed method compared to the unassisted method. They found it much easier to execute the task using the proposed method and experienced lower mental and overall workloads. Two able-bodied subjects also executed three preshape trials over three objects in the intention-based assisted and maximum-projection modes. For one of the subjects, the objects were shuffled at the end of the six trials and she was asked to carry out three more preshape trials in the two modes. This time, however, the subject was asked to change her intention when she was about to preshape to the grasp configurations. It was observed that intention recognition was consistently accurate throughout the trajectory in the intention-based assisted method, except at a few points. However, in the maximum-projection method the intention recognition was consistently inaccurate and fluctuated. This often caused the subject to be assisted in the wrong directions and led to extreme frustration. The intention-based assisted method was faster and required fewer hand movements. The accuracy of the intention-based method did not change when the objects were shuffled. It was also shown that the model for intention recognition can be trained by a skilled teleoperator and be used by a novice user to efficiently execute a grasping task in teleoperation.
20.
Data-Efficient Reinforcement Learning Control of Robotic Lower-Limb Prosthesis With Human in the Loop. January 2020.
Robotic lower-limb prostheses provide new opportunities to help transfemoral amputees regain mobility. However, their application is impeded by the fact that the impedance control parameters must be tuned and optimized manually by prosthetists for each individual user in different task environments. Reinforcement learning (RL) is capable of automatically learning from interaction with the environment, making it a natural candidate to replace human prosthetists in customizing the control parameters. However, neither traditional RL approaches nor the popular deep RL approaches are readily suitable for learning with a limited number of samples or with samples that vary widely. This dissertation aims to explore new RL-based adaptive solutions that are data-efficient for controlling robotic prostheses.
This dissertation begins by proposing a new flexible policy iteration (FPI) framework. To improve sample efficiency, FPI can utilize either an on-policy or an off-policy learning strategy, can learn from either online or offline data, and can even adopt existing knowledge from an external critic. Approximate convergence to Bellman optimal solutions is guaranteed under mild conditions. Simulation studies validated that FPI was data-efficient compared to several established RL methods. Furthermore, a simplified version of FPI was implemented to learn from offline data, and the learned policy was then successfully tested for tuning the control parameters online on a human subject.
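FPI itself is not specified in the abstract, but a generic offline approximate policy iteration loop of the kind it generalizes looks as follows; the tabular setting, learning rate, and omission of terminal-state handling are simplifications for illustration:

```python
import numpy as np

def offline_policy_iteration(transitions, n_states: int, n_actions: int,
                             gamma: float = 0.95, sweeps: int = 50,
                             lr: float = 0.1):
    """Approximate policy iteration on logged (s, a, r, s_next) tuples.

    Terminal-state handling is omitted for brevity; the logged data are
    treated as coming from a continuing task.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        policy = Q.argmax(axis=1)            # greedy policy improvement
        for s, a, r, s_next in transitions:  # one evaluation sweep over the data
            target = r + gamma * Q[s_next, policy[s_next]]
            Q[s, a] += lr * (target - Q[s, a])
    return Q.argmax(axis=1), Q
```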
Next, the dissertation discusses RL control with information transfer (RL-IT), or knowledge-guided RL (KG-RL), which is motivated by the benefit of transferring knowledge acquired from one subject to another. To explore its feasibility, knowledge was extracted from data measurements of able-bodied (AB) subjects and transferred to guide Q-learning control for an amputee in OpenSim simulations. This result again demonstrated that data and time efficiency were improved by using prior knowledge.
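A minimal sketch of the transfer idea follows: warm-start the amputee's Q-table from one learned on able-bodied data, then continue ordinary Q-learning. The environment interface and the fixed initial state are illustrative assumptions:

```python
import numpy as np

def q_learning_with_transfer(env_step, q_ab: np.ndarray, episodes: int = 100,
                             alpha: float = 0.1, gamma: float = 0.95,
                             eps: float = 0.1) -> np.ndarray:
    """Continue epsilon-greedy Q-learning from an AB-derived Q-table.

    env_step(s, a) -> (r, s_next, done) is an assumed environment interface;
    episodes start in state 0 for simplicity.
    """
    Q = q_ab.copy()                          # transferred knowledge as the prior
    n_actions = Q.shape[1]
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (int(rng.integers(n_actions)) if rng.random() < eps
                 else int(Q[s].argmax()))
            r, s_next, done = env_step(s, a)
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q
```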
While the present study is new and promising, there are still many open questions to be addressed in future research. To account for human adaptation, the learning control objective function may be designed to incorporate human-prosthesis performance feedback such as symmetry, user comfort level and satisfaction, and user energy consumption. To make RL-based control parameter tuning practical in real life, it should be further developed and tested in different use environments, such as from level-ground walking to stair ascent or descent, and from walking to running. / Dissertation/Thesis / Doctoral Dissertation, Electrical Engineering, 2020