  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Multi-Agent Neural Rearrangement Planning of Objects in Cluttered Environments

Vivek Gupta (16642227) 27 July 2023 (has links)
<p>Object rearrangement is a fundamental problem in robotics with practical applications ranging from managing warehouses to cleaning and organizing home kitchens. While existing research has primarily focused on single-agent solutions, real-world scenarios often require multiple robots to work together on rearrangement tasks. We propose a comprehensive learning-based framework for multi-agent object rearrangement planning that addresses the challenges of task sequencing and path planning in complex environments. The proposed method iteratively selects objects, determines their relocation regions, and pairs them with available robots for execution, subject to kinematic feasibility and task reachability, to achieve the target arrangement. Our experiments on a diverse range of environments demonstrate the effectiveness and robustness of the proposed framework. Furthermore, results indicate improved traversal time and success rate compared to baseline approaches. Videos and supplementary material are available at https://sites.google.com/view/maner-supplementary</p>
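The iterative select-object / pick-region / assign-robot loop described above can be sketched with a deliberately simplified greedy stand-in. This is an illustrative assumption only: the thesis uses a learned (neural) policy, while here object positions, a Euclidean reach check, and the greedy pairing rule are all made up for clarity.

```python
# Illustrative greedy stand-in for the iterative select-and-assign loop.
# The feasibility check and all names are assumptions, not the authors' method.

def feasible(robot_pos, obj_pos, reach=5.0):
    # Stand-in reachability check: Euclidean distance within a reach radius.
    return sum((r - o) ** 2 for r, o in zip(robot_pos, obj_pos)) ** 0.5 <= reach

def plan_rearrangement(objects, targets, robots):
    """Greedily pair each misplaced object with a reachable robot."""
    plan = []
    for name, pos in objects.items():
        goal = targets[name]
        if pos == goal:
            continue  # object already in its target region
        for rid, rpos in robots.items():
            if feasible(rpos, pos):
                plan.append((rid, name, goal))
                robots[rid] = goal  # robot ends up at the drop-off region
                break
    return plan

plan = plan_rearrangement(
    objects={"cup": (0.0, 0.0), "plate": (4.0, 0.0)},
    targets={"cup": (1.0, 1.0), "plate": (4.0, 0.0)},
    robots={"r1": (0.5, 0.5)},
)
print(plan)  # [('r1', 'cup', (1.0, 1.0))] -- plate is already arranged
```

A learned planner replaces the greedy choices with a trained selection policy, but the outer iterate-until-arranged structure is the same.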
12

A HUB-CI MODEL FOR NETWORKED TELEROBOTICS IN COLLABORATIVE MONITORING OF AGRICULTURAL GREENHOUSES

Ashwin Sasidharan Nair (6589922) 15 May 2019 (has links)
Networked telerobots are operated by humans through remote interactions and have found applications in unstructured environments such as outer space, underwater, telesurgery, and manufacturing. In precision agricultural robotics, target monitoring, recognition, and detection form a complex task requiring expertise, and are hence performed more efficiently by collaborative human-robot systems. A HUB is an online portal and platform for creating and sharing scientific and advanced computing tools. HUB-CI is a similar tool developed by the PRISM Center at Purdue University to enable cyber-augmented collaborative interactions over cyber-supported complex systems. Unlike previous HUBs, HUB-CI enables both physical and virtual collaboration between several groups of human users and relevant cyber-physical agents. This research, sponsored in part by the Binational Agricultural Research and Development Fund (BARD), implements the HUB-CI model to improve the Collaborative Intelligence (CI) of an agricultural telerobotic system for early detection of anomalies in pepper plants grown in greenhouses. Specific CI tools developed for this purpose include: (1) spectral image segmentation for detecting and mapping anomalies in growing pepper plants; (2) workflow/task administration protocols for managing and coordinating interactions between the software, hardware, and human agents engaged in monitoring and detection, so that they reliably lead to precise, responsive mitigation. These CI tools aim to minimize interaction conflicts and errors that may impede detection effectiveness and thereby reduce crop quality. Simulated experiments show that planned and optimized collaborative interactions with HUB-CI (as opposed to ad-hoc interactions) yield significantly fewer errors and better detection, improving system efficiency by 210% to 255%.
The anomaly detection method was tested on the available spectral image data, comparing the number of anomalous pixels in healthy and stressed plants; ANOVA tests showed statistically significant differences between the plant-health classifications (p ≈ 0). The system thus improves productivity by leveraging collaboration and learning-based tools for precise monitoring of healthy pepper plant growth in greenhouses.
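To make the ANOVA comparison above concrete, here is a minimal one-way F-statistic computed in pure Python. The pixel counts are invented illustrative numbers, not the study's measurements; the point is only how between-group variance is weighed against within-group variance.

```python
# Minimal one-way ANOVA F-statistic, pure Python.
# The anomalous-pixel counts below are made-up illustrative data.

def one_way_anova_f(*groups):
    """Ratio of between-group to within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

healthy = [3, 2, 4, 3]       # anomalous pixels in healthy plants (illustrative)
stressed = [40, 38, 45, 41]  # anomalous pixels in stressed plants (illustrative)
f_stat = one_way_anova_f(healthy, stressed)
print(round(f_stat, 1))  # a very large F -> p-value effectively 0
```

A large F with these group sizes yields a vanishingly small p-value, which is what a reported "P-value ≈ 0" reflects.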
13

Designing Multifunctional Material Systems for Soft Robotic Components

Raymond Adam Bilodeau (8787839) 01 May 2020 (has links)
<p>By using flexible and stretchable materials in place of rigid components, soft robots can materially adapt or conform to their environment, providing built-in safety for robotic operation around humans or fragile, delicate objects. And yet, building a robot out of only soft and flexible materials can be a significant challenge depending on the tasks the robot needs to perform, for example if it must exert higher forces (even temporarily) or self-report its current state (as it deforms unexpectedly around external objects). Hence the appeal of multifunctional materials for soft robots, wherein the materials used to build the body of the robot also provide actuation, sensing, or even simply electrical connections, all while maintaining the original vision of environmental adaptability and safe interaction. Multifunctional material systems are explored throughout this dissertation in three ways: (1) sensor integration into high-strain actuators for state estimation and closed-loop control; (2) simplified control of multifunctional material systems by enabling multiple functions through a single input stimulus (<i>i.e.</i>, only one source of input power); (3) a solution to the open challenge of controlling both well-established and newly developed thermally responsive soft robotic materials through an on-body, high-strain, uniform Joule-heating energy source. Notably, these explorations are not isolated from each other: for example, work toward creating a new material for thermal control also facilitated embedded sensory feedback. The work presented in this dissertation paves a way forward for multifunctional material integration toward the end goal of fully functioning soft robots, as well as, more broadly, design methodologies for other safety-forward or adaptability-forward technologies.</p>
14

Human-in-the-loop of Cyber Physical Agricultural Robotic Systems

Maitreya Sreeram (9706730) 15 December 2020 (has links)
The onset of Industry 4.0 has provided considerable benefits to Intelligent Cyber-Physical Systems (ICPS), with technologies such as the Internet of Things, wireless sensing, cognitive computing, and artificial intelligence improving automation and control. However, with increasing automation, the "human" element in industrial systems is often overlooked for the sake of standardization. While automation aims to redirect the workload of humans to standardized and programmable entities, humans possess qualities such as cognitive awareness, perception, and intuition which cannot be automated (or programmatically replicated) but can provide automated systems with much-needed robustness and sustainability, especially in unstructured and dynamic environments. Incorporating tangible human skills and knowledge within industrial environments is the essential function of "Human-in-the-Loop" (HITL) systems, a term for systems augmented by the distinct qualities of human agents. The primary challenge, however, lies in the realistic modelling and application of these qualities: an accurate human model must be developed, integrated, and tested within different cyber-physical workflows to 1) validate the assumed advantages and investments, and 2) ensure optimized collaboration between entities. Agricultural Robotic Systems (ARS) are an example of such cyber-physical systems (CPS) which, in order to reduce reliance on traditional human-intensive approaches, leverage sensor networks, autonomous robotics, and vision systems for the early detection of diseases in greenhouse plants. Complete elimination of humans from such environments can prove sub-optimal given that greenhouses present a host of dynamic conditions and interactions which cannot be explicitly defined or managed automatically.
Supported by efficient algorithms for sampling, routing, and search, HITL augmentation of ARS can provide improved detection capabilities, system performance, and stability, while also reducing the workload of humans compared to traditional methods. This research thus studies the modelling and integration of humans into the loop of ARS, using simulation techniques and employing intelligent protocols for optimized interactions. Human qualities are modelled as human "classes" within an event-based, discrete-time simulation developed in Python. A logic controller based on collaborative intelligence (HUB-CI) efficiently dictates workflow logic, owing to the multi-agent and multi-algorithm nature of the system. Two integration hierarchies are simulated to study different types of HITL integration: sequential and shared integration. System performance metrics such as costs, number of tasks, and classification accuracy are measured and compared for different collaboration protocols within each hierarchy to verify the impact of the chosen sampling and search algorithms. The experiments performed show statistically significant advantages of the HUB-CI protocol over traditional protocols in terms of collaborative task performance and disease detectability, justifying the added investment of including HITL. The results also discuss the competitive factors between the two integrations, laying out their relative advantages and disadvantages and the scope for further research. Improving human modelling and expanding the range of human activities within the loop can improve the practicality and accuracy of the simulation in replicating an HITL-ARS. Finally, the research also discusses the development of user-interface software based on ARS methodologies to test the system in the real world.
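The abstract describes an event-based, discrete-time simulation in Python with robot and human agents in the loop. A minimal sketch of that style of simulator, using a priority queue of timestamped events, might look as follows; the agent roles, timings, and the rule for which samples a human reviews are all illustrative assumptions, not the thesis's model.

```python
# Minimal event-based discrete-time simulation with a human in the loop.
# Timings and the "robot flags even-numbered samples" rule are illustrative.
import heapq

def simulate(num_samples, robot_time=1.0, human_review_time=3.0):
    """Robot inspects samples; a human reviews each sample the robot flags."""
    clock, events, reviewed = 0.0, [], 0
    for i in range(num_samples):
        heapq.heappush(events, (i * robot_time, "robot_inspect", i))
    while events:
        clock, kind, sample = heapq.heappop(events)
        if kind == "robot_inspect" and sample % 2 == 0:  # robot flags a sample
            heapq.heappush(events, (clock + human_review_time, "human_review", sample))
        elif kind == "human_review":
            reviewed += 1
    return clock, reviewed

makespan, reviewed = simulate(4)
print(makespan, reviewed)  # 5.0 2
```

A real HITL-ARS simulator adds agent classes, routing and sampling algorithms, and a HUB-CI controller deciding which events to schedule, but the clock-and-event-queue core is the same.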
15

Pose Imitation Constraints For Kinematic Structures

Glebys T Gonzalez (14486934) 09 February 2023 (has links)
<p>The usage of robots has increased in different areas of society and human work, including medicine, transportation, education, space exploration, and the service industry. This phenomenon has generated sudden enthusiasm for developing more intelligent robots that are better equipped to perform tasks as well as humans do. Such jobs require human involvement as operators or teammates, since robots struggle with full automation in everyday settings. Soon the role of humans will extend far beyond users or stakeholders to include those responsible for training such robots. A popular form of teaching is to let robots mimic human behavior. This method is intuitive and natural and does not require specialized knowledge of robotics. While there are other methods for robots to complete tasks effectively, collaborative tasks require mutual understanding and coordination that is best achieved by mimicking human motion. This mimicking problem has been tackled through skill imitation, which reproduces human-like motion during a task shown by a trainer. Skill imitation builds on faithfully replicating the human pose and requires two steps. In the first step, an expert's demonstration is captured and pre-processed and motion features are obtained; in the second step, a learning algorithm is used to optimize for the task. The learning algorithms are often paired with traditional control systems to transfer the demonstration to the robot successfully. However, this methodology currently faces a generalization issue, as most solutions are formulated for specific robots or tasks. The lack of generalization is a problem especially because robots are replaced and improved much more frequently in collaborative environments than in traditional manufacturing. As with humans, we expect robots to have more than one skill, and the same skills to be performed by more than one type of robot. Thus, we address this issue by proposing a human motion imitation framework that can be efficiently computed and generalized to different kinematic structures (e.g., different robots).</p> <p>This framework is developed by training an algorithm to augment collaborative demonstrations, facilitating generalization to unseen scenarios. Later, we create a model for pose imitation that converts human motion to a flexible constraint space. This space can be directly mapped to different kinematic structures by specifying a correspondence between the main human joints (i.e., shoulder, elbow, wrist) and robot joints. The model permits an unlimited number of robotic links between two assigned human joints, allowing different robots to mimic the demonstrated task and human pose. Finally, we incorporate the constraint model into a reward that informs a reinforcement learning algorithm during optimization. We tested the proposed methodology in different collaborative scenarios, and assessed the task success rate, pose imitation accuracy, the occlusion the robot produces in the environment, the number of collisions, and the learning efficiency of the algorithm.</p> <p>The results show that the proposed framework creates effective collaboration across different robots and tasks.</p>
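The abstract says the constraint model is folded into a reward for reinforcement learning. A hedged sketch of such a reward, trading off task progress against pose-imitation error over the corresponded joints, might look like this; the joint names, the weight, and the absolute-error distance are illustrative assumptions, not the thesis's exact formulation.

```python
# Illustrative reward mixing task progress with pose-imitation error.
# Joint names, weight w_pose, and the error metric are assumptions.

def imitation_reward(robot_joints, human_constraints, task_progress, w_pose=0.5):
    """Higher when the corresponded robot joints track their human targets."""
    pose_error = sum(
        abs(robot_joints[name] - target)
        for name, target in human_constraints.items()
    ) / len(human_constraints)
    return task_progress - w_pose * pose_error

r = imitation_reward(
    robot_joints={"shoulder": 0.1, "elbow": 1.0, "wrist": 0.2},
    human_constraints={"shoulder": 0.0, "elbow": 1.2, "wrist": 0.2},
    task_progress=1.0,
)
print(round(r, 2))  # 0.95
```

An RL agent maximizing this kind of reward is pushed to finish the task while staying inside the human-derived constraint space, which is the coupling the paragraph describes.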
16

Towards Manipulator Task-Oriented Programming: Automating Behavior-Tree Configuration

Yue Cao (18985100) 08 July 2024 (has links)
<p dir="ltr">Task-oriented programming is a way of programming manipulators in terms of high-level tasks instead of explicit motions. It has been a long-standing vision in robotics since its early days. Despite its potential, several challenges have hindered its full realization. This thesis identifies three major challenges, particularly in task specification and the planning-to-execution transition: 1) the absence of natural-language integration in system input; 2) the dilemma of continuously developing non-uniform, domain-specific primitive-task libraries; 3) the requirement for extensive human intervention.</p><p dir="ltr">To overcome these difficulties, this thesis introduces a novel approach that integrates natural-language inputs, eliminates the need for fixed primitive-task libraries, and minimizes human intervention. It adopts the behavior tree, a modular and user-friendly form, as the task representation and advances its usage in task specification and the planning-to-execution transition. The thesis is structured into two parts: task specification and planning-to-execution transition.</p><p dir="ltr">Task specification explores the use of large language models to generate a behavior tree from an end-user's input. A Phase-Step prompt is designed to enable automatic behavior-tree generation from an end-user's abstract task description in natural language. With the powerful generalizability of large language models, it breaks the dilemma that comes with fixed primitive-task libraries in task generation. A full-process case study demonstrated the proposed approach, and an ablation study was conducted to evaluate the effectiveness of the Phase-Step prompts. Task specification also proposes behavior-tree embeddings to facilitate retrieval-augmented generation of behavior trees.
The integration of behavior-tree embeddings not only eliminates the need for manual prompt configuration but also provides a way to incorporate external domain knowledge into the generation process. Three types of evaluations were performed to assess the behavior-tree embedding method.</p><p dir="ltr">The planning-to-execution transition explores how to turn the primitive tasks from task specification into manipulator executions. Two types of primitive tasks are considered separately: point-to-point movement tasks and object-interaction tasks. For point-to-point movement tasks, a behavior-tree reward is proposed to enable reinforcement learning over low-level movement while following the high-level execution order of the behavior tree. End-users only need to specify rewards on the primitive tasks over the behavior tree, and the rest of the process is handled automatically. A 2D-space movement simulation was provided to justify the approach. For object-interaction tasks, the planning-to-execution transition uses a large-language-model-based generation approach that takes primitive tasks described in natural language as input and directly produces task-frame-formalism set-points. Combined with hybrid position/force control, this realizes a transition from primitive tasks directly into joint-level execution. Evaluations over a set of 30 primitive tasks were conducted.</p><p dir="ltr">Overall, this thesis proposes an approach that advances the behavior tree toward automated task specification and planning-to-execution transition. It opens up new possibilities for building better task-oriented manipulator programming systems.</p>
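To ground the behavior-tree representation discussed above, here is a minimal tree core with the two standard composites, Sequence and Fallback, over leaf actions. The "pick and place" tree and its action names are invented for illustration; the thesis's LLM-generated trees are far richer than this.

```python
# Minimal behavior-tree core: Sequence and Fallback composites over leaves.
# The example tree and action names are illustrative, not from the thesis.

SUCCESS, FAILURE = "success", "failure"

class Leaf:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Succeeds only if every child succeeds, ticking left to right."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Succeeds as soon as any child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

log = []
def act(name, result):
    def fn():
        log.append(name)  # record execution order
        return result
    return Leaf(fn)

# Illustrative tree: try to grasp, fall back to a regrasp, then place.
tree = Sequence(
    Fallback(act("grasp", FAILURE), act("regrasp", SUCCESS)),
    act("place", SUCCESS),
)
print(tree.tick())  # success; log records grasp -> regrasp -> place
```

The modularity the abstract highlights comes from exactly this composability: an LLM can emit a new arrangement of the same node types without any change to the executor.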
17

LEARNING GRASP POLICIES FOR MODULAR END-EFFECTORS OF MOBILE MANIPULATION PLATFORMS IN CLUTTERED ENVIRONMENTS

Juncheng Li (18418974) 22 April 2024 (has links)
<p dir="ltr">This dissertation presents the findings and research conducted during my Ph.D. study, which focuses on developing grasp policies for modular end-effectors on mobile manipulation platforms operating in cluttered environments. The primary objective of this research is to enhance the performance and accuracy of robotic manipulation systems in complex, real-world scenarios. The work has potential implications for various domains, including the rapidly growing Industry 4.0 and the advancement of autonomous systems in space habitats.</p><p dir="ltr">The dissertation offers a comprehensive literature review, emphasizing the challenges faced by mobile manipulation platforms in cluttered environments and the state-of-the-art techniques for grasping and manipulation. It showcases the development and evaluation of a Modular End-Effector System (MEES) for mobile manipulation platforms, which includes the investigation of object 6D pose estimation techniques, the generation of a deep learning-based grasping dataset for MEES, the development of a suction cup gripper grasping policy (Sim-Suction), the development of a two-finger grasping policy (Sim-Grasp), and the integration of Modular End-Effector System grasping policy (Sim-MEES). The proposed methodology integrates hardware designs, control algorithms, data-driven methods, and large language models to facilitate adaptive grasping strategies that consider the unique constraints and requirements of cluttered environments.</p><p dir="ltr">Furthermore, the dissertation discusses future research directions, such as further investigating the Modular End-Effector System grasping policy. This Ph.D. study aims to contribute to the advancement of robotic manipulation technology, ultimately enabling more versatile and robust mobile manipulation platforms capable of effectively interacting with complex environments.</p>
18

Monocular Camera-based Localization and Mapping for Autonomous Mobility

Shyam Sundar Kannan (6630713) 10 October 2024 (has links)
<p dir="ltr">Visual localization is a crucial component for autonomous vehicles and robots, enabling them to navigate effectively by interpreting visual cues from their surroundings. In visual localization, the agent estimates its six degrees of freedom camera pose using images captured by onboard cameras. However, the operating environment of the agent can undergo various changes, such as variations in illumination, time of day, seasonal shifts, and structural modifications, all of which can significantly affect the performance of vision-based localization systems. To ensure robust localization in dynamic conditions, it is vital to develop methods that can adapt to these variations.</p><p dir="ltr">This dissertation presents a suite of methods designed to enhance the robustness and accuracy of visual localization for autonomous agents, addressing the challenges posed by environmental changes. First, we introduce a visual place recognition system that aids the autonomous agent in identifying its location within a large-scale map by retrieving a reference image closely matching the query image captured by the camera. This system employs a vision transformer to extract both global and patch-level descriptors from the images. Global descriptors, which are compact vectors devoid of geometric details, facilitate the rapid retrieval of candidate images from the reference dataset. Patch-level descriptors, derived from the patch tokens of the transformer, are subsequently used for geometric verification, re-ranking the candidate images to pinpoint the reference image that most closely matches the query.</p><p dir="ltr">Building on place recognition, we present a method for pose refinement and relocalization that integrates the environment's 3D point cloud with the set of reference images. 
The closest image retrieved in the initial place recognition step provides a coarse pose estimate of the query image, which is then refined to compute a precise six degrees of freedom pose. This refinement process involves extracting features from the query image and the closest reference image and then regressing these features using a transformer-based network that estimates the pose of the query image. The features are appended with 2D and 3D positional embeddings that are calculated based on the camera parameters and the 3D point cloud of the environment. These embeddings add spatial awareness to the regression model, hence enhancing the accuracy of the pose estimation. The resulting refined pose can serve as a robust initialization for various localization frameworks or be used for localization on the go. </p><p dir="ltr">Recognizing that the operating environment may undergo permanent changes, such as structural modifications that can render existing reference maps outdated, we also introduce a zero-shot visual change detection framework. This framework identifies and localizes changes by comparing current images with historical images from the same locality on the map, leveraging foundational vision models to operate without extensive annotated training data. It accurately detects changes and classifies them as temporary or permanent, enabling timely and informed updates to reference maps. This capability is essential for maintaining the accuracy and robustness of visual localization systems over time, particularly in dynamic environments.</p><p dir="ltr">Collectively, the contributions of this dissertation in place recognition, pose refinement, and change detection advance the state of visual localization, providing a comprehensive and adaptable framework that supports the evolving requirements of autonomous mobility. 
By enhancing the accuracy, robustness, and adaptability of visual localization, these methods contribute significantly to the development and deployment of fully autonomous systems capable of navigating complex and changing environments with high reliability.</p>
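The rapid-retrieval step of the place-recognition pipeline described above (compact global descriptors, nearest-neighbor search, then re-ranking) can be sketched as a cosine-similarity lookup. The tiny 3-D descriptors and image names here are made up; real descriptors come from the vision transformer and have hundreds of dimensions.

```python
# Hedged sketch of global-descriptor retrieval: rank reference images by
# cosine similarity to the query descriptor. Vectors are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query, references, top_k=2):
    """Return reference image ids ranked by descriptor similarity."""
    ranked = sorted(references, key=lambda rid: cosine(query, references[rid]),
                    reverse=True)
    return ranked[:top_k]

refs = {
    "img_a": [1.0, 0.0, 0.0],
    "img_b": [0.9, 0.1, 0.0],
    "img_c": [0.0, 1.0, 0.0],
}
candidates = retrieve([0.95, 0.05, 0.0], refs)
print(candidates)  # ['img_a', 'img_b']
```

In the full system these top-k candidates are then re-ranked with patch-level descriptors and geometric verification before the pose-refinement stage.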
19

Reinforcement learning and convergence analysis with applications to agent-based systems

Leng, Jinsong January 2008 (has links)
Agent-based systems usually operate in real-time, stochastic, and dynamic environments. Many theoretical and applied techniques have been applied to the investigation of agent architecture with respect to communication, cooperation, and learning, in order to provide a framework for implementing artificial intelligence and computing techniques. Intelligent agents are required to adapt and learn in uncertain environments via communication and collaboration (in both competitive and cooperative situations). The ability to reason and learn is a fundamental feature of intelligent agents. Due to the inherent complexity, however, it is difficult to verify the properties of complex and dynamic environments a priori. Since analytic techniques are inadequate for solving these problems, reinforcement learning (RL) has emerged as a popular approach that maps states to actions so as to maximise long-term rewards. Computer simulation is needed to replicate an experiment for testing and verifying the efficiency of simulation-based optimisation techniques; to this end, a simulation testbed called robot soccer is used to test the learning algorithms in the specified scenarios. This research investigates simulation-based optimisation techniques in agent-based systems. Firstly, a hybrid agent teaming framework is presented for investigating agent team architecture, learning abilities, and other specific behaviours. Secondly, novel reinforcement learning algorithms are developed to verify goal-oriented agents' competitive and cooperative learning abilities for decision-making. In addition, the function approximation technique known as tile coding (TC) is used to keep the state space from growing exponentially with dimensionality (the curse of dimensionality). Thirdly, the underlying mechanism of eligibility traces is analysed in terms of on-policy versus off-policy algorithms and accumulating versus replacing traces.
Fourthly, "design of experiment" techniques, such as the simulated annealing method and response surface methodology, are integrated with reinforcement learning to enhance performance. Fifthly, a methodology is proposed for finding the optimal parameter values to improve the convergence and efficiency of the learning algorithms. Finally, the thesis provides a full-fledged numerical analysis of the efficiency of the various RL techniques.
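Tile coding, mentioned above as the function approximation used against the curse of dimensionality, can be sketched for a 1-D state: several overlapping, slightly offset tilings each contribute one active tile index, turning a continuous value into a sparse binary feature vector. The tiling counts and widths here are made-up illustrative parameters.

```python
# Illustrative tile coder for a 1-D continuous state.
# n_tilings, tile_width, and tiles_per_tiling are made-up parameters.

def tile_features(x, n_tilings=4, tile_width=1.0, tiles_per_tiling=10):
    """Return one active tile index per tiling for a state x >= 0."""
    active = []
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings  # each tiling is slightly shifted
        index = int((x + offset) / tile_width)
        active.append(t * tiles_per_tiling + index)
    return active

features = tile_features(2.3)
print(features)  # four active tiles, one per tiling
```

A linear value function over these sparse features generalizes between nearby states (which share most active tiles) while keeping each update cheap, which is why TC pairs well with eligibility-trace methods.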
20

Development of Learning Control Strategies for a Cable-Driven Device Assisting a Human Joint

Hao Xiong (7954217) 25 November 2019 (has links)
<div>There are millions of individuals in the world who currently experience limited mobility as a result of aging, stroke, injuries to the brain or spinal cord, and certain neurological diseases. Robotic Assistive Devices (RADs) have shown superiority in helping people with limited mobility by providing physical movement assistance. However, the RADs currently on the market for people with limited mobility are still far from intelligent.</div><div><br></div><div>Learning control strategies are developed in this study to make a Cable-Driven Assistive Device (CDAD) intelligent in assisting a human joint (e.g., a knee, ankle, or wrist joint). CDADs are a type of RAD designed based on Cable-Driven Parallel Robots (CDPRs). A PID–FNN control strategy and DDPG-based strategies are proposed to allow a CDAD to learn physical human-robot interactions when controlling the pose of the human joint. Both pose-tracking and trajectory-tracking tasks are designed to evaluate the PID–FNN control strategy and the DDPG-based strategies through simulations, conducted in the Gazebo simulator using an example CDAD with three degrees of freedom and four cables. Simulation results show that, with proper learning, the proposed PID–FNN control strategy and DDPG-based strategies are effective in controlling a CDAD.</div>
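The PID–FNN strategy above builds on an ordinary discrete PID loop, with a fuzzy neural network adapting the behavior online. A minimal sketch of just the PID core on a toy plant follows; the gains and the first-order integrator plant are illustrative assumptions, and the FNN adaptation is omitted entirely.

```python
# Minimal discrete PID loop on a toy integrator plant.
# Gains, dt, and the plant model are illustrative; the FNN term is omitted.

def run_pid(setpoint, steps, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """Drive a first-order integrator plant toward setpoint; return final state."""
    state, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - state
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative  # control effort
        state += u * dt  # simple integrator plant
        prev_error = error
    return state

final = run_pid(setpoint=1.0, steps=200)
print(round(final, 3))
```

In the CDAD setting the "plant" is the cable-driven joint pose and the control effort maps to cable tensions; the learning components adjust this loop to the physical human-robot interaction rather than fixed gains.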
