  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Flexibility in a knowledge-based system for solving dynamic resource-constrained scheduling problems

Hildum, David Waldau 01 January 1994 (has links)
The resource-constrained scheduling problem (RCSP) involves the assignment of a limited set of resources to a collection of tasks, with the intent of satisfying some particular qualitative objective, under a variety of technological and temporal constraints. Real-world environments, however, introduce a variety of complications to the standard RCSP. The dynamic resource-constrained scheduling problem describes a class of real-world RCSPs that exist within the context of dynamic and unpredictable environments, where the details of the problem are often incomplete, and subject to change over time, without notice. Previous approaches to solving resource-constrained scheduling problems failed to focus on the dynamic nature of real-world environments. The scheduling process occurs away from the environment in which the resulting schedule is executed. Complete prior knowledge of the order set is assumed, and reaction to changes in the environment, if it occurs at all, is limited. We have developed a generic, multi-faceted, knowledge-based approach to solving dynamic resource-constrained scheduling problems, which focuses on issues of flexibility during the solution process to enable effective reaction to dynamic environments. Our approach is characterized by a highly opportunistic control scheme that provides the ability to adapt quickly to changes in the environment, a least-commitment scheduling procedure that preserves maneuverability by explicitly incorporating slack time into the developing schedule, and the systematic consultation of a range of relevant scheduling perspectives at key decision-making points that provides an informed view of the current state of problem-solving at all times. The Dynamic Scheduling System (DSS) is a working implementation of our scheduling approach, capable of representing a wide range of dynamic RCSPs, and producing quality schedules under a variety of real-world conditions. 
It handles a number of additional domain complexities, such as inter-order tasks and mobile resources with significant travel requirements. We discuss our scheduling approach and its application to two different RCSP domains, and evaluate its effectiveness in each, using special application systems built with DSS.
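The least-commitment idea in the abstract above can be sketched as a toy scheduler that reserves explicit slack time between task assignments, so later disruptions are absorbed without a full reschedule. This is an illustrative sketch only (the class names and the fixed slack parameter are hypothetical), not the DSS implementation, which uses opportunistic, multi-perspective control:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int

def schedule_with_slack(tasks, slack=2):
    """Assign start times sequentially, inserting slack time after each
    task so a delay can be absorbed locally instead of cascading through
    the whole schedule."""
    schedule, t = [], 0
    for task in tasks:
        schedule.append((task.name, t, t + task.duration))
        t += task.duration + slack  # reserved maneuvering room
    return schedule
```

A task that overruns by up to `slack` time units leaves every later start time valid, which is the maneuverability the least-commitment procedure trades schedule compactness for.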
492

Paying attention to what matters: Observation abstraction in partially observable environments

Wolfe, Alicia Peregrin 01 January 2010 (has links)
Autonomous agents may not have access to complete information about the state of the environment. For example, a robot soccer player may only be able to estimate the locations of other players not in the scope of its sensors. However, even though all the information needed for ideal decision making cannot be sensed, all that is sensed is usually not needed. The noise and motion of spectators, for example, can be ignored in order to focus on the game field. Standard formulations do not consider this situation, assuming that everything that can be sensed must be included in any useful abstraction. This dissertation extends the Markov Decision Process Homomorphism framework (Ravindran, 2004) to partially observable domains, focusing specifically on reducing Partially Observable Markov Decision Processes (POMDPs) when the model is known. This involves ignoring aspects of the observation function which are irrelevant to a particular task. Abstraction is particularly important in partially observable domains, as it enables the formation of a smaller domain model and thus more efficient use of the observed features.
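The core intuition can be sketched in a few lines of hypothetical code (the homomorphism machinery itself is considerably more involved): observations that differ only in task-irrelevant features should collapse to the same abstract observation, shrinking the model the agent must reason over.

```python
def abstract_observation(obs, relevant):
    """Project a raw observation (feature -> value) onto the
    task-relevant features, discarding the rest (e.g. spectator
    noise and motion)."""
    return tuple(sorted((f, obs[f]) for f in relevant))

# Two raw observations that differ only in an irrelevant feature
# map to one abstract observation.
a = {"ball": (3, 4), "opponent": (7, 1), "crowd_noise": 0.9}
b = {"ball": (3, 4), "opponent": (7, 1), "crowd_noise": 0.2}
```

The hard part, which the dissertation addresses, is proving that such a projection preserves optimal behavior rather than choosing it by hand.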
493

SwinFSR: Stereo Image Super-Resolution using SwinIR and Frequency Domain Knowledge

CHEN, KE January 2023 (has links)
Stereo Image Super-Resolution (stereoSR) has attracted significant attention in recent years due to the extensive deployment of dual cameras in mobile phones, autonomous vehicles and robots. In this work, we propose a new StereoSR method, named SwinFSR, based on an extension of SwinIR, originally designed for single image restoration, and the frequency domain knowledge obtained by the Fast Fourier Convolution (FFC). Specifically, to effectively gather global information, we modify the Residual Swin Transformer blocks (RSTBs) in SwinIR by explicitly incorporating the frequency domain knowledge using the FFC and employing the resulting residual Swin Fourier Transformer blocks (RSFTBlocks) for feature extraction. In addition, for the efficient and accurate fusion of stereo views, we propose a new cross-attention module referred to as RCAM, which achieves highly competitive performance at a lower computational cost than the state-of-the-art cross-attention modules. Extensive experimental results and ablation studies demonstrate the effectiveness and efficiency of our proposed SwinFSR. / Thesis / Master of Applied Science (MASc)
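The property FFC exploits — that an operation in the frequency domain has a global receptive field — can be illustrated with a NumPy toy, not the SwinFSR code itself (function and parameter names here are hypothetical, and the per-frequency weights stand in for the learned part of a real FFC layer):

```python
import numpy as np

def spectral_mix(feature_map, freq_weights):
    """Toy spectral layer: FFT, per-frequency weighting, inverse FFT.
    Because each frequency coefficient depends on every input pixel,
    every output pixel depends on every input pixel -- a global
    receptive field in a single cheap operation."""
    spec = np.fft.rfft2(feature_map)
    return np.fft.irfft2(spec * freq_weights, s=feature_map.shape)
```

With all-ones weights the layer is the identity; a learned weight vector lets the network reshape the spectrum globally, which is hard to match with stacks of small spatial convolutions.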
494

Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

Koh, Senglee 01 January 2018 (has links)
State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in control and collaboration of manipulation task behaviors. However, it remains a significant challenge given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. But research on robot-centric collaboration has garnered momentum in recent years; robots are now planning in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents exploiting the knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous-planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques allowing non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guiding task recovery with implied scene geometries and trajectory via symmetry cues and configuration space abstraction. 
Empirical results from four user studies show our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
495

Transparency and Communication Patterns in Human-Robot Teaming

Lakhmani, Shan 01 May 2019 (has links)
In anticipation of the complex, dynamic battlefields of the future, military operations are increasingly demanding robots with increased autonomous capabilities to support soldiers. Effective communication is necessary to establish a common ground on which human-robot teamwork can be established across the continuum of military operations. However, the types and format of communication for mixed-initiative collaboration is still not fully understood. This study explores two approaches to communication in human-robot interaction, transparency and communication pattern, and examines how manipulating these elements with a robot teammate affects its human counterpart in a collaborative exercise. Participants were coupled with a computer-simulated robot to perform a cordon-and-search-like task. A human-robot interface provided different transparency types - about the robot's decision making process alone, or about the robot's decision making process and its prediction of the human teammate's decision making process - and different communication patterns - either conveying information to the participant or both conveying information to and soliciting information from the participant. This experiment revealed that participants found robots that both conveyed and solicited information to be more animate, likeable, and intelligent than their less interactive counterparts, but working with those robots led to more misses in a target classification task. Furthermore, the act of responding to the robot led to a reduction in the number of correct identifications made, but only when the robot was solely providing information about its own decision making process. Findings from this effort inform the design of next-generation visual displays supporting human-robot teaming.
496

Learning to solve Markovian decision processes

Singh, Satinder Pal 01 January 1994 (has links)
This dissertation is about building learning control architectures for agents embedded in finite, stationary, and Markovian environments. Such architectures give embedded agents the ability to improve autonomously the efficiency with which they can achieve goals. Machine learning researchers have developed reinforcement learning (RL) algorithms based on dynamic programming (DP) that use the agent's experience in its environment to improve its decision policy incrementally. This is achieved by adapting an evaluation function in such a way that the decision policy that is "greedy" with respect to it improves with experience. This dissertation focuses on finite, stationary and Markovian environments for two reasons: it allows the development and use of a strong theory of RL, and there are many challenging real-world RL tasks that fall into this category. This dissertation establishes a novel connection between stochastic approximation theory and RL that provides a uniform framework for understanding all the different RL algorithms that have been proposed to date. It also highlights a dimension that clearly separates all RL research from prior work on DP. Two other theoretical results showing how approximations affect performance in RL provide partial justification for the use of compact function approximators in RL. In addition, a new family of "soft" DP algorithms is presented. These algorithms converge to solutions that are more robust than the solutions found by classical DP algorithms. Despite all of the theoretical progress, conventional RL architectures scale too poorly to be practical for many real-world problems. This dissertation studies two aspects of the scaling issue: the need to accelerate RL, and the need to build RL architectures that can learn to solve multiple tasks. It presents three RL architectures, CQ-L, H-DYNA, and BB-RL, that accelerate learning by facilitating transfer of training from simple to complex tasks. 
Each architecture uses a different method to achieve transfer of training; CQ-L uses the evaluation functions for simple tasks as building blocks to construct the evaluation function for complex tasks, H-DYNA uses the evaluation functions for simple tasks to build an abstract environment model, and BB-RL uses the decision policies found for the simple tasks as the primitive actions for the complex tasks. A mixture of theoretical and empirical results are presented to support the new RL architectures developed in this dissertation.
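The DP backup that RL algorithms approximate from sampled experience can be sketched as classical value iteration on a known finite MDP. This is a generic illustration of the greedy-policy/evaluation-function relationship described above, not one of the dissertation's architectures:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Classical DP on a finite MDP.
    P[a, s, s'] -- transition probabilities, R[a, s] -- expected reward.
    RL methods perform the same backup incrementally, from experience,
    without knowing P and R."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * P @ V          # Q[a, s]: one-step lookahead
        V_new = Q.max(axis=0)          # greedy backup over actions
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)  # values and greedy policy
        V = V_new
```

The policy returned is exactly the one that is "greedy" with respect to the converged evaluation function `V`.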
497

The Role of Innovative Elements in the Patentability of Machine Learning Algorithms

Power, Cheryl Denise 16 December 2022 (has links)
Advances in data-driven digital innovations during Industrial Revolution 4.0 are the foundation for this patent discussion. In a shifting technological paradigm, I argue for an approach that considers the broader theoretical perspectives on innovation and the place of the term invention within that perspective. This research could inform the assessment of a patent for Machine Learning algorithms in Artificial Intelligence. For instance, inventions may have elements termed abstract (yet innovative) and not previously within the purview of patent law. Emergent algorithms do not necessarily align with existing patent guidance; rather, algorithms are nuanced, increasing support for a refined approach. In this thesis, I discuss the term algorithm and how a novel combination of elements, or a cooperating set of essential and non-essential elements, can yield a patentable result. For instance, a patentable end can include an algorithm as part of an application, whether it is integrated with a functional physical component such as a computer, whether it includes sophisticated calculations with a tangible end, or whether parameters adjust for speed or utility. I plan to reconsider the term algorithm in my arguments by exploring some challenges to section 27(8) of the Patent Act, "What may not be patented," including, that "no patent shall be granted for any mere scientific principle or abstract theorem." The role of the algorithm in the proposed invention can be determinative of patent eligibility. There are three lines of evidence used in this thesis. First, the thesis uses theoretical perspectives in innovation, some close to a century old. These are surprisingly relevant in the digital era. I illustrate the importance of considering these perspectives in innovation when identifying key contributing factors in a patent framework. 
For instance, I use innovation perspectives, including cluster theory, to inform the development of an approach to the patentable subject matter and the obviousness standard in AI software inventions. This approach highlights applications of emerging algorithmic technologies and considers the evolving nature of math beyond the basic algorithm and as a part of a physical machine or manufacture that is important in this emerging technological context. As part of the second line of evidence, I review how the existing Canadian Federal & Supreme Court cases inform patent assessments for algorithms found in emerging technologies such as Artificial Intelligence. I explore the historical understanding of patent eligibility in software, professional skills, and business methods and apply cases that use relevant inventions from a different discipline. As such, I reflect upon the differing judicial perspectives that could influence achieving patent-eligible subject matter in the software space and, by extension how these decisions would hold in current times. Further to patent eligibility, I review the patentability requirements for novelty, utility, and non-obviousness. As part of the third line of evidence, I reflect on why I collected the interview data and justify why it contributes to a better understanding of the thesis issues and overall narrative. Next, I provide detail and explain why certain questions formed a part of the interview and how the responses helped to synthesize the respective chapters of the thesis. The questions focus on patent drafting, impressions of the key cases, innovation, and the in-depth expertise of the experts on these topics. Finally, I provide recommendations for how the patent office and the courts could explore areas for further inquiry and action.
498

Evaluation of Dentists’ Perceptions and Intention to Use Voice Assistant Technology

Warren, Spencer 10 November 2022 (has links)
No description available.
499

Dynamic scenario simulation optimization

Restivo, André Monteiro de Oliveira January 2006 (has links)
The optimization of parameter-driven simulations has been the focus of many research papers. Algorithms like Hill Climbing, Tabu Search and Simulated Annealing have been thoroughly discussed and analyzed. However, these algorithms do not take into account the fact that simulations can have dynamic scenarios. This dissertation analyzes whether these classical optimization methods, combined with clustering techniques, can optimize parameter-driven simulations that have dynamic scenarios. This will be accomplished by optimizing simulations in several random static scenarios. The optimum results of each of these optimizations will be clustered in order to find a set of typical solutions for the simulation. These typical solutions can then be used in dynamic scenario simulations as references that will help the simulation adapt to scenario changes. A generic optimization and clustering system was developed in order to test the method just described. A simple traffic simulation system, to be used as a testbed, was also developed. The results of this approach show that, in some cases, it is possible to improve the outcome of simulations in dynamic environments and still use the classical methods developed for static scenarios.
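The optimize-then-cluster pipeline described above can be sketched with a generic hill climber and a crude one-dimensional clustering step. All names and parameters here are illustrative, not taken from the dissertation, and a real system would cluster multi-dimensional parameter vectors:

```python
def hill_climb(score, x, neighbors, max_steps=1000):
    """Classical hill climbing: move to the best neighbor while it
    improves the score; stop at a local optimum."""
    for _ in range(max_steps):
        best = max(neighbors(x), key=score)
        if score(best) <= score(x):
            break
        x = best
    return x

def typical_solutions(optima, radius=1.0):
    """Greedy 1-D clustering of per-scenario optima. Each cluster
    representative serves as a reference solution a dynamic-scenario
    simulation can switch to when conditions change."""
    centers = []
    for opt in sorted(optima):
        if not centers or abs(opt - centers[-1]) > radius:
            centers.append(opt)
    return centers
```

Running `hill_climb` once per random static scenario and feeding the results to `typical_solutions` yields the small library of reference solutions the abstract describes.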
500

Learning object recognition strategies

Draper, Bruce Anthony 01 January 1993 (has links)
Most knowledge-directed vision systems recognize objects by the use of hand-crafted, heuristic control strategies. Generally, the programmer or knowledge engineer who constructs them begins with an intuitive notion of how an object should be recognized, a notion that is laboriously refined by trial-and-error. Eventually the programmer finds a combination of features (e.g. shape, color, or context) and methods (e.g. geometric model matching, minimum-distance classification or generalized Hough transforms) that allow each object to be reliably identified within its domain. Unfortunately, human engineering is not cost-effective for many real-world applications, a defect that has relegated most knowledge-directed vision systems to the laboratory. Knowledge-directed systems also tend to be difficult to analyze, since their performance, in terms of cost, accuracy, and reliability, is unknown, and comparisons to other hand-crafted systems are difficult at best. Worst of all, when the domain is changed, knowledge-directed systems often have to be rebuilt from scratch. The Schema Learning System (SLS) addresses these problems by learning knowledge-directed recognition strategies under supervision. More precisely, SLS learns its recognition strategies from training images (with solutions) and a library of generic visual procedures. The result is a system that develops robust and efficient recognition strategies with a minimum of human involvement, and that analyzes the strategies it learns to estimate both their expected cost and probability of failure. In order to represent strategies, recognition is modeled in SLS as a sequence of small verification tasks interleaved with representational transformations. At each level of representation, features of a representational instance, called a hypothesis, are measured in order to verify or reject the hypothesis. 
Hypotheses that are verified (or, more accurately, not rejected) are then transformed to a more abstract level of representation, where features of the new representation are measured and the process repeats itself. The strategies learned by SLS are executable recognition graphs capable of recognizing the 3D locations and orientations of objects in complex scenes.
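The verify-transform chain, and the kind of cost analysis the abstract says SLS performs over learned strategies, can be sketched generically (all names here are hypothetical; in the real system the stages are learned, not hand-written):

```python
def run_strategy(hypothesis, stages):
    """Apply (verify, transform) stages in order: a hypothesis that is
    not rejected is lifted to the next, more abstract representation."""
    for verify, transform in stages:
        if not verify(hypothesis):
            return None  # hypothesis rejected at this level
        hypothesis = transform(hypothesis)
    return hypothesis

def expected_cost(stage_costs, pass_probs):
    """Expected cost of a verification chain: stage i is paid only if
    every earlier verification passed. Estimates like this let a
    learner compare candidate strategies before deploying them."""
    total, reach = 0.0, 1.0
    for cost, p in zip(stage_costs, pass_probs):
        total += reach * cost
        reach *= p
    return total
```

Ordering cheap, highly selective verifications first minimizes this expectation, which is one reason analyzing cost and failure probability matters for strategy learning.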
