  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
681

A knowledge-based system for the conceptual design of gas and liquid separation systems

Beilstein, James Ralph 01 January 1996 (has links)
Most recent research in the synthesis of separation systems has focused on the development of more rigorous procedures for the design and synthesis of specific unit operations, rather than the synthesis of these units as part of the overall flowsheet. Furthermore, many of the chemical processes used in the production of commodity and specialty chemicals are characterized by many reaction steps with separations required between each step. The coupling of the reaction and separation steps has a significant impact on the recycle and separation system structures. A completely integrated hierarchical procedure has been developed for the conceptual design of vapor-liquid separation systems found in multiple-reaction-step processes. A new decomposition procedure is presented for determining the general structure of the separation system for processes involving reactions which occur in vapor-liquid-liquid-solid phase mixtures. The interactions of the separation subsystems for vapor recovery, solid recovery, and liquid separations are identified, including process recycles between subsystems. The vapor recovery system and distillation sequence are then synthesized, the dominant process design variables are identified, the size and cost of the process units are determined, and the economic potential is calculated. Alternatives can then be quickly compared and ranked. Design procedures have been implemented in an expert system environment for the synthesis of gas membrane, condensation (high pressure or low temperature), and complex distillation column separations (sidestream, side rectifier, and side stripper columns). Finally, a procedure is presented for assessing the incentive for combining distillation systems in multiple-step reaction processes. Dilute mixture separations generally represent the highest separation costs in the distillation system.
The pooling of plant distillation separations can lead to better separations by reducing flow imbalances and dilute mixtures in the separation system feed. A hybrid of object-oriented and rule-based techniques has been used in the development and implementation of the procedure in PIPII, a computer-aided design tool which can rapidly generate process flowsheet alternatives and estimate the optimum range of the process conditions. The hierarchical nature of the procedure quickly prunes the number of viable alternatives which must be examined for a given process. The procedures, reasoning, and methods are thoroughly discussed within.
682

Learning text analysis rules for domain-specific natural language processing

Soderland, Stephen Glenn 01 January 1997 (has links)
An enormous amount of knowledge is needed to infer the meaning of unrestricted natural language. The problem can be reduced to a manageable size by restricting attention to a specific domain, which is a corpus of texts together with a predefined set of concepts that are of interest to that domain. Two widely different domains are used to illustrate this domain-specific approach. One domain is a collection of Wall Street Journal articles in which the target concept is management succession events: identifying persons moving into corporate management positions or moving out. A second domain is a collection of hospital discharge summaries in which the target concepts are various classes of diagnosis or symptom. The goal of an information extraction system is to identify references to the concept of interest for a particular domain. A key knowledge source for this purpose is a set of text analysis rules based on the vocabulary, semantic classes, and writing style peculiar to the domain. This thesis presents CRYSTAL, an implemented system that automatically induces domain-specific text analysis rules from training examples. CRYSTAL learns rules that approach the performance of hand-coded rules, are robust in the face of noise and inadequate features, and require only a modest amount of training data. CRYSTAL belongs to a class of machine learning algorithms called covering algorithms, and presents a novel control strategy with time and space complexities that are independent of the number of features. CRYSTAL navigates efficiently through an extremely large space of possible rules. CRYSTAL also demonstrates that expressive rule representation is essential for high performance, robust text analysis rules. While simple rules are adequate to capture the most salient regularities in the training data, high performance can only be achieved when rules are expressive enough to reflect the subtlety and variability of unrestricted natural language.
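The covering-algorithm strategy the abstract describes can be illustrated with a minimal sequential-covering sketch: learn one rule at a time, remove the positive examples it covers, and repeat. The feature-set representation, greedy scoring heuristic, and function names below are illustrative assumptions, not CRYSTAL's actual rule language or implementation.

```python
def rule_covers(rule, example):
    """A rule covers an example when all of its required features appear."""
    return rule <= example

def learn_one_rule(positives, negatives):
    """Greedily specialize an empty rule until no negative is covered."""
    rule = set()
    candidates = set().union(*positives)
    while any(rule_covers(rule, n) for n in negatives):
        pool = candidates - rule
        if not pool:
            break  # no feature left that could exclude the negatives
        # Keep the feature that preserves the most positives while
        # excluding the most negatives.
        best = max(
            pool,
            key=lambda f: sum(rule_covers(rule | {f}, p) for p in positives)
                        - sum(rule_covers(rule | {f}, n) for n in negatives),
        )
        rule.add(best)
    return rule

def sequential_covering(positives, negatives):
    """Learn rules one at a time, removing the positives each rule covers."""
    rules, remaining = [], list(positives)
    while remaining:
        rule = learn_one_rule(remaining, negatives)
        covered = [p for p in remaining if rule_covers(rule, p)]
        if not covered:
            break  # cannot make further progress
        rules.append(rule)
        remaining = [p for p in remaining if p not in covered]
    return rules
```

CRYSTAL's actual rules constrain syntactic and semantic properties of text and are induced bottom-up from annotated instances; this toy only conveys the covering control loop.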
683

The development of hierarchical knowledge in robot systems

Hart, Stephen W 01 January 2009 (has links)
This dissertation investigates two complementary ideas in the literature on machine learning and robotics—those of embodiment and intrinsic motivation—to address a unified framework for skill learning and knowledge acquisition. "Embodied" systems make use of structure derived directly from sensory and motor configurations for learning behavior. Intrinsically motivated systems learn by searching for native, hedonic value through interaction with the world. Psychological theories of intrinsic motivation suggest that there exist internal drives favoring open-ended cognitive development and exploration. I argue that intrinsically motivated, embodied systems can learn generalizable skills, acquire control knowledge, and form an epistemological understanding of the world in terms of behavioral affordances. I propose that the development of behavior results from the assembly of an agent's sensory and motor resources into state and action spaces that can be explored autonomously. I introduce an intrinsic reward function that can lead to the open-ended learning of hierarchical behavior. This behavior is factored into declarative "recipes" for patterned activity and common sense procedural strategies for implementing them in a variety of run-time contexts. These skills form a categorical basis for the robot to interpret and model its world in terms of the behavior it affords. Experiments conducted on a bimanual robot illustrate a progression of cumulative manipulation behavior addressing manual and visual skills. Such accumulation of skill over the long-term by a single robot is a novel contribution that has yet to be demonstrated in the literature.
684

Learning the structure of activities for a mobile robot

Schmill, Matthew D 01 January 2004 (has links)
At birth, the human infant has only a very rudimentary perceptual system and similarly rudimentary control over its musculature. As time goes on, a child develops. Its ability to control, perceive, and predict its own behavior improves as it interacts with its environment. We are interested in the process of development, in particular with respect to activity. How might an intelligent agent of our own design learn to represent and organize procedural knowledge so that over time it becomes more competent at achieving goals in its environment? In this dissertation, we present a system that allows an agent to learn models of activity and its environment and then use those models to create units of behavior of increasing sophistication for the purpose of achieving its own internally generated goals.
685

Meta-level control in multi-agent systems

Raja, Anita 01 January 2003 (has links)
Sophisticated agents operating in open environments must make complex real-time control decisions on scheduling and coordination of domain activities. These decisions are made in the context of limited resources and uncertainty about the outcomes of activities. Many efficient architectures and algorithms that support these computation-intensive activities have been developed and studied. However, none of these architectures explicitly reason about the consumption of time and other resources by these activities, which may degrade an agent's performance. The problem of sequencing execution and computational activities without consuming too many resources in the process is the meta-level control problem for a resource-bounded rational agent. The focus of this research is to provide effective allocation of computation and improved performance of individual agents in a cooperative multi-agent system. This is done by approximating the ideal solution to meta-level decisions made by these agents using reinforcement learning methods. A meta-level agent control architecture for meta-level reasoning with bounded computational overhead is described. This architecture supports decisions on when to accept, delay or reject a new task, when it is appropriate to negotiate with another agent, whether to renegotiate when a negotiation task fails, how much effort to put into scheduling when reasoning about a new task and whether to reschedule when actual execution performance deviates from expected performance. The major contributions of this work are: a resource-bounded framework that supports detailed reasoning about scheduling and coordination costs; an abstract representation of the agent state which is used by hand-generated heuristic strategies to make meta-level control decisions; and a reinforcement learning based approach which automatically learns efficient meta-level control policies.
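The reinforcement-learning approach to meta-level control can be sketched as tabular Q-learning over an abstract agent state, choosing among accept/delay/reject decisions. The abstract state (a coarse task load), the toy reward model, and all names below are assumptions for illustration, not the dissertation's actual formulation.

```python
import random

ACTIONS = ("accept", "delay", "reject")

def simulate(load, action):
    """Toy environment: load is the abstract task load (0, 1, or 2).
    Accepting under full load wastes resources; rejecting under low
    load forgoes utility."""
    if action == "accept":
        reward = 1.0 if load < 2 else -1.0
        next_load = min(load + 1, 2)
    elif action == "delay":
        reward, next_load = 0.0, load
    else:  # reject
        reward = -0.5 if load < 2 else 0.5
        next_load = max(load - 1, 0)
    return reward, next_load

def learn_policy(steps=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning of a meta-level control policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}
    load = 0
    for _ in range(steps):
        # Epsilon-greedy action selection over the abstract state.
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(load, a)])
        reward, nxt = simulate(load, action)
        target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
        q[(load, action)] += alpha * (target - q[(load, action)])
        load = nxt
    return q
```

After training, the greedy policy accepts new tasks while load is low and sheds them when it is saturated, which is the qualitative behavior the thesis's learned meta-level policies aim for.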
686

Autonomous robot skill acquisition

Konidaris, George Dimitri 01 January 2011 (has links)
Among the most impressive aspects of human intelligence is skill acquisition—the ability to identify important behavioral components, retain them as skills, refine them through practice, and apply them in new task contexts. Skill acquisition underlies both our ability to choose to spend time and effort to specialize at particular tasks, and our ability to collect and exploit previous experience to become able to solve harder and harder problems over time with less and less cognitive effort. Hierarchical reinforcement learning provides a theoretical basis for skill acquisition, including principled methods for learning new skills and deploying them during problem solving. However, existing work focuses largely on small, discrete problems. This dissertation addresses the question of how we scale such methods up to high-dimensional, continuous domains, in order to design robots that are able to acquire skills autonomously. This presents three major challenges; we introduce novel methods addressing each of these challenges. First, how does an agent operating in a continuous environment discover skills? Although the literature contains several methods for skill discovery in discrete environments, it offers none for the general continuous case. We introduce skill chaining, a general skill discovery method for continuous domains. Skill chaining incrementally builds a skill tree that allows an agent to reach a solution state from any of its start states by executing a sequence (or chain) of acquired skills. We empirically demonstrate that skill chaining can improve performance over monolithic policy learning in the Pinball domain, a challenging dynamic and continuous reinforcement learning problem. Second, how do we scale up to high-dimensional state spaces? While learning in relatively small domains is generally feasible, it becomes exponentially harder as the number of state variables grows.
We introduce abstraction selection, an efficient algorithm for selecting skill-specific, compact representations from a library of available representations when creating a new skill. Abstraction selection can be combined with skill chaining to solve hard tasks by breaking them up into chains of skills, each defined using an appropriate abstraction. We show that abstraction selection selects an appropriate representation for a new skill using very little sample data, and that this leads to significant performance improvements in the Continuous Playroom, a relatively high-dimensional reinforcement learning problem. Finally, how do we obtain good initial policies? The amount of experience required to learn a reasonable policy from scratch in most interesting domains is unrealistic for robots operating in the real world. We introduce CST, an algorithm for rapidly constructing skill trees (with appropriate abstractions) from sample trajectories obtained via human demonstration, a feedback controller, or a planner. We use CST to construct skill trees from human demonstration in the Pinball domain, and to extract a sequence of low-dimensional skills from demonstration trajectories on a mobile robot. The resulting skills can be reliably reproduced using a small number of example trajectories. Finally, these techniques are applied to build a mobile robot control system for the uBot-5, resulting in a mobile robot that is able to acquire skills autonomously. We demonstrate that this system is able to use skills acquired in one problem to more quickly solve a new problem.
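The core skill-chaining loop can be sketched in a few lines: start with a skill that terminates at the goal, then repeatedly create a new skill whose target is the initiation set of the previous one, until a start state is covered. The one-dimensional integer state space and fixed skill "reach" below are toy assumptions; the dissertation's method learns options and their initiation sets in continuous domains.

```python
class Skill:
    """A skill terminates in `target` and can be initiated from any state
    within `reach` steps of a target state (its initiation set)."""
    def __init__(self, target, reach):
        self.target = set(target)
        self.initiation = {s for t in self.target
                           for s in range(t - reach, t + 1)}

def chain_skills(goal, start, reach):
    """Grow a chain of skills backward from the goal: each new skill's
    target is the initiation set of the previously created skill."""
    chain = [Skill({goal}, reach)]
    covered = set(chain[0].initiation)
    while start not in covered:
        skill = Skill(chain[-1].initiation, reach)
        if not (skill.initiation - covered):
            break  # no progress possible
        chain.append(skill)
        covered |= skill.initiation
    return chain
```

Executing the chain in reverse order carries the agent from the start state into each successive initiation set and finally to the goal; branching the same construction from multiple start states yields the skill tree the abstract describes.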
687

The Parameter Signature Isolation Method and Applications

McCusker, James R 01 January 2011 (has links)
The aim of this research was to develop a method of system identification that draws inspiration from the approach taken by human experts for simulation model tuning and validation. Human experts are able to use their natural pattern-recognition ability to identify the various shape attributes, or signatures, of a time series from simulation model outputs. They can also intelligently and effectively perform tasks ranging from system identification to model validation. However, their feature extraction approach cannot be readily automated due to the difficulty of measuring shape attributes. To bridge the gap between the approach taken by human experts and that of traditional iterative approaches, a method to quantify shape attributes was devised. The method presented in this dissertation, the Parameter Signature Isolation Method (PARSIM), uses continuous wavelet transformation to characterize specific aspects of the time series shape through surfaces in the time-scale domain. A salient characteristic of these surfaces is their enhanced delineation of the model outputs and/or their sensitivities. One benefit of this enhanced delineation is the capacity to isolate regions of the time-scale plane, coined as parameter signatures, wherein individual output sensitivities dominate all the others. The parameter signatures enable the error of each model parameter to be estimated separately, with direct applicability to parameter estimation. The proposed parameter estimation method has unique features, one of them being the capacity for noise suppression: relying entirely on the time-scale domain for parameter estimation offers direct noise compensation in this domain. Yet another utility of parameter signatures is in measurement selection, whereby the existence of parameter signatures reflects the identifiability of model parameters through various outputs.
The effectiveness of PARSIM is demonstrated through an array of theoretical models, such as the Lorenz System and the Van der Pol oscillator, as well as through the real-world simulation models of an injection molding process and a jet engine.
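The PARSIM intuition can be sketched numerically: transform each parameter's output sensitivity into the time-scale plane with a continuous wavelet transform, then mark the cells where one sensitivity dominates all others by a margin. The toy model (y = a·sin(t) + b·t), the Ricker wavelet, and the dominance threshold below are assumptions for illustration, not PARSIM's actual formulation.

```python
import numpy as np

def ricker(points, width):
    """Ricker (Mexican-hat) wavelet sampled at `points` points."""
    t = np.arange(points) - (points - 1) / 2.0
    a = 2.0 / (np.sqrt(3.0 * width) * np.pi ** 0.25)
    return a * (1 - (t / width) ** 2) * np.exp(-(t ** 2) / (2 * width ** 2))

def cwt(signal, widths):
    """Continuous wavelet transform by convolution at each scale."""
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        wavelet = ricker(min(10 * int(w), len(signal)), w)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

def parameter_signatures(sensitivities, widths, margin=2.0):
    """Mark time-scale cells where one sensitivity dominates the others.

    Returns the dominant parameter index per cell, or -1 where no
    parameter exceeds the runner-up by the given margin factor."""
    surfaces = np.abs(np.stack([cwt(s, widths) for s in sensitivities]))
    top = surfaces.max(axis=0)
    runner_up = np.sort(surfaces, axis=0)[-2]
    dominant = surfaces.argmax(axis=0)
    return np.where(top > margin * runner_up, dominant, -1)

t = np.linspace(0, 4 * np.pi, 256)
# Sensitivities of y = a*sin(t) + b*t with respect to a and b.
sig = parameter_signatures([np.sin(t), t], widths=[1, 2, 4, 8])
```

Cells labeled with a parameter index form that parameter's (toy) signature; within such a region the corresponding parameter's error could be estimated with the other sensitivities effectively silent, which is the leverage the dissertation develops.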
688

Decision-theoretic meta-reasoning in partially observable and decentralized settings

Carlin, Alan 01 January 2012 (has links)
This thesis examines decentralized meta-reasoning. For a single agent or multiple agents, it may not be enough for agents to compute correct decisions if they do not do so in a timely or resource-efficient fashion. The utility of agent decisions typically increases with decision quality, but decreases with computation time. Reasoning about one's own computation process is referred to as meta-reasoning. Aspects of meta-reasoning considered in this thesis include reasoning about how to allocate computational resources, including when to stop one type of computation and begin another, and when to stop all computation and report an answer. Given a computational model, this translates into computing how to schedule the basic computations that solve a problem. This thesis constructs meta-reasoning strategies for the purposes of monitoring and control in multi-agent settings, specifically settings that can be modeled by the Decentralized Partially Observable Markov Decision Process (Dec-POMDP). It uses decision theory to optimize computation for efficiency in time and space in communicative and non-communicative decentralized settings. Whereas base-level reasoning describes the optimization of actual agent behaviors, the meta-reasoning strategies produced by this thesis dynamically optimize the computational resources which lead to the selection of base-level behaviors.
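The monitoring side of meta-reasoning, deciding when to stop computing and report an answer, can be sketched with a simple value-of-computation rule: keep deliberating while the marginal gain in expected decision quality exceeds the cost of a unit of time. The diminishing-returns performance profile and cost rate below are toy assumptions, not the thesis's actual Dec-POMDP formulation.

```python
def expected_quality(steps):
    """Toy anytime performance profile with diminishing returns."""
    return 1.0 - 0.5 ** steps

def stop_time(cost_per_step, horizon=50):
    """Stop when the marginal quality gain no longer covers the time cost."""
    for n in range(horizon):
        gain = expected_quality(n + 1) - expected_quality(n)
        if gain < cost_per_step:
            return n
    return horizon
```

Under this profile, cheaper time (a lower cost per step) justifies longer deliberation; the decentralized case the thesis addresses is harder because each agent's stopping decision also depends on what its teammates are computing.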
689

Weakly supervised learning for unconstrained face processing

Huang, Gary B 01 January 2012 (has links)
Machine face recognition has traditionally been studied under the assumption of a carefully controlled image acquisition process. By controlling image acquisition, variation due to factors such as pose, lighting, and background can be either largely eliminated or specifically limited to a study over a discrete number of possibilities. Applications of face recognition have had mixed success when deployed in conditions where the assumption of controlled image acquisition no longer holds. This dissertation focuses on this unconstrained face recognition problem, where face images exhibit the same amount of variability that one would encounter in everyday life. We formalize unconstrained face recognition as a binary pair matching problem (verification), and present a data set for benchmarking performance on the unconstrained face verification task. We observe that it is comparatively much easier to obtain many examples of unlabeled face images than face images that have been labeled with identity or other higher level information, such as the position of the eyes and other facial features. We thus focus on improving unconstrained face verification by leveraging the information present in this source of weakly supervised data. We first show how unlabeled face images can be used to perform unsupervised face alignment, thereby reducing variability in pose and improving verification accuracy. Next, we demonstrate how deep learning can be used to perform unsupervised feature discovery, providing additional image representations that can be combined with representations from standard hand-crafted image descriptors, to further improve recognition performance. Finally, we combine unsupervised feature learning with joint face alignment, leading to an unsupervised alignment system that achieves gains in recognition performance matching that achieved by supervised alignment.
690

A probabilistic model of hierarchical music analysis

Kirlin, Phillip B 01 January 2014 (has links)
Schenkerian music theory supposes that Western tonal compositions can be viewed as hierarchies of musical objects. The process of Schenkerian analysis reveals this hierarchy by identifying connections between notes or chords of a composition that illustrate both the small- and large-scale construction of the music. We present a new probabilistic model of this variety of music analysis, details of how the parameters of the model can be learned from a corpus, an algorithm for deriving the most probable analysis for a given piece of music, and both quantitative and human-based evaluations of the algorithm's performance. In addition, we describe the creation of the corpus, the first publicly available data set to contain both musical excerpts and corresponding computer-readable Schenkerian analyses. Combining this corpus with the probabilistic model gives us the first completely data-driven computational approach to hierarchical music analysis.
