11

Intelligent Library Systems: Artificial Intelligence Technology and Library Automation Systems

Bailey, Charles W. January 1991
Artificial Intelligence (AI) encompasses the following general areas of research: (1) automatic programming, (2) computer vision, (3) expert systems, (4) intelligent computer-assisted instruction, (5) natural language processing, (6) planning and decision support, (7) robotics, and (8) speech recognition. Intelligent library systems utilize artificial intelligence technologies to provide knowledge-based services to library patrons and staff. This paper examines certain key aspects of AI that determine its potential utility as a tool for building library systems. It discusses the barriers that inhibit the development of intelligent library systems, and it suggests possible strategies for making progress in this important area. While all of the areas of AI research indicated previously may have some eventual application in the development of library systems, this paper primarily focuses on a few that the author judges to be of most immediate significance--expert systems, intelligent computer-assisted instruction, and natural language applications. This paper does not discuss the use of AI knowledge-bases in libraries as subject-oriented library materials.
12

Integrating Multiple Modalities into Deep Learning Networks

McNeil, Patrick N. 30 June 2017
Deep learning networks in the literature have traditionally used only a single input modality (or data stream). Integrating multiple modalities into deep learning networks, with the goal of correlating the extracted features, has been a major issue: traditional methods treated each modality separately and then relied on custom code to combine the extracted features.

Current solutions for small numbers of modalities (three or fewer) show that multiple architectures exist for modality integration. As the number of modalities increases, the "curse of dimensionality" degrades the performance of the system. The research showed that current methods for larger-scale integration require separate, custom-created modules with an additional integration layer outside the deep learning network; these solutions neither scale well nor generalize well. This research report studied architectures that use multiple modalities and the creation of a scalable and efficient architecture.
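A minimal late-fusion sketch of the kind of modality integration the abstract describes, assuming PyTorch; the encoders, layer sizes, and fusion-by-concatenation are illustrative assumptions, not the architecture the thesis proposes:

```python
# Hypothetical late-fusion network: one encoder per modality, features
# concatenated before a shared head. All sizes are illustrative only.
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self, modality_dims, feat_dim=64, n_classes=10):
        super().__init__()
        # One small encoder per input modality (data stream).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, feat_dim), nn.ReLU())
            for d in modality_dims
        )
        # The fused dimension grows linearly with the number of
        # modalities -- the scaling pressure the abstract associates
        # with the "curse of dimensionality".
        self.head = nn.Linear(feat_dim * len(modality_dims), n_classes)

    def forward(self, inputs):  # inputs: one tensor per modality
        feats = [enc(x) for enc, x in zip(self.encoders, inputs)]
        return self.head(torch.cat(feats, dim=-1))

net = LateFusionNet(modality_dims=[32, 128, 16])
out = net([torch.randn(8, 32), torch.randn(8, 128), torch.randn(8, 16)])
print(out.shape)  # torch.Size([8, 10])
```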
13

Learning from Temporally-Structured Human Activities Data

Lipton, Zachary C. 06 January 2018
Despite the extraordinary success of deep learning on diverse problems, these triumphs are too often confined to large, clean datasets and well-defined objectives. Face recognition systems train on millions of perfectly annotated images. Commercial speech recognition systems train on thousands of hours of painstakingly annotated data. But for applications addressing human activity, data can be noisy, expensive to collect, and plagued by missing values. In electronic health records, for example, each attribute might be observed on a different time scale. Complicating matters further, deciding precisely what objective warrants optimization requires critical consideration of both algorithms and the application domain. Moreover, deploying human-interacting systems requires careful consideration of societal demands such as safety, interpretability, and fairness.

The aim of this thesis is to address the obstacles to mining temporal patterns in human activity data. The primary contributions are: (1) the first application of RNNs to multivariate clinical time series data, with several techniques for bridging long-term dependencies and modeling missing data; (2) a neural network algorithm for forecasting surgery duration while simultaneously modeling heteroscedasticity; (3) an approach to quantitative investing that uses RNNs to forecast company fundamentals; (4) an exploration strategy for deep reinforcement learners that significantly speeds up dialogue policy learning; (5) an algorithm to minimize the number of catastrophic mistakes made by a reinforcement learner; (6) critical works addressing model interpretability and fairness in algorithmic decision-making.
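Contribution (2), forecasting while simultaneously modeling heteroscedasticity, can be illustrated with a generic mean/log-variance head trained by Gaussian negative log-likelihood. This is a sketch under assumed PyTorch and invented sizes, not the thesis's actual surgery-duration model:

```python
# Illustrative heteroscedastic regression head: predict a mean and a
# per-example log-variance, train with Gaussian negative log-likelihood.
import torch
import torch.nn as nn

class HeteroscedasticHead(nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mean = nn.Linear(hidden, 1)
        self.log_var = nn.Linear(hidden, 1)  # input-dependent noise level

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

def gaussian_nll(mean, log_var, target):
    # -log N(target; mean, exp(log_var)), up to an additive constant.
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

model = HeteroscedasticHead()
x, y = torch.randn(8, 16), torch.randn(8, 1)
mean, log_var = model(x)
gaussian_nll(mean, log_var, y).backward()
```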
14

Learning recursive definitions in Prolog.

Rios, Riverson. January 1998
Inductive Logic Programming (ILP) is one of the new and fast-growing sub-fields of artificial intelligence. Given a specification language, the goal is to induce a logic program from examples of how the program should work (and also of how it should not work). One main difficulty of ILP lies in learning recursively defined predicates. Today's systems rely strongly on a set of supporting predicates, known as the background knowledge, that helps define the recursive clause. This dependence on background knowledge has a drawback: it assumes that the user knows in advance what sort of predicates the target definition requires. Predicate invention, a research topic that has received much attention lately, can remedy the situation by extending the specification language with new concepts, which appear neither in the examples nor in the background knowledge, and finding a definition for them. A serious concern is that no examples of the invented predicate are explicitly given, only examples of the target predicate, so learning has to be done in the absence or scarcity of examples. This research is concerned with the problem of learning recursive definitions, based on inverting clausal implication, from a small data set. The aim is both to derive an autonomous learning method that can invent the recursive predicates it needs, and to implement it in an efficient manner. Experiments show that the system is capable of finding a correct definition of many relations by inventing the necessary predicates, but does not perform very well on random examples. A comparison between several similar systems that learn recursive definitions of a single predicate is presented. We also show the need for system-generated negative examples and discuss several pitfalls of predicate invention and of the absence or scarcity of examples.
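To make the setting concrete, here is a toy rendering in Python of what "inducing a recursive definition from examples plus background knowledge" means, using the classic ancestor relation (this example is not from the thesis). An ILP system would receive only the facts and examples and would have to invent the recursive clauses shown in the docstring:

```python
# Toy ILP setting: background facts, positive/negative examples, and
# the recursive target definition the learner is supposed to induce.
parents = {("ann", "bob"), ("bob", "cal")}   # background knowledge
positive = {("ann", "bob"), ("ann", "cal")}  # how the program should work
negative = {("cal", "ann")}                  # how it should NOT work

def ancestor(x, y):
    """The recursive definition an ILP system should induce:
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y)."""
    if (x, y) in parents:
        return True
    return any(ancestor(z, y) for (p, z) in parents if p == x)

# The induced definition must cover all positives and no negatives.
assert all(ancestor(*e) for e in positive)
assert not any(ancestor(*e) for e in negative)
```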
15

DEPARS: Design Pattern Recognition System.

Sun, Te-Wei. January 1997
In recent years, industry has widely accepted the concept of design patterns as a means of promoting quality design reuse. However, several problems prevent design patterns from being used efficiently and effectively. The design pattern recognition system DEPARS, discussed in this dissertation, alleviates these problems and promotes design pattern reuse. DEPARS recognizes patterns in object models by matching them against templates in a knowledge base. The templates are arranged in a hierarchy such that templates close to the root are the bases of the ones below; this hierarchy reduces DEPARS's matching effort because it narrows the search area. DEPARS provides designers with information about the recognized patterns, helping them apply appropriate patterns in their designs. DEPARS also has a pattern-mining capability: it recognizes, in existing designs, new patterns that may be reusable in the future. In addition, DEPARS facilitates verifying the recurrence of proto-patterns by storing them in the knowledge base; once stored, proto-patterns can be recognized in future designs, demonstrating their recurrence. The dissertation presents the design and operation of DEPARS, and reports and discusses its evaluation, whose promising results indicate that DEPARS is adequate for practical use.
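A schematic sketch of the hierarchy-pruned matching described above, with invented template and model representations (the abstract does not specify DEPARS's internals at this level):

```python
# Templates near the root are bases of those below, so a failed match
# at a node prunes its whole subtree. Features are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    required: set                      # features a design must exhibit
    children: list = field(default_factory=list)

def recognize(model_features, node, found=None):
    if found is None:
        found = []
    if not node.required <= model_features:
        return found                   # prune: children only add constraints
    found.append(node.name)
    for child in node.children:
        recognize(model_features, child, found)
    return found

root = Template("wrapper", {"delegation"}, [
    Template("Adapter", {"delegation", "interface-conversion"}),
    Template("Decorator", {"delegation", "same-interface", "chaining"}),
])
print(recognize({"delegation", "interface-conversion"}, root))
# ['wrapper', 'Adapter']
```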
16

Developing mobile distributed intelligent network services using RM-ODP.

Rampal, Gaurav S. January 1998
The Intelligent Network (IN) is a conceptual model for a service development technology used to create telecommunication services. In its current form, IN is limited to service creation in isolated networks and cannot support cooperative service development between two or more networks. Rapid development in networking paradigms and standards has led to an urgent need for solutions to the problem of interworking heterogeneous networks. Differing abstraction levels make meaningful exchange of information difficult, and IN has not been able to meet this requirement. The Reference Model for Open Distributed Processing (RM-ODP) is a distributed object-based architecture that provides a high-level framework for distributed systems. The emphasis is on developing a set of reusable functional abstractions that can be recombined in various configurations to build the required applications. This work uses the RM-ODP framework to supplement deficiencies evident in IN. Two specific aspects are examined and developed. The first is service portability through service profile modeling: a model for service development in a mobile environment is developed, along with the related concepts of service profile modeling and transfer. The second is IN domain interworking in the ODP framework: an ODP framework is proposed for modeling the service profile and its migration as the user moves to different domains. Our approach allows dynamically configured interworking of domains.
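As a loose illustration of the service-profile portability idea, a profile can be pictured as a portable object that migrates between domains as the user roams. The field names and the Domain interface below are invented; actual IN and RM-ODP structures are far richer:

```python
# Hypothetical service profile migrating between network domains.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    user_id: str
    services: dict            # service name -> configuration

class Domain:
    def __init__(self, name):
        self.name = name
        self.profiles = {}

    def register(self, profile):
        # Interworking point: a real system would translate the profile
        # to this domain's abstraction level before activating it.
        self.profiles[profile.user_id] = profile

def migrate(profile, src, dst):
    dst.register(profile)              # activate in the visited domain
    del src.profiles[profile.user_id]  # deactivate in the source domain

home, visited = Domain("home"), Domain("visited")
p = ServiceProfile("alice", {"call-forwarding": {"to": "+1555"}})
home.register(p)
migrate(p, home, visited)
print(visited.profiles["alice"].services)
```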
17

Learning of disjunctive concepts with explanation-based learning.

Salembier-Pelletier, Maude. January 1990
Abstract Not Available.
18

Intelligent search techniques for large software systems.

Liu, Huixiang. January 2002
There are many tools available today to help software engineers search in source code systems. It is often the case, however, that there is a gap between what people really want to find and the actual query strings they specify. This is because a concept in a software system may be represented by many different terms, while the same term may have different meanings in different places. Therefore, software engineers often have to guess as they specify a search, and often have to repeatedly search before finding what they want. To alleviate the search problem, this thesis describes a study of what we call intelligent search techniques as implemented in a software exploration environment, whose purpose is to facilitate software maintenance. We propose to utilize some information retrieval techniques to automatically apply transformations to the query strings. The thesis first introduces the intelligent search techniques used in our study, including abbreviation concatenation and abbreviation expansion. Then it describes in detail the rating algorithms used to evaluate the query results' similarity to the original query strings. Next, we describe a series of experiments we conducted to assess the effectiveness of both the intelligent search methods and our rating algorithms. Finally, we describe how we use the analysis of the experimental results to recommend an effective combination of searching techniques for software maintenance, as well as to guide our future research.
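A toy sketch of the two query transformations named above; the subsequence test and the length-ratio rating are stand-ins for illustration, not the thesis's actual rating algorithms:

```python
# Abbreviation expansion: a short query may abbreviate an identifier.
# Abbreviation concatenation: a multi-word query is collapsed into
# candidate identifier forms. Ratings here are a toy length ratio.
def could_abbreviate(query, identifier):
    """True if query's characters appear in order within identifier."""
    it = iter(identifier.lower())
    return all(ch in it for ch in query.lower())

def search(query, identifiers):
    hits = set()
    # Expansion: treat the whole query as a possible abbreviation.
    for ident in identifiers:
        if could_abbreviate(query, ident):
            # Rate by how much of the identifier the query accounts for.
            hits.add((len(query) / len(ident), ident))
    # Concatenation: "mem alloc" -> "memalloc" / "mem_alloc".
    for cand in (query.replace(" ", ""), query.replace(" ", "_")):
        hits |= {(1.0, i) for i in identifiers if cand.lower() == i.lower()}
    return [i for _, i in sorted(hits, reverse=True)]

corpus = ["memAlloc", "memory_allocator", "memcpy", "freeList"]
print(search("malloc", corpus))     # ['memAlloc', 'memory_allocator']
print(search("mem alloc", corpus))  # ['memAlloc']
```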
19

Interactive hierarchical generate and test search.

Xu, Xin. January 1991
Most of the search methods used in AI are inflexible. Interactive search is a new kind of search in which the search system can communicate and cooperate with external agents. There are two kinds of agents: human agents and non-human agents. Through interaction with human agents (man-machine interaction), the search system can make use of the human talent for judging the quality of a solution. Through interaction with non-human agents (machine-machine interaction), the search system can automatically exploit knowledge from its environment. An interactive search system has the ability to take advice from external agents; ordinary non-interactive search models are special instances of interactive search in which the advice sequences are empty. We are investigating a particular kind of interactive search, IHGT (Interactive Hierarchical Generate and Test) search, which is obtained by introducing interactive capability into HGT (Hierarchical Generate and Test) search. To make HGT search interactive, we created an editor called GE (Generator Editor), implemented in Prolog. GE is a bottom-level language shell outside the HGT search model that translates advice into dynamic changes to all three search factors. (Abstract shortened by UMI.)
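A schematic generate-and-test loop with an advice queue, using an invented toy domain, to illustrate how an empty advice sequence reduces to ordinary non-interactive search:

```python
# Advice from external agents arrives as predicates that dynamically
# prune the generator. This is a simplified reading of the abstract,
# not the IHGT system itself.
from collections import deque

def generate_and_test(candidates, test, advice):
    filters = []
    for c in candidates:
        # Consume any pending advice before judging the next candidate.
        # With an empty advice queue, this loop never fires and the
        # procedure is ordinary non-interactive generate-and-test.
        while advice:
            filters.append(advice.popleft())
        if all(f(c) for f in filters) and test(c):
            return c
    return None

advice = deque([lambda n: n % 2 == 0])   # agent advises: only even numbers
print(generate_and_test(range(100), lambda n: n > 40, advice))  # 42
```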
20

A symbol's role in learning low-level control functions.

Drummond, Chris. January 1999
This thesis demonstrates how the power of symbolic processing can be exploited in the learning of low-level control functions. It proposes a novel hybrid architecture with a tight coupling between a variant of symbolic planning and reinforcement learning. This architecture combines the strengths of the function approximation of subsymbolic learning with the more abstract, compositional nature of symbolic learning. The former is able to represent mappings of world states to actions in an accurate way. The latter allows a more rapid solution to problems by exploiting structure within the domain. A control function is learnt over time through interaction with the world. Symbols are attached to features in the functions; these symbolic attachments act as anchor points used to transform the function of a previously learnt task into that of a new task. More complex tasks are solved by composing simpler functions, using the symbolic attachments to determine the composition. The result is used as the initial control function of the new task and is then modified through further learning. This is shown to produce a significant speed-up over basic reinforcement learning.
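A toy rendering of the anchor-based transfer idea; the states, symbols, and values are invented for illustration and do not reflect the thesis's actual representation:

```python
# Symbols anchor features of a learned value function; a symbol
# correspondence between tasks maps the old function onto the new
# state space as an initial guess, which further learning refines.
old_values = {0: 0.1, 1: 0.4, 2: 0.9}             # learned on task A
old_anchors = {"start": 0, "door": 1, "goal": 2}  # symbols -> task-A states
new_anchors = {"start": 5, "door": 8, "goal": 9}  # same symbols in task B

def transfer(old_values, old_anchors, new_anchors, new_states):
    # Carry anchored values across; default the remaining states to a
    # neutral value that subsequent reinforcement learning overwrites.
    init = {s: 0.0 for s in new_states}
    for symbol, old_state in old_anchors.items():
        init[new_anchors[symbol]] = old_values[old_state]
    return init

print(transfer(old_values, old_anchors, new_anchors, range(5, 10)))
# {5: 0.1, 6: 0.0, 7: 0.0, 8: 0.4, 9: 0.9}
```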
