21

Towards an intelligent assistant in a task environment.

Roth, Gerhard, Carleton University. Dissertation. Computer Science. January 1984 (has links)
Thesis (M.C.S.)--Carleton University, 1985. / Also available in electronic format on the Internet.
22

The design of a parallel programming language for artificial intelligence applications

Honda, Masahiro. January 1978 (has links)
Thesis--University of Wisconsin--Madison. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 159-164).
23

Automatic Conversation Review for Intelligent Virtual Assistants

Beaver, Ian 26 September 2018 (has links)
When reviewing the performance of Intelligent Virtual Assistants (IVAs), it is desirable to prioritize conversations involving misunderstood human inputs. These conversations uncover errors in natural language understanding and help prioritize and expedite improvements to the IVA. As human reviewer time is valuable and manual analysis is time consuming, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds improvement. A system for measuring the post hoc risk of missed intent associated with a single human input is presented. Numerous indicators of risk are explored and implemented. These indicators are combined in various ways and evaluated on real-world data. In addition, the system's ability to adapt to different domains of language is explored. Finally, the system's performance in identifying errors in IVA understanding is compared to that of human reviewers, and multiple aspects of deploying the system for commercial use are discussed.
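The abstract does not name the specific risk indicators or the combination scheme; the sketch below only illustrates the general idea, combining a few hypothetical indicators (NLU confidence, repeated input, negative feedback) with assumed weights into a single post hoc risk score used to order a human review queue.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        """One human input with features a reviewer-prioritization system might log."""
        nlu_confidence: float      # classifier confidence in [0, 1]
        repeated_input: bool       # user rephrased or repeated the previous request
        negative_feedback: bool    # e.g., "that's not what I asked"

    def missed_intent_risk(turn: Turn, weights=(0.5, 0.3, 0.2)) -> float:
        """Combine hypothetical risk indicators into a post hoc risk score in [0, 1].

        Higher scores mean the IVA more likely misunderstood the input, so the
        conversation should be prioritized for human review.
        """
        indicators = [
            1.0 - turn.nlu_confidence,              # low confidence -> higher risk
            1.0 if turn.repeated_input else 0.0,
            1.0 if turn.negative_feedback else 0.0,
        ]
        return sum(w * x for w, x in zip(weights, indicators))

    # Order turns by descending risk to build a review queue.
    turns = [Turn(0.92, False, False), Turn(0.41, True, False), Turn(0.75, False, True)]
    review_queue = sorted(turns, key=missed_intent_risk, reverse=True)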
24

The Effect of the Implementation of a Swarm Intelligence Algorithm on the Efficiency of the Cosmos Open Source Managed Operating System

Usman, Modibo 24 May 2018 (has links)
As the complexity of mankind's day-to-day challenges increases, so does the need to optimize known solutions to accommodate this increase in complexity. Today's computer systems use the Input, Processing, and Output (IPO) model as a way to deliver efficiency and optimization in human activities. Since the relative quality of an output utility derived from an IPO-based computer system is closely coupled to the quality of its input media, the measure of the Optimal Quotient (OQ) is the 1:1 ratio of input to output. This relationship ensures that all IPO-based computers are not just linearly predictable but also characterized by the Garbage In, Garbage Out (GIGO) design concept. While current IPO-based computer systems have been relatively successful at delivering some measure of optimization, there is a need to examine alternative methods of achieving optimization (Li & Malik, 2016). The purpose of this quantitative research study, through an experimental research design, is to determine the effects of applying a swarm intelligence algorithm to the efficiency of the Cosmos Open Source Managed Operating System.

By incorporating swarm intelligence into an improved IPO design, this research addresses the need for optimization in computer systems through the creation of an improved operating system scheduler. The design of a Swarm Intelligence Operating System (SIOS) is an attempt to solve some inherent vulnerabilities and problems of complexity and optimization otherwise unresolved in the design of conventional operating systems. This research will use the Cosmos open source operating system as a test harness to ensure improved internal validity, while the subsequent comparison between the conventional and improved IPO designs will demonstrate external validity for real-world applications.
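The thesis targets a scheduler inside the C#-based Cosmos kernel, which is not reproduced here; as a rough illustration of how a swarm-inspired heuristic could order a ready queue, the following Python sketch uses an ant-colony-flavored desirability score. The Task fields, fitness terms, and parameters are all hypothetical, not the author's design.

    import random

    class Task:
        def __init__(self, name, burst_estimate):
            self.name = name
            self.burst_estimate = burst_estimate  # estimated CPU burst length
            self.waiting = 0                      # steps spent in the ready queue
            self.pheromone = 1.0                  # swarm-style desirability trail

    def swarm_pick(ready_queue, evaporation=0.9, deposit=0.5):
        """Choose the next task with an ant-colony-flavored heuristic (hypothetical).

        Desirability combines the pheromone trail, waiting time, and the inverse of
        the burst estimate; trails evaporate each step and the chosen task deposits
        pheromone, so favorable tasks stay attractive without starving the rest.
        """
        weights = [t.pheromone * (1.0 + t.waiting) / t.burst_estimate for t in ready_queue]
        chosen = random.choices(ready_queue, weights=weights, k=1)[0]
        for t in ready_queue:
            t.pheromone *= evaporation
            t.waiting += 1
        chosen.pheromone += deposit
        chosen.waiting = 0
        return chosen

    queue = [Task("io_worker", 2.0), Task("compute", 8.0), Task("ui", 1.0)]
    order = [swarm_pick(queue).name for _ in range(5)]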
25

A Framework for Enhancing Speaker Age and Gender Classification by Using a New Feature Set and Deep Neural Network Architectures

Abumallouh, Arafat 14 March 2018 (has links)
Speaker age and gender classification is one of the most challenging problems in speech processing. With recent technological developments, identifying a speaker's age and gender has become a necessity for speaker verification and identification systems, with applications such as identifying suspects in criminal cases, improving human-machine interaction, and adapting music for people waiting in a queue. Although many studies have focused on feature extraction and classifier design, classification accuracies are still not satisfactory. The key issues in identifying a speaker's age and gender are generating robust features and designing a deep classifier. Age and gender information is concealed in the speaker's speech, which is affected by many factors such as background noise, speech content, and phonetic divergences.

In this work, different methods are proposed to enhance speaker age and gender classification using deep neural networks (DNNs) as both feature extractor and classifier. First, a model for generating new features from a DNN is proposed. The proposed method uses the Hidden Markov Model Toolkit (HTK) to find tied-state triphones for all utterances, which are used as labels for the output layer of the DNN. The DNN with a bottleneck layer is first trained in an unsupervised manner to calculate the initial weights between layers, then trained and tuned in a supervised manner to generate transformed mel-frequency cepstral coefficients (T-MFCCs). Second, a shared class labels method is introduced among misclassified classes to regularize the weights in the DNN. Third, DNN-based speaker models using the SDC feature set are proposed. The speaker-aware model can capture the characteristics of speaker age and gender more effectively than a model that represents a group of speakers. In addition, the AGender-Tune system is proposed to classify speaker age and gender by jointly fine-tuning two DNN models: the first model is pre-trained to classify speaker age, and the second is pre-trained to classify speaker gender. Moreover, the new T-MFCC feature set is used as the input to a fusion of two systems: the first is the DNN-based class model and the second is the DNN-based speaker model. Utilizing the T-MFCCs as input and fusing the final score with the score of a DNN-based class model enhanced the classification accuracies. Finally, the DNN-based speaker models are embedded into the AGender-Tune system to exploit the advantages of each method for better speaker age and gender classification.

Experimental results on a public, challenging database show the effectiveness of the proposed methods for enhancing speaker age and gender classification and achieve the state of the art on this database.
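As an illustration of the bottleneck idea described above (not the thesis's actual architecture), a minimal PyTorch sketch might look like the following; the layer sizes, the 39-dimensional MFCC input, and the number of tied-state triphone targets are assumptions.

    import torch
    import torch.nn as nn

    # Hypothetical sizes: 39-dim MFCC frames, a 40-unit bottleneck, and
    # N tied-state triphone targets produced by an HTK forced alignment.
    N_TRIPHONE_STATES = 2000

    class BottleneckDNN(nn.Module):
        """DNN with a narrow bottleneck layer; after training against tied-state
        triphone labels, the bottleneck activations serve as transformed features
        (in the spirit of the abstract's T-MFCCs) for an age/gender classifier."""

        def __init__(self, in_dim=39, hidden=512, bottleneck=40,
                     n_targets=N_TRIPHONE_STATES):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, bottleneck), nn.ReLU(),   # bottleneck layer
            )
            self.head = nn.Linear(bottleneck, n_targets)    # triphone classifier

        def forward(self, x):
            z = self.encoder(x)          # transformed-feature output
            return self.head(z), z

    model = BottleneckDNN()
    frames = torch.randn(8, 39)          # a batch of MFCC frames
    logits, t_mfcc = model(frames)       # t_mfcc would feed the age/gender classifier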
26

Learning from Temporally-Structured Human Activities Data

Lipton, Zachary C. 06 January 2018 (has links)
Despite the extraordinary success of deep learning on diverse problems, these triumphs are too often confined to large, clean datasets and well-defined objectives. Face recognition systems train on millions of perfectly annotated images. Commercial speech recognition systems train on thousands of hours of painstakingly-annotated data. But for applications addressing human activity, data can be noisy, expensive to collect, and plagued by missing values. In electronic health records, for example, each attribute might be observed on a different time scale. Complicating matters further, deciding precisely what objective warrants optimization requires critical consideration of both algorithms and the application domain. Moreover, deploying human-interacting systems requires careful consideration of societal demands such as safety, interpretability, and fairness.

The aim of this thesis is to address the obstacles to mining temporal patterns in human activity data. The primary contributions are: (1) the first application of RNNs to multivariate clinical time series data, with several techniques for bridging long-term dependencies and modeling missing data; (2) a neural network algorithm for forecasting surgery duration while simultaneously modeling heteroscedasticity; (3) an approach to quantitative investing that uses RNNs to forecast company fundamentals; (4) an exploration strategy for deep reinforcement learners that significantly speeds up dialogue policy learning; (5) an algorithm to minimize the number of catastrophic mistakes made by a reinforcement learner; (6) critical works addressing model interpretability and fairness in algorithmic decision-making.
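Contribution (2) pairs a point forecast with input-dependent uncertainty; a generic sketch of such a heteroscedastic regressor (not the thesis's model) is shown below, with the feature dimension, layer sizes, and Gaussian likelihood taken as assumptions.

    import torch
    import torch.nn as nn

    class DurationForecaster(nn.Module):
        """Sketch of a heteroscedastic regressor: it predicts both a mean surgery
        duration and a per-case log-variance, so uncertainty can vary with the input.
        Feature dimension and layer sizes are illustrative only."""

        def __init__(self, in_dim=16, hidden=64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.mean = nn.Linear(hidden, 1)
            self.log_var = nn.Linear(hidden, 1)

        def forward(self, x):
            h = self.body(x)
            return self.mean(h), self.log_var(h)

    def gaussian_nll(mean, log_var, target):
        # Negative log-likelihood of a Gaussian with input-dependent variance
        # (constant term dropped).
        return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

    model = DurationForecaster()
    x, y = torch.randn(32, 16), torch.rand(32, 1) * 4.0   # fake cases, durations in hours
    mean, log_var = model(x)
    loss = gaussian_nll(mean, log_var, y)
    loss.backward()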
27

Integrating Multiple Modalities into Deep Learning Networks

McNeil, Patrick N. 30 June 2017 (has links)
Deep learning networks in the literature traditionally only used a single input modality (or data stream). Integrating multiple modalities into deep learning networks with the goal of correlating extracted features was a major issue. Traditional methods involved treating each modality separately and then writing custom code to combine the extracted features.

Current solutions for small numbers of modalities (three or less) showed there are multiple architectures for modality integration. With an increase in the number of modalities, the "curse of dimensionality" affects the performance of the system. The research showed current methods for larger scale integrations required separate, custom created modules with another integration layer outside the deep learning network. These current solutions do not scale well nor provide good generalized performance. This research report studied architectures using multiple modalities and the creation of a scalable and efficient architecture.
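As a simple illustration of the integration problem the report studies, the sketch below wires one encoder per modality into a shared classifier by concatenating their features (a basic late-fusion design); the three-modality setup, dimensions, and class count are assumptions, not the report's architecture.

    import torch
    import torch.nn as nn

    class LateFusionNet(nn.Module):
        """Illustrative late fusion: one small encoder per modality, features
        concatenated into a shared classifier. Modality dimensions are assumed."""

        def __init__(self, dims=(128, 64, 32), embed=32, n_classes=10):
            super().__init__()
            self.encoders = nn.ModuleList(
                [nn.Sequential(nn.Linear(d, embed), nn.ReLU()) for d in dims]
            )
            self.classifier = nn.Linear(embed * len(dims), n_classes)

        def forward(self, inputs):
            feats = [enc(x) for enc, x in zip(self.encoders, inputs)]
            return self.classifier(torch.cat(feats, dim=-1))

    model = LateFusionNet()
    batch = [torch.randn(4, 128), torch.randn(4, 64), torch.randn(4, 32)]
    logits = model(batch)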
28

Learning of disjunctive concepts with explanation-based learning.

Salembier-Pelletier, Maude. January 1990 (has links)
Abstract Not Available.
29

Intelligent search techniques for large software systems.

Liu, Huixiang. January 2002 (has links)
There are many tools available today to help software engineers search in source code systems. It is often the case, however, that there is a gap between what people really want to find and the actual query strings they specify. This is because a concept in a software system may be represented by many different terms, while the same term may have different meanings in different places. Therefore, software engineers often have to guess as they specify a search, and often have to repeatedly search before finding what they want. To alleviate the search problem, this thesis describes a study of what we call intelligent search techniques as implemented in a software exploration environment, whose purpose is to facilitate software maintenance. We propose to utilize some information retrieval techniques to automatically apply transformations to the query strings. The thesis first introduces the intelligent search techniques used in our study, including abbreviation concatenation and abbreviation expansion. Then it describes in detail the rating algorithms used to evaluate the query results' similarity to the original query strings. Next, we describe a series of experiments we conducted to assess the effectiveness of both the intelligent search methods and our rating algorithms. Finally, we describe how we use the analysis of the experimental results to recommend an effective combination of searching techniques for software maintenance, as well as to guide our future research.
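The thesis's own transformation and rating algorithms are not reproduced in the abstract; the sketch below only illustrates the general shape of abbreviation expansion plus a similarity rating, using a hypothetical abbreviation table and Python's difflib as a stand-in for the rating algorithms.

    from difflib import SequenceMatcher

    # Hypothetical abbreviation table; a real tool would mine these from the code base.
    ABBREVIATIONS = {"msg": "message", "ptr": "pointer", "init": "initialize", "cfg": "config"}

    def expand_query(query: str) -> list[str]:
        """Generate transformed query strings: the original, an abbreviation-expanded
        form, and a concatenated form (e.g., 'init cfg' -> 'initcfg')."""
        words = query.lower().split()
        expanded = " ".join(ABBREVIATIONS.get(w, w) for w in words)
        concatenated = "".join(words)
        return list(dict.fromkeys([query.lower(), expanded, concatenated]))

    def rate(candidate: str, original: str) -> float:
        """Rate a search hit's similarity to the original query string in [0, 1]."""
        return SequenceMatcher(None, candidate.lower(), original.lower()).ratio()

    identifiers = ["initializeConfig", "msgQueuePtr", "configureInput"]
    queries = expand_query("init cfg")
    hits = sorted(identifiers, key=lambda s: max(rate(s, q) for q in queries), reverse=True)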
30

Interactive hierarchical generate and test search.

Xu, Xin. January 1991 (has links)
Most of the search methods used in AI are inflexible. Interactive search is a new kind of search in which the search system can communicate and cooperate with external agents. There are two kinds of agents: human agents and non-human agents. Through interaction with human agents (man-machine interaction), the search system can make use of the human talent for judging the quality of a solution. Through interaction with non-human agents (machine-machine interaction), the search system can automatically exploit knowledge from its environment. An interactive search system has the ability to take advice from external agents; ordinary non-interactive search models are special instances of interactive search in which the advice sequences are empty. We investigate a particular kind of interactive search, Interactive Hierarchical Generate and Test (IHGT) search, which is obtained by introducing interactive ability into Hierarchical Generate and Test (HGT) search. To make HGT search interactive, we created an editor called the Generator Editor (GE), implemented in Prolog. GE is a bottom-level language shell outside the HGT search model that translates advice into dynamic changes to all three search factors. (Abstract shortened by UMI.)
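As a generic illustration of a generate-and-test loop with an advice hook (not the Prolog GE implementation described above), consider the following sketch; the advice interface and the toy problem are hypothetical.

    import itertools

    def interactive_generate_and_test(generate, test, get_advice=None, limit=1000):
        """Minimal sketch of generate-and-test search with an optional advice hook.

        `generate` yields candidate solutions, `test` accepts or rejects them, and
        `get_advice` (standing in for an external human or machine agent) may
        transform or veto candidates. With no advice the loop reduces to ordinary
        non-interactive generate-and-test, mirroring the claim in the abstract.
        """
        for candidate in itertools.islice(generate(), limit):
            if get_advice is not None:
                candidate = get_advice(candidate)   # advice may transform or veto
                if candidate is None:
                    continue
            if test(candidate):
                return candidate
        return None

    # Toy usage: find a pair summing to 10, with advice discarding odd first elements.
    gen = lambda: ((a, b) for a in range(20) for b in range(20))
    found = interactive_generate_and_test(
        gen, test=lambda p: p[0] + p[1] == 10,
        get_advice=lambda p: p if p[0] % 2 == 0 else None)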
