71. The development of an improved method of making capacity-achievement comparisons. Prescott, George Arthur. January 1950.
Thesis (Ed.D.)--Boston University
72. Automatic Conversation Review for Intelligent Virtual Assistants. Beaver, Ian. 26 September 2018.
When reviewing the performance of Intelligent Virtual Assistants (IVAs), it is desirable to prioritize conversations involving misunderstood human inputs. These conversations uncover errors in natural language understanding and help prioritize and expedite improvements to the IVA. As human reviewer time is valuable and manual analysis is time-consuming, prioritizing the conversations where misunderstanding has likely occurred reduces costs and speeds improvement. A system for measuring the post hoc "risk of missed intent" associated with a single human input is presented. Numerous indicators of risk are explored and implemented. These indicators are combined using various means and evaluated on real-world data. In addition, the ability of the system to adapt to different domains of language is explored. Finally, the system's performance in identifying errors in IVA understanding is compared to that of human reviewers, and multiple aspects of system deployment for commercial use are discussed.
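The abstract does not detail how the risk indicators are combined. Purely as a hedged illustration of the general idea, the sketch below scores each conversation turn with a weighted combination of made-up indicators (low understanding confidence, escalation to a human, immediate rephrasing) and sorts turns so the riskiest are reviewed first. The indicator names, weights, and linear combination are assumptions, not the system described in the thesis.

```python
# Hypothetical sketch: combine several made-up risk indicators into a single
# post hoc "risk of missed intent" score and rank conversation turns for review.
# This is NOT the thesis' system; indicators and weights are illustrative only.

from dataclasses import dataclass

@dataclass
class Turn:
    text: str
    nlu_confidence: float   # hypothetical confidence from the understanding component
    was_escalated: bool     # user asked for a human agent afterwards
    was_rephrased: bool     # user immediately restated the request

def risk_indicators(turn: Turn) -> dict:
    """Each indicator maps a turn to a value in [0, 1]; higher means riskier."""
    return {
        "low_confidence": 1.0 - turn.nlu_confidence,
        "escalation": 1.0 if turn.was_escalated else 0.0,
        "rephrase": 1.0 if turn.was_rephrased else 0.0,
    }

def risk_score(turn: Turn, weights: dict) -> float:
    """Weighted combination of indicators, normalized to [0, 1]."""
    ind = risk_indicators(turn)
    return sum(weights[k] * ind[k] for k in ind) / sum(weights.values())

if __name__ == "__main__":
    weights = {"low_confidence": 0.5, "escalation": 0.3, "rephrase": 0.2}
    turns = [
        Turn("book a flihgt to bostn", 0.41, False, True),
        Turn("what is my balance", 0.93, False, False),
    ]
    # Review the highest-risk turns first.
    for t in sorted(turns, key=lambda t: risk_score(t, weights), reverse=True):
        print(f"{risk_score(t, weights):.2f}  {t.text}")
```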
73. The Effect of the Implementation of a Swarm Intelligence Algorithm on the Efficiency of the Cosmos Open Source Managed Operating System. Usman, Modibo. 24 May 2018.
As the complexity of mankind's day-to-day challenges increases, so does the need to optimize known solutions to accommodate this increase in complexity. Today's computer systems use the Input, Processing, and Output (IPO) model as a way to deliver efficiency and optimization in human activities. Since the relative quality of an output utility derived from an IPO-based computer system is closely coupled to the quality of its input media, the measure of the Optimal Quotient (OQ) is the ratio of input to output, which is 1:1. This relationship ensures that all IPO-based computers are not just linearly predictable, but also characterized by the Garbage In, Garbage Out (GIGO) design concept. While current IPO-based computer systems have been relatively successful at delivering some measure of optimization, there is a need to examine alternative methods of achieving optimization (Li & Malik, 2016). The purpose of this quantitative research study, through an experimental research design, is to determine the effects of the application of a Swarm Intelligence algorithm on the efficiency of the Cosmos Open Source Managed Operating System.

By incorporating swarm intelligence into an improved IPO design, this research addresses the need for optimization in computer systems through the creation of an improved operating system scheduler. The design of a Swarm Intelligence Operating System (SIOS) is an attempt to solve some inherent vulnerabilities and problems of complexity and optimization otherwise unresolved in the design of conventional operating systems. This research will use the Cosmos open source operating system as a test harness to ensure improved internal validity, while the subsequent comparison between the conventional and improved IPO designs will demonstrate external validity to real-world applications.
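The thesis targets the scheduler of Cosmos, a C#-based managed operating system; purely as a hedged illustration of how a swarm algorithm can tune a scheduling parameter, the Python sketch below uses a plain particle swarm to pick a round-robin time quantum that minimizes average turnaround time on a made-up workload. The workload, the objective, and the PSO settings are all assumptions, not the SIOS design.

```python
# Hypothetical sketch: a particle swarm tunes a round-robin time quantum against
# a simulated toy workload. Not the Cosmos/SIOS scheduler from the thesis.

import random

BURSTS = [8, 3, 12, 5, 2, 9]   # made-up CPU burst times (ticks), all arriving at t=0

def avg_turnaround(quantum: float) -> float:
    """Simulate round-robin scheduling and return the average turnaround time."""
    q = max(1, int(round(quantum)))
    remaining = BURSTS[:]
    finish = [0] * len(BURSTS)
    clock = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            run = min(q, r)
            clock += run
            remaining[i] -= run
            if remaining[i] == 0:
                finish[i] = clock          # turnaround = finish time in this toy model
    return sum(finish) / len(finish)

def pso(n_particles=10, iters=50, lo=1.0, hi=20.0):
    """Plain particle swarm optimization over the 1-D quantum parameter."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best_p = pos[:]                        # per-particle best position
    best_g = min(pos, key=avg_turnaround)  # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (best_p[i] - pos[i])
                      + 1.5 * r2 * (best_g - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if avg_turnaround(pos[i]) < avg_turnaround(best_p[i]):
                best_p[i] = pos[i]
            if avg_turnaround(pos[i]) < avg_turnaround(best_g):
                best_g = pos[i]
    return best_g

if __name__ == "__main__":
    q = pso()
    print(f"best quantum ~ {q:.1f}, avg turnaround = {avg_turnaround(q):.1f}")
```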
74. A Framework for Enhancing Speaker Age and Gender Classification by Using a New Feature Set and Deep Neural Network Architectures. Abumallouh, Arafat. 14 March 2018.
Speaker age and gender classification is one of the most challenging problems in speech processing. With developing technologies, identifying a speaker's age and gender has become a necessity for speaker verification and identification systems, with applications such as identifying suspects in criminal cases, improving human-machine interaction, and adapting music for people waiting in a queue. Although many studies have focused on feature extraction and classifier design, classification accuracies are still not satisfactory. The key issue in identifying a speaker's age and gender is to generate robust features and to design a deep classifier. Age and gender information is concealed in the speaker's speech, which is affected by many factors such as background noise, speech content, and phonetic divergence.

In this work, different methods are proposed to enhance speaker age and gender classification using deep neural networks (DNNs) as both feature extractor and classifier. First, a model for generating new features from a DNN is proposed. The proposed method uses the Hidden Markov Model Toolkit (HTK) to find tied-state triphones for all utterances, which are used as labels for the output layer of the DNN. The DNN with a bottleneck layer is trained in an unsupervised manner to calculate the initial weights between layers, then trained and tuned in a supervised manner to generate transformed mel-frequency cepstral coefficients (T-MFCCs). Second, a shared class labels method is introduced among misclassified classes to regularize the weights in the DNN. Third, DNN-based speaker models using the SDC feature set are proposed; a speaker-aware model can capture the characteristics of speaker age and gender more effectively than a model that represents a group of speakers. In addition, an AGender-Tune system is proposed to classify speaker age and gender by jointly fine-tuning two DNN models: the first model is pre-trained to classify speaker age, and the second model is pre-trained to classify speaker gender. Moreover, the new T-MFCC feature set is used as the input to a fusion of two systems, the DNN-based class model and the DNN-based speaker model. Utilizing the T-MFCCs as input and fusing the final score with the score of a DNN-based class model enhanced the classification accuracies. Finally, the DNN-based speaker models are embedded into the AGender-Tune system to exploit the advantages of each method for better speaker age and gender classification.

The experimental results on a public, challenging database showed the effectiveness of the proposed methods for enhancing speaker age and gender classification and achieved the state of the art on this database.
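As an illustration of the bottleneck-feature idea described above (not the thesis' exact pipeline, which includes HTK tied-state alignment and unsupervised pre-training), the following sketch defines a small feed-forward network whose narrow hidden layer provides transformed features analogous to T-MFCCs. The layer sizes, the 39-dimensional MFCC input, and the 1000 tied-state targets are assumptions.

```python
# Hypothetical sketch of bottleneck feature extraction with PyTorch. A small
# network maps MFCC frames to tied-state targets; the narrow hidden layer's
# activations are taken as the transformed feature set.

import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    def __init__(self, n_mfcc=39, n_bottleneck=40, n_targets=1000):
        super().__init__()
        self.front = nn.Sequential(
            nn.Linear(n_mfcc, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_bottleneck),        # narrow "bottleneck" layer
        )
        self.back = nn.Sequential(
            nn.ReLU(),
            nn.Linear(n_bottleneck, 512), nn.ReLU(),
            nn.Linear(512, n_targets),           # tied-state (triphone) targets
        )

    def forward(self, x):
        return self.back(self.front(x))

    def transformed_features(self, x):
        """Bottleneck activations used as the new feature set (a T-MFCC analogue)."""
        with torch.no_grad():
            return self.front(x)

if __name__ == "__main__":
    net = BottleneckNet()
    frames = torch.randn(16, 39)            # a batch of 16 MFCC frames
    logits = net(frames)                    # training would apply cross-entropy here
    feats = net.transformed_features(frames)
    print(logits.shape, feats.shape)        # torch.Size([16, 1000]) torch.Size([16, 40])
```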
75. Learning from Temporally-Structured Human Activities Data. Lipton, Zachary C. 06 January 2018.
Despite the extraordinary success of deep learning on diverse problems, these triumphs are too often confined to large, clean datasets and well-defined objectives. Face recognition systems train on millions of perfectly annotated images. Commercial speech recognition systems train on thousands of hours of painstakingly annotated data. But for applications addressing human activity, data can be noisy, expensive to collect, and plagued by missing values. In electronic health records, for example, each attribute might be observed on a different time scale. Complicating matters further, deciding precisely what objective warrants optimization requires critical consideration of both algorithms and the application domain. Moreover, deploying human-interacting systems requires careful consideration of societal demands such as safety, interpretability, and fairness.

The aim of this thesis is to address the obstacles to mining temporal patterns in human activity data. The primary contributions are: (1) the first application of RNNs to multivariate clinical time series data, with several techniques for bridging long-term dependencies and modeling missing data; (2) a neural network algorithm for forecasting surgery duration while simultaneously modeling heteroscedasticity; (3) an approach to quantitative investing that uses RNNs to forecast company fundamentals; (4) an exploration strategy for deep reinforcement learners that significantly speeds up dialogue policy learning; (5) an algorithm to minimize the number of catastrophic mistakes made by a reinforcement learner; (6) critical works addressing model interpretability and fairness in algorithmic decision-making.
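One recurring technique for the clinical time series contribution is to let an RNN see which measurements are missing. As a minimal sketch of that idea, and not the thesis' exact models, the code below feeds an LSTM zero-imputed values concatenated with a 0/1 missingness mask; the layer sizes and imputation scheme are assumptions.

```python
# Hypothetical sketch: an LSTM over clinical time series with per-variable
# missingness indicators appended to each time step's input.

import torch
import torch.nn as nn

class MaskedLSTMClassifier(nn.Module):
    def __init__(self, n_vars=13, hidden=64, n_labels=2):
        super().__init__()
        # Input at each time step: observed values (zero-imputed) + 0/1 mask.
        self.lstm = nn.LSTM(input_size=2 * n_vars, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, values, mask):
        # values, mask: (batch, time, n_vars); missing entries in `values` are zeroed.
        x = torch.cat([values * mask, mask], dim=-1)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])            # classify from the final hidden state

if __name__ == "__main__":
    batch, steps, n_vars = 4, 48, 13
    values = torch.randn(batch, steps, n_vars)
    mask = (torch.rand(batch, steps, n_vars) > 0.5).float()   # 1 = observed
    model = MaskedLSTMClassifier(n_vars=n_vars)
    print(model(values, mask).shape)         # torch.Size([4, 2])
```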
76. Integrating Multiple Modalities into Deep Learning Networks. McNeil, Patrick N. 30 June 2017.
Deep learning networks in the literature traditionally used only a single input modality (or data stream). Integrating multiple modalities into deep learning networks with the goal of correlating extracted features was a major issue. Traditional methods involved treating each modality separately and then writing custom code to combine the extracted features.

Current solutions for small numbers of modalities (three or fewer) showed that there are multiple architectures for modality integration. As the number of modalities increases, the "curse of dimensionality" affects the performance of the system. The research showed that current methods for larger-scale integration required separate, custom-created modules with another integration layer outside the deep learning network. These current solutions neither scale well nor provide good generalized performance. This research report studied architectures using multiple modalities and the creation of a scalable and efficient architecture.
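As a minimal sketch of the kind of integration the report studies, the code below gives each modality its own small encoder and fuses the extracted features by concatenation before a shared classifier. The modalities, dimensions, and fusion-by-concatenation choice are assumptions rather than the report's architecture.

```python
# Hypothetical sketch: one encoder per modality, features fused by concatenation.

import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, modality_dims, embed=32, n_classes=10):
        super().__init__()
        # One encoder per input modality (e.g. audio, image, and text features).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, embed))
            for d in modality_dims
        )
        self.classifier = nn.Sequential(
            nn.Linear(embed * len(modality_dims), 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, each of shape (batch, modality_dim).
        fused = torch.cat([enc(x) for enc, x in zip(self.encoders, inputs)], dim=-1)
        return self.classifier(fused)

if __name__ == "__main__":
    net = MultimodalNet(modality_dims=[128, 64, 300])
    batch = [torch.randn(8, d) for d in (128, 64, 300)]
    print(net(batch).shape)                  # torch.Size([8, 10])
```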
77. Temporal Markov Decision Problems: Formalization and Resolution. Rachelson, Emmanuel. 23 March 2009.
This thesis addresses the question of planning under uncertainty within a time-dependent, changing environment. The original motivation for this work came from the problem of building an autonomous agent able to coordinate with its uncertain environment, this environment being composed of other agents communicating their intentions or of non-controllable processes for which some discrete-event model is available. We investigate several approaches to modeling continuous time-dependency in the framework of Markov Decision Processes (MDPs), leading us to a definition of Temporal Markov Decision Problems. Our approach then focuses on two separate paradigms. First, we investigate time-dependent problems as "implicit-event" processes and describe them through the formalism of Time-dependent MDPs (TMDPs). We extend the existing results concerning optimality equations and present a new Value Iteration algorithm based on piecewise polynomial function representations in order to solve a more general class of TMDPs. This paves the way to a more general discussion on parametric actions in hybrid state and action space MDPs with continuous time. Second, we investigate the option of separately modeling the concurrent contributions of exogenous events. This "explicit-event" modeling approach leads to the use of Generalized Semi-Markov Decision Processes (GSMDPs). We establish a link between the general framework of Discrete Event System Specification (DEVS) and the formalism of GSMDPs, allowing us to build sound discrete-event-compatible simulators. We then introduce a simulation-based Policy Iteration approach for explicit-event Temporal Markov Decision Problems. This algorithmic contribution brings together results from simulation theory, forward search in MDPs, and statistical learning theory. The implicit-event approach was tested on a specific version of the Mars rover planning problem and on a drone patrol mission planning problem, while the explicit-event approach was evaluated on a subway network control problem.
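As a simplified illustration of the implicit-event formulation, and not the thesis' method (which represents value functions with piecewise polynomials), the sketch below runs value iteration over a state space augmented with a discretized time axis, so rewards and transition probabilities can depend on when an action is taken. The toy two-state problem and the time grid are assumptions.

```python
# Hypothetical sketch: backward value iteration on a time-augmented state space,
# i.e. a discretized stand-in for a time-dependent MDP.

N_STATES, N_ACTIONS, HORIZON = 2, 2, 24      # time measured in discrete steps
GAMMA = 1.0                                  # finite horizon, undiscounted

def success_prob(action, t):
    """Made-up time dependence: action 1 works better in the second half."""
    return 0.8 if (action == 1 and t >= 12) else 0.5

def reward(state, action, t):
    return 1.0 if state == 1 else 0.0        # being in state 1 is rewarding

def value_iteration():
    # V[t][s]: value of being in state s at time t; each action takes one time step.
    V = [[0.0] * N_STATES for _ in range(HORIZON + 1)]
    policy = [[0] * N_STATES for _ in range(HORIZON)]
    for t in range(HORIZON - 1, -1, -1):
        for s in range(N_STATES):
            best_q, best_a = float("-inf"), 0
            for a in range(N_ACTIONS):
                p = success_prob(a, t)
                # With probability p the action leads to state 1, otherwise to state 0.
                q = reward(s, a, t) + GAMMA * (p * V[t + 1][1] + (1 - p) * V[t + 1][0])
                if q > best_q:
                    best_q, best_a = q, a
            V[t][s], policy[t][s] = best_q, best_a
    return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    print("value at t=0:", [round(v, 2) for v in V[0]])
    print("policy for state 0 over time:", [policy[t][0] for t in range(HORIZON)])
```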
78. Learning of disjunctive concepts with explanation-based learning. Salembier-Pelletier, Maude. January 1990.
Abstract Not Available.
79. Intelligent search techniques for large software systems. Liu, Huixiang. January 2002.
There are many tools available today to help software engineers search source code systems. It is often the case, however, that there is a gap between what people really want to find and the actual query strings they specify. This is because a concept in a software system may be represented by many different terms, while the same term may have different meanings in different places. Therefore, software engineers often have to guess as they specify a search, and often have to search repeatedly before finding what they want. To alleviate this problem, this thesis describes a study of what we call intelligent search techniques as implemented in a software exploration environment, whose purpose is to facilitate software maintenance. We propose to use information retrieval techniques to automatically apply transformations to the query strings. The thesis first introduces the intelligent search techniques used in our study, including abbreviation concatenation and abbreviation expansion. It then describes in detail the rating algorithms used to evaluate the similarity of query results to the original query strings. Next, we describe a series of experiments conducted to assess the effectiveness of both the intelligent search methods and our rating algorithms. Finally, we describe how we use the analysis of the experimental results to recommend an effective combination of search techniques for software maintenance, as well as to guide our future research.
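As a rough illustration of the two query transformations named above, the sketch below expands a multi-word query using a hypothetical abbreviation table, searches identifiers for the concatenated variants, and rates hits by string similarity to the original query. The abbreviation table and the rating formula are assumptions, not the thesis' algorithms.

```python
# Hypothetical sketch: abbreviation expansion/concatenation for query
# transformation, plus a simple similarity rating for the results.

from difflib import SequenceMatcher
from itertools import product

ABBREVIATIONS = {                      # made-up, project-specific table
    "message": ["msg", "message"],
    "manager": ["mgr", "man", "manager"],
    "buffer": ["buf", "buffer"],
}

def query_variants(query: str):
    """Generate concatenated variants of a multi-word query using abbreviations."""
    word_options = [ABBREVIATIONS.get(w, [w]) for w in query.lower().split()]
    for combo in product(*word_options):
        yield "".join(combo)           # e.g. "message manager" -> "msgmgr"

def rate(candidate: str, query: str) -> float:
    """Rate a candidate identifier against the original query string, in [0, 1]."""
    flat_query = query.lower().replace(" ", "")
    return SequenceMatcher(None, candidate.lower(), flat_query).ratio()

def search(identifiers, query, top_n=5):
    variants = set(query_variants(query))
    hits = [i for i in identifiers if any(v in i.lower() for v in variants)]
    return sorted(hits, key=lambda i: rate(i, query), reverse=True)[:top_n]

if __name__ == "__main__":
    identifiers = ["MsgMgrInit", "free_buffer", "MessageManager", "msg_queue_push"]
    for ident in search(identifiers, "message manager"):
        print(f"{rate(ident, 'message manager'):.2f}  {ident}")
```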
80. Interactive hierarchical generate and test search. Xu, Xin. January 1991.
Most of the search methods used in AI are inflexible. Interactive search is a new kind of search in which the search system can communicate and cooperate with external agents. There are two kinds of agents: human agents and non-human agents. Through interaction with human agents (man-machine interaction), the search system can make use of the human talent for judging the quality of a solution. Through interaction with non-human agents (machine-machine interaction), the search system can automatically exploit knowledge from its environment. An interactive search system has the ability to take advice from external agents. Ordinary non-interactive search models are special instances of interactive search in which the advice sequences are empty. We investigate a particular kind of interactive search, IHGT (Interactive Hierarchical Generate and Test) search, which is established by introducing interactive ability into HGT (Hierarchical Generate and Test) search. To make HGT search interactive, we created an editor called GE (Generator Editor). GE, implemented in Prolog, is a bottom-level language shell outside the HGT search model that translates advice into dynamic changes in all three search factors. (Abstract shortened by UMI.)
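As a minimal sketch of generate-and-test search with an advice hook, and not the Prolog GE/HGT implementation described above, the code below lets an external agent's advice forbid some symbols and bias the order in which candidates are explored; with empty advice it degenerates to ordinary non-interactive generate and test. The candidate space, advice format, and tester are assumptions.

```python
# Hypothetical sketch: generate-and-test search whose generator can be steered
# by advice from an external agent.

def generate(partial, symbols):
    """Hierarchically extend a partial candidate by one symbol."""
    return [partial + [s] for s in symbols]

def test(candidate, target_len, forbidden):
    """Reject candidates using a forbidden symbol; accept complete ones."""
    if any(s in forbidden for s in candidate):
        return "reject"
    return "accept" if len(candidate) == target_len else "continue"

def search(symbols, target_len, advice=None):
    advice = advice or {}
    forbidden = set(advice.get("forbid", []))
    preferred = set(advice.get("prefer", []))
    frontier = [[]]
    while frontier:
        partial = frontier.pop(0)
        for cand in generate(partial, symbols):
            verdict = test(cand, target_len, forbidden)
            if verdict == "accept":
                return cand
            if verdict == "continue":
                # Advice reorders the frontier: preferred symbols are explored first.
                if cand[-1] in preferred:
                    frontier.insert(0, cand)
                else:
                    frontier.append(cand)
    return None

if __name__ == "__main__":
    # With no advice this is ordinary (non-interactive) generate and test.
    print(search(list("abc"), 3))
    # Advice from an external agent narrows and biases the search.
    print(search(list("abc"), 3, advice={"forbid": ["a"], "prefer": ["c"]}))
```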