91

Predicting the behavior of robotic swarms in discrete simulation

Lancaster, Joseph Paul, Jr January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / David Gustafson / We use probabilistic graphs to predict the location of swarms over 100 steps in simulations in grid worlds. One graph can be used to make predictions for worlds of different dimensions. The worlds are constructed from a single 5x5 square pattern, each square of which may be either unoccupied or occupied by an obstacle or a target. Simulated robots move through the worlds, avoiding the obstacles and tagging the targets. The interactions among the robots, and between the robots and the environment, lead to behavior that, even in deterministic simulations, can be difficult to anticipate. The graphs capture the local rate and direction of swarm movement through the pattern. The graphs are used to create a transition matrix which, along with an occupancy matrix, can be used to predict occupancy in the patterns over the 100 steps using 100 matrix multiplications. In the future, the graphs could be used to predict the movement of physical swarms through patterned environments such as city blocks in applications such as disaster response search and rescue. The predictions could assist in the design and deployment of such swarms and help rule out undesirable behavior.
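The prediction step described above amounts to repeated application of a transition matrix to an occupancy vector. A minimal sketch, using an illustrative three-cell world and made-up transition probabilities rather than anything from the thesis:

```python
import numpy as np

# Hypothetical 3-cell corridor: robots move right with probability 0.6,
# stay put with probability 0.4; the last cell absorbs.
T = np.array([
    [0.4, 0.6, 0.0],
    [0.0, 0.4, 0.6],
    [0.0, 0.0, 1.0],
])

# Initial occupancy: the whole swarm starts in cell 0.
occupancy = np.array([1.0, 0.0, 0.0])

# Predict occupancy after 100 steps with 100 matrix multiplications.
for _ in range(100):
    occupancy = occupancy @ T

print(occupancy.round(3))
```

After 100 steps, nearly all of the probability mass has reached the absorbing cell, and the total mass stays 1 because each row of the transition matrix sums to 1.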
92

A Reasoning Module for Long-lived Cognitive Agents

Vassos, Stavros 03 March 2010
In this thesis we study a reasoning module for agents that have cognitive abilities, such as memory, perception, action, and are expected to function autonomously for long periods of time. The module provides the ability to reason about action and change using the language of the situation calculus and variants of the basic action theories. The main focus of this thesis is on the logical problem of progressing an action theory. First, we investigate the conjecture by Lin and Reiter that a practical first-order definition of progression is not appropriate for the general case. We show that Lin and Reiter were indeed correct in their intuitions by providing a proof for the conjecture, thus resolving the open question about the first-order definability of progression and justifying the need for a second-order definition. Then we proceed to identify three cases where it is possible to obtain a first-order progression with the desired properties: i) we extend earlier work by Lin and Reiter and present a case where we restrict our attention to a practical class of queries that may only quantify over situations in a limited way; ii) we revisit the local-effect assumption of Liu and Levesque that requires that the effects of an action are fixed by the arguments of the action and show that in this case a first-order progression is suitable; iii) we investigate a way that the local-effect assumption can be relaxed and show that when the initial knowledge base is a database of possible closures and the effects of the actions are range-restricted then a first-order progression is also suitable under a just-in-time assumption. Finally, we examine a special case of the action theories with range-restricted effects and present an algorithm for computing a finite progression. We prove the correctness and the complexity of the algorithm, and show its application in a simple example that is inspired by video games.
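As a rough illustration of the local-effect case (not the thesis's formal machinery): when an action's effects are fixed by its own arguments, progressing a knowledge base held as a set of ground fluents reduces to deleting and adding finitely many literals. The fluents and action below are invented for illustration:

```python
# Knowledge base as a set of ground fluents (closed-world, for illustration).
kb = {("At", "robot", "room1"), ("Holding", "robot", "box")}

def progress(kb, deletes, adds):
    """Progress a ground KB through a local-effect action:
    remove the fluents the action falsifies, add the ones it makes true."""
    return (kb - deletes) | adds

# Hypothetical action move(robot, room1, room2): its effects mention only
# its own arguments, so the update touches a fixed, finite set of fluents.
kb = progress(kb,
              deletes={("At", "robot", "room1")},
              adds={("At", "robot", "room2")})

print(sorted(kb))
```

The general second-order case has no such finite reduction, which is what makes the local-effect restriction valuable in practice.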
93

Adaptive Image Quality Improvement with Bayesian Classification for In-line Monitoring

Yan, Shuo 01 August 2008
Development of an automated method for classifying digital images using a combination of image quality modification and Bayesian classification is the subject of this thesis. The specific example is classification of images obtained by monitoring molten plastic in an extruder. These images were to be classified into two groups: the “with particle” (WP) group, which showed contaminant particles, and the “without particle” (WO) group, which did not. Previous work effected the classification using only an adaptive Bayesian model; this work combines adaptive image quality modification with that model. The first objective was to develop an off-line automated method for determining how to modify each individual raw image to obtain the quality required for improved classification results. This was done by defining image quality in terms of probability under a Bayesian classification model; the Nelder-Mead simplex method was then used to optimize that quality. The result was a “Reference Image Database”, which served as the basis for accomplishing the second objective: an in-line method for modifying the quality of new images to improve classification beyond what could be obtained previously. Case-based reasoning used the Reference Image Database to locate reference images similar to each new image, and the database supplied instructions on how to modify the new image to obtain a better-quality image. Experimental verification of the method used a variety of images from the extruder monitor, including images purposefully produced to be of wide diversity. Image quality modification was made adaptive by adding new images to the Reference Image Database. When combined with the adaptive classification previously employed, error rates decreased from about 10% to less than 1% for most images.
For one unusually difficult set of images, which exhibited very low local contrast of particles against their background, it was necessary to split the Reference Image Database into two parts on the basis of a critical value for local contrast. The end result of this work is a powerful, flexible, and general method for improving classification of digital images that utilizes both image quality modification and classification modeling.
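The off-line optimization step can be sketched as follows; the one-parameter quality model and the placeholder probability function are assumptions for illustration, not the thesis's actual image model:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective: the probability the Bayesian classifier assigns
# to the correct class as a function of one image-modification parameter
# (e.g. a contrast gain). The real objective would score a modified image.
def classification_probability(gain):
    return np.exp(-(gain - 1.7) ** 2)  # peaks at gain = 1.7 (illustrative)

# Nelder-Mead maximizes the probability by minimizing its negative;
# it needs no derivatives, only function evaluations.
result = minimize(lambda g: -classification_probability(g[0]),
                  x0=[1.0], method="Nelder-Mead")

best_gain = result.x[0]
print(round(best_gain, 2))
```

A derivative-free method fits here because the objective is evaluated by running a classifier on a modified image, which has no usable analytic gradient.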
94

Statistical Methods for Dating Collections of Historical Documents

Tilahun, Gelila 31 August 2011
The problem in this thesis was originally motivated by the Documents of Early England Data Set (DEEDS). The central problem with these medieval documents is the lack of methods to assign accurate dates to those documents which bear no date. With the problems of the DEEDS documents in mind, we present two methods to impute missing features of texts. In the first method, we suggest a new class of metrics for measuring distances between texts, and then show how to combine the distances between texts using statistical smoothing. This method can be adapted to settings where the features of the texts are ordered or unordered categorical variables (as in, for example, authorship attribution problems). In the second method, we estimate the probability of occurrence of words in texts using nonparametric regression techniques: local polynomial fitting with kernel weights applied to generalized linear models. We combine the estimated probabilities of occurrence of the words of a text to estimate the probability of occurrence of the text as a function of its feature, here the date on which the text was written. The application and results of our methods on the DEEDS documents are presented.
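The distance-plus-smoothing idea of the first method can be illustrated with simple kernel regression: estimate an undated text's date as a kernel-weighted average of the dates of nearby dated texts. The distances, dates, and bandwidth below are toy values, not DEEDS data:

```python
import math

# Dates of dated reference texts, and their distances (under some text
# metric) to the undated query text -- toy values for illustration.
dates =     [1150, 1180, 1200, 1260, 1300]
distances = [8.0,  3.0,  1.5,  6.0,  9.0]

def estimate_date(dates, distances, bandwidth=3.0):
    """Nadaraya-Watson estimate: Gaussian-kernel-weighted mean of the
    reference dates, with closer texts weighted more heavily."""
    weights = [math.exp(-(d / bandwidth) ** 2) for d in distances]
    return sum(w * y for w, y in zip(weights, dates)) / sum(weights)

estimate = estimate_date(dates, distances)
print(round(estimate))
```

The estimate lands near the dates of the closest reference texts, as the smoothing intends; the bandwidth controls how sharply distant texts are discounted.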
95

Structured prediction and generative modeling using neural networks

Kastner, Kyle 08 1900
In this thesis we use neural networks to model data with sequential structure. There are many forms of data for which both the order and the structure of the information are incredibly important; the words in this paragraph are one example, and audio, images, and genomes are others. The work of modeling this type of ordered data falls within the field of structured prediction. We also present generative models, which attempt to generate data that appears similar to the data on which the model was trained. In Chapter 1, we provide an introduction to data and machine learning. First, we motivate the need for machine learning by describing an expert system built on a customer database. This leads to a discussion of common algorithms, losses, and optimization choices in machine learning. We then describe the basic building blocks of neural networks.
Finally, we add complexity to the models, discussing parameter sharing and convolutional and recurrent layers. In the remainder of the document, we discuss several types of neural networks that find common use in both prediction and generative modeling, and present examples of their use with audio, handwriting, and image datasets. In Chapter 2, we introduce the variational recurrent neural network (VRNN). The VRNN is developed to generate new sequential samples that resemble the dataset it was trained on. We present models that learned, in an unsupervised manner, how to generate handwriting, sound effects, and human speech, setting benchmarks in performance. Chapter 3 presents a recently developed model called ReNet, in which intermediate structured outputs from recurrent neural networks are used for object classification. This model shows competitive performance on a number of image recognition tasks, while using an architecture designed from the start for structured prediction. In this case, the final model output is only used for simple classification, but follow-up work has expanded the model to full structured prediction. Lastly, in Chapter 4 we present recent unpublished experiments in sequential audio generation. First we provide background on the musical concepts and digital representations that are fundamental to understanding our approach, and then introduce a baseline and new research results using our model, RNN-MADE. Next we introduce the concept of raw speech synthesis and discuss our investigation into generation. In our final chapter, we present a brief summary of results and postulate future research directions.
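The parameter sharing that defines a recurrent layer, discussed in Chapter 1 of the thesis, can be sketched in a few lines; this is a plain RNN step with arbitrary weights, not the VRNN itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared set of weights, reused at every timestep -- this reuse is
# the parameter sharing that defines a recurrent layer.
W_x = rng.normal(scale=0.1, size=(3, 4))  # input -> hidden
W_h = rng.normal(scale=0.1, size=(4, 4))  # hidden -> hidden
b = np.zeros(4)

def rnn_step(h, x):
    """One recurrent step: mix the input with the previous hidden state."""
    return np.tanh(x @ W_x + h @ W_h + b)

# Run a length-5 sequence of 3-dimensional inputs through the layer.
h = np.zeros(4)
for x in rng.normal(size=(5, 3)):
    h = rnn_step(h, x)

print(h.shape)
```

Because the same `W_x` and `W_h` apply at every step, the layer handles sequences of any length with a fixed parameter count.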
96

MASSPEC: multiagent system specification through policy exploration and checking

Harmon, Scott J. January 1900
Doctor of Philosophy / Department of Computing and Information Sciences / Scott A. DeLoach / Multiagent systems have been proposed as a way to create reliable, adaptable, and efficient systems. As these systems grow in complexity, their configuration, tuning, and design can become as complex as the problems they claim to solve. As researchers in multiagent systems engineering, we must create the next generation of theories and tools to help tame this growing complexity and take some of the burden off the systems engineer. In this thesis, I propose guidance policies as a way to do just that. I also give a framework for multiagent system design that uses guidance policies to automatically generate a set of constraints from a set of multiagent system models, along with an implementation that generates code conforming to these constraints. Presenting a formal definition of guidance policies, I show how they can be used in a machine learning context to improve the performance of a system and avoid failures. I also give a practical demonstration of converting abstract requirements into concrete system requirements (with respect to a given set of design models).
97

A Learning-based Semi-autonomous Control Architecture for Robotic Exploration in Search and Rescue Environments

Doroodgar, Barzin 07 December 2011
Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous control of rescue robots in disaster environments by allowing cooperation and task sharing between a human operator and a robot with respect to tasks such as navigation, exploration, and victim identification. Herein, a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture is presented for rescue robots operating in unknown and cluttered urban search and rescue (USAR) environments. The aim of the controller is to allow a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A new direction-based exploration technique and a rubble pile categorization technique are integrated into the control architecture for exploration of unknown rubble-filled environments. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed control architecture.
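The kind of reinforcement learning update such a controller builds on can be sketched as flat tabular Q-learning; the one-dimensional corridor task and reward are invented for illustration, and the thesis uses a hierarchical variant rather than this flat form:

```python
import random

random.seed(0)

# Toy corridor: states 0..4, actions move left/right, reward at state 4.
n_states, actions = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon else \
            max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == 4 else 0.0
        # One-step Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy should point right in every non-terminal state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(4)}
print(policy)
```

A hierarchical scheme decomposes such a task into subtasks (e.g. explore, categorize rubble), each with its own learned policy, rather than learning one flat table.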
98

Computationally Characterizing Schizophrenia

Green, Adrian 20 November 2012
An accurate diagnosis of schizophrenia is difficult; no reliable biomarkers of the disease exist. We present a computational approach to the diagnosis of schizophrenia from electroencephalography (EEG) recordings. Novel and existing mathematical methods for the interpretation of EEG are surveyed and compared. Methods utilizing single electrodes are used in conjunction with those incorporating the recordings of multiple electrodes. A data-driven, machine-learning approach is used to automate the selection of relevant features, which are then classified using least-squares support vector machines. This approach yielded a prediction accuracy of 86.5% under a stringent application of appropriate statistical techniques. The features deemed most relevant correspond to known abnormalities symptomatic of schizophrenia.
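A least-squares SVM replaces the quadratic program of a standard SVM with a single linear system. A minimal sketch on synthetic two-dimensional features (placeholders for EEG-derived features, not the thesis's data or exact formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D features for two well-separated classes.
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(+1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [+1.0] * 20)

# Least-squares SVM with a linear kernel: equality constraints turn the
# usual quadratic program into one (n+1) x (n+1) linear system.
gamma = 10.0                 # regularization strength
K = X @ X.T                  # linear kernel matrix
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

predict = lambda x: np.sign(alpha @ (X @ x) + b)
accuracy = np.mean([predict(x) == t for x, t in zip(X, y)])
print(accuracy)
```

On separable toy data the fitted model classifies the training points essentially perfectly; the real work in the thesis lies in the EEG feature selection that precedes this step.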
100

Computational Prediction of Gene Function From High-throughput Data Sources

Mostafavi, Sara 31 August 2011
A large number and variety of genome-wide genomics and proteomics datasets are now available for model organisms. Each dataset on its own presents a distinct but noisy view of cellular state; collectively, however, these datasets embody a more comprehensive view of cell function. This motivates predicting the function of uncharacterized genes by combining multiple datasets, in order to exploit the associations between such genes and genes of known function, all in a query-specific fashion. Commonly, heterogeneous datasets are represented as networks in order to facilitate their combination. Here, I show that it is possible to accurately predict gene function in seconds by combining multiple large-scale networks. This facilitates function prediction on demand, allowing users to take advantage of the persistent improvement and proliferation of genomics and proteomics datasets and to continuously make up-to-date predictions for large genomes such as the human genome. Our algorithm, GeneMANIA, uses constrained linear regression to combine multiple association networks and label propagation to make predictions from the combined network. I introduce extensions that improve predictions when the number of labeled examples for training is limited, or when an ontology describing a hierarchical gene function categorization scheme is available. Further, motivated by our empirical observations on predicting node labels for general networks, I propose a new label propagation algorithm that exploits common properties of real-world networks to increase both the speed and accuracy of our predictions.
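The label propagation step can be sketched on a toy association network; the graph, weights, and labels below are illustrative, and this clamped-averaging scheme is a standard variant rather than GeneMANIA's exact Gaussian-field formulation:

```python
import numpy as np

# Toy 5-node association network (symmetric weights); nodes 0 and 4 are
# labeled (+1 known member of a function class, -1 known non-member).
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
labels = np.array([+1.0, 0.0, 0.0, 0.0, -1.0])
clamped = labels != 0

# Iterative label propagation: each node takes the degree-normalized
# weighted average of its neighbours' scores; labeled nodes stay clamped.
f = labels.copy()
d_inv = 1.0 / W.sum(axis=1)
for _ in range(200):
    f = d_inv * (W @ f)
    f[clamped] = labels[clamped]

print(f.round(3))
```

Nodes closer (in the network) to the positive example end with positive scores, and the sign of each unlabeled node's score gives its predicted class; combining several such networks first is where the constrained regression comes in.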
