About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Simulation and learning in decision processes

Jones, Richard Anthony January 1999 (has links)
In this thesis we address the problem of adaptive control in complex stochastic systems, both when the system parameters are known and when they are unknown. The models we consider are those which, in the full-information case, are known as Markov Decision Processes. We introduce versions of two new algorithms, the optimiser and the p-learner. The optimiser is a simulation-based method for finding optimal values and optimal policies when the system parameters are known. The p-learner is an algorithm for learning about the state transition probabilities; we use it in conjunction with the optimiser when the system parameters are unknown. We carefully discuss the choice of different components in the different versions of the algorithms, and we look at two extended case studies to evaluate their performance over a range of learning parameters. In each case, we compare the results with those of a deterministic method. We also address the convergence of the solutions generated by the optimiser.
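The abstract does not spell out the optimiser or p-learner equations; the sketch below is a hedged illustration of the generic pattern it alludes to: estimating state transition probabilities empirically from observed or simulated transitions, then computing optimal values and a greedy policy on the estimated model by value iteration. Function names and array shapes are assumptions, not the thesis's algorithms.

```python
import numpy as np

def estimate_transitions(transitions, n_states, n_actions):
    """Empirical estimate of P(s' | s, a) from observed (s, a, s') triples."""
    counts = np.zeros((n_states, n_actions, n_states))
    for s, a, s_next in transitions:
        counts[s, a, s_next] += 1
    totals = counts.sum(axis=2, keepdims=True)
    # Fall back to a uniform distribution for (state, action) pairs never observed.
    return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_states)

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Optimal state values and a greedy policy for an MDP with transitions P and rewards R."""
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```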
212

An intelligent co-reference resolver for Winograd schema sentences containing resolved semantic entities

January 2013 (has links)
There has been a lot of research in the field of artificial intelligence about thinking machines. Alan Turing proposed a test to observe a machine's intelligent behaviour with respect to natural language conversation. The Winograd schema challenge has been suggested as an alternative to the Turing test. It requires inferencing capabilities, reasoning abilities and background knowledge to get the answer right. It involves a coreference resolution task in which a machine is given a sentence describing a situation that involves two entities, one pronoun and some further information about the situation, and the machine has to come up with the right resolution of the pronoun to one of the entities. The complexity of the task is increased by the fact that Winograd sentences are not constrained to one domain or a specific sentence structure, and they also contain many human proper names. This makes it difficult to associate entities with one particular word in the sentence in order to derive the answer. I have developed a pronoun resolver system for a confined domain of Winograd sentences. I have developed a classifier, or filter, which takes input sentences and decides to accept or reject them based on particular criteria. Once a sentence is accepted, I run parsers on it to obtain a detailed analysis. Furthermore, I have developed four answering modules which use world knowledge and inferencing mechanisms to try to resolve the pronoun. The four techniques I use are: the ConceptNet knowledge base, search engine pattern counts, narrative event chains, and sentiment analysis. I have developed an aggregation mechanism for the answers from these modules to arrive at a final answer. I use a caching technique for the association relations obtained by the different modules to boost performance. I run my system on the standard 'nyu dataset' of Winograd sentences and questions. My classifier restricts this dataset to 90 sentences, and I evaluate my system on this 90-sentence dataset. When I compare my results against the state-of-the-art system on the same dataset, I get nearly 4.5% improvement in the restricted domain. / Dissertation/Thesis / M.S. Computer Science 2013
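The abstract does not describe the aggregation mechanism itself; the following is a minimal sketch of how answers from the four modules could be combined by weighted voting over the two candidate antecedents. The module names, scores, weights and the example sentence are hypothetical, not the thesis's actual mechanism.

```python
def aggregate_answers(module_scores, weights=None):
    """Combine per-module scores for the two candidate antecedents into one answer.

    module_scores: {module_name: {candidate: score}}; weights: optional per-module trust.
    """
    weights = weights or {name: 1.0 for name in module_scores}
    totals = {}
    for name, scores in module_scores.items():
        for candidate, score in scores.items():
            totals[candidate] = totals.get(candidate, 0.0) + weights[name] * score
    return max(totals, key=totals.get)

# Hypothetical scores for "The trophy doesn't fit in the brown suitcase because it is too big."
answer = aggregate_answers({
    "conceptnet":       {"trophy": 0.6, "suitcase": 0.4},
    "search_counts":    {"trophy": 0.7, "suitcase": 0.3},
    "narrative_chains": {"trophy": 0.5, "suitcase": 0.5},
    "sentiment":        {"trophy": 0.5, "suitcase": 0.5},
})  # -> "trophy"
```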
213

Terminology-based knowledge acquisition

Al-Jabir, Shaikha January 1999 (has links)
A methodology for knowledge acquisition from terminology databases is presented. The methodology outlines how the content of a terminology database can be mapped onto a knowledge base with a minimum of human intervention. Typically, terms are defined and elaborated by terminologists using sentences that have a common syntactic and semantic structure. It has been argued that in defining terms, terminologists use a local grammar and that this local grammar can be used to parse the definitions. The methodology has been implemented in a program called DEARSys (Definition Analysis and Representation System), which reads definition sentences and extracts new concepts and conceptual relations about the defined terms. The linguistic component of the system is a parser for the sublanguage of terminology definitions that analyses a definition into its logical form, which in turn is mapped onto a frame-based representation. The logical form is based on first-order logic (FOL) extended with untyped lambda calculus. Our approach is data-driven and domain independent; it has been applied to definitions from various domains. Experiments were conducted with human subjects to evaluate the information acquired by the system. The results of the preliminary evaluation were encouraging.
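As a rough illustration of the kind of mapping described (a definition sentence analysed into a frame-based representation), the sketch below handles only a simple genus-differentia pattern; it is not DEARSys, it skips the logical-form stage, and the example definition is invented.

```python
import re

def definition_to_frame(term, definition):
    """Map a genus-differentia style definition ('a <genus> that <differentia>') onto a tiny frame."""
    match = re.match(
        r"an?\s+(?P<genus>[\w\s-]+?)\s+(?:that|which)\s+(?P<diff>.+)",
        definition.strip().rstrip("."),
        flags=re.IGNORECASE,
    )
    if not match:
        return {"term": term, "definition": definition}   # fall back to the raw text
    return {
        "term": term,
        "is_a": match.group("genus").strip(),              # superordinate concept
        "attributes": [match.group("diff").strip()],        # distinguishing properties
    }

frame = definition_to_frame(
    "modem", "a device that converts digital signals into analogue signals for transmission")
# {'term': 'modem', 'is_a': 'device',
#  'attributes': ['converts digital signals into analogue signals for transmission']}
```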
214

Adaptive resonance theory : theory and application to synthetic aperture radar

Saddington, P. January 2002 (has links)
Artificial Neural Networks are massively parallel systems constructed from many simple processing elements called neurons. The neurons are connected via weights. This structure is inspired by the current understanding of how biological networks function. Since the 1980s, research into this field has exploded into the hive of activity that currently surrounds neural networks and intelligent systems. The work in this thesis is concerned with one particular artificial neural network: Adaptive Resonance Theory (ART). It is an unsupervised neural network that attempts to solve the stability-plasticity dilemma. The model is, however, limited by a few serious problems that restrict its use in real-life situations. In particular, the network's ability to cluster inputs consistently with the uncorrupted case is severely handicapped when the input is subject to even modest amounts of noise. The work detailed herein attempts to improve ART's behaviour on noisy inputs. Novel equations are developed and described that improve the network's performance when the system is subject to noisy inputs. One of the novel equations, affecting vigilance, makes a significant improvement over the originators' equations and can cope with 16% target noise before results fall to the same values as the standard equation. The novel work is tested using a real-life (not simulated) data set from the MSTAR database. Synthetic Aperture Radar targets are clustered and then subjected to noise before being presented to the network again. These data simulate a typical environment in which a clustering or classifying module would be needed for object recognition. Such a module could then be used in an Automatic Target Recognition (ATR) system. Once the problem is mitigated, Adaptive Resonance Theory neural networks could play important roles in ATR systems because of their low computational complexity and low memory requirements compared with other clustering techniques. Keywords: Adaptive Resonance Theory, clustering consistency, neural network, automatic target recognition, noisy inputs.
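For readers unfamiliar with ART, the following is a minimal fast-learning, ART-1 style clustering loop with a vigilance parameter. It is a simplified rendering of the standard published equations, not the novel equations developed in the thesis.

```python
import numpy as np

def art1_cluster(inputs, rho=0.7, alpha=0.001):
    """Fast-learning ART-1 style clustering of binary vectors with vigilance rho."""
    prototypes = []                      # one binary prototype per category
    labels = []
    for x in inputs:
        x = np.asarray(x, dtype=bool)
        # Category choice: prefer prototypes with large overlap relative to their size.
        order = sorted(range(len(prototypes)),
                       key=lambda j: -np.sum(x & prototypes[j]) / (alpha + prototypes[j].sum()))
        for j in order:
            match = np.sum(x & prototypes[j]) / max(x.sum(), 1)
            if match >= rho:                         # vigilance test: resonance
                prototypes[j] = x & prototypes[j]    # fast learning: shrink the prototype
                labels.append(j)
                break
        else:                                        # no resonance: recruit a new category
            prototypes.append(x.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes
```

Raising `rho` makes the vigilance test stricter, producing more, tighter categories; noise in the inputs is what tends to break this consistency, which is the behaviour the thesis targets.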
215

Theoretical and practical approaches to the modelling of crystal and molecular structures

Baldwin, Colin Richard January 1997 (has links)
A genetic algorithm has been proposed as a computational method for producing molecular mechanics force field parameters, using input data from the Cambridge Structural Database. The method has been applied initially to simple test data and to a coordination compound under various conditions and the results have been analysed in an attempt to determine the most suitable operating parameters. Finally, several possible approaches, both software and hardware, aimed towards improving the algorithm's performance, are discussed. Two approaches for extending the performance of a PC have been considered, namely upgrading the computational power and the graphics capabilities using state-of-the-art hardware solutions. Both of these features can be considered essential for crystal modelling. Conclusions have then been drawn regarding the applicability of these approaches to a modern, top-of-the-range PC. Finally, a variety of software modules are proposed, aimed at the 'engineering' of known crystal structures. Many of these techniques are graphical in nature, enabling the visualisation and manipulation of the inherent symmetry these systems display.
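As a sketch of the general approach (a genetic algorithm searching for parameters that minimise the disagreement between force-field predictions and reference structures from a database), the code below implements a minimal real-coded GA. The fitness function is left hypothetical, and nothing here reflects the thesis's specific encoding or operators.

```python
import random

def genetic_algorithm(fitness, n_params, bounds, pop_size=50, generations=200,
                      mutation_rate=0.1, mutation_scale=0.05):
    """Minimal real-coded GA: truncation selection, uniform crossover, Gaussian mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[: pop_size // 2]      # lower fitness = better fit
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            child = [g + random.gauss(0, mutation_scale) * (hi - lo)
                     if random.random() < mutation_rate else g
                     for g in child]                              # Gaussian mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Hypothetical fitness: squared error between geometries predicted by a force field
# using these parameters and reference geometries taken from a structural database.
```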
216

Metareasoning and Mental Simulation

Hamrick, Jessica B. 27 April 2018 (has links)
At any given moment, the mind needs to decide how to think about what, and for how long. The mind's ability to manage itself is one of the hallmarks of human cognition, and these meta-level questions are crucially important to understanding how cognition is so fluid and flexible across so many situations. In this thesis, I investigate the problem of cognitive resource management by focusing in particular on the domain of mental simulation. Mental simulation is a phenomenon in which people can perceive and manipulate objects and scenes in their imagination in order to make decisions, predictions, and inferences about the world. Importantly, if thinking is computation, then mental simulation is one particular type of computation analogous to having a rich, forward model of the world.

Given access to a model as rich and flexible as mental simulation, how should the mind use it? How does the mind infer anything from the outcomes of its simulations? How many resources should be allocated to running which simulations? When should such a rich forward model be used in the first place, in contrast to other types of computation such as heuristics or rules? Understanding the answers to these questions provides broad insight into people's meta-level reasoning because mental simulation is involved in almost every aspect of cognition, including perception, memory, planning, physical reasoning, language, social cognition, problem solving, scientific reasoning, and even creativity. Through a series of behavioral experiments combined with machine learning models, I show how people adaptively use their mental simulations to learn new things about the world; that they choose which simulations to run based on which they think will be more informative; and that they allocate their cognitive resources to spend less time on easy problems and more time on hard problems.
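As a toy illustration only (not the thesis's experiments or models) of allocating more computation to harder problems, the sketch below keeps drawing noisy simulation outcomes until the estimate is precise enough, so low-variance ("easy") problems stop early and high-variance ("hard") ones receive more samples.

```python
import random
import statistics

def simulate_until_confident(run_simulation, max_runs=100, se_threshold=0.05):
    """Sample outcomes until the standard error of the mean drops below a threshold."""
    outcomes = [run_simulation(), run_simulation()]
    while len(outcomes) < max_runs:
        se = statistics.stdev(outcomes) / len(outcomes) ** 0.5
        if se < se_threshold:
            break
        outcomes.append(run_simulation())
    return statistics.mean(outcomes), len(outcomes)

# An "easy" problem (low outcome variance) stops after a few simulations;
# a "hard" one (high variance) is allotted many more.
easy = simulate_until_confident(lambda: 1.0 + random.gauss(0, 0.01))
hard = simulate_until_confident(lambda: 1.0 + random.gauss(0, 0.5))
```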
217

Algorithmic Music Composition Using Linear Algebra

Yelkenci, Serhat 08 August 2017 (has links)
Sound, in all its forms, is a source of energy whose capabilities humankind is not yet fully aware of. Composition, the way of aggregating sounds into the form of music, is still an imperfectly understood methodology with many unknowns. The methodologies used by composers are generally seen as innate talent, something that cannot be used or shared by others. Yet, as with any other form of art, music can in fact be interpreted with mathematics and geometry. The focus of this thesis is to propose a generative algorithm for composing structured music pieces, using linear algebra as the mathematical language for the representation of music. By adopting linear algebra as the scientific framework, a practical data structure is obtained for analysis and manipulation. Instead of defining a single structure from a certain musical canon, which would limit the frame of music, the generative algorithm proposed here is capable of learning all kinds of musical structures through linear algebra operations. The algorithm is designed to build musical knowledge (influence) by analyzing music pieces and to receive a new melody as the inspirational component for producing new, unique and meaningful music pieces. Characteristic analysis features obtained from the analyzed pieces serve as constraints during the composition process. The proposed algorithm has been successful in generating unique and meaningful music pieces. The processing time of the algorithm varies with the complexity of the influential aspect. Yet the free nature of the generative algorithm and the capability of the matrix representation offer a practical link between unique and meaningful music creation and any other concept with a mathematical foundation.
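The thesis's own algorithm is not specified in the abstract; as a hedged sketch of what a matrix representation of melodic structure can look like, the code below learns a pitch-class transition matrix from an analyzed melody (the "influence") and samples a new melody from a seed note (the "inspiration"). This illustrates the matrix framing only and is not the proposed composition algorithm.

```python
import numpy as np

PITCHES = 12  # pitch classes C..B

def transition_matrix(melody):
    """Learn a row-stochastic pitch-class transition matrix from an analyzed melody."""
    M = np.zeros((PITCHES, PITCHES))
    for a, b in zip(melody[:-1], melody[1:]):
        M[a, b] += 1
    rows = M.sum(axis=1, keepdims=True)
    return np.where(rows > 0, M / np.maximum(rows, 1), 1.0 / PITCHES)

def generate(M, seed, length=16):
    """Generate a new melody by sampling successive notes from the transition matrix."""
    rng = np.random.default_rng(0)
    notes = [seed]
    for _ in range(length - 1):
        notes.append(int(rng.choice(PITCHES, p=M[notes[-1]])))
    return notes

# The "influence" gathered from analyzed pieces lives in M; the seed note plays
# the role of the inspirational component.
M = transition_matrix([0, 4, 7, 4, 0, 2, 4, 5, 7, 5, 4, 2, 0])
print(generate(M, seed=0))
```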
218

A Non-Parametric Perspective on the Analysis of Massive Networks

Costa, Thiago 01 May 2017 (has links)
This dissertation develops an inferential framework for a highly non-parametric class of network models called graphons, which are the limit objects of converging sequences in the theory of dense graph limits. The theory, introduced by Lovász and co-authors, uses structural properties of very large networks to describe a notion of convergence for sequences of dense graphs. Converging sequences define a limit which can be represented by a class of random graphs that preserve many properties of the networks in the sequence. These random graphs are intuitive and have a relatively simple mathematical representation, but they are very difficult to estimate due to their non-parametric nature. Our work, which develops scalable and consistent methods for estimating graphons, offers an algorithmic framework that can be used to unlock the potential of applications of this powerful theory. To estimate graphons we use a stochastic blockmodel approximation approach that defines a notion of similarity between vertices to cluster vertices and find the blocks. We show how to compute these similarity distances from a given graph and how to properly cluster the vertices of the graph in order to form the blocks. The method is non-parametric, i.e., it uses the data to choose a convenient number of clusters. Our approach requires a careful balance between the number of blocks created, which is associated with stochastic blockmodel approximation of the graphon, and the size of the clusters, which is associated with the estimation of the stochastic blockmodel parameters. We prove insightful properties regarding the clustering mechanism and the similarity distance, and we also work with important variations of the graphon model, including a sparser type of graphon. As an application of our framework, we use the stochastic blockmodel nature of our method to improve identification of treatment response with social interaction. We show how the graph structure provided by our algorithm can be explored to design optimal experiments for assessing social effect on treatment response. / Engineering and Applied Sciences - Applied Math
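A crude sketch of the stochastic blockmodel approximation idea follows: vertices are compared through their neighbourhood profiles, clustered, and the empirical block densities serve as the graphon estimate. Unlike the method described, this sketch fixes the number of blocks rather than choosing it from the data, and the particular distance and clustering choices are assumptions made for brevity.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def sba_graphon_estimate(A, n_blocks):
    """Crude stochastic-blockmodel approximation of a graphon from a simple graph's adjacency matrix."""
    # Distance between vertices = distance between their rows of A (neighbourhood profiles).
    dist = pdist(A.astype(float))
    labels = fcluster(linkage(dist, method="average"), n_blocks, criterion="maxclust") - 1
    B = np.zeros((n_blocks, n_blocks))
    for i in range(n_blocks):
        for j in range(n_blocks):
            block = A[np.ix_(labels == i, labels == j)]
            if i == j:                                   # skip the zero diagonal within a block
                m = block.shape[0]
                B[i, j] = block.sum() / max(m * (m - 1), 1)
            else:
                B[i, j] = block.mean() if block.size else 0.0
    return labels, B                                     # B[i, j] approximates the graphon on block (i, j)
```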
219

Real-Time Adaptive Data-Driven Perception for Anomaly Priority Scoring at Scale

Miraftabzadeh, Seyed Ali 29 December 2017 (has links)
With the aim of ultimately contributing to humanitarian response to disasters and violent events, detecting anomalies in daily human life is a crucial requirement for developing secure smart homes and smart communities in the context of smart cities. In early security systems, a team of security experts had to analyze a vast amount of surveillance data from a network of cameras, for instance to pick out patterns of human behavior identified as potentially harmful threats. Now, however, in the big data era, online excavation and interpretation of streamed zettabyte data requires two automated technologies: (1) intelligent models, to extract suspicious patterns and discover latent anomalies; and (2) agile systems, to take real-time action based on decision-making processes. As such, the two primary contributions of this dissertation are: (1) developing accurate intelligent models that perform with human-like precision, and (2) proposing sub-systems of smart city infrastructure that intimately incorporate these intelligent models.

For this dissertation, pattern recognition models with applications in real-time video analysis were developed based on four computer vision tasks: (1) identity recognition, (2) object detection, (3) gesture recognition, and (4) action recognition. Applications of these models include, but are not limited to, recognition of suspicious identities, active threats (life-threatening events such as bomb threats, civil unrest, criminal activity, earthquakes, evacuations, fires, and hazardous materials), and suspicious packages. To perform these tasks, the intent is to have the models emulate the processes that take place within a human brain, i.e. with a close resemblance to human neuro-processing, albeit in high-powered computational machines. Deep learning, the state-of-the-art concept in artificial intelligence, was the developmental basis for the proposed cognitive models.

In order to run efficiently, these computationally intensive models rely on the use of, and coordination between, high-throughput, high-performance, and many-task (parallel-task) computing-enabled machines that have a high level of computing performance compared to general-purpose computers. This variety of computational resources is served at scale in a cloud system, which acts as a central location with the core building blocks needed for compute, storage and networking. Nevertheless, detecting anomalies and then taking real-time actions demands a faster processing speed than is possible when communicating with cloud networks. It requires physical infrastructure that is closer to the edge, near the source of the data (the end-device); otherwise, when the data is centrally processed and stored, too much bandwidth is required. This edge-computing approach helps reduce latency for critical applications, lowers dependence on the cloud, and better manages the massive deluge of data being generated. In addition, security and privacy can be improved with edge computing by keeping sensitive data within the end-device. In this dissertation, these distributed and decentralized deep learning systems aimed at enabling smart city applications, spread throughout the end-device, edge, and cloud, are designed following the requirements of smart city infrastructure.
220

A unified framework for resource-bounded autonomous agents interacting with unknown environments

Ortega, Pedro Alejandro January 2011 (has links)
The aim of this thesis is to present a mathematical framework for conceptualizing and constructing adaptive autonomous systems under resource constraints. The first part of this thesis contains a concise presentation of the foundations of classical agency: namely the formalizations of decision making and learning. Decision making includes: (a) subjective expected utility (SEU) theory, the framework of decision making under uncertainty; (b) the maximum SEU principle to choose the optimal solution; and (c) its application to the design of autonomous systems, culminating in the Bellman optimality equations. Learning includes: (a) Bayesian probability theory, the theory for reasoning under uncertainty that extends logic; and (b) Bayes-Optimal agents, the application of Bayesian probability theory to the design of optimal adaptive agents. Then, two major problems of the maximum SEU principle are highlighted: (a) the prohibitive computational costs and (b) the need for the causal precedence of the choice of the policy. The second part of this thesis tackles the two aforementioned problems. First, an information-theoretic notion of resources in autonomous systems is established. Second, a framework for resource-bounded agency is introduced. This includes: (a) a maximum bounded SEU principle that is derived from a set of axioms of utility; (b) an axiomatic model of probabilistic causality, which is applied for the formalization of autonomous systems having uncertainty over their policy and environment; and (c) the Bayesian control rule, which is derived from the maximum bounded SEU principle and the model of causality, implementing a stochastic adaptive control law that deals with the case where autonomous agents are uncertain about their policy and environment.
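In its simplest finite-hypothesis form, the Bayesian control rule mentioned above can be read as a posterior-sampling loop: sample an environment hypothesis from the posterior, act with the policy that is optimal for that hypothesis, observe, and update. The sketch below is schematic only; `policies`, `likelihood` and `true_env` are hypothetical callables, and this is not the thesis's derivation.

```python
import random

def bayesian_control_rule(hypotheses, policies, likelihood, true_env, steps=100):
    """Posterior-sampling sketch: act as if a sampled hypothesis were the true environment."""
    posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior over hypotheses
    history = []
    for _ in range(steps):
        h = random.choices(list(posterior), weights=list(posterior.values()))[0]
        action = policies[h](history)            # policy that is optimal for hypothesis h
        obs = true_env(history, action)          # the actual, unknown environment responds
        history.append((action, obs))
        for k in posterior:                      # Bayes update on the new observation
            posterior[k] *= likelihood(k, history, action, obs)
        z = sum(posterior.values()) or 1.0
        posterior = {k: p / z for k, p in posterior.items()}
    return posterior
```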
