441

The development of an artificially intuitive reasoner

Sun, Yung Chien January 2010
This research is an exploration of the phenomenon of "intuition" in the context of artificial intelligence (AI). In this work, intuition was considered to be the human capacity to make decisions in situations where the available knowledge was usually low in quality: inconsistent and of varying levels of certainty. The objectives of this study were to characterize some of the aspects of human intuitive thought and to model these aspects in a computational approach.

This project entailed the development of a conceptual framework and a conceptual model and, based on these, a computer system with three general parts: (1) a rule induction module for establishing the knowledge base for the reasoner; (2) the intuitive reasoner, essentially a rule-based inference engine; (3) two learning approaches that could update the knowledge base over time so that the reasoner could make better predictions. A reference reasoner based on established data analysis methods was also constructed, as a benchmark for evaluating the intuitive reasoner.

The input and the rules drawn by the reasoner were allowed to be fuzzy, multi-valued, and of varying levels of certainty. A measure of the certainty level, Strength of Belief, was attached to each input as well as each rule. Rules for the intuitive reasoner were induced from only about 10% of the data available to the reference reasoner. Solutions were formulated through iterations of consolidating intermediate reasoning results, during which the Strength of Belief of corroborating intermediate results was combined.

The intuitive and the reference reasoners were tested on predicting the value (class) of 12 target variables chosen by the author, of which six were continuous and six were discrete. The intuitive reasoner developed in this study matched the performance of the reference reasoner for three of the six continuous target variables and achieved at least 70% of the accuracy of the reference reasoner for all six discrete target variables.

The results showed that the intuitive reasoner was able to induce rules from a sparse database and use those rules to make accurate predictions. This suggests that by consolidating numerous outputs from low-certainty rules, an "intuitive" reasoner can effectively perform prediction, or other computational tasks, on the basis of incomplete information of varying quality.
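The abstract does not spell out how corroborating Strengths of Belief are combined during consolidation. A minimal sketch of one plausible reading, assuming a noisy-OR style rule in which each corroborating result shrinks the remaining doubt; the function name and the rule itself are illustrative, not taken from the thesis:

```python
def combine_strengths(strengths):
    """Combine Strength of Belief values from corroborating
    intermediate results: each extra corroboration reduces
    the remaining doubt (noisy-OR style combination)."""
    doubt = 1.0
    for s in strengths:
        doubt *= (1.0 - s)
    return 1.0 - doubt

# Three low-certainty rules corroborating the same conclusion
# yield a much stronger combined belief:
print(combine_strengths([0.4, 0.5, 0.3]))  # 0.79
```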
442

Automatic basis function construction for reinforcement learning and approximate dynamic programming

Keller, Philipp Wilhelm January 2008
We address the problem of automatically constructing basis functions for linear approximation of the value function of a Markov decision process (MDP). Our work builds on results by Bertsekas and Castañon (1989), who proposed a method for automatically aggregating states to speed up value iteration. We propose to use neighbourhood component analysis, a dimensionality reduction technique created for supervised learning, to map a high-dimensional state space to a low-dimensional space, based on the Bellman error or on the temporal difference (TD) error. We then place basis functions in the lower-dimensional space; these are added as new features for the linear function approximator. This approach is applied to a high-dimensional inventory control problem and to a number of benchmark reinforcement learning problems.
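A minimal sketch of the feature-construction step this describes: states pass through a learned low-dimensional projection (assumed here to have been fitted with neighbourhood component analysis on Bellman or TD errors; the random matrix below is a stand-in), and radial basis functions placed in that space supply features for the linear approximator. All names and dimensions are illustrative:

```python
import numpy as np

def rbf_features(states, A, centers, width):
    """Project high-dimensional states through the learned map A,
    then evaluate radial basis functions placed at `centers` in
    the low-dimensional space. The resulting feature matrix feeds
    a linear value-function approximator."""
    z = states @ A.T                                      # (n, d_low)
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))               # (n, n_centers)

# Toy usage: 10-D states projected to 2-D, with 5 RBF centers.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 10))          # stand-in for the NCA projection
states = rng.normal(size=(100, 10))
centers = rng.normal(size=(5, 2))
phi = rbf_features(states, A, centers, width=1.0)
V_hat = phi @ rng.normal(size=5)      # linear value estimate per state
```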
443

Autonomic Performance Optimization with Application to Self-Architecting Software Systems

Ewing, John M. 11 July 2015
Service Oriented Architecture (SOA) is an emerging software engineering discipline that builds software systems and applications by connecting and integrating well-defined, distributed, reusable software service instances. SOA can speed development time and reduce costs by encouraging reuse, but this new service paradigm presents significant challenges. Many SOA applications are dependent upon service instances maintained by vendors and/or separate organizations. Applications and composed services using disparate providers typically demonstrate limited autonomy with contemporary SOA approaches. Availability may also suffer with the proliferation of possible points of failure; restoration of functionality often depends upon intervention by human administrators.

Autonomic computing is a set of technologies that enables self-management of computer systems. When applied to SOA systems, autonomic computing can provide automatic detection of faults and take restorative action. Additionally, autonomic computing techniques possess optimization capabilities that can leverage the features of SOA (e.g., loose coupling) to enable peak performance in the SOA system's operation. This dissertation demonstrates that autonomic computing techniques can help SOA systems maintain high levels of usefulness and usability.

This dissertation presents a centralized autonomic controller framework to manage SOA systems in dynamic service environments. The centralized autonomic controller framework can be enhanced through a second meta-optimization framework that automates the selection of optimization algorithms used in the autonomic controller. A third framework for autonomic meta-controllers can study, learn, adjust, and improve the optimization procedures of the autonomic controller at run-time. Within this framework, two different types of meta-controllers were developed. The Overall Best meta-controller tracks the overall performance of the different optimization procedures. Context Best meta-controllers attempt to determine the best optimization procedure for the current optimization problem. Three separate Context Best meta-controllers were implemented using different machine learning techniques: (1) K-Nearest Neighbor (KNN MC), (2) Support Vector Machines (SVM) trained offline (Offline SVM), and (3) SVM trained online (Online SVM).

A detailed set of experiments demonstrated the effectiveness and scalability of the approaches. Autonomic controllers of SOA systems successfully maintained performance on systems with 15, 25, 40, and 65 components. The Overall Best meta-controller successfully identified the best optimization technique and provided excellent performance at all levels of scale. Among the Context Best meta-controllers, the Online SVM meta-controller was tested on the 40-component system and performed better than the Overall Best meta-controller at a 95% confidence level. Evidence indicates that the Online SVM was successfully learning which optimization procedures were best applied to the optimization problems it encountered. The KNN MC and Offline SVM were less successful. The KNN MC struggled because the KNN algorithm does not account for the asymmetric cost of prediction errors. The Offline SVM was unable to predict the correct optimization procedure with sufficient accuracy; this was likely due to the challenge of building a relevant offline training set. The meta-optimization framework, which was tested on the 65-component system, successfully improved the optimization techniques used by the autonomic controller.

The meta-optimization and meta-controller frameworks described in this dissertation have broad applicability in autonomic computing and related fields. This dissertation also details a technique for measuring the overlap of two populations of points, establishes an approach for using penalty weights to address one-sided overfitting by SVM on asymmetric data sets, and develops a set of high-performance data structure and heuristic search templates for C++.
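The dissertation's use of penalty weights against one-sided overfitting on asymmetric data amounts to class-dependent misclassification costs. A minimal sketch of that idea via scikit-learn's class_weight mechanism, which is a standard way to express such penalties rather than the dissertation's own formulation; the data and the 1:5 ratio are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # stand-in problem features
y = (X[:, 0] > 1.0).astype(int)      # rare, costly-to-miss class

# Penalize errors on the rare class five times more heavily, so the
# SVM does not overfit to the dominant class (ratio is illustrative).
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 5.0}).fit(X, y)
```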
444

Evolving effective micro behaviors for real-time strategy games

Liu, Siming 16 July 2015
Real-time strategy games have become a new frontier of artificial intelligence research. Advances in real-time strategy game AI, as with chess and checkers before, will significantly advance the state of the art in AI research. This thesis investigates the use of heuristic search algorithms to generate effective micro behaviors in combat scenarios for real-time strategy games. Macro and micro management are two key aspects of real-time strategy games. While good macro helps a player collect more resources and build more units, good micro helps a player win skirmishes against equal numbers of opponent units, or win even when outnumbered. In this research, we use influence maps and potential fields as a basis representation to evolve micro behaviors. We first compare genetic algorithms against two types of hill climbers for generating competitive unit micro management. Second, we investigate the use of case-injected genetic algorithms to quickly and reliably generate high-quality micro behaviors. We then compactly encode micro behaviors, including influence maps, potential fields, and reactive control, into fourteen parameters and use genetic algorithms to search for a complete micro bot, ECSLBot. We compare the performance of our ECSLBot with two state-of-the-art bots, UAlbertaBot and Nova, on several skirmish scenarios in the popular real-time strategy game StarCraft. The results show that the ECSLBot tuned by genetic algorithms outperforms UAlbertaBot and Nova in kiting efficiency, target selection, and fleeing. In addition, the same approach works to create competitive micro behaviors in another game, SeaCraft. Using parallelized genetic algorithms to evolve parameters in SeaCraft, we are able to speed up the evolutionary process from twenty-one hours to nine minutes. We believe this work provides evidence that genetic algorithms and our representation may be a viable approach to creating effective micro behaviors for winning skirmishes in real-time strategy games.
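A minimal sketch of evolving a fourteen-parameter micro behavior with a genetic algorithm. The fitness function here is a toy stand-in; in the thesis it would score simulated skirmishes (damage dealt, units surviving, and so on), and the operators and rates below are illustrative:

```python
import random

N_PARAMS = 14   # influence-map, potential-field, and reactive-control genes

def fitness(params):
    """Placeholder objective; the thesis would run skirmish
    simulations (e.g., in StarCraft) and score the outcome."""
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=50, generations=100, mut_rate=0.1):
    pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_PARAMS)   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_PARAMS):             # per-gene mutation
                if random.random() < mut_rate:
                    child[i] = random.random()
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best_params = evolve()   # candidate parameter vector for the micro bot
```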
445

Reasoning with exceptions : an inheritance based approach

Al-Asady, Raad January 1993
No description available.
446

Weaver - a hybrid artificial intelligence laboratory for modelling complex, knowledge- and data-poor domains

Hare, Matthew Peter January 1999
Weaver is a hybrid knowledge discovery environment which fills a current gap in Artificial Intelligence (AI) applications, namely tools designed for the development and exploration of existing knowledge in complex, knowledge- and data-poor domains. Such domains are typified by incomplete and conflicting knowledge, and by data which are very hard to collect. Without the support of robust domain theory, many experimental and modelling assumptions have to be made whose impact on field work and model design is uncertain or simply unknown. Compositional modelling, experimental simulation, inductive learning, and experimental reformulation tools are integrated within a methodology analogous to Popper's scientific method of critical discussion. The purpose of Weaver is to provide a 'laboratory' environment in which a scientist can develop domain theory through an iterative process of in silico experimentation, theory proposal, criticism, and theory refinement. After refinement within Weaver, this domain theory may be used to guide field work and model design. Weaver is a pragmatic response to tool development in complex, knowledge- and data-poor domains. In the compositional modelling tool, a domain-independent algorithm for dynamic multiple scale bridging has been developed. The multiple perspective simulation tool provides an object class library for the construction of multiple simulations that can be flexibly and easily altered. The experimental reformulator uses a simple domain-independent heuristic search to help guide the scientist in selecting the experimental simulations that need to be carried out in order to critically test and refine the domain theory. An example of Weaver's use in an ecological domain is provided in the exploration of the possible causes of population cycles in red grouse (Lagopus lagopus scoticus). The problem of AI tool validation in complex, knowledge- and data-poor domains is also discussed.
447

A process-oriented approach to representing and reasoning about naive physiology

Arana Landín, Ines January 1995
This thesis presents the RAP system: a Reasoner About Physiology. RAP consists of two modules: knowledge representation and reasoning. The knowledge representation module describes commonsense anatomy and physiology at various levels of abstraction and detail. This representation is broad (it covers several physiological systems), dense (the number of relationships between anatomical and physiological elements is high) and uniform (the same kind of formalism is used to represent anatomy, physiology and their interrelationships). These features lead to a 'natural' representation of naive physiology which is, therefore, easy to understand and use. The reasoning module performs two tasks: (1) it infers the behaviour of a complex physiological process using the behaviours of its subprocesses and the relationships between them; (2) it reasons about the effect of introducing a fault in the model. In order to reason about the behaviour of a complex process, RAP uses a mechanism which consists of the following tasks: (i) understanding how subprocesses behave; (ii) comprehending how these subprocesses affect each other's behaviours; (iii) "aggregating" these behaviours together to obtain the behaviour of the top-level process; (iv) giving that process a temporal context in which to act. RAP uses limited commonsense knowledge about faults to reason about the effect of a fault in the model. It discovers new processes which originate as a consequence of a fault and detects processes which misbehave due to a fault. The effects of both newly generated and misbehaving processes are then propagated throughout the model to obtain the overall effect of the fault. RAP represents and reasons about naive physiology and is a step forward in the development of systems which use commonsense knowledge.
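A minimal sketch of the fault-propagation step described above, read as a traversal over a graph of process influences; the graph, names, and traversal are illustrative stand-ins, not RAP's actual mechanism:

```python
def propagate_fault(influences, faulty):
    """Collect every process affected by a fault by following
    influence links outward from the faulty process."""
    affected, frontier = {faulty}, [faulty]
    while frontier:
        p = frontier.pop()
        for q in influences.get(p, ()):
            if q not in affected:
                affected.add(q)
                frontier.append(q)
    return affected

# Toy physiology: a pumping fault propagates downstream.
influences = {"heart_pumping": ["blood_flow"],
              "blood_flow": ["oxygen_delivery"]}
print(propagate_fault(influences, "heart_pumping"))
# -> {'heart_pumping', 'blood_flow', 'oxygen_delivery'}
```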
448

Nonmonotonic inheritance of class membership

Woodhead, David A. January 1990
This thesis describes a formal analysis of nonmonotonic inheritance. The need for such an understanding of inheritance has been apparent since multiple inheritance and exceptions were first mixed in the same representation, with the result that the meaning of an inheritance network was no longer clear. Many attempts to deal with the problems associated with nonmonotonic multiple inheritance appeared in the literature but, probably owing to the lack of clear semantics, there was no general agreement on how many of the standard examples should be handled. This thesis attempts to resolve these problems by presenting a framework for a family of path-based inheritance reasoners which allows the consequences of design decisions to be explored. Many of the major theorems are therefore proved without the need to make any commitment as to how conflicts between nonmonotonic chains of reasoning are to be resolved. In particular, it is shown that consistent sets of conclusions, known as expansions, exist for a wide class of networks. When commitment is made to a method of choosing between conflicting arguments, particular inheritance systems are produced. The systems described in this thesis can be divided into three classes. The simplest of these, in which an arbitrary choice is made between conflicting arguments, is shown to be very closely related to default logic. The other two classes, each of which contains four systems, are the decoupled and coupled inheritance systems, which use specificity as a guide to choosing between conflicting arguments. In a decoupled system the results relating to a particular node are not affected in any way by derived results concerning other nodes in the inheritance network, whereas in a coupled system decisions in the face of ambiguity are linked to produce expansions which are more intuitively acceptable as a consistent view of the world. A number of results concerning the relationship between these systems are given. In particular, it is shown that the process of coupling will not affect the results which lie in the intersection of the expansions produced for a given network.
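A minimal sketch of the kind of conflict these systems resolve, using the classic penguin example and a crude specificity rule (a direct link about a property defeats anything inherited through a longer chain). This is one design choice rendered in toy form, not the thesis's framework:

```python
# Links are (source, destination, polarity); polarity False = "is not a".
links = [("penguin", "bird", True),
         ("bird", "flier", True),
         ("penguin", "flier", False)]   # the exception

def inherits(node, prop, links):
    """Path-based check: a direct (more specific) link about `prop`
    defeats conclusions inherited through longer chains."""
    for src, dst, pos in links:          # direct links win first
        if src == node and dst == prop:
            return pos
    for src, dst, pos in links:          # otherwise climb the hierarchy
        if src == node and pos:
            verdict = inherits(dst, prop, links)
            if verdict is not None:
                return verdict
    return None

print(inherits("penguin", "flier", links))  # False: exception beats bird->flier
print(inherits("bird", "flier", links))     # True
```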
449

FGP : a genetic programming based tool for financial forecasting

Li, Jin January 2000
No description available.
450

Discretization and defragmentation for decision tree learning

Ho, Colin Kok Meng January 1999
No description available.
