151

A new method for parametric surface registration

Tucker, Thomas Marshall 08 1900 (has links)
No description available.
152

Analysis of trumpet tone quality using machine learning and audio feature selection

Knight, Trevor January 2012 (has links)
This work examines which audio features, the components of recorded sound, are most relevant to trumpet tone quality, using classification and feature selection. A total of 10 trumpet players with a variety of experience levels were recorded playing the same notes under the same conditions. Twelve musical instrumentalists listened to the notes and rated the tone quality on a seven-point Likert scale, providing training data for classification. The initial experiment verified that human raters agree statistically on tone quality and that a support vector machine (SVM) classifier can be trained to identify different levels of tone quality, reaching 72% classification accuracy with the notes split into two classes and 46% with seven classes. In the main experiment, different feature selection algorithms were applied to the 164 possible audio features to select high-performing subsets. The baseline set of all 164 audio features obtained a classification accuracy of 58.9% with seven classes, tested with cross-validation. Ranking, sequential floating forward selection, and genetic search produced accuracies of 43.8%, 53.6%, and 59.6% with 20, 21, and 74 features, respectively. Future work in this field could focus on more nuanced interpretations of tone quality or on the applicability to other instruments.
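A minimal sketch of the pipeline this abstract describes — train an SVM on rater-derived labels, then search the audio features for a high-performing subset — assuming scikit-learn. The data, labels, and parameters below are hypothetical stand-ins, and scikit-learn implements plain sequential forward selection rather than the floating (SFFS) variant used in the thesis:

```python
# Hypothetical illustration of SVM classification with feature selection.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 164))    # 200 recorded notes x 164 audio features
y = rng.integers(0, 7, size=200)   # seven Likert-derived tone-quality classes

svm = SVC(kernel="rbf")

# Baseline: all 164 features, scored with cross-validation.
baseline = cross_val_score(svm, X, y, cv=5).mean()

# Forward selection of a 20-feature subset (non-floating, unlike SFFS).
sfs = SequentialFeatureSelector(svm, n_features_to_select=20,
                                direction="forward", cv=5)
X_subset = sfs.fit_transform(X, y)
subset_score = cross_val_score(svm, X_subset, y, cv=5).mean()

print(f"baseline: {baseline:.3f}, 20-feature subset: {subset_score:.3f}")
```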
153

Robust decision making and its applications in machine learning

Xu, Huan January 2009 (has links)
Decision making, formulated as finding a strategy that maximizes a utility function, depends critically on knowing the problem parameters precisely. The obtained strategy can be highly sub-optimal and/or infeasible when parameters are subject to uncertainty, a typical situation in practice. Robust optimization, and more generally robust decision making, addresses this issue by treating uncertain parameters as an arbitrary element of a pre-defined set and finding solutions via a worst-case analysis. In this thesis we contribute to two closely related fields of robust decision making. First, we address two limitations of robust decision making, namely a lack of theoretical justification and conservatism in sequential decision making. Specifically, we provide an axiomatic justification of robust optimization based on the MaxMin Expected Utility framework from decision theory. Furthermore, we propose three less conservative decision criteria for sequential decision-making tasks: (1) In uncertain Markov decision processes we propose an alternative formulation of parameter uncertainty -- the nested-set structured parameter uncertainty -- and find the strategy that achieves maxmin expected utility, mitigating the conservatism of standard robust Markov decision processes. (2) We investigate uncertain Markov decision processes where each strategy is evaluated comparatively by its gap to the optimum value; two formulations, minimax regret and a mean-variance tradeoff of the regret, are proposed and their computational cost studied. (3) We propose a novel Kalman filter design that trades off likely performance against robustness under parameter uncertainty. Second, we apply robust decision making to machine learning, both theoretically and algorithmically. Specifically, on the theoretical front, we show that the concept of robustness is essential to "successful" learning.
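As a worked illustration of the worst-case analysis described above, the standard robust Bellman recursion for an uncertain MDP (a textbook formulation, not the thesis's nested-set variant) is:

```latex
V(s) \;=\; \max_{a \in A}\; \min_{p \,\in\, \mathcal{P}(s,a)}
\Bigl[ r(s,a) + \gamma \sum_{s'} p(s' \mid s,a)\, V(s') \Bigr],
```

where $\mathcal{P}(s,a)$ is the pre-defined uncertainty set of transition probabilities. The inner minimization is what makes the standard formulation conservative; the nested-set structure proposed in the thesis is aimed at relaxing it.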
154

The development of an artificially intuitive reasoner

Sun, Yung Chien January 2010 (has links)
This research is an exploration of the phenomenon of "intuition" in the context of artificial intelligence (AI). In this work, intuition was considered as the human capacity to make decisions in situations where the available knowledge is typically of low quality: inconsistent and of varying levels of certainty. The objectives of this study were to characterize some aspects of human intuitive thought and to model these aspects in a computational approach.

The project entailed the development of a conceptual framework and a conceptual model, and, based on these, a computer system with three general parts: (1) a rule induction module for establishing the knowledge base for the reasoner; (2) the intuitive reasoner, essentially a rule-based inference engine; (3) two learning approaches that update the knowledge base over time so the reasoner makes better predictions. A reference reasoner based on established data analysis methods was also constructed, as a benchmark for evaluating the intuitive reasoner.

The input and the rules drawn by the reasoner were allowed to be fuzzy, multi-valued, and of varying levels of certainty. A measure of the certainty level, Strength of Belief, was attached to each input as well as each rule. Rules for the intuitive reasoner were induced from only about 10% of the data available to the reference reasoner. Solutions were formulated through iterations of consolidating intermediate reasoning results, during which the Strength of Belief of corroborating intermediate results was combined.

The intuitive and the reference reasoners were tested on predicting the value (class) of 12 target variables chosen by the author, six continuous and six discrete. The intuitive reasoner developed in this study matched the performance of the reference reasoner for three of the six continuous target variables and achieved at least 70% of the accuracy of the reference reasoner for all six discrete target variables.

The results showed that the intuitive reasoner was able to induce rules from a sparse database and use those rules to make accurate predictions. This suggests that by consolidating numerous outputs from low-certainty rules, an "intuitive" reasoner can effectively perform prediction, or other computational tasks, on the basis of incomplete information of varying quality.
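The abstract does not specify how corroborating Strength of Belief values are combined; a noisy-OR style rule is one plausible reading, sketched below with hypothetical conclusions and values:

```python
# Hypothetical sketch: consolidating corroborating intermediate results by
# combining their Strength of Belief values (in [0, 1]) noisy-OR style.
from collections import defaultdict

def combine(strengths):
    """Noisy-OR: corroborating evidence pushes combined belief toward 1."""
    disbelief = 1.0
    for s in strengths:
        disbelief *= (1.0 - s)
    return 1.0 - disbelief

def consolidate(intermediate_results):
    """Group (conclusion, strength) pairs and combine corroborating ones."""
    by_conclusion = defaultdict(list)
    for conclusion, strength in intermediate_results:
        by_conclusion[conclusion].append(strength)
    return {c: combine(ss) for c, ss in by_conclusion.items()}

# Three low-certainty rules corroborate "high_yield"; one supports "low_yield".
results = [("high_yield", 0.4), ("high_yield", 0.5),
           ("high_yield", 0.3), ("low_yield", 0.6)]
print(consolidate(results))  # high_yield -> 1 - (0.6*0.5*0.7) = 0.79
```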
155

Automatic basis function construction for reinforcement learning and approximate dynamic programming

Keller, Philipp Wilhelm January 2008 (has links)
We address the problem of automatically constructing basis functions for linear approximation of the value function of a Markov decision process (MDP). Our work builds on results by Bertsekas and Castañon (1989), who proposed a method for automatically aggregating states to speed up value iteration. We propose to use neighbourhood component analysis, a dimensionality reduction technique created for supervised learning, to map a high-dimensional state space to a low-dimensional space, based on the Bellman error or on the temporal difference (TD) error. We then place basis functions in the lower-dimensional space; these are added as new features for the linear function approximator. This approach is applied to a high-dimensional inventory control problem and to a number of benchmark reinforcement learning problems.
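For context, the two error signals the abstract mentions have standard textbook definitions (stated here for reference, not taken from the thesis). With a linear value approximation on features $\phi(s)$:

```latex
\hat{V}(s) = \phi(s)^{\top} w, \qquad
\mathrm{BE}(s) = \max_{a}\Bigl[ r(s,a)
  + \gamma \sum_{s'} P(s' \mid s,a)\,\hat{V}(s') \Bigr] - \hat{V}(s), \qquad
\delta_t = r_{t+1} + \gamma \hat{V}(s_{t+1}) - \hat{V}(s_t).
```

The Bellman error $\mathrm{BE}(s)$ requires the model, while the TD error $\delta_t$ is computed from sampled transitions; either can drive the dimensionality reduction step.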
156

Reasoning with exceptions : an inheritance based approach

Al-Asady, Raad January 1993 (has links)
No description available.
157

Weaver - a hybrid artificial intelligence laboratory for modelling complex, knowledge- and data-poor domains

Hare, Matthew Peter January 1999 (has links)
Weaver is a hybrid knowledge discovery environment which fills a current gap in Artificial Intelligence (AI) applications, namely tools designed for the development and exploration of existing knowledge in complex, knowledge- and data-poor domains. Such domains are typified by incomplete and conflicting knowledge, and by data which are very hard to collect. Without the support of robust domain theory, many experimental and modelling assumptions have to be made whose impact on field work and model design is uncertain or simply unknown. Compositional modelling, experimental simulation, inductive learning, and experimental reformulation tools are integrated within a methodology analogous to Popper's scientific method of critical discussion. The purpose of Weaver is to provide a 'laboratory' environment in which a scientist can develop domain theory through an iterative process of in silico experimentation, theory proposal, criticism, and theory refinement. After refinement within Weaver, this domain theory may be used to guide field work and model design. Weaver is a pragmatic response to tool development in complex, knowledge- and data-poor domains. In the compositional modelling tool, a domain-independent algorithm for dynamic multiple scale bridging has been developed. The multiple perspective simulation tool provides an object class library for the construction of multiple simulations that can be flexibly and easily altered. The experimental reformulator uses a simple domain-independent heuristic search to help guide the scientist in selecting the experimental simulations that need to be carried out in order to critically test and refine the domain theory. An example of Weaver's use in an ecological domain is provided in the exploration of the possible causes of population cycles in red grouse (Lagopus lagopus scoticus). The problem of AI tool validation in complex, knowledge- and data-poor domains is also discussed.
158

A process-oriented approach to representing and reasoning about naive physiology

Arana Landín, Ines January 1995 (has links)
This thesis presents the RAP system: a Reasoner About Physiology. RAP consists of two modules: knowledge representation and reasoning. The knowledge representation module describes commonsense anatomy and physiology at various levels of abstraction and detail. This representation is broad (it covers several physiological systems), dense (the number of relationships between anatomical and physiological elements is high) and uniform (the same kind of formalism is used to represent anatomy, physiology and their interrelationships). These features lead to a 'natural' representation of naive physiology which is, therefore, easy to understand and use. The reasoning module performs two tasks: 1) it infers the behaviour of a complex physiological process using the behaviours of its subprocesses and the relationships between them; 2) it reasons about the effect of introducing a fault into the model. In order to reason about the behaviour of a complex process, RAP uses a mechanism which consists of the following tasks: (i) understanding how subprocesses behave; (ii) comprehending how these subprocesses affect each other's behaviours; (iii) "aggregating" these behaviours together to obtain the behaviour of the top-level process; (iv) giving that process a temporal context in which to act. RAP uses limited commonsense knowledge about faults to reason about the effect of a fault in the model. It discovers new processes which originate as a consequence of a fault and detects processes which misbehave due to a fault. The effects of both newly generated and misbehaving processes are then propagated throughout the model to obtain the overall effect of the fault. RAP represents and reasons about naive physiology and is a step forward in the development of systems which use commonsense knowledge.
159

Nonmonotonic inheritance of class membership

Woodhead, David A. January 1990 (has links)
This thesis describes a formal analysis of nonmonotonic inheritance. The need for such an understanding of inheritance has been apparent since multiple inheritance and exceptions were mixed in the same representation, with the result that the meaning of an inheritance network was no longer clear. Many attempts to deal with the problems associated with nonmonotonic multiple inheritance appeared in the literature but, probably due to the lack of clear semantics, there was no general agreement on how many of the standard examples should be handled. This thesis attempts to resolve these problems by presenting a framework for a family of path-based inheritance reasoners which allows the consequences of design decisions to be explored. Many of the major theorems are therefore proved without any commitment as to how conflicts between nonmonotonic chains of reasoning are to be resolved. In particular, it is shown that consistent sets of conclusions, known as expansions, exist for a wide class of networks. When a commitment is made to a method of choosing between conflicting arguments, particular inheritance systems are produced. The systems described in this thesis can be divided into three classes. The simplest of these, in which an arbitrary choice is made between conflicting arguments, is shown to be very closely related to default logic. The other classes, each of which contains four systems, are the decoupled and coupled inheritance systems, which use specificity as a guide to choosing between conflicting arguments. In a decoupled system the results relating to a particular node are not affected in any way by derived results concerning other nodes in the inheritance network, whereas in a coupled system decisions in the face of ambiguity are linked to produce expansions which are more intuitively acceptable as a consistent view of the world. A number of results concerning the relationships between these systems are given. In particular, it is shown that the process of coupling will not affect the results which lie in the intersection of the expansions produced for a given network.
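A toy illustration of the specificity principle the abstract refers to, in the spirit of path-based inheritance (this encoding and resolution strategy are illustrative only, not the thesis's formal framework, and the sketch handles single chains rather than the conflicting multiple-inheritance paths the thesis analyses):

```python
# Classic Tweety example: a more specific class (penguin) overrides the
# default inherited from a more general one (bird).
is_a = {"tweety": "penguin", "penguin": "bird"}
defaults = {"bird": {"flies": True}, "penguin": {"flies": False}}

def infer(node, prop):
    """Walk up the is-a chain; the first (most specific) default wins."""
    current = node
    while current is not None:
        if prop in defaults.get(current, {}):
            return defaults[current][prop]
        current = is_a.get(current)
    return None  # no conclusion derivable for this property

print(infer("tweety", "flies"))  # False: 'penguin' is more specific than 'bird'
```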
160

FGP : a genetic programming based tool for financial forecasting

Li, Jin January 2000 (has links)
No description available.
