About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
281

Introspective multistrategy learning: constructing a learning strategy under reasoning failure

Cox, Michael Thomas 05 1900
No description available.
282

Spatial based learning force controller for a robotic manipulator

Heil, Phillip J. 08 1900
No description available.
283

A new method for parametric surface registration

Tucker, Thomas Marshall 08 1900
No description available.
284

Analysis of trumpet tone quality using machine learning and audio feature selection

Knight, Trevor January 2012
This work examines which audio features, the components of recorded sound, are most relevant to trumpet tone quality, using classification and feature selection. A total of 10 trumpet players with a variety of experience levels were recorded playing the same notes under the same conditions. Twelve musical instrumentalists listened to the notes and rated the tone quality on a seven-point Likert scale, providing training data for classification. The initial experiment verified that there is statistical agreement between human raters on tone quality and that a support vector machine (SVM) classifier can be trained to identify different levels of tone quality, reaching 72% classification accuracy with the notes split into two classes and 46% with seven classes. In the main experiment, different types of feature selection algorithms were applied to the 164 candidate audio features to select high-performing subsets. The baseline set of all 164 audio features obtained a classification accuracy of 58.9% on seven classes under cross-validation. Ranking, sequential floating forward selection, and genetic search produced accuracies of 43.8%, 53.6%, and 59.6% with 20, 21, and 74 features, respectively. Future work in this field could focus on more nuanced interpretations of tone quality or on applicability to other instruments.
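As a rough illustration of the setup this abstract describes (an SVM classifier over audio features, with a ranking-style feature-selection pass evaluated by cross-validation), here is a minimal sketch in Python. The synthetic data and the use of scikit-learn are assumptions made to keep the example self-contained; this is not the thesis code.

```python
# Sketch only: X stands in for a matrix of 164 extracted audio features per
# recorded note, y for the 7-point tone-quality ratings from human listeners.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 164))      # placeholder for 164 audio features
y = rng.integers(1, 8, size=100)     # placeholder for 7-class tone ratings

# Baseline: SVM over all 164 features, accuracy under cross-validation.
baseline = make_pipeline(StandardScaler(), SVC())
print("all features:", cross_val_score(baseline, X, y, cv=5).mean())

# Ranking-style selection: keep the k highest-scoring features, then classify.
ranked = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), SVC())
print("top-20 features:", cross_val_score(ranked, X, y, cv=5).mean())
```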
285

Robust decision making and its applications in machine learning

Xu, Huan January 2009
Decision making, formulated as finding a strategy that maximizes a utility function, depends critically on knowing the problem parameters precisely. The obtained strategy can be highly sub-optimal and/or infeasible when parameters are subject to uncertainty, a typical situation in practice. Robust optimization, and more generally robust decision making, addresses this issue by treating uncertain parameters as an arbitrary element of a pre-defined set and computing solutions based on a worst-case analysis. In this thesis we contribute to two closely related fields of robust decision making. First, we address two limitations of robust decision making: a lack of theoretical justification, and conservatism in sequential decision making. Specifically, we provide an axiomatic justification of robust optimization based on the MaxMin Expected Utility framework from decision theory. Furthermore, we propose three less conservative decision criteria for sequential decision making tasks: (1) In uncertain Markov decision processes we propose an alternative formulation of parameter uncertainty, the nested-set structured parameter uncertainty, and find the strategy that achieves maxmin expected utility, mitigating the conservatism of standard robust Markov decision processes. (2) We investigate uncertain Markov decision processes where each strategy is evaluated comparatively by its gap to the optimum value; two formulations, minimax regret and a mean-variance tradeoff of the regret, are proposed and their computational cost studied. (3) We propose a novel Kalman filter design based on trading off likely performance against robustness under parameter uncertainty. Second, we apply robust decision making to machine learning, both theoretically and algorithmically. On the theoretical front, we show that the concept of robustness is essential to "successful" learning.
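The worst-case treatment of parameter uncertainty described above can be pictured with a toy robust value iteration, where nature picks the least favorable transition model from an uncertainty set. Note this sketches the standard robust (maxmin) formulation, not the nested-set or regret-based criteria the thesis proposes, and the tiny MDP with a finite set of candidate models is an assumption for illustration.

```python
# Robust value iteration on a 3-state, 2-action MDP whose transition model
# is only known to lie in a finite set of candidates P[k, a, s, s'].
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
R = np.array([[0.0, 1.0],
              [0.5, 0.0],
              [1.0, 0.2]])              # immediate reward R[s, a]

rng = np.random.default_rng(0)
P = rng.random((4, n_actions, n_states, n_states))
P /= P.sum(axis=-1, keepdims=True)      # normalize rows into distributions

V = np.zeros(n_states)
for _ in range(500):
    EV = np.einsum('kast,t->kas', P, V)  # expected next value, per model k
    Q = R.T[None, :, :] + gamma * EV     # Q[k, a, s]
    V_new = Q.min(axis=0).max(axis=0)    # worst case over models, best action
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print("robust state values:", V)
```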
286

An artificial intelligence language to describe extended procedural networks

Merlo, Ettore January 1989
Speaker independence and large-lexicon access are still two of the greatest problems in automatic speech recognition. Cognitive and information-theory approaches try to solve the recognition problem by proceeding in almost opposite directions: the former relies on knowledge representation, reasoning, and perceptual analysis, while the latter is generally based on highly numerical and mathematical algorithms.

Progress arises from the integration of these two approaches. Artificial intelligence techniques are often used in the cognitive approach, but they usually lack sophisticated numerical support. The Extended Procedural Network constitutes a general AI framework that supports powerful numerical strategies, including stochastic techniques.

The model has been tested on difficult problems in speech recognition, including speaker-independent letter and digit recognition, speaker-independent vowel and diphthong recognition, and access to a large lexicon. Various experiments and comparisons have been run on a large number of speakers, and the results are reported. A discussion of further research directions is provided.
287

The development of an artificially intuitive reasoner

Sun, Yung Chien January 2010
This research is an exploration of the phenomenon of "intuition" in the context of artificial intelligence (AI). In this work, intuition was considered as the human capacity to make decisions in situations where the available knowledge was usually low in quality: inconsistent and of varying levels of certainty. The objectives of this study were to characterize some of the aspects of human intuitive thought and to model these aspects in a computational approach.

This project entailed the development of a conceptual framework and a conceptual model, and, based on these, a computer system with three general parts: (1) a rule induction module for establishing the knowledge base of the reasoner; (2) the intuitive reasoner, essentially a rule-based inference engine; and (3) two learning approaches that could update the knowledge base over time so that the reasoner makes better predictions. A reference reasoner based on established data analysis methods was also constructed, as a benchmark for evaluating the intuitive reasoner.

The inputs and the rules induced by the reasoner were allowed to be fuzzy, multi-valued, and of varying levels of certainty. A measure of the certainty level, Strength of Belief, was attached to each input as well as each rule. Rules for the intuitive reasoner were induced from only about 10% of the data available to the reference reasoner. Solutions were formulated through iterations of consolidating intermediate reasoning results, during which the Strength of Belief of corroborating intermediate results was combined.

The intuitive and the reference reasoners were tested on predicting the value (class) of 12 target variables chosen by the author, six continuous and six discrete. The intuitive reasoner developed in this study matched the performance of the reference reasoner for three of the six continuous target variables and achieved at least 70% of the accuracy of the reference reasoner for all six discrete target variables.

The results showed that the intuitive reasoner was able to induce rules from a sparse database and use those rules to make accurate predictions. This suggests that by consolidating numerous outputs from low-certainty rules, an "intuitive" reasoner can effectively perform prediction, or other computational tasks, on the basis of incomplete information of varying quality.
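One way to picture the consolidation of Strength of Belief across corroborating rules is the small sketch below. The probabilistic-OR combination rule (as used in classic certainty-factor systems) and the example rules themselves are assumptions for illustration; the abstract does not give the thesis's exact formulas.

```python
# Toy rule-based inference with uncertain rules: each firing rule's support is
# discounted by the input's certainty, and corroborating conclusions combine.
from collections import defaultdict

rules = [
    # (condition on inputs, conclusion, rule Strength of Belief) -- hypothetical
    (lambda x: x["soil_moisture"] == "low", "irrigate", 0.6),
    (lambda x: x["forecast"] == "dry",      "irrigate", 0.5),
    (lambda x: x["crop_stage"] == "mature", "harvest",  0.7),
]

def infer(inputs, input_sob):
    belief = defaultdict(float)
    for condition, conclusion, rule_sob in rules:
        if condition(inputs):
            support = rule_sob * input_sob
            # Corroborating results combine like independent evidence
            # (probabilistic OR): new = old + support - old * support.
            b = belief[conclusion]
            belief[conclusion] = b + support - b * support
    return dict(belief)

print(infer({"soil_moisture": "low", "forecast": "dry", "crop_stage": "young"},
            input_sob=0.9))   # "irrigate" ends up more strongly believed
```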
288

Automatic basis function construction for reinforcement learning and approximate dynamic programming

Keller, Philipp Wilhelm January 2008
We address the problem of automatically constructing basis functions for linear approximation of the value function of a Markov decision process (MDP). Our work builds on results by Bertsekas and Castañon (1989), who proposed a method for automatically aggregating states to speed up value iteration. We propose to use neighbourhood component analysis, a dimensionality reduction technique created for supervised learning, to map a high-dimensional state space to a low-dimensional space, based on the Bellman error or on the temporal difference (TD) error. We then place basis functions in the lower-dimensional space and add them as new features for the linear function approximator. This approach is applied to a high-dimensional inventory control problem and to a number of benchmark reinforcement learning problems.
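A rough sketch of the pipeline this abstract outlines: reduce the state space, place radial basis functions in the low-dimensional space where the TD error is large, and fit linear weights over the new features. PCA stands in here for neighbourhood component analysis, and all data are synthetic, purely to keep the example runnable; it is not the thesis implementation.

```python
# Assumes sampled high-dimensional states and their TD errors are available.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 20))   # sampled high-dimensional states
td_errors = rng.normal(size=500)      # stand-in temporal-difference errors

# 1. Map states to a low-dimensional space (PCA as a stand-in for NCA).
low = PCA(n_components=2).fit_transform(states)

# 2. Place RBF centers where |TD error| is largest, i.e. where the current
#    linear approximation is worst.
centers = low[np.argsort(-np.abs(td_errors))[:10]]

# 3. New basis functions: Gaussian bumps around those centers.
def features(z, width=1.0):
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = features(low)                                   # 500 x 10 features
w, *_ = np.linalg.lstsq(Phi, td_errors, rcond=None)   # fit linear weights
print("fitted weights:", w)
```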
289

Autonomic Performance Optimization with Application to Self-Architecting Software Systems

Ewing, John M. 11 July 2015
<p> Service Oriented Architectures (SOA) are an emerging software engineering discipline that builds software systems and applications by connecting and integrating well-defined, distributed, reusable software service instances. SOA can speed development time and reduce costs by encouraging reuse, but this new service paradigm presents significant challenges. Many SOA applications are dependent upon service instances maintained by vendors and/or separate organizations. Applications and composed services using disparate providers typically demonstrate limited autonomy with contemporary SOA approaches. Availability may also suffer with the proliferation of possible points of failure&mdash;restoration of functionality often depends upon intervention by human administrators. </p><p> Autonomic computing is a set of technologies that enables self-management of computer systems. When applied to SOA systems, autonomic computing can provide automatic detection of faults and take restorative action. Additionally, autonomic computing techniques possess optimization capabilities that can leverage the features of SOA (e.g., loose coupling) to enable peak performance in the SOA system's operation. This dissertation demonstrates that autonomic computing techniques can help SOA systems maintain high levels of usefulness and usability. </p><p> This dissertation presents a centralized autonomic controller framework to manage SOA systems in dynamic service environments. The centralized autonomic controller framework can be enhanced through a second meta-optimization framework that automates the selection of optimization algorithms used in the autonomic controller. A third framework for autonomic meta-controllers can study, learn, adjust, and improve the optimization procedures of the autonomic controller at run-time. Within this framework, two different types of meta-controllers were developed. The <b>Overall Best</b> meta-controller tracks overall performance of different optimization procedures. <b>Context Best</b> meta-controllers attempt to determine the best optimization procedure for the current optimization problem. Three separate Context Best meta-controllers were implemented using different machine learning techniques: 1) K-Nearest Neighbor (<b>KNN MC</b>), 2) Support Vector Machines (SVM) trained offline (<b>Offline SVM</b>), and 3) SVM trained online (<b>Online SVM</b>). </p><p> A detailed set of experiments demonstrated the effectiveness and scalability of the approaches. Autonomic controllers of SOA systems successfully maintained performance on systems with 15, 25, 40, and 65 components. The <b>Overall Best</b> meta-controller successfully identified the best optimization technique and provided excellent performance at all levels of scale. Among the <b>Context Best</b> meta-controllers, the <b>Online SVM</b> meta-controller was tested on the 40 component system and performed better than the <b>Overall Best</b> meta-controller at a 95% confidence level. Evidence indicates that the <b>Online SVM</b> was successfully learning which optimization procedures were best applied to encountered optimization problems. The <b>KNN MC</b> and <b>Offline SVM</b> were less successful. The <b>KNN MC</b> struggled because the KNN algorithm does not account for the asymmetric cost of prediction errors. The <b>Offline SVM</b> was unable to predict the correct optimization procedure with sufficient accuracy&mdash;this was likely due to the challenge of building a relevant offline training set. 
The meta-optimization framework, which was tested on the 65 component system, successfully improved the optimization techniques used by the autonomic controller. </p><p> The meta-optimization and meta-controller frameworks described in this dissertation have broad applicability in autonomic computing and related fields. This dissertation also details a technique for measuring the overlap of two populations of points, establishes an approach for using penalty weights to address one-sided overfitting by SVM on asymmetric data sets, and develops a set of high performance data structure and heuristic search templates for C++.</p>
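The Overall Best meta-controller described above can be pictured as a simple dispatcher that tracks the running average score of each optimization procedure and routes new problems to the current leader. The procedure names and scores below are invented for illustration; only the selection logic reflects the idea in the abstract.

```python
# Toy "Overall Best" meta-controller: keep a running score per optimization
# procedure and dispatch to whichever has the best average so far.
import random
from collections import defaultdict

class OverallBestMetaController:
    def __init__(self, procedure_names):
        self.names = list(procedure_names)
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def choose(self):
        untried = [n for n in self.names if self.counts[n] == 0]
        if untried:                      # try everything once before ranking
            return random.choice(untried)
        return max(self.names, key=lambda n: self.totals[n] / self.counts[n])

    def report(self, name, score):       # feed back observed performance
        self.totals[name] += score
        self.counts[name] += 1

mc = OverallBestMetaController(["hill_climb", "beam_search", "evolutionary"])
for _ in range(30):
    name = mc.choose()
    score = random.random()              # stand-in for measured performance
    mc.report(name, score)
print("current leader:", mc.choose())
```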
290

Evolving effective micro behaviors for real-time strategy games

Liu, Siming 16 July 2015
<p> Real-Time Strategy games have become a new frontier of artificial intelligence research. Advances in real-time strategy game AI, like with chess and checkers before, will significantly advance the state of the art in AI research. This thesis aims to investigate using heuristic search algorithms to generate effective micro behaviors in combat scenarios for real-time strategy games. <i> Macro</i> and <i>micro</i> management are two key aspects of real-time strategy games. While good macro helps a player collect more resources and build more units, good micro helps a player win skirmishes against equal numbers of opponent units or win even when outnumbered. In this research, we use influence maps and potential fields as a basis representation to evolve micro behaviors. We first compare genetic algorithms against two types of hill climbers for generating competitive unit micro management. Second, we investigated the use of case-injected genetic algorithms to quickly and reliably generate high quality micro behaviors. Then we compactly encoded micro behaviors including influence maps, potential fields, and reactive control into fourteen parameters and used genetic algorithms to search for a complete micro bot, <i> ECSLBot.</i> We compare the performance of our ECSLBot with two state of the art bots, <i>UAlbertaBot</i> and <i>Nova,</i> on several skirmish scenarios in a popular real-time strategy game <i>StarCraft. </i> The results show that the ECSLBot tuned by genetic algorithms outperforms UAlbertaBot and Nova in kiting efficiency, target selection, and fleeing. In addition, the same approach works to create competitive micro behaviors in another game <i>SeaCraft.</i> Using parallelized genetic algorithms to evolve parameters in SeaCraft we are able to speed up the evolutionary process from twenty one hours to nine minutes. We believe this work provides evidence that genetic algorithms and our representation may be a viable approach to creating effective micro behaviors for winning skirmishes in real-time strategy games.</p>
