271.
Final version: uncertainty in artificial intelligence / AlDrobi, Molham Rateb, January 1993
Reasoning with uncertain information has received a great deal of attention recently, as this issue must be addressed when developing many expert systems.

In this thesis we survey the literature on uncertainty in AI. The approaches taken by researchers in this field can be classified into two categories: non-numeric approaches and numeric approaches. Among non-numeric methods, we summarize the Theory of Endorsements and non-monotonic logics. Among numeric methods, we elaborate on MYCIN certainty factors, Dempster-Shafer theory, fuzzy logic, and the probabilistic approach. We point out that probability theory is an adequate approach if we interpret probability values as beliefs and not only as frequencies.

We first discuss broad and more thoroughly researched areas. We then focus on integrating probability and logic, as we believe this is a crucial approach for building a setting for reasoning with uncertain information on strong logical foundations. Some key works in that area trace back to 1913, when Łukasiewicz published his paper on the logical foundations of probability. Comparisons between Nilsson's probabilistic logic and the related work of Quinlan, Grosof, McLeish, Chen, and Bacchus are given. We conclude the thesis with our remarks and suggestions for possible future research topics.
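Nilsson's probabilistic logic, mentioned above, treats sentence probabilities as linear constraints on an unknown distribution over possible worlds, so entailed probabilities can be bounded by linear programming. Below is a minimal sketch of that idea, assuming scipy is available; the two sentences and their probability values are hypothetical illustrations, not examples from the thesis.

```python
# Nilsson-style probabilistic entailment over two propositions A, B.
# Worlds: (A,B) in {(0,0),(0,1),(1,0),(1,1)} with unknown probabilities w0..w3.
# Constraints: sum(w)=1, P(A)=0.7, P(A->B)=0.8 (material implication).
# We bound P(B) by minimizing/maximizing it subject to those constraints.
from scipy.optimize import linprog

worlds = [(0, 0), (0, 1), (1, 0), (1, 1)]

def truth(world, sentence):
    a, b = world
    return {"A": a, "B": b, "A->B": int((not a) or b)}[sentence]

# Equality constraints: the given sentence probabilities, plus normalization.
A_eq = [
    [1, 1, 1, 1],                        # world probabilities sum to 1
    [truth(w, "A") for w in worlds],     # P(A) = 0.7
    [truth(w, "A->B") for w in worlds],  # P(A->B) = 0.8
]
b_eq = [1.0, 0.7, 0.8]
c = [truth(w, "B") for w in worlds]      # objective: P(B)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
hi = linprog([-x for x in c], A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
print(f"P(B) is entailed to lie in [{lo.fun:.2f}, {-hi.fun:.2f}]")
```

Here the constraints P(A) = 0.7 and P(A->B) = 0.8 force P(B) into the interval [0.5, 0.8]; any tighter claim would not be probabilistically entailed.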
272.
Automated discovery of options in reinforcement learning / Stolle, Martin, January 2004
AI planning benefits greatly from the use of temporally-extended actions, or macro-actions. Macro-actions allow faster and more efficient planning, as well as the reuse of knowledge from previous solutions. In recent years, a significant amount of research has been devoted to incorporating macro-actions in learned controllers, particularly in the context of Reinforcement Learning. One general approach is the use of options (temporally-extended actions) in Reinforcement Learning [22]. While the properties of options are well understood, it is not clear how to find new options automatically. In this thesis we propose two new algorithms for discovering options and compare them to one algorithm from the literature. We also contribute a new algorithm for learning with options, which improves on the performance of two widely used learning algorithms. Extensive experiments demonstrate the effectiveness of the proposed algorithms.
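For readers unfamiliar with options, the sketch below shows the usual formalization from the framework cited as [22]: an option pairs an initiation set, an internal policy, and a termination condition, and is executed as a single temporally-extended action. The environment interface env.step and all names here are hypothetical.

```python
import random
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    initiation: Set[int]                 # states where the option may start
    policy: Callable[[int], int]         # maps state -> primitive action
    termination: Callable[[int], float]  # maps state -> prob. of stopping

def execute(option, env, state, gamma=0.9):
    """Run an option to termination; return (next state, discounted reward,
    number of elapsed primitive steps k)."""
    assert state in option.initiation
    total, discount, k = 0.0, 1.0, 0
    while True:
        state, reward = env.step(state, option.policy(state))
        total += discount * reward
        discount *= gamma
        k += 1
        if random.random() < option.termination(state):
            return state, total, k
```

SMDP Q-learning can then treat the option like an action, updating Q(s, o) toward r + gamma^k max over o' of Q(s', o'), where r and k come from execute().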
273.
Metric learning revisited: new approaches for supervised and unsupervised metric learning with analysis and algorithms / Abou-Moustafa, Karim, January 2012
In machine learning one is usually given a data set of real, high-dimensional vectors X, from which one wishes to select a hypothesis θ from the space of hypotheses Θ using a learning algorithm. An immediate assumption usually imposed on X is that it is a subset of the very general embedding space Rp, which makes the Euclidean distance ∥•∥2 the default metric for the elements of X. Since various learning algorithms assume that the input space is Rp with its endowed metric ∥•∥2 as a (dis)similarity measure, it follows that selecting the hypothesis θ becomes intrinsically tied to the Euclidean distance. Metric learning is the problem of selecting a specific metric dX from a certain family of metrics D based on the properties of the elements in the set X. Under some performance measure, the metric dX is expected to perform better on X than any other metric d ∈ D. If the learning algorithm replaces the very general metric ∥•∥2 with the metric dX, then selecting the hypothesis θ will be tied to the more specific metric dX, which carries all the information on the properties of the elements in X.

In this thesis I propose two algorithms for learning the metric dX: the first for supervised learning settings, and the second for unsupervised, as well as supervised and semi-supervised, settings. In particular, I propose algorithms that take into consideration the structure and geometry of X on the one hand, and the characteristics of real-world data sets on the other. Moreover, if we are also seeking dimensionality reduction, then under some mild assumptions on the topology of X, and based on the available a priori information, one can learn an embedding of X into a low-dimensional Euclidean space Rp0, p0 ≪ p, where the Euclidean distance better reveals the similarities between the elements of X and their groupings (clusters). That is, as a by-product, we obtain dimensionality reduction together with metric learning.

In the supervised setting, I propose PARDA, or Pareto discriminant analysis, for discriminative linear dimensionality reduction. PARDA is based on the machinery of multi-objective optimization: simultaneously optimizing multiple, possibly conflicting, objective functions. This allows PARDA to adapt to the class topology in the lower-dimensional space, and to naturally handle the class-masking problem that is inherent in Fisher's discriminant analysis framework for multiclass problems. As a result, PARDA yields significantly better classification results than modern techniques for discriminative dimensionality reduction.

In the unsupervised setting, I propose an algorithmic framework, denoted by ?? (note the different notation), that encapsulates spectral manifold learning algorithms and gears them for metric learning. The framework ?? captures the local structure and the local density information at each point in a data set, and hence it carries all the information on the varying sample density in the input space. The structure of ?? induces two distance metrics for its elements: the Bhattacharyya-Riemann metric dBR and the Jeffreys-Riemann metric dJR. Both metrics reorganize the proximity between the points in X based on the local structure and density around each point. As a result, combining the metric space (??, dBR) or (??, dJR) with spectral clustering and Euclidean embedding yields significant improvements in clustering accuracies and error rates for a large variety of clustering and classification tasks.
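To make the idea of replacing ∥•∥2 with a learned dX concrete, here is a minimal sketch (not PARDA or the thesis's framework, just a generic supervised Mahalanobis metric on hypothetical data) that learns A as the regularized inverse of the within-class covariance, so distances are measured in units of class scatter:

```python
import numpy as np

def learn_metric(X, y, reg=1e-3):
    """Learn a Mahalanobis matrix A as the regularized inverse within-class
    covariance, so d_A(x, z) = sqrt((x - z)^T A (x - z)) whitens class scatter."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    Sw /= len(X)
    return np.linalg.inv(Sw + reg * np.eye(d))

def d_A(x, z, A):
    diff = x - z
    return float(np.sqrt(diff @ A @ diff))

# Hypothetical usage: distances under d_A replace the Euclidean default.
X = np.random.randn(100, 5); y = np.random.randint(0, 3, 100)
A = learn_metric(X, y)
print(d_A(X[0], X[1], A), np.linalg.norm(X[0] - X[1]))
```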
274.
Text classification using labels derived from structured knowledge representations / Perreault, Mathieu, January 2012
Structured knowledge representations are becoming central to the area of Information Science. Search engine companies have said that constructing an entity graph is key to classifying their enormous corpora of documents in order to provide more relevant results to their users. Our work presents WikiLabel, a novel approach to text classification using ontological knowledge. We match a document's terms to Wikipedia entities and use, among other measures, the shortest path length from each entity to a given Wikipedia category to determine which label should be associated with the document. In the second part of our work, we use the obtained labels to train a supervised machine learning text classification algorithm, an approach we call SuperWikiLabel. We gather a dataset of news articles and obtain high-confidence labels from human coders to evaluate the performance of WikiLabel and SuperWikiLabel. We find that WikiLabel's performance is on par with other methods, and SuperWikiLabel is comparable to a traditional supervised method in which the document corpus is coded by humans. Our work suggests that it may be possible to largely eliminate the human coding effort in a given text classification task, and we claim that our approach is more flexible and convenient than the usual methods of obtaining a labeled training document set, which often comes at great expense.
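A minimal sketch of the path-length idea behind WikiLabel: match a document's entities to nodes in a category graph and pick the candidate label with the smallest mean BFS distance. The toy graph, entities, and categories below are invented for illustration and are not the thesis's actual data or exact scoring function.

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    """BFS distance in an undirected entity/category graph; None if unreachable."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None

def wikilabel(entities, categories, graph):
    """Assign the category with the smallest mean distance to the entities."""
    def score(cat):
        dists = [shortest_path_len(graph, e, cat) for e in entities]
        dists = [d for d in dists if d is not None]
        return sum(dists) / len(dists) if dists else float("inf")
    return min(categories, key=score)

# Toy graph: a document mentions two entities; two candidate category labels.
graph = {"Obama": ["Politics"], "Senate": ["Politics", "Law"],
         "Politics": [], "Law": []}
for u in list(graph):                       # symmetrize: make edges undirected
    for v in graph[u]:
        graph.setdefault(v, []).append(u)
print(wikilabel(["Obama", "Senate"], ["Politics", "Law"], graph))  # Politics
```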
275.
Mobile robot localisation using learned landmarks / Sim, Robert, January 1998
We present an approach to vision-based mobile robot localisation: the task of obtaining a precise position estimate for a robot in a previously explored environment, even without an a priori estimate. Our approach combines the strengths of statistical and feature-based methods. This is accomplished by learning a set of visual features called landmarks, each of which is detected as a local extremum of a measure of uniqueness and represented by an appearance-based encoding. Localisation is performed using a method that matches observed landmarks to learned prototypes and generates an independent position estimate for each match. The independent estimates are then combined to obtain a final position estimate with an associated uncertainty. Experimental evidence shows that an estimate accurate to a fraction of the environment sampling density can be obtained for a wide range of parameterisations, even under scaling of the explored region and changes in sampling density.
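One standard way to combine independent Gaussian position estimates, offered here as an illustrative sketch rather than the thesis's exact estimator, is inverse-variance (precision) weighting:

```python
import numpy as np

def fuse(estimates):
    """Fuse independent 2-D Gaussian position estimates (mean, covariance)
    by precision weighting: P = (sum Pi)^-1, x = P @ sum(Pi @ xi)."""
    precisions = [np.linalg.inv(C) for _, C in estimates]
    P = np.linalg.inv(sum(precisions))
    x = P @ sum(Pi @ np.asarray(m) for (m, _), Pi in zip(estimates, precisions))
    return x, P   # fused estimate and its (smaller) covariance

# Hypothetical: three landmark matches, each voting for a robot position.
ests = [([1.0, 2.0], np.diag([0.4, 0.4])),
        ([1.2, 1.9], np.diag([0.2, 0.5])),
        ([0.9, 2.1], np.diag([0.6, 0.3]))]
mean, cov = fuse(ests)
print(mean, np.diag(cov))
```

The fused covariance is never larger than any individual one, matching the intuition that each additional landmark match sharpens the position estimate.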
276.
Speech understanding system using classification trees / Yi, Kwan, January 1997
The goal of a Speech Understanding System (SUS) is to extract meaning from a sequence of hypothesized words generated by a speech recognizer. Recent SUSs tend to rely on robust matchers to perform this task. This thesis describes a new method using classification trees as a robust matcher for speech understanding. Classification trees are used as a learning method to induce rules automatically from training data. The thesis investigates uses of classification trees in speech systems and some general algorithms applied to classification trees. The linguistic approach requires more human time because of the overhead associated with handling a large number of rules, whereas the proposed approach eliminates the need to hand-code and debug the rules. This approach is also highly resistant to errors by the speaker or by the speech recognizer, as it depends on a few semantically important words rather than the entire word sequence. Furthermore, by later re-training the classification trees on a new set of training data, the system can be improved easily and automatically. The thesis discusses a speech understanding system built at McGill University using the DARPA-sponsored Air Travel Information System (ATIS) task as training corpus and testbed.
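As an illustration of a classification tree acting as a robust matcher, the sketch below (using scikit-learn; the utterances, labels, and keyword vocabulary are invented, not ATIS data) classifies errorful recognizer output by keying on a few semantically important words:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# Recognizer output (possibly errorful) paired with query types.
utterances = ["show me flights from boston to denver uh",
              "what is the fare fare from dallas",
              "list flights to atlanta please",
              "how much does it cost to fly to miami"]
labels = ["flight", "fare", "flight", "fare"]

# Binary presence features over a small keyword vocabulary: the tree keys on
# semantically important words and ignores the rest of the word sequence.
vec = CountVectorizer(vocabulary=["flights", "fare", "cost", "fly"], binary=True)
X = vec.fit_transform(utterances)

tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)
print(tree.predict(vec.transform(["uh flights boston to uh denver"])))
```

Because only the keyword features matter, inserted filler words and recognition errors elsewhere in the utterance leave the prediction unchanged.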
277.
A probabilistic min-max tree / Kamoun, Olivier, January 1992
MIN-MAX trees have been studied for thirty years as models of game trees in artificial intelligence. Judea Pearl introduced a popular probabilistic model that assigns independent and identically distributed random values to the leaves. Among the dependent models, incremental models assume that terminal values are computed as sums of edge values on the path from the root to a leaf. We study a special case, called the SUM model, where the edge values follow a Bernoulli distribution with mean p. Let V_n be the root value of a complete b-ary, n-level SUM tree. We prove that E[V_n]/n tends to a uniformly continuous function 𝒱(p). Surprisingly, 𝒱(p) is very nonlinear and has some flat parts. More formally, for all b there exist α, β ∈ (0, 1) such that:

- if p ∈ [0, α]: E[V_n] has a finite limit;
- if p ∈ [1-α, 1]: n - E[V_n] has a finite limit;
- if p ∈ [β, 1-β]: E[V_n]/n tends to 1/2.

Finally, α and β tend to zero as b tends to infinity.
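The SUM model is easy to simulate, which makes the flat parts of 𝒱(p) visible empirically. The sketch below draws Bernoulli(p) edge values, alternates max and min levels, and Monte Carlo estimates E[V_n]/n; the parameter values are arbitrary illustrative choices.

```python
import random

def sum_tree_root_value(b, n, p, maximize=True, acc=0):
    """Value of a node whose path from the root sums to 'acc': recurse with
    Bernoulli(p) edge values, alternating MAX and MIN levels; leaves at depth n."""
    if n == 0:
        return acc
    children = (sum_tree_root_value(b, n - 1, p, not maximize,
                                    acc + (random.random() < p))
                for _ in range(b))
    return max(children) if maximize else min(children)

def estimate(b, n, p, trials):
    return sum(sum_tree_root_value(b, n, p) for _ in range(trials)) / (trials * n)

# E[V_n]/n for a few p: near-constant at the ends, close to 1/2 in the middle.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(estimate(b=2, n=12, p=p, trials=50), 3))
```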
278.
A modular connectionist approach to concept induction / Strigler, David, January 1990
One or more hypotheses may be induced from any set of exemplars and non-exemplars. Incremental Modification of Hypothesis Fragments (IMHF) is a new algorithm for processing instances of concepts incrementally, discovering a consistent hypothesis after the presentation of each example.

A modular connectionist approach using the back-propagation learning algorithm was taken to implement the IMHF algorithm. A shell called Parallel Unconnected Neural Networks (PUNN) was developed to give back-propagation the additional power of modularity and to provide the complexity needed to model IMHF. The PUNN implementation of the IMHF algorithm yielded a model of human induction of hypotheses from examples.
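The sketch below is a toy reconstruction, not the PUNN shell itself, of the "parallel unconnected networks" idea: several independent back-propagation modules trained on the same exemplars and non-exemplars, with their outputs averaged. The module sizes, the XOR task, and all names are hypothetical.

```python
import numpy as np

class Module:
    """One small back-propagation network; several run unconnected in parallel."""
    def __init__(self, n_in, n_hid, rng):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0.0, 0.5, (n_hid, 1))

    def forward(self, X):
        self.H = np.tanh(X @ self.W1)
        return 1.0 / (1.0 + np.exp(-self.H @ self.W2))   # sigmoid output

    def backprop(self, X, y, lr=0.5):
        out = self.forward(X)
        d_out = out - y                                  # cross-entropy gradient
        d_hid = (d_out @ self.W2.T) * (1.0 - self.H ** 2)
        self.W2 -= lr * self.H.T @ d_out / len(X)
        self.W1 -= lr * X.T @ d_hid / len(X)

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR exemplars

modules = [Module(2, 4, rng) for _ in range(3)]          # unconnected modules
for _ in range(5000):
    for m in modules:
        m.backprop(X, y)
votes = np.mean([m.forward(X) for m in modules], axis=0)
print((votes > 0.5).astype(int).ravel())                 # ideally [0 1 1 0]
```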
279.
Computational Intelligent Systems: Evolving Dynamic Bayesian Networks / Osunmakinde, Isaac, 01 December 2009
Dynamic Bayesian Networks (DBNs) are temporal probabilistic models for reasoning over time. They often form the core reasoning component of intelligent systems in the field of machine learning. Recent studies have focused on the development of particular DBNs, such as Hidden Markov Models (HMMs) and their variants, which are represented explicitly by highly skilled users and have gained popularity in speech recognition. These varieties of HMMs represented as DBNs have contributed to the baseline of temporal modelling. However, they are limited in their expressive power, since they are approximations, and they pose difficult challenges for users choosing the appropriate model for diverse real-life applications. To worsen the situation further, researchers and practitioners have also stressed that applications often have difficulty evolving (or learning) such network models from environments captured as massive datasets, because the problem is computationally intensive (NP-hard). Finding solutions to these challenges is a difficult task.
In this thesis, a new class of temporal probabilistic modelling, called evolving dynamic Bayesian networks (EDBN), is proposed and demonstrated, making the technology accessible to both experts and non-experts such as industrial practitioners, decision-makers, and researchers. DBNs are ideally suited to achieving situation awareness, in which elements in the environment must be perceived within a volume of time and space, their meaning understood, and their status predicted in the near future. The use of DBNs for achieving situation awareness has been poorly explored in current research efforts. This research evolves DBNs fully automatically from any environment captured as a multivariate time series (MTS), which minimizes the approximations and mitigates the challenge of model choice. This potentially accommodates both highly skilled users and non-expert practitioners, and opens diverse real-world application areas for DBNs. The architecture of our EDBN uses a combined strategy, resolving two orthogonal issues: (1) evolving DBNs in the absence of domain experts, and (2) mitigating computational intensity (NP-hardness) with economic scalability.
Most notably, the major contributions of this thesis are as follows: the development of a new class of temporal probabilistic modelling (EDBN), whose architecture facilitates its emergent situation awareness (ESA) and emergent future situation awareness (EFSA) technologies, which reveal hidden patterns over current and future time steps respectively; the development and integration of an economically scalable framework, called dynamic memory management in adaptive learning (DMMAL), into the EDBN architecture so that network models can be evolved from environments captured as massive datasets; the design of configurable agent actuators, adaptive operators, and representative partitioning algorithms that support the scalability framework; the formal development and optimization of a genetic algorithm (GA) that evolves optimal Bayesian networks from datasets, with emphasis on avoiding backtracking; and diverse applications of EDBN technologies, such as business intelligence, revealing trends in insulin doses for medical patients, water quality management, project profitability analysis, and sensor networks. To ensure the universality and reproducibility of our architecture, we methodically conducted experiments using varied real-life datasets and publicly available machine learning datasets, mostly from the University of California Irvine (UCI) repository.
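As a hedged sketch of one ingredient, evolving a Bayesian network structure with a GA, the code below encodes candidate DAGs as upper-triangular adjacency matrices (acyclic by construction under a fixed variable ordering, a simplification the thesis does not necessarily make) and uses a toy correlation-based score in place of a real BIC/MDL fitness:

```python
import random

def random_dag(n, p=0.3):
    """Upper-triangular 0/1 adjacency matrix: acyclic by construction under a
    fixed variable ordering (a simplification; real systems repair cycles)."""
    return [[int(j > i and random.random() < p) for j in range(n)] for i in range(n)]

def fitness(dag, corr, thresh=0.3):
    """Toy score: reward edges between strongly correlated variables, penalize
    complexity. A real system would score candidates with BIC/MDL on the data."""
    n = len(dag)
    hits = sum(1 for i in range(n) for j in range(n)
               if dag[i][j] and abs(corr[i][j]) > thresh)
    return hits - 0.5 * sum(map(sum, dag))

def crossover(a, b):
    cut = random.randrange(len(a))
    return [row[:] for row in a[:cut] + b[cut:]]      # swap whole parent rows

def mutate(dag, rate=0.05):
    for i in range(len(dag)):
        for j in range(i + 1, len(dag)):
            if random.random() < rate:
                dag[i][j] ^= 1                        # flip one edge bit
    return dag

def evolve(corr, n_vars, pop=30, gens=50):
    population = [random_dag(n_vars) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda d: fitness(d, corr), reverse=True)
        elite = population[: pop // 2]                # truncation selection
        population = elite + [mutate(crossover(*random.sample(elite, 2)))
                              for _ in range(pop - len(elite))]
    return max(population, key=lambda d: fitness(d, corr))

# Hypothetical pairwise correlations among four variables.
corr = [[1.0, 0.8, 0.1, 0.0], [0.8, 1.0, 0.6, 0.1],
        [0.1, 0.6, 1.0, 0.5], [0.0, 0.1, 0.5, 1.0]]
print(evolve(corr, n_vars=4))
```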
280.
Prediction and recommendation in online media / Yin, Dawei, 06 December 2013
With billions of internet users, online media services have become commonplace. Prediction and recommendation for online media are fundamental problems in various applications, including recommender systems and information retrieval. For example, accurately predicting user behavior improves user experience through more intelligent user interfaces. User behavior prediction in online media is also strongly related to behavioral targeting and online advertising, which is the major business for most consumer internet services. Estimating and understanding users' click behavior is a critical problem in online advertising. In this dissertation, we investigate prediction and recommendation problems in various online media. We find a number of challenges: high-order relations, temporal dynamics, complex network structure, high data sparsity, and coupled social media activities. We consider user behavior understanding and prediction in four areas: tag prediction in a social tagging system, link prediction in microblogging services, multi-context modeling in online social media, and click prediction in sponsored search. For each topic, based on real-world data, we analyze user behavior and discover patterns, properties, and challenges. We then design specific models for user behavior prediction in various online media: a probabilistic model for personalized tag prediction; a user-tag-specific temporal interests model for tracking users' interests over time in tagging systems; a personalized structure-based link prediction model for microblogging systems; a generalized latent factor model with Bayesian treatment for modeling across multiple contexts in online social media; and a context-aware click model and framework for estimating ad group performance in sponsored search. Extensive experiments on large-scale real-world datasets show that our models advance the state of the art.
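The dissertation's context-aware click model is not reproduced here; instead, the sketch below (scikit-learn, with simulated impressions and invented feature names) shows the baseline form of click prediction it builds on: a logistic model of click probability from ad position and context features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical impression log: ad position, query-ad match score, hour of day.
rng = np.random.default_rng(1)
n = 5000
pos = rng.integers(1, 5, n)            # rank 1..4 on the page
match = rng.random(n)                  # relevance proxy in [0, 1]
hour = rng.integers(0, 24, n)

# Simulated clicks with position bias: better rank and match -> more clicks.
logit = 1.5 * match - 0.8 * (pos - 1) - 1.0
clicks = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([pos, match, hour])
model = LogisticRegression().fit(X, clicks)
print("CTR at rank 1 vs rank 4 (match=0.9, noon):",
      model.predict_proba([[1, 0.9, 12], [4, 0.9, 12]])[:, 1].round(3))
```

Even this simple model recovers the position bias it was fed, which is why position features must be handled carefully when estimating an ad's intrinsic quality.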