421

Continuous function identification with fuzzy cellular automata

Ratitch, Bohdana. January 1998
Thus far, cellular automata have been used primarily to model systems consisting of a number of elements that interact with one another only locally and whose interactions can be naturally modeled with a discrete type of computation. In this thesis, we investigate the possibility of applying cellular automata to a problem involving continuous computations, in particular the problem of identifying a continuous function from a set of examples. A fuzzy model of cellular automata is introduced and used throughout the study. Two main issues in the context of this application are addressed: the representation of real values on a cellular automata structure and a technique that provides cellular automata with a capacity for learning. An algorithm for encoding real values into cellular automata state configurations and a gradient-descent learning algorithm for fuzzy cellular automata are proposed in this thesis. A fuzzy cellular automaton learning system was implemented in software and its performance studied experimentally. The effects of several of the system's parameters on its performance were examined. Fuzzy cellular automata demonstrated the capability to carry out complex continuous computations and to perform supervised gradient-based learning.
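
As a rough illustration of the two ingredients the abstract describes — continuous cell states and gradient-based learning — the sketch below updates a one-dimensional fuzzy cellular automaton with a weighted fuzzy-OR rule and fits the rule weights by numerical gradient descent on a squared error. The neighbourhood rule, the loss and all parameters are assumptions for illustration, not the encoding or learning algorithm proposed in the thesis.

    import numpy as np

    def fuzzy_ca_step(state, weights):
        """One synchronous update of a 1-D fuzzy CA with states in [0, 1].

        Each cell combines its left, centre and right neighbours with a
        weighted fuzzy-OR (probabilistic sum of product t-norms) -- an
        illustrative local rule, not the rule used in the thesis."""
        left = np.roll(state, 1)
        right = np.roll(state, -1)
        neigh = np.stack([left, state, right])                # (3, n_cells)
        terms = np.clip(weights, 0.0, 1.0)[:, None] * neigh   # fuzzy AND
        return 1.0 - np.prod(1.0 - terms, axis=0)             # fuzzy OR

    def train(initial, target, steps=5, lr=0.5, iters=200):
        """Fit the three rule weights by gradient descent on a squared
        error, using a central-difference numerical gradient for brevity."""
        w = np.array([0.3, 0.5, 0.3])
        h = 1e-4
        for _ in range(iters):
            grad = np.zeros_like(w)
            for i in range(len(w)):
                for sign in (+1.0, -1.0):
                    wp = w.copy()
                    wp[i] += sign * h
                    s = initial.copy()
                    for _ in range(steps):
                        s = fuzzy_ca_step(s, wp)
                    grad[i] += sign * np.mean((s - target) ** 2) / (2 * h)
            w -= lr * grad
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.random(32)
        y = np.clip(np.sin(np.linspace(0, np.pi, 32)), 0, 1)  # toy continuous target
        print(train(x, y))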
422

Building a model for a 3D object class in a low dimensional space for object detection

Gill, Gurman January 2009
Modeling 3D object classes requires accounting for intra-class variations in an object's appearance under different viewpoints, scales and illumination conditions. Detecting instances of 3D object classes in the presence of background clutter is therefore difficult. This thesis presents a novel approach to modeling generic 3D object classes and an algorithm for detecting multiple instances of an object class in an arbitrary image. Motivated by parts-based representations, the proposed approach divides the object into different spatial regions. Each spatial region is associated with an object part whose appearance is represented by a dense set of overlapping SIFT features. The distribution of these features is then described in a lower-dimensional space using supervised Locally Linear Embedding. Each object part is essentially represented by a spatial cluster in the embedding space. For viewpoint invariance, the view-sphere surrounding the 3D object is divided into a discrete number of view segments, and several spatial clusters represent the object in each view segment. This thesis provides a framework for representing these clusters in either a single embedding space or multiple embedding spaces. A novel aspect of the proposed approach is that all object parts and the background class are represented in the same lower-dimensional space, so the detection algorithm can explicitly label features in an image as belonging to an object part or to the background. Additionally, spatial relationships between object parts are established and employed during the detection stage to localize instances of the object class in a novel image. It is shown that detecting objects based on measuring spatial consistency between object parts is superior to a bag-of-words model that ignores all spatial information. Since generic object classes can be characterized by shape as well as appearance, this thesis also formulates a method to combine these attributes, through class-specific local contour features, to enhance the object model.
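
The detection stage described above labels local features as object parts or background by proximity in a shared low-dimensional space. The sketch below illustrates that idea with a plain linear projection standing in for the learned supervised Locally Linear Embedding; the projection, cluster centres and toy data are assumptions, not the thesis's actual model.

    import numpy as np

    def nearest_part_labels(descriptors, projection, part_centres, bg_centre):
        """Project local descriptors (e.g. SIFT, one per row) into a low
        dimensional space and label each one with the closest part cluster
        or as background.  `projection` stands in for the supervised
        embedding learned offline; here it is just a linear map."""
        embedded = descriptors @ projection                  # (n, d_low)
        centres = np.vstack([part_centres, bg_centre])       # last row = background
        d = np.linalg.norm(embedded[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        labels[labels == len(centres) - 1] = -1              # -1 marks background
        return labels

    # toy usage: 128-D descriptors, 3 object parts, 5-D embedding
    rng = np.random.default_rng(1)
    desc = rng.normal(size=(10, 128))
    proj = rng.normal(size=(128, 5))
    parts = rng.normal(size=(3, 5))
    bg = np.zeros(5)
    print(nearest_part_labels(desc, proj, parts, bg))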
423

Spectral models for color vision

Skaff, Sandra January 2009
This thesis introduces a maximum entropy approach to modeling surface reflectance spectra. A reflectance spectrum is the amount of light, relative to the incident light, reflected from a surface at each wavelength. While the color of a surface can be expressed in 3D vector form such as RGB, CMY, or YIQ, this thesis takes the surface reflectance spectrum to be the color of a surface. A reflectance spectrum is a physical property of a surface and does not vary with the different interactions a surface may undergo with its environment. Therefore, models of reflectance spectra can be used to fuse camera sensor responses from different images of the same surface or from multiple surfaces of the same scene. This fusion improves the spectral estimates that can be obtained, and thus leads to better estimates of surface colors. The motivation for using a maximum entropy approach stems from the fact that surfaces observed in our everyday surroundings typically have broad, and therefore high-entropy, spectra. In addition, the maximum entropy approach imposes the fewest constraints, as it estimates surface reflectance spectra given only camera sensor responses. This is a major advantage over the widely used linear basis function representations, which require a prespecified set of basis functions. Experimental results show that surface spectra of Munsell and construction paper patches can be successfully estimated with the maximum entropy approach for three different surface interactions with the environment. First, in the case of changes in illumination, the thesis shows that the estimated spectral models are comparable to those obtained from the best spectral-modeling approach in the literature. Second, in the case of changes in the positions of surfaces with respect to each other, interreflections between the surfaces arise, and results show that fusing sensor responses from interreflections improves the spectral estimates.
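
A hedged sketch of the core estimation idea: choose the discretised reflectance spectrum of maximum entropy that still reproduces the observed camera responses. The linear sensor model (response = sensitivity × illuminant × reflectance summed over wavelengths) and the use of SciPy's SLSQP solver are illustrative assumptions, not the exact formulation in the thesis.

    import numpy as np
    from scipy.optimize import minimize

    def max_entropy_spectrum(responses, sensitivities, illuminant):
        """Estimate a reflectance spectrum r(lambda) on a wavelength grid by
        maximising the entropy -sum r log r subject to reproducing the camera
        responses c_k = sum_lambda S_k(lambda) * E(lambda) * r(lambda)."""
        n = sensitivities.shape[1]

        def neg_entropy(r):
            r = np.clip(r, 1e-12, None)
            return np.sum(r * np.log(r))

        constraints = [{
            "type": "eq",
            "fun": lambda r, k=k: sensitivities[k] @ (illuminant * r) - responses[k],
        } for k in range(len(responses))]

        res = minimize(neg_entropy, x0=np.full(n, 0.5),
                       bounds=[(0.0, 1.0)] * n,
                       constraints=constraints, method="SLSQP")
        return res.x

    # toy usage: 3 sensors, 31-sample spectrum (e.g. 400-700 nm in 10 nm steps)
    rng = np.random.default_rng(2)
    S = np.abs(rng.normal(size=(3, 31)))
    E = np.ones(31)
    true_r = np.clip(np.linspace(0.2, 0.8, 31), 0, 1)
    c = S @ (E * true_r)
    print(max_entropy_spectrum(c, S, E).round(2))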
424

Final version: uncertainty in artificial intelligence

AlDrobi, Molham Rateb January 1993
Reasoning with uncertain information has received a great deal of attention recently, as this issue has to be addressed when developing many expert systems. / In this thesis we survey the literature on uncertainty in AI. The approaches taken by researchers in this field can be classified into two categories: non-numeric approaches and numeric approaches. Among the non-numeric methods, we summarize the Theory of Endorsements and non-monotonic logics. Among the numeric methods, we elaborate on MYCIN certainty factors, Dempster-Shafer theory, fuzzy logic, and the probabilistic approach. We point out that probability theory is an adequate approach if we interpret probability values as beliefs and not only as frequencies. / We first discuss the broad and more thoroughly researched areas. We then focus on integrating probability and logic, as we believe this is a crucial approach for building a setting for reasoning with uncertain information on strong logical foundations. Some key works in this area trace back to 1913, when Lukasiewicz published his paper on the logical foundations of probability. Comparisons between Nilsson's probabilistic logic and the related work of Quinlan, Grosof, McLeish, Chen, and Bacchus are given. We conclude the thesis with our remarks and suggestions for possible future research topics.
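
To make one of the numeric calculi concrete, the following sketch combines MYCIN-style certainty factors for two pieces of evidence bearing on the same hypothesis; the numeric values are hypothetical.

    def combine_cf(cf1, cf2):
        """Combine two MYCIN certainty factors in [-1, 1] for the same hypothesis."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # two confirming pieces of evidence, then one disconfirming piece
    print(combine_cf(0.6, 0.4))                     # 0.76
    print(combine_cf(combine_cf(0.6, 0.4), -0.3))   # about 0.657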
425

Automated discovery of options in reinforcement learning

Stolle, Martin January 2004
AI planning benefits greatly from the use of temporally-extended or macro-actions. Macro-actions allow for faster and more efficient planning as well as the reuse of knowledge from previous solutions. In recent years, a significant amount of research has been devoted to incorporating macro-actions in learned controllers, particularly in the context of Reinforcement Learning. One general approach is the use of options (temporally-extended actions) in Reinforcement Learning [22]. While the properties of options are well understood, it is not clear how to find new options automatically. In this thesis we propose two new algorithms for discovering options and compare them to one algorithm from the literature. We also contribute a new algorithm for learning with options which improves on the performance of two widely used learning algorithms. Extensive experiments are used to demonstrate the effectiveness of the proposed algorithms.
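
For reference, the standard SMDP Q-learning backup over options looks roughly like the sketch below; the tabular representation and parameters are illustrative and are not the discovery or learning algorithms proposed in the thesis.

    import numpy as np

    def smdp_q_update(Q, s, option_id, reward, duration, s_next, alpha=0.1, gamma=0.9):
        """One SMDP Q-learning backup after an option ran for `duration` steps
        and accumulated the (already discounted) cumulative `reward`."""
        target = reward + (gamma ** duration) * np.max(Q[s_next])
        Q[s, option_id] += alpha * (target - Q[s, option_id])
        return Q

    # toy usage: 5 states, 3 options
    Q = np.zeros((5, 3))
    Q = smdp_q_update(Q, s=0, option_id=1, reward=1.5, duration=4, s_next=3)
    print(Q)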
426

Metric learning revisited: new approaches for supervised and unsupervised metric learning with analysis and algorithms

Abou-Moustafa, Karim January 2012
In machine learning one is usually given a data set of real high-dimensional vectors X, based on which it is desired to select a hypothesis θ from the space of hypotheses Θ using a learning algorithm. An immediate assumption usually imposed on X is that it is a subset of the very general embedding space R^p, which makes the Euclidean distance ∥·∥2 the default metric for the elements of X. Since various learning algorithms assume that the input space is R^p with its endowed metric ∥·∥2 as a (dis)similarity measure, it follows that selecting the hypothesis θ becomes intrinsically tied to the Euclidean distance. Metric learning is the problem of selecting a specific metric dX from a certain family of metrics D based on the properties of the elements in the set X. Under some performance measure, the metric dX is expected to perform better on X than any other metric d ∈ D. If the learning algorithm replaces the very general metric ∥·∥2 with the metric dX, then selecting the hypothesis θ will be tied to the more specific metric dX, which carries all the information on the properties of the elements in X. In this thesis I propose two algorithms for learning the metric dX: the first for supervised learning settings, and the second for unsupervised, as well as supervised and semi-supervised, settings. In particular, I propose algorithms that take into consideration the structure and geometry of X on one hand, and the characteristics of real-world data sets on the other. If we are also seeking dimensionality reduction, then under some mild assumptions on the topology of X, and based on the available a priori information, one can learn an embedding of X into a low-dimensional Euclidean space R^p0, with p0 << p, where the Euclidean distance better reveals the similarities between the elements of X and their groupings (clusters). That is, as a by-product, we obtain dimensionality reduction together with metric learning. In the supervised setting, I propose PARDA, or Pareto discriminant analysis, for discriminative linear dimensionality reduction. PARDA is based on the machinery of multi-objective optimization, simultaneously optimizing multiple, possibly conflicting, objective functions. This allows PARDA to adapt to the class topology in the lower-dimensional space, and it naturally handles the class-masking problem that is inherent in Fisher's discriminant analysis framework for multiclass problems. As a result, PARDA yields significantly better classification results when compared with modern techniques for discriminative dimensionality reduction. In the unsupervised setting, I propose an algorithmic framework, denoted by ?? (note the different notation), that encapsulates spectral manifold learning algorithms and gears them towards metric learning. The framework ?? captures the local structure and the local density information around each point in a data set, and hence it carries all the information on the varying sample density in the input space. The structure of ?? induces two distance metrics for its elements, the Bhattacharyya-Riemann metric dBR and the Jeffreys-Riemann metric dJR. Both metrics reorganize the proximity between the points in X based on the local structure and density around each point. As a result, combining the metric space (??, dBR) or (??, dJR) with spectral clustering and Euclidean embedding yields significant improvements in clustering accuracies and error rates for a large variety of clustering and classification tasks.
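
As a hedged illustration of a density-aware metric in the spirit of dBR, the sketch below summarises each point by a Gaussian fitted to its nearest neighbours and compares two points with the Bhattacharyya distance between those Gaussians; the neighbourhood size, regularisation and overall construction are assumptions, not the thesis's exact framework.

    import numpy as np

    def local_gaussian(X, i, k=10, reg=1e-3):
        """Mean and (regularised) covariance of the k nearest neighbours of point i."""
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = X[np.argsort(d)[:k]]
        mu = nbrs.mean(axis=0)
        cov = np.cov(nbrs, rowvar=False) + reg * np.eye(X.shape[1])
        return mu, cov

    def bhattacharyya(mu1, cov1, mu2, cov2):
        """Bhattacharyya distance between two Gaussian densities."""
        cov = 0.5 * (cov1 + cov2)
        diff = mu1 - mu2
        term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
        term2 = 0.5 * np.log(np.linalg.det(cov) /
                             np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
        return term1 + term2

    # toy usage: density-aware distance between points 0 and 1
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 2))
    print(bhattacharyya(*local_gaussian(X, 0), *local_gaussian(X, 1)))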
427

Text classification using labels derived from structured knowledge representations

Perreault, Mathieu January 2012
Structured knowledge representations are becoming central to the area of Information Science. Search engine companies have said that constructing an entity graph is the key to classifying their enormous corpora of documents in order to provide more relevant results to their users. Our work presents WikiLabel, a novel approach to text classification using ontological knowledge. We match a document's terms to Wikipedia entities and use, among other measures, the shortest path length from each entity to a given Wikipedia category to determine which label should be associated with the document. In the second part of our work, we use the obtained labels to train a supervised machine learning text classifier, an approach we call SuperWikiLabel. We gather a dataset of news articles and obtain high-confidence labels from human coders to evaluate the performance of WikiLabel and SuperWikiLabel. We find that WikiLabel's performance is on par with other methods, and that SuperWikiLabel is comparable to a traditional supervised method in which the document corpus is coded by humans. Our work suggests that it may be possible to largely eliminate the human coding effort in a given text classification task, and we claim that our approach is more flexible and convenient than the usual methods of obtaining a labeled training document set, which often come at great expense.
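
A minimal sketch of scoring candidate labels by shortest-path distance from matched entities to category nodes in a concept graph; the toy graph, the matching step and the inverse-distance scoring rule are assumptions for illustration and not the exact measures used by WikiLabel.

    import networkx as nx

    def score_labels(entities, categories, graph):
        """Score each candidate category by the (inverse) shortest-path
        distance from the document's matched entities to that category node."""
        scores = {}
        for cat in categories:
            total = 0.0
            for ent in entities:
                try:
                    d = nx.shortest_path_length(graph, source=ent, target=cat)
                    total += 1.0 / (1.0 + d)
                except (nx.NetworkXNoPath, nx.NodeNotFound):
                    pass
            scores[cat] = total
        return max(scores, key=scores.get), scores

    # toy concept graph with hypothetical nodes
    G = nx.Graph()
    G.add_edges_from([("Maple syrup", "Food"), ("Ice hockey", "Sport"),
                      ("Ice hockey", "Canada"), ("Maple syrup", "Canada")])
    print(score_labels(["Maple syrup", "Ice hockey"], ["Food", "Sport", "Canada"], G))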
428

Mobile robot localisation using learned landmarks

Sim, Robert. January 1998
We present an approach to vision-based mobile robot localisation, that is, the task of obtaining a precise position estimate for a robot in a previously explored environment, even without an a priori estimate. Our approach combines the strengths of statistical and feature-based methods. This is accomplished by learning a set of visual features called landmarks, each of which is detected as a local extremum of a measure of uniqueness and represented by an appearance-based encoding. Localisation is performed by matching observed landmarks to learned prototypes and generating an independent position estimate for each match. The independent estimates are then combined to obtain a final position estimate with an associated uncertainty. Experimental evidence shows that an estimate accurate to a fraction of the environment sampling density can be obtained for a wide range of parameterisations, even under scaling of the explored region and changes in sampling density.
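
One common way to combine such independent per-landmark estimates is inverse-covariance (information-filter) weighting, sketched below under a 2-D Gaussian assumption; this is an illustration, not necessarily the combination rule used in the thesis.

    import numpy as np

    def fuse_estimates(means, covariances):
        """Fuse independent Gaussian position estimates (one per matched
        landmark) by inverse-covariance weighting; returns the fused mean
        and its covariance."""
        info = np.zeros((2, 2))
        info_mean = np.zeros(2)
        for mu, cov in zip(means, covariances):
            cov_inv = np.linalg.inv(cov)
            info += cov_inv
            info_mean += cov_inv @ mu
        fused_cov = np.linalg.inv(info)
        return fused_cov @ info_mean, fused_cov

    # toy usage: three landmark matches, each voting for a robot (x, y)
    mus = [np.array([1.0, 2.0]), np.array([1.2, 1.9]), np.array([0.9, 2.2])]
    covs = [np.eye(2) * s for s in (0.10, 0.05, 0.30)]
    print(fuse_estimates(mus, covs))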
429

Speech understanding system using classification trees

Yi, Kwan. January 1997
The goal of Speech Understanding Systems (SUS) is to extract meaning from a sequence of hypothesized words generated by a speech recognizer. Recently, SUSs have tended to rely on robust matchers to perform this task. This thesis describes a new method in which classification trees act as a robust matcher for speech understanding. Classification trees are used as a learning method to derive rules automatically from training data. The thesis investigates uses of classification trees in speech systems and some general algorithms applied to classification trees. The linguistic approach requires more human time because of the overhead associated with handling a large number of rules, whereas the proposed approach eliminates the need to hand-code and debug the rules. This approach is also highly resistant to errors by the speaker or by the speech recognizer, because it depends on a few semantically important words rather than the entire word sequence. Furthermore, the system can be improved easily and automatically by later re-training the classification trees on a new set of training data. The thesis discusses a speech understanding system built at McGill University using the DARPA-sponsored Air Travel Information System (ATIS) task as training corpus and testbed.
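
A minimal sketch of the robust-matching idea: train a classification tree on the presence of a few semantically important words and predict an utterance's meaning category. The keyword list, categories and training pairs are hypothetical, and scikit-learn's DecisionTreeClassifier stands in for the tree-growing algorithms discussed in the thesis.

    from sklearn.tree import DecisionTreeClassifier

    KEYWORDS = ["flight", "flights", "fare", "fares", "from", "to", "cheapest", "airline"]

    def featurize(utterance):
        """Binary presence features for a few semantically important words."""
        words = utterance.lower().split()
        return [int(k in words) for k in KEYWORDS]

    # hypothetical ATIS-style training utterances and meaning categories
    train_texts = ["show me flights from boston to denver",
                   "what is the cheapest fare to dallas",
                   "which airline flies from montreal to toronto",
                   "list fares from denver to atlanta"]
    train_labels = ["list_flights", "get_fare", "list_airlines", "get_fare"]

    tree = DecisionTreeClassifier().fit([featurize(t) for t in train_texts], train_labels)
    print(tree.predict([featurize("uh show flights boston to uh denver please")]))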
430

A probabilistic min-max tree

Kamoun, Olivier January 1992
MIN-MAX trees have been studied for thirty years as models of game trees in artificial intelligence. Judea Pearl introduced a popular probabilistic model that assigns random independent and identically distributed values to the leaves. Among the dependent models, incremental models assume that terminal values are computed as sums of edge values on the path from the root to a leaf. We study a special case, called the SUM model, where the edge values follow a Bernoulli distribution with mean p. Let $V_n$ be the root's value of a complete b-ary, n-level SUM tree. We prove that $E[V_n]/n$ tends to a uniformly continuous function $\mathcal{V}(p)$. Surprisingly, $\mathcal{V}(p)$ is very nonlinear and has some flat parts. More formally, for all b, there exist $\alpha, \beta \in (0, 1)$ such that: if $p \in [0, \alpha]$, then $E[V_n]$ has a finite limit; if $p \in [1-\alpha, 1]$, then $n - E[V_n]$ has a finite limit; and if $p \in [\beta, 1-\beta]$, then $E[V_n]/n$ tends to $1/2$. Finally, $\alpha$ and $\beta$ tend to zero as b tends to infinity.
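
A small simulation of the SUM model makes the statement concrete: a complete b-ary minimax tree with n levels, Bernoulli(p) edge values summed along root-to-leaf paths, and a Monte Carlo estimate of $E[V_n]/n$. Treating the root as a max node and using a direct exponential-time recursion are simplifying assumptions for illustration only.

    import random

    def sum_tree_value(level, n, b, p, is_max, path_sum):
        """Minimax value of a complete b-ary SUM tree: each leaf gets the sum
        of Bernoulli(p) edge values along its root-to-leaf path, and internal
        nodes alternate between max and min."""
        if level == n:
            return path_sum
        children = [sum_tree_value(level + 1, n, b, p, not is_max,
                                   path_sum + (1 if random.random() < p else 0))
                    for _ in range(b)]
        return max(children) if is_max else min(children)

    def estimate(n=10, b=2, p=0.3, trials=200):
        """Monte Carlo estimate of E[V_n] / n, taking the root as a max node."""
        total = sum(sum_tree_value(0, n, b, p, True, 0) for _ in range(trials))
        return total / (trials * n)

    if __name__ == "__main__":
        for p in (0.1, 0.3, 0.5, 0.7, 0.9):
            print(p, round(estimate(p=p), 3))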
