  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Modeling and control of closed kinematic chains: A singular perturbation approach

Wang, Zhiyong January 2005 (has links)
Closed kinematic chains (CKCs) are constrained multibody systems that contain closed kinematic loops. Nowadays, CKCs are used in a variety of applications ranging from flight simulators to medical instruments, and are becoming increasingly popular in the machine-tool industry and haptic interfaces due to their better performance in terms of accuracy, rigidity and payload capacity as compared to open-chain mechanisms. This document intends to present a novel methodology for the modeling and control of general CKCs. The dynamics of CKCs are characterized by index-3 differential algebraic equations (DAEs). Dynamic models in the form of DAEs pose difficulties in model-based control because most existing control design techniques are devised for explicit state space models. The control methodology presented in this document is based on a singular perturbation formulation (SPF), which has attractive properties including the minimum dimension of its slow dynamics and the large validity domain that contains the entire singularity-free workspace of the CKCs. The key issue of the model approximation error is addressed under different stability conditions. Explicit error bounds are derived and sufficient conditions for the exponential convergence of the approximation errors are established. For the control of CKCs, our approach transfers the control of the original DAE system to the control of an artificially created singularly perturbed system. Compared to control methods which directly solve the nonlinear algebraic constraint equations, the proposed method uses an ODE solver to obtain the dependent coordinates, hence eliminating the need for Newton type iterations and is amenable to real-time implementation. The closed loop system, when controlled by typical open kinematic chain schemes, achieves asymptotic trajectory tracking. The efficacy of the approach is illustrated by simulating the dynamics of a CKC mechanism, the Rice Planar Delta Robot, and then by validating the simulation results with experimental data. Thus, this work establishes a framework in which the control of CKCs can be systematically addressed.
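Below is a minimal sketch of the general singular-perturbation idea described in this abstract: the algebraic constraint of a DAE is replaced by artificial fast dynamics so that an ordinary ODE solver, rather than Newton iterations, keeps the dependent coordinates on the constraint. The toy system, variable names and the value of the perturbation parameter are assumptions for illustration only; this is not the thesis's CKC formulation or the Rice Planar Delta Robot model.

```python
# Toy singular-perturbation treatment of a DAE: the constraint 0 = g(x, z) is replaced
# by fast dynamics eps * dz/dt = -g(x, z), and the whole system is integrated with an
# implicit ODE solver (the fast dynamics make the system stiff). The model below
# (x' = -x + z with constraint z = x**2) is invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

EPS = 1e-3  # perturbation parameter; smaller values track the constraint more tightly

def g(x, z):
    """Constraint residual; the exact constraint is satisfied when g == 0."""
    return z - x**2

def perturbed_rhs(t, y):
    x, z = y
    dx = -x + z            # "slow" dynamics
    dz = -g(x, z) / EPS    # "fast" dynamics driving the constraint residual to zero
    return [dx, dz]

sol = solve_ivp(perturbed_rhs, (0.0, 5.0), [0.5, 0.0], method="Radau")
x, z = sol.y
print("final constraint violation |g| =", abs(g(x[-1], z[-1])))
```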
142

Feedback heuristics for hard combinatorial optimization problems

Puttlitz, Markus E. 08 1900 (has links)
No description available.
143

Adaptive control of epileptic seizures using reinforcement learning

Guez, Arthur January 2010 (has links)
This thesis presents a new methodology for automatically learning an optimal neurostimulation strategy for the treatment of epilepsy. The technical challenge is to automatically modulate neurostimulation parameters, as a function of the observed field potential recording, so as to minimize the frequency and duration of seizures. The methodology leverages recent techniques from the machine learning literature, in particular the reinforcement learning paradigm, to formalize this optimization problem. We present an algorithm which is able to learn an adaptive neurostimulation strategy directly from labeled training data acquired from animal brain tissues. Our results suggest that this methodology can be used to automatically find a stimulation strategy which effectively reduces the incidence of seizures, while also minimizing the amount of stimulation applied. This work highlights the crucial role that modern machine learning techniques can play in the optimization of treatment strategies for patients with chronic disorders such as epilepsy.
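The following is a minimal tabular Q-learning sketch of the trade-off the abstract describes: choosing a stimulation intensity from the current brain state so as to suppress seizures while penalizing stimulation. The three-state environment, transition probabilities and reward weights are invented assumptions; the thesis learns from labeled field-potential recordings, not from this toy simulator.

```python
# Toy Q-learning sketch: states and dynamics are invented stand-ins for field-potential
# observations; the reward penalizes seizures and, more mildly, stimulation itself.
import numpy as np

rng = np.random.default_rng(0)
STATES = ["normal", "pre-seizure", "seizure"]   # hypothetical discretized brain states
ACTIONS = [0.0, 0.5, 1.0]                       # hypothetical stimulation intensities

def step(state, action):
    """Toy stochastic dynamics: stimulation lowers the chance of escalating."""
    p_escalate = max(0.0, 0.6 - 0.5 * action)
    if state == 2:                               # a seizure tends to resolve on its own
        next_state = 0 if rng.random() < 0.7 else 2
    else:
        next_state = min(state + 1, 2) if rng.random() < p_escalate else max(state - 1, 0)
    reward = -5.0 * (next_state == 2) - 0.3 * action   # penalize seizures and stimulation
    return next_state, reward

Q = np.zeros((len(STATES), len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
state = 0
for t in range(50_000):
    a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, r = step(state, ACTIONS[a])
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

for s, name in enumerate(STATES):
    print(f"{name:12s} -> stimulate at intensity {ACTIONS[int(np.argmax(Q[s]))]}")
```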
144

Analysis of a delay differential equation model of a neural network

Olien, Leonard January 1995 (has links)
In this thesis I examine a delay differential equation model for an artificial neural network with two neurons. Linear stability analysis is used to determine the stability region of the stationary solutions. They can lose stability through either a pitchfork or a supercritical Hopf bifurcation. It is shown that, for appropriate parameter values, an interaction takes place between the pitchfork and Hopf bifurcations. Conditions are found under which the set of initial conditions that converge to a stable stationary solution is open and dense in the function space. Analytic results are illustrated with numerical simulations.
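The abstract does not reproduce the equations, so the sketch below simulates a commonly used two-neuron delay model of this general form, x1' = -x1 + a·tanh(x2(t-τ)), x2' = -x2 + b·tanh(x1(t-τ)); the specific equations, weights and delay are assumptions for illustration, and a fixed-step Euler scheme with a history buffer stands in for a proper DDE integrator.

```python
# Simulate a two-neuron delay differential equation of the kind analyzed in the thesis.
# Model form and parameters are assumed for illustration only.
import numpy as np

a, b = -2.0, 1.5          # hypothetical connection weights
tau = 1.0                 # hypothetical transmission delay
dt, T = 1e-3, 50.0
n_delay = int(round(tau / dt))
n_steps = int(round(T / dt))

N = n_delay + n_steps + 1
x = np.zeros((N, 2))
x[: n_delay + 1] = [0.1, -0.05]            # constant initial history on [-tau, 0]

for k in range(n_delay, N - 1):
    x1_d, x2_d = x[k - n_delay]            # delayed states x(t - tau)
    dx1 = -x[k, 0] + a * np.tanh(x2_d)
    dx2 = -x[k, 1] + b * np.tanh(x1_d)
    x[k + 1] = x[k] + dt * np.array([dx1, dx2])

# Sustained oscillations here would be consistent with crossing a Hopf bifurcation;
# decay to a constant indicates convergence to a stable stationary solution.
print("amplitude over last 10 time units:", np.ptp(x[-int(10 / dt):, 0]))
```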
145

Building a model for a 3D object class in a low dimensional space for object detection

Gill, Gurman January 2009 (has links)
Modeling 3D object classes requires accounting for intra-class variations in an object's appearance under different viewpoints, scale and illumination conditions. Therefore, detecting instances of 3D object classes in the presence of background clutter is difficult. This thesis presents a novel approach to model generic 3D object classes and an algorithm to detect multiple instances of an object class in an arbitrary image. Motivated by the parts-based representation, the proposed approach divides the object into different spatial regions. Each spatial region is associated with an object part whose appearance is represented by a dense set of overlapping SIFT features. The distribution of these features is then described in a lower dimensional space using supervised Locally Linear Embedding. Each object part is essentially represented by a spatial cluster in the embedding space. For viewpoint invariance, the view-sphere comprising the 3D object is divided into a discrete number of view segments. Several spatial clusters represent the object in each view segment. This thesis provides a framework for representing these clusters in either single or multiple embedding spaces. A novel aspect of the proposed approach is that all object parts and the background class are represented in the same lower dimensional space. Thus the detection algorithm can explicitly label features in an image as belonging to an object part or background. Additionally, spatial relationships between object parts are established and employed during the detection stage to localize instances of the object class in a novel image. It is shown that detecting objects based on measuring spatial consistency between object parts is superior to a bag-of-words model that ignores all spatial information. Since generic object classes can be characterized by shape or appearance, this thesis has formulated a method to combine these attributes to enhance the object model. Class-specific local contour features
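Below is a simplified sketch of the parts-based pipeline described above: local descriptors for each part are embedded in a low-dimensional space, each part becomes a cluster there, and a query descriptor is labeled by its nearest part or background centroid. Random vectors stand in for dense SIFT descriptors, and scikit-learn's standard (unsupervised) LocallyLinearEmbedding stands in for the supervised variant used in the thesis, so this is an illustrative simplification rather than the actual method.

```python
# Parts-based labeling sketch: embed part descriptors, summarize each part as a
# centroid in the embedding space, and assign query descriptors to the nearest part.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)
PARTS = ["part_0", "part_1", "background"]     # hypothetical part labels
D = 128                                        # SIFT descriptor dimensionality

# Fake training descriptors: one blob of 128-D vectors per part.
train_X = np.vstack([rng.normal(loc=3.0 * i, scale=1.0, size=(200, D))
                     for i in range(len(PARTS))])
train_y = np.repeat(np.arange(len(PARTS)), 200)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=5)
emb = lle.fit_transform(train_X)

# Each part is summarized by the centroid of its cluster in the embedding space.
centroids = np.stack([emb[train_y == i].mean(axis=0) for i in range(len(PARTS))])

def label_descriptor(desc):
    """Assign a query descriptor to the nearest part/background cluster."""
    e = lle.transform(desc.reshape(1, -1))[0]   # out-of-sample embedding
    return PARTS[int(np.argmin(np.linalg.norm(centroids - e, axis=1)))]

print(label_descriptor(rng.normal(loc=0.0, scale=1.0, size=D)))   # likely "part_0"
```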
146

Spectral models for color vision

Skaff, Sandra January 2009 (has links)
This thesis introduces a maximum entropy approach to model surface reflectance spectra. A reflectance spectrum is the amount of light, relative to the incident light, reflected from a surface at each wavelength. While the color of a surface can be in 3D vector form such as RGB, CMY, or YIQ, this thesis takes the surface reflectance spectrum to be the color of a surface. A reflectance spectrum is a physical property of a surface and does not vary with the different interactions a surface may undergo with its environment. Therefore, models of reflectance spectra can be used to fuse camera sensor responses from different images of the same surface or multiple surfaces of the same scene. This fusion improves the spectral estimates that can be obtained, and thus leads to better estimates of surface colors. The motivation for using a maximum entropy approach stems from the fact that surfaces observed in our everyday life surroundings typically have broad and therefore high entropy spectra. The maximum entropy approach, in addition, imposes the fewest constraints as it estimates surface reflectance spectra given only camera sensor responses. This is a major advantage over the widely used linear basis function spectral representations, which require a prespecified set of basis functions. Experimental results show that surface spectra of Munsell and construction paper patches can be successfully estimated using the maximum entropy approach in the case of three different surface interactions with the environment. First, in the case of changes in illumination, the thesis shows that the spectral models estimated are comparable to those obtained from the best spectral-modeling approach in the literature. Second, in the case of changes in the positions of surfaces with respect to each other, interreflections between the surfaces arise. Results show that the fusion of sensor responses from interreflection
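The sketch below illustrates the core estimation idea from this abstract: among all reflectance spectra consistent with the observed camera sensor responses, pick the one with maximum entropy. The wavelength grid, Gaussian sensor sensitivities and "true" spectrum are invented assumptions, and a generic constrained optimizer is used; the thesis's exact formulation may differ.

```python
# Maximum entropy spectrum estimation subject to matching camera sensor responses.
# All sensor curves and the reference spectrum below are toy data for illustration.
import numpy as np
from scipy.optimize import minimize

wl = np.linspace(400, 700, 61)                       # wavelength grid in nm
def gaussian(center, width):                         # toy camera sensitivity curves
    return np.exp(-0.5 * ((wl - center) / width) ** 2)
S = np.stack([gaussian(c, 40.0) for c in (450, 550, 620)])   # "B", "G", "R" sensors

true_r = 0.2 + 0.6 * gaussian(600, 80.0)             # hypothetical broad reddish spectrum
c = S @ true_r                                        # observed sensor responses

def neg_entropy(r):
    r = np.clip(r, 1e-12, None)
    return np.sum(r * np.log(r))                      # minimizing this maximizes entropy

res = minimize(
    neg_entropy,
    x0=np.full(wl.size, 0.5),
    method="SLSQP",
    bounds=[(1e-6, 1.0)] * wl.size,
    constraints=[{"type": "eq", "fun": lambda r: S @ r - c}],  # match sensor responses
)
r_hat = res.x
print("sensor responses reproduced:", np.allclose(S @ r_hat, c, rtol=1e-3))
```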
147

Metric learning revisited: new approaches for supervised and unsupervised metric learning with analysis and algorithms

Abou-Moustafa, Karim January 2012 (has links)
In machine learning one is usually given a data set of real high dimensional vectors X, based on which it is desired to select a hypothesis θ from the space of hypotheses Θ using a learning algorithm. An immediate assumption that is usually imposed on X is that it is a subset of the very general embedding space R^p, which makes the Euclidean distance ∥•∥2 the default metric for the elements of X. Since various learning algorithms assume that the input space is R^p with its endowed metric ∥•∥2 as a (dis)similarity measure, it follows that selecting the hypothesis θ becomes intrinsically tied to the Euclidean distance. Metric learning is the problem of selecting a specific metric dX from a certain family of metrics D based on the properties of the elements in the set X. Under some performance measure, the metric dX is expected to perform better on X than any other metric d ∈ D. If the learning algorithm replaces the very general metric ∥•∥2 with the metric dX, then selecting the hypothesis θ will be tied to the more specific metric dX, which carries all the information on the properties of the elements in X. In this thesis I propose two algorithms for learning the metric dX; the first for supervised learning settings, and the second for unsupervised, as well as for supervised and semi-supervised settings. In particular, I propose algorithms that take into consideration the structure and geometry of X on one hand, and the characteristics of real world data sets on the other. However, if we are also seeking dimensionality reduction, then under some mild assumptions on the topology of X, and based on the available a priori information, one can learn an embedding for X into a low dimensional Euclidean space R^{p0}, p0 ≪ p, where the Euclidean distance better reveals the similarities between the elements of X and their groupings (clusters). That is, as a by-product, we obtain dimensionality reduction together with metric learning. In the supervised setting, I propose PARDA, or Pareto discriminant analysis, for discriminative linear dimensionality reduction. PARDA is based on the machinery of multi-objective optimization; simultaneously optimizing multiple, possibly conflicting, objective functions. This allows PARDA to adapt to the class topology in the lower dimensional space, and naturally handles the class masking problem that is inherent in Fisher's discriminant analysis framework for multiclass problems. As a result, PARDA yields significantly better classification results when compared with modern techniques for discriminative dimensionality reduction. In the unsupervised setting, I propose an algorithmic framework, denoted by ??, that encapsulates spectral manifold learning algorithms and gears them for metric learning. The framework ?? captures the local structure and the local density information from each point in a data set, and hence it carries all the information on the varying sample density in the input space. The structure of ?? induces two distance metrics for its elements, the Bhattacharyya-Riemann metric dBR and the Jeffreys-Riemann metric dJR. Both metrics reorganize the proximity between the points in X based on the local structure and density around each point. As a result, when combining the metric space (??, dBR) or (??, dJR) with spectral clustering and Euclidean embedding, they yield significant improvements in clustering accuracies and error rates for a large variety of clustering and classification tasks.
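A simplified reading of the unsupervised metric idea above can be sketched as follows: represent each point by a local Gaussian fitted to its nearest neighbors, compare points with the Bhattacharyya distance between those Gaussians, and feed the resulting affinities to spectral clustering. This is an illustrative approximation of the dBR construction, not the thesis's actual framework, and the data set and neighborhood size are assumptions.

```python
# Local-Gaussian metric sketch: k-NN neighborhoods define per-point Gaussians, the
# Bhattacharyya distance between Gaussians defines the metric, and spectral clustering
# runs on the induced affinities.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import SpectralClustering

X, y_true = make_blobs(n_samples=200, centers=3, cluster_std=[0.5, 1.5, 0.8], random_state=0)
k = 15
_, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)

# Local Gaussian (mean, covariance) around every point, regularized for stability.
means = np.stack([X[i].mean(axis=0) for i in idx])
covs = np.stack([np.cov(X[i], rowvar=False) + 1e-6 * np.eye(X.shape[1]) for i in idx])

def bhattacharyya(m1, c1, m2, c2):
    c = 0.5 * (c1 + c2)
    dm = m1 - m2
    term1 = 0.125 * dm @ np.linalg.solve(c, dm)
    term2 = 0.5 * np.log(np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return term1 + term2

n = len(X)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = bhattacharyya(means[i], covs[i], means[j], covs[j])

labels = SpectralClustering(n_clusters=3, affinity="precomputed").fit_predict(np.exp(-D))
print("cluster sizes:", np.bincount(labels))
```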
148

Text classification using labels derived from structured knowledge representations

Perreault, Mathieu January 2012 (has links)
Structured knowledge representations are becoming central to the area of Information Science. Search engine companies have said that constructing an entity graph is the key to classifying their enormous corpus of documents in order to provide more relevant results to their users. Our work presents WikiLabel, a novel approach to text classification using ontological knowledge. We match a document's terms to Wikipedia entities and use, amongst other measures, the shortest-path distance from each entity to a given Wikipedia category to determine which label should be associated with the document. In the second part of our work, we use the obtained labels to train a supervised machine learning text classification algorithm, an approach we call SuperWikiLabel. We gather a dataset of news articles and obtain high-confidence labels from human coders to evaluate the performance of WikiLabel and SuperWikiLabel. We find that WikiLabel's performance is on par with other methods, and SuperWikiLabel is comparable to the performance of a traditional supervised method, where the document corpus is coded by humans. Our work suggests that it may be possible to largely eliminate the human coding efforts in a given text classification task, and we claim that our approach is more flexible and convenient than the usual methods of obtaining a labeled training document set, which often comes at great expense.
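The scoring idea behind WikiLabel can be sketched as follows: measure the shortest-path distance from each entity matched in a document to every candidate category and pick the closest category. The graph, entities and candidate labels below are a toy Wikipedia-like stand-in invented for illustration, not data from the thesis.

```python
# Toy WikiLabel-style scoring: label = candidate category with the smallest mean
# shortest-path distance to the entities matched in a document.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Lionel Messi", "Football players"),
    ("Football players", "Association football"),
    ("Association football", "Sports"),
    ("NASDAQ", "Stock exchanges"),
    ("Stock exchanges", "Finance"),
    ("Finance", "Economics"),
    ("Sports", "Main topic classifications"),
    ("Economics", "Main topic classifications"),
])
CANDIDATE_LABELS = ["Sports", "Economics"]

def wikilabel(matched_entities):
    """Return the candidate category with the smallest mean shortest-path distance."""
    def score(label):
        dists = [nx.shortest_path_length(G, entity, label) for entity in matched_entities]
        return sum(dists) / len(dists)
    return min(CANDIDATE_LABELS, key=score)

# Entities that would have been matched in hypothetical news articles:
print(wikilabel(["Lionel Messi"]))        # -> "Sports"
print(wikilabel(["NASDAQ", "Finance"]))   # -> "Economics"
```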
149

Computational Intelligent Systems: Evolving Dynamic Bayesian Networks

Osunmakinde, Isaac 01 December 2009 (has links)
Dynamic Bayesian Networks (DBNs) are temporal probabilistic models for reasoning over time. They often form the core reasoning component of intelligent systems in the field of machine learning. Recent studies have focused on the development of particular DBNs such as Hidden Markov Models (HMMs) and their variants, which are explicitly represented by highly skilled users and have gained popularity in speech recognition. These varieties of HMMs represented as DBNs have contributed to the baseline of temporal modelling. However, they are limited in their expressive power, rely on approximations, and pose difficult challenges for users choosing the appropriate model for diverse real-life applications. To make matters worse, researchers and practitioners have stressed that applications often have difficulty evolving (or learning) such network models from environments captured as massive datasets, because the underlying problem is computationally intensive (NP-hard). Finding solutions to these challenges is a difficult task. In this thesis, a new class of temporal probabilistic modelling, called evolving dynamic Bayesian networks (EDBN), is proposed and demonstrated to make the technology accessible to both experts and non-experts, such as industrial practitioners, decision-makers and researchers. Dynamic Bayesian Networks are ideally suited to achieving situation awareness, in which elements in the environment must be perceived within a volume of time and space, their meaning understood, and their status predicted in the near future. The use of Dynamic Bayesian Networks in achieving situation awareness has been poorly explored in current research efforts. This research evolves DBNs fully automatically from any environment captured as a multivariate time series (MTS), which minimizes approximations and mitigates the challenge of model selection. This potentially accommodates both highly skilled users and non-expert practitioners, and opens up diverse real-world application areas for DBNs. The architecture of our EDBN uses a combined strategy that resolves two orthogonal issues: (1) evolving DBNs in the absence of domain experts and (2) mitigating the computational intensity (NP-hardness) of the problem through economical scalability. Most notably, the major contributions of this thesis are as follows: the development of a new class of temporal probabilistic modelling (EDBN), whose architecture facilitates the demonstration of its emergent situation awareness (ESA) and emergent future situation awareness (EFSA) technologies. The ESA and its variant reveal hidden patterns over current and future time steps respectively. Among other contributions are: the development and integration of an economically scalable framework called dynamic memory management in adaptive learning (DMMAL) into the architecture of the EDBN to evolve such network models from environments captured as massive datasets; the design of configurable agent actuators, adaptive operators and representative partitioning algorithms which support the scalability framework; the formal development and optimization of a genetic algorithm (GA) to evolve optimal Bayesian networks from datasets, with emphasis on backtracking avoidance; and diverse applications of EDBN technologies such as business intelligence, revealing trends in insulin doses for medical patients, water quality management, project profitability analysis and sensor networks.
To ensure the universality and reproducibility of our architecture, we methodically conducted experiments using varied real-life datasets and publicly available machine learning datasets mostly from the University of California Irvine (UCI) repository.
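A heavily simplified sketch of the genetic-algorithm contribution mentioned above is given below: a GA evolves a Bayesian network structure scored by BIC on synthetic binary data. To keep the example short, a fixed node ordering is assumed (node i may only take parents among nodes 0..i-1), which guarantees acyclic structures without any backtracking; the thesis's EDBN/DMMAL machinery, operators and scoring details are not reproduced, and the data are invented.

```python
# GA over candidate parent->child edges (fixed node ordering), fitness = BIC score.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data with a chain dependency X0 -> X1 -> X2 -> X3.
N = 2000
X0 = rng.integers(0, 2, N)
X1 = (X0 ^ (rng.random(N) < 0.1)).astype(int)
X2 = (X1 ^ (rng.random(N) < 0.1)).astype(int)
X3 = (X2 ^ (rng.random(N) < 0.1)).astype(int)
data = np.column_stack([X0, X1, X2, X3])
n_vars = data.shape[1]
EDGES = [(p, c) for c in range(n_vars) for p in range(c)]   # candidate parent -> child edges

def bic(genome):
    """BIC score of the structure encoded by the genome (one bit per candidate edge)."""
    score = 0.0
    for child in range(n_vars):
        parents = [p for (p, c), bit in zip(EDGES, genome) if c == child and bit]
        for combo in itertools.product([0, 1], repeat=len(parents)):
            rows = np.all(data[:, parents] == combo, axis=1) if parents else np.ones(N, bool)
            n_j = rows.sum()
            if n_j == 0:
                continue
            for k in (0, 1):
                n_jk = np.sum(data[rows, child] == k)
                if n_jk > 0:
                    score += n_jk * np.log(n_jk / n_j)       # log-likelihood term
        score -= 0.5 * np.log(N) * (2 ** len(parents))       # complexity penalty
    return score

pop = rng.integers(0, 2, size=(30, len(EDGES)))              # random initial population
for generation in range(40):
    fitness = np.array([bic(g) for g in pop])
    parents = pop[np.argsort(fitness)[-10:]]                  # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, len(EDGES))                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(len(EDGES)) < 0.1                   # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([bic(g) for g in pop])]
print("learned edges:", [e for e, bit in zip(EDGES, best) if bit])  # likely (0,1), (1,2), (2,3)
```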
150

Introspective multistrategy learning: constructing a learning strategy under reasoning failure

Cox, Michael Thomas 05 1900 (has links)
No description available.
