261 |
Adaptive control of epileptic seizures using reinforcement learning. Guez, Arthur. January 2010
This thesis presents a new methodology for automatically learning an optimal neurostimulation strategy for the treatment of epilepsy. The technical challenge is to automatically modulate neurostimulation parameters, as a function of the observed field potential recording, so as to minimize the frequency and duration of seizures. The methodology leverages recent techniques from the machine learning literature, in particular the reinforcement learning paradigm, to formalize this optimization problem. We present an algorithm which is able to learn an adaptive neurostimulation strategy directly from labeled training data acquired from animal brain tissues. Our results suggest that this methodology can be used to automatically find a stimulation strategy which effectively reduces the incidence of seizures, while also minimizing the amount of stimulation applied. This work highlights the crucial role that modern machine learning techniques can play in the optimization of treatment strategies for patients with chronic disorders such as epilepsy.
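The reinforcement-learning formulation described above can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the thesis's actual algorithm or data: the discretized field-potential states, the two-action stimulation choice, the transition probabilities, and the reward weights are all assumptions. The reward simply penalizes both seizure activity and the amount of stimulation applied, mirroring the stated objective.

```python
import random

# Toy Q-learning sketch: states are discretized field-potential readings,
# actions are stimulation on/off, and the reward penalizes both seizures
# and stimulation. All names and dynamics are hypothetical.
STATES = ["normal", "pre-seizure", "seizure"]
ACTIONS = [0, 1]          # 0 = no stimulation, 1 = stimulate
STIM_COST = 0.1           # penalty per stimulation, discouraging overuse

def reward(state, action):
    r = -1.0 if state == "seizure" else 0.0
    return r - STIM_COST * action

def step(state, action, rng):
    # Hypothetical dynamics: stimulating during "pre-seizure" usually
    # averts the seizure; otherwise the state tends to worsen.
    if state == "pre-seizure" and action == 1 and rng.random() < 0.9:
        return "normal"
    if state == "normal":
        return "pre-seizure" if rng.random() < 0.3 else "normal"
    if state == "pre-seizure":
        return "seizure" if rng.random() < 0.7 else "pre-seizure"
    return "normal" if rng.random() < 0.5 else "seizure"

def q_learn(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = "normal"
        for _ in range(50):
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = step(s, a, rng)
            target = reward(s, a) + gamma * max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q
```

After learning, the Q-values should prefer stimulating in the pre-seizure state (where it averts a costly seizure) and not stimulating in the normal state (where it only incurs the stimulation cost), which is the trade-off the abstract describes.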
|
262 |
Applicability of advanced computational networks to the modelling of complex geometry. Côté, Brendan. January 2000
This thesis describes a research effort directed at producing a computational model based on artificially intelligent cellular automata. This model was developed for the purpose of learning a mapping from an input space to an output space. A specific problem that occurs in the mining industry was used to develop and test the model's ability to learn the mapping between a three-dimensional input volume and a three-dimensional output volume. In this case, the mapping was a consequence of the industrial processes used in mining as well as the properties of the material being mined.

Three main computational tools were combined in this work to form the complete mine stope prediction model. The three modules are a learning module, an optimisation module, and an overall network architecture. The overall network architecture is a 3-D lattice of cellular automata (CA) and has the capability to implicitly capture the complexities in shape that render other types of models arduous or inapplicable. The learning module uses a Discrete Time Cellular Neural Network (DTCNN) to store and recall information about a given mapping. The optimisation module uses the Simulated Annealing (SA) algorithm to perform a non-linear optimisation on the set of weights used by the DTCNN.

Variations of the model, and different experiments, were performed to test and explore the model in depth. Concepts such as "Small-Worlds" and "Forgetting Factor" were investigated. The applicability of a Partial Least Squares (PLS) model as an alternative to the DTCNN transition rule was also explored.
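The optimisation module described above can be illustrated with a minimal simulated-annealing sketch: SA fits the two weights of a toy local transition rule to synthetic data. The rule, the data, and the cooling schedule below are assumptions for illustration, not the thesis's DTCNN or its mine-stope mapping.

```python
import math
import random

def rule(weights, cell, neighbors):
    # Toy cell-update rule: weighted sum of the cell and its neighbours.
    return weights[0] * cell + weights[1] * sum(neighbors)

def loss(weights, samples):
    # Mean squared error of the rule over (cell, neighbors, target) triples.
    return sum((rule(weights, c, n) - t) ** 2 for c, n, t in samples) / len(samples)

def anneal(samples, steps=5000, t0=1.0, seed=0):
    """Simulated annealing over the two rule weights."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best, best_loss = list(w), loss(w, samples)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-6   # linear cooling
        cand = [wi + rng.gauss(0, 0.1) for wi in w]
        d = loss(cand, samples) - loss(w, samples)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if d < 0 or rng.random() < math.exp(-d / temp):
            w = cand
            if loss(w, samples) < best_loss:
                best, best_loss = list(w), loss(w, samples)
    return best, best_loss

# Synthetic data generated from known weights (0.5, 0.25), so annealing
# should recover a low-loss solution.
rng = random.Random(1)
data = []
for _ in range(50):
    c = rng.uniform(-1, 1)
    n = [rng.uniform(-1, 1) for _ in range(4)]
    data.append((c, n, 0.5 * c + 0.25 * sum(n)))
```

In the thesis the search space is the full DTCNN weight set rather than two scalars, but the accept/reject structure of the annealer is the same idea.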
|
263 |
Improving continuous speech recognition with automatic multiple pronunciation support. Snow, Charles. January 1997
Conventional computer speech recognition systems use models of speech acoustics and the language of the recognition task in order to perform recognition. For all but trivial recognition tasks, sub-word units are modeled, typically phonemes. Recognizing words then requires a pronunciation dictionary (PD) to specify how each word is pronounced in terms of the units modeled. Even if the acoustic modeling component is perfect, the recognizer will still be prone to misrecognition, most often because the speaker can use a pronunciation other than that in the PD. This different pronunciation may be due to the speaker being a non-native speaker of the language being recognized, having 'mispronounced' the word, coarticulatory effects, recognizer errors in phoneme hypothesization, or any combination of these. One way to overcome these misrecognitions is to use a dynamic PD, able to acquire new pronunciations for words as they are encountered and misrecognized. The thesis examines the following questions: can automated methods be found that produce reliable alternate pronunciations? If so, does augmenting a PD (which originally contains only canonical pronunciations) with these alternate pronunciations lead to improved recognizer performance? It shows that, using even simple methods, average reductions in word error rate of at least 45% are possible, even with speakers who are not native speakers of the recognition task language.
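The dynamic pronunciation dictionary described above can be sketched as a word-to-pronunciation-set mapping that accepts alternates at runtime. The words and phoneme strings below are illustrative inventions, not data from the thesis.

```python
# Minimal dynamic pronunciation dictionary: each word maps to a set of
# pronunciations (tuples of phonemes), and misrecognized utterances can
# contribute alternates after the fact.
class PronunciationDict:
    def __init__(self, canonical):
        # canonical: {word: phoneme tuple}, one canonical entry per word.
        self.prons = {w: {tuple(p)} for w, p in canonical.items()}

    def add_alternate(self, word, phones):
        """Record an alternate pronunciation observed for a word."""
        self.prons.setdefault(word, set()).add(tuple(phones))

    def matches(self, word, phones):
        """True if the hypothesized phoneme string matches any stored pronunciation."""
        return tuple(phones) in self.prons.get(word, set())

pd = PronunciationDict({"tomato": ("t", "ah", "m", "ey", "t", "ow")})
variant = ("t", "ah", "m", "aa", "t", "ow")   # hypothetical regional variant
rejected = pd.matches("tomato", variant)       # not yet in the PD
pd.add_alternate("tomato", variant)
accepted = pd.matches("tomato", variant)       # accepted after augmentation
```

The open question the thesis studies is which automatically generated alternates are reliable enough to be worth adding, since indiscriminate augmentation can increase confusability between words.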
|
264 |
On visual maps and their automatic construction. Sim, Robert. January 2004
This thesis addresses the problem of automatically constructing a visual representation of an unknown environment that is useful for robotic navigation, localization and exploration. There are two main contributions. First, the concept of the visual map, a representation of the visual structure of the environment, is developed, and a framework for learning this structure is provided. Second, methods for automatically constructing a visual map are presented for the case when limited information is available about the position of the camera during data collection.

The core concept of this thesis is that of the visual map, which models a set of image-domain features extracted from a scene. These are initially selected using a measure of visual saliency, and subsequently modelled and evaluated for their utility for robot pose estimation. Experiments are conducted demonstrating the feature learning process and the inferred models' reliability for pose inference.

The second part of this thesis addresses the problem of automatically collecting training images and constructing a visual map. First, it is shown that visual maps are self-organizing in nature, and the transformation between the image and pose domains is established with minimal prior pose information. Second, it is shown that visual maps can be constructed reliably in the face of uncertainty by selecting an appropriate exploration strategy. A variety of such strategies are presented and these approaches are validated experimentally in both simulated and real-world settings.
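A crude way to picture pose inference with a visual map: each landmark stores the poses from which it was observed, and a query image's pose is estimated from the stored poses of its matched landmarks. This averaging scheme is a toy stand-in for the learned feature models in the thesis, not its actual algorithm; the landmark names and poses are hypothetical.

```python
# Toy visual map: landmark id -> list of (x, y) poses from which that
# landmark was observed during training.
def estimate_pose(visual_map, matched_ids):
    """Average the poses associated with the matched landmarks."""
    poses = [p for i in matched_ids for p in visual_map[i]]
    n = len(poses)
    return (sum(x for x, _ in poses) / n, sum(y for _, y in poses) / n)

vmap = {
    "corner_a": [(0.0, 0.0), (0.2, 0.0)],
    "door_b": [(1.0, 1.0)],
}
pose = estimate_pose(vmap, ["corner_a", "door_b"])  # roughly (0.4, 0.33)
```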
|
265 |
Detection of faulty components in object-oriented systems using design metrics and a machine learning algorithm. Ikonomovski, Stefan V. January 1998
Object-Oriented (OO) technology promises faster development and higher software quality than the procedural paradigm. Product quality is the single most important factor determining a product's acceptance and success. The basic project management problem is "delivery of a product with targeted quality, within the budget, and on schedule". We propose a state-of-the-art approach that moves closer to a solution by improving the software development process. An important objective in all software development is to ensure that the delivered product is as fault-free as possible. We propose three hypotheses relating the OO design properties of inheritance, cohesion, and coupling to fault-proneness as an indicator of software quality. We built classification models that predict which components are likely to be faulty, based on an appropriate suite of OO design measures. The models represent empirical evidence that the aforementioned relationships exist. We used the C4.5 machine learning algorithm as a predictive modeling technique, because it is robust, reliable, and allows intelligible interpretation of the results. We defined three new measures that quantify the specific contribution of each of the metrics selected by the model(s), and also provide a deeper insight into the design structure of the product. We evaluated the quality of the predictive models against an objective set of standards; the resulting models are of high quality.
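At the core of C4.5 is an entropy-based split criterion. The sketch below computes the information gain of thresholding a single design metric on a toy fault-proneness dataset; the metric values, labels, and thresholds are illustrative, not the thesis's data.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of 0/1 fault labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gain(samples, threshold):
    """Information gain of splitting (metric, is_faulty) samples at metric <= threshold."""
    labels = [y for _, y in samples]
    left = [y for x, y in samples if x <= threshold]
    right = [y for x, y in samples if x > threshold]
    n = len(samples)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - cond

# Toy data: hypothetical coupling metric vs. faultiness; components with
# low coupling are fault-free here, so a split at 5 separates perfectly.
data = [(1, 0), (2, 0), (3, 0), (8, 1), (9, 1), (10, 1)]
gain = info_gain(data, 5)   # perfect split: gain equals the class entropy, 1.0
```

C4.5 greedily chooses the metric and threshold maximizing (a normalized form of) this gain at each tree node, which is what makes the resulting models interpretable: each path spells out the metric thresholds that flag a component as fault-prone.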
|
266 |
Analysis of a delay differential equation model of a neural network. Olien, Leonard. January 1995
In this thesis I examine a delay differential equation model for an artificial neural network with two neurons. Linear stability analysis is used to determine the stability region of the stationary solutions. They can lose stability through either a pitchfork or a supercritical Hopf bifurcation. It is shown that, for appropriate parameter values, an interaction takes place between the pitchfork and Hopf bifurcations. Conditions are found under which the set of initial conditions that converge to a stable stationary solution is open and dense in the function space. Analytic results are illustrated with numerical simulations.
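A numerical simulation of this kind of two-neuron delay differential equation can be sketched with forward Euler and a history buffer. The specific form below (Hopfield-type units with a shared delay tau) and all parameter values are assumptions for illustration, not necessarily the thesis's exact model; with small connection gains the zero stationary solution is stable, so trajectories decay to it.

```python
import math

def simulate(a12, a21, tau, dt=0.01, t_end=200.0):
    """Integrate u1' = -u1 + a12*tanh(u2(t - tau)),
                 u2' = -u2 + a21*tanh(u1(t - tau))
    by forward Euler, returning the final state (u1, u2)."""
    lag = int(round(tau / dt))
    # Constant initial history on [-tau, 0].
    h1 = [0.5] * (lag + 1)
    h2 = [-0.3] * (lag + 1)
    for _ in range(int(t_end / dt)):
        u1, u2 = h1[-1], h2[-1]
        d1, d2 = h1[-(lag + 1)], h2[-(lag + 1)]   # delayed values
        h1.append(u1 + dt * (-u1 + a12 * math.tanh(d2)))
        h2.append(u2 + dt * (-u2 + a21 * math.tanh(d1)))
    return h1[-1], h2[-1]
```

Sweeping the gains and the delay in such a simulation is the standard way to visualize the pitchfork and Hopf boundaries that the linear stability analysis predicts.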
|
267 |
Cellular-automata based nonlinear adaptive controllers. Bolduc, Jean-Sébastien. January 1998
An analytical approach is practical only when we want to study nonlinear systems of low complexity. An alternative for more complex processes that has attracted much interest in recent years relies on Artificial Neural Networks (ANNs).

In this work we explore an alternative avenue to the problems of control and identification, where Cellular Automata (CAs) are considered in place of ANNs. CAs not only share ANNs' most valuable characteristics but also have interesting characteristics of their own, for a structurally simpler architecture. CA applications so far have been mainly restricted to simulating natural phenomena occurring in a finite homogeneous space.

Concepts relevant to the problems of control and identification are introduced in the first part of this work. CAs are then introduced, with a discussion of the issues raised by their application in this context. A working prototype of a CA-based controller is presented in the last part of the work, confirming the interest of using CAs to address the problem of nonlinear adaptive control. (Abstract shortened by UMI.)
|
268 |
Continuous function identification with fuzzy cellular automata. Ratitch, Bohdana. January 1998
Thus far, cellular automata have been used primarily to model systems consisting of a number of elements which interact with one another only locally; in addition, these interactions can be naturally modeled with a discrete type of computation. In this thesis, we investigate the possibility of applying cellular automata to a problem involving continuous computations, in particular the identification of a continuous function from a set of examples. A fuzzy model of cellular automata is introduced and used throughout the study. Two main issues in the context of this application are addressed: the representation of real values on a cellular automata structure, and a technique which provides cellular automata with a capacity for learning. An algorithm for encoding real values into cellular automata state configurations and a gradient-descent learning algorithm for fuzzy cellular automata are proposed. A fuzzy cellular automaton learning system was implemented in software and its performance studied experimentally. The effects of several of the system's parameters on its performance were examined. Fuzzy cellular automata demonstrated the capability to carry out complex continuous computations and to perform supervised gradient-based learning.
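A toy version of a fuzzy (continuous-state) cellular automaton with a learnable local rule: each cell becomes a convex combination of itself and the mean of its two neighbours, and the mixing weight w is fitted by gradient descent on a squared error, echoing the gradient-based learning described above. The rule, data, and learning rate are illustrative stand-ins, not the thesis's system.

```python
def ca_step(cells, w):
    """One update of a ring of continuous-valued cells: each cell mixes
    its own state with the mean of its two neighbours."""
    n = len(cells)
    return [w * cells[i] + (1 - w) * 0.5 * (cells[i - 1] + cells[(i + 1) % n])
            for i in range(n)]

def fit_weight(inputs, targets, lr=0.5, iters=200):
    """Gradient descent on the mean squared error of one CA step."""
    w = 0.0
    for _ in range(iters):
        grad = 0.0
        for cells, target in zip(inputs, targets):
            out = ca_step(cells, w)
            n = len(cells)
            for i in range(n):
                # d out_i / d w = cell value minus the neighbour mean.
                dw = cells[i] - 0.5 * (cells[i - 1] + cells[(i + 1) % n])
                grad += 2 * (out[i] - target[i]) * dw
        w -= lr * grad / sum(len(c) for c in inputs)
    return w

# Training pairs generated by the rule with a known weight 0.7, so
# gradient descent should recover it.
xs = [[0.1, 0.9, 0.4, 0.6], [0.8, 0.2, 0.5, 0.3]]
ys = [ca_step(x, 0.7) for x in xs]
w_hat = fit_weight(xs, ys)
```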
|
269 |
Building a model for a 3D object class in a low dimensional space for object detection. Gill, Gurman. January 2009
Modeling 3D object classes requires accounting for intra-class variations in an object's appearance under different viewpoints, scales and illumination conditions. Therefore, detecting instances of 3D object classes in the presence of background clutter is difficult. This thesis presents a novel approach to model generic 3D object classes and an algorithm to detect multiple instances of an object class in an arbitrary image. Motivated by the parts-based representation, the proposed approach divides the object into different spatial regions. Each spatial region is associated with an object part whose appearance is represented by a dense set of overlapping SIFT features. The distribution of these features is then described in a lower dimensional space using supervised Locally Linear Embedding. Each object part is essentially represented by a spatial cluster in the embedding space. For viewpoint invariance, the view-sphere surrounding the 3D object is divided into a discrete number of view segments. Several spatial clusters represent the object in each view segment. This thesis provides a framework for representing these clusters in either single or multiple embedding spaces. A novel aspect of the proposed approach is that all object parts and the background class are represented in the same lower dimensional space. Thus the detection algorithm can explicitly label features in an image as belonging to an object part or to the background. Additionally, spatial relationships between object parts are established and employed during the detection stage to localize instances of the object class in a novel image. It is shown that detecting objects based on measuring spatial consistency between object parts is superior to a bag-of-words model that ignores all spatial information. Since generic object classes can be characterized by both shape and appearance, this thesis formulates a method to combine these attributes to enhance the object model.
Class-specific local contour featur
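The spatial-consistency idea above can be sketched as follows: part detections in an image are scored by how well their pairwise offsets agree with the model's expected offsets, something a bag-of-words model, which discards positions, cannot do. The part names, offsets, and tolerance are hypothetical.

```python
import math

def spatial_consistency(detections, expected_offsets, tol=5.0):
    """Fraction of part pairs whose observed image offset is within tol
    (in pixels) of the model's expected offset.
    detections: {part: (x, y)}; expected_offsets: {(p, q): (dx, dy)}."""
    pairs = list(expected_offsets.items())
    if not pairs:
        return 0.0
    ok = 0
    for (p, q), (ex, ey) in pairs:
        (x1, y1), (x2, y2) = detections[p], detections[q]
        if math.hypot((x2 - x1) - ex, (y2 - y1) - ey) <= tol:
            ok += 1
    return ok / len(pairs)

# Toy model: the "wheel" part should sit 40px right of and 20px below "door".
model = {("door", "wheel"): (40.0, 20.0)}
good = spatial_consistency({"door": (100, 50), "wheel": (141, 68)}, model)
bad = spatial_consistency({"door": (100, 50), "wheel": (30, 10)}, model)
```

A detector that thresholds such a consistency score rejects accidental co-occurrences of part-like features scattered over the background, which is the advantage over the bag-of-words baseline reported above.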
|
270 |
Spectral models for color vision. Skaff, Sandra. January 2009
This thesis introduces a maximum entropy approach to model surface reflectance spectra. A reflectance spectrum is the amount of light, relative to the incident light, reflected from a surface at each wavelength. While the color of a surface can be represented in 3D vector form such as RGB, CMY, or YIQ, this thesis takes the surface reflectance spectrum to be the color of a surface. A reflectance spectrum is a physical property of a surface and does not vary with the different interactions a surface may undergo with its environment. Therefore, models of reflectance spectra can be used to fuse camera sensor responses from different images of the same surface or of multiple surfaces of the same scene. This fusion improves the spectral estimates that can be obtained, and thus leads to better estimates of surface colors. The motivation for using a maximum entropy approach stems from the fact that surfaces observed in our everyday surroundings typically have broad and therefore high entropy spectra. The maximum entropy approach, in addition, imposes the fewest constraints, as it estimates surface reflectance spectra given only camera sensor responses. This is a major advantage over the widely used linear basis function spectral representations, which require a prespecified set of basis functions. Experimental results show that surface spectra of Munsell and construction paper patches can be successfully estimated using the maximum entropy approach in the case of three different surface interactions with the environment. First, in the case of changes in illumination, the thesis shows that the spectral models estimated are comparable to those obtained from the best spectral-modeling approach in the literature. Second, in the case of changes in the relative positions of surfaces, interreflections between the surfaces arise.
Results show that the fusion of sensor responses from interreflection
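The maximum-entropy principle above admits a compact numerical sketch: among all spectra consistent with the observed camera sensor responses, pick the one maximizing entropy. With linear sensors the solution has the exponential form s_j = exp(-1 + sum_i mu_i R[i][j]), and the dual variables mu can be found by gradient descent on the constraint residual. The sensor sensitivity curves and responses below are made-up illustrations, not the thesis's calibrated data.

```python
import math

def maxent_spectrum(R, c, lr=0.1, iters=20000):
    """R: m x n sensor sensitivities; c: m observed responses.
    Returns the maximum-entropy spectrum s (length n) with R s = c."""
    m, n = len(R), len(R[0])
    mu = [0.0] * m
    for _ in range(iters):
        # Spectrum implied by the current dual variables.
        s = [math.exp(-1 + sum(mu[i] * R[i][j] for i in range(m)))
             for j in range(n)]
        # The dual gradient is the constraint residual R s - c.
        for i in range(m):
            resid = sum(R[i][j] * s[j] for j in range(n)) - c[i]
            mu[i] -= lr * resid
    return [math.exp(-1 + sum(mu[i] * R[i][j] for i in range(m)))
            for j in range(n)]

# Two hypothetical sensors over five wavelength bins; responses are
# produced from a known broad spectrum, so a consistent solution exists.
R = [[0.9, 0.7, 0.4, 0.1, 0.0],
     [0.0, 0.2, 0.5, 0.8, 0.9]]
true_s = [0.5, 0.6, 0.5, 0.6, 0.5]
c = [sum(R[i][j] * true_s[j] for j in range(5)) for i in range(2)]
s_hat = maxent_spectrum(R, c)
```

Fusing responses from several images of the same surface simply adds more rows to R and entries to c, tightening the constraints on the recovered spectrum, which is the mechanism behind the improved estimates described above.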
|