  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Estimating the Market Risk Exposure through a Factor Model with Random Effects

Börjesson, Lukas January 2022 (has links)
In this thesis, we model the market risk exposure of 251 stocks in the S&P 500 index over the ten-year period from 2011-04-30 to 2021-03-31. The study brings to light a model rarely mentioned in the scientific literature on market risk estimation: the linear mixed model. The linear mixed model makes it possible to model time-varying market risk and to add structure to the idiosyncratic risk, which is often assumed to be a stationary process. The results show that the mixed model produces more accurate estimates of market risk than the baseline, here defined as a CAPM model. The success of the mixed model, which we refer to in this study as the ADAPT model (adaptive APT), most likely lies in its ability to form a hierarchical regression model. Rather than treating the observations as a single population, this lets us group them into clusters and thereby construct a time-varying exposure. In the last part of the thesis, we highlight possible improvements for future work that could make the estimation more accurate and more efficient.
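The hierarchical, time-varying exposure described above can be illustrated with a toy sketch. All data below is synthetic, and plain per-cluster OLS stands in for the full mixed-model machinery; it only shows why grouping observations into time clusters recovers a drifting beta that a single pooled CAPM regression averages away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: market returns and one stock whose true beta drifts
# over time, so a single pooled CAPM beta is a poor summary of exposure.
n_periods, n_obs = 5, 250                 # e.g. five "years" of daily returns
true_betas = np.linspace(0.8, 1.6, n_periods)

market = rng.normal(0.0, 0.01, size=(n_periods, n_obs))
stock = true_betas[:, None] * market + rng.normal(0.0, 0.005, size=(n_periods, n_obs))

# Pooled CAPM estimate: one beta for the whole sample.
X, y = market.ravel(), stock.ravel()
beta_pooled = (X @ y) / (X @ X)

# Grouped estimate: one beta per period (cluster), mimicking the
# hierarchical / time-varying structure a mixed model provides.
betas_grouped = np.array([(m @ s) / (m @ m) for m, s in zip(market, stock)])

print(f"pooled beta: {beta_pooled:.2f}")
print("per-period betas:", np.round(betas_grouped, 2))
```

The per-period estimates track the drifting exposure, while the pooled estimate collapses it to one number; a proper mixed model would additionally shrink the cluster betas toward a common mean.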
22

Broadcasting Correlated Gaussians

Feng, Junfeng 10 1900 (has links)
Broadcasting correlated Gaussians is one of the cases where separate source-channel coding is suboptimal. In this dissertation, we study the distortion region of sending correlated Gaussian sources over an AWGN broadcast channel (AWGN-BC) using a hybrid digital-analog coding approach, where each receiver wishes to reconstruct one source component subject to a mean squared error distortion constraint.

First, the problem of transmitting m independent Gaussian source components over an AWGN-BC is studied. We show that this problem setup is closely related to broadcasting correlated Gaussian sources with genie-aided receivers. Moreover, the separate source-channel coding approach is proven to be optimal in these setups.

Second, we consider two new scenarios in which three Gaussian source components are sent to three receivers, and find the achievable distortion regions for both. In the first scenario, the first two source components are correlated and independent of the third; in the second, the last two source components are correlated and independent of the first. Inner bounds based on hybrid digital-analog coding and outer bounds based on genie-aided arguments are proposed for both cases, and optimality is proven.

Finally, we study two cases where side information is present at one receiver. Hybrid digital-analog coding schemes are used and optimality is proven. / Master of Applied Science (MASc)
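For background, the classical point-to-point case can be checked numerically: for a single Gaussian source over an AWGN channel, uncoded (analog) transmission and optimal separate source-channel coding achieve the same MSE distortion, D = σ²/(1 + SNR). The suboptimality of separation discussed in this abstract appears only in the broadcast setting. The sketch below just evaluates the two standard formulas at an illustrative SNR:

```python
import numpy as np

sigma2 = 1.0          # source variance
snr = 10.0            # channel SNR (linear scale, illustrative value)

# Separation: channel capacity C = 0.5*log2(1+SNR) bits per channel use,
# Gaussian distortion-rate function D(R) = sigma^2 * 2^(-2R).
capacity = 0.5 * np.log2(1.0 + snr)
d_separate = sigma2 * 2.0 ** (-2.0 * capacity)

# Uncoded: scale the source to the channel power and take the MMSE
# estimate at the receiver, giving D = sigma^2 / (1 + SNR).
d_analog = sigma2 / (1.0 + snr)

print(f"separate coding distortion: {d_separate:.4f}")
print(f"analog (uncoded) distortion: {d_analog:.4f}")
```

Both evaluate to σ²/(1+SNR), confirming that separation incurs no loss in the point-to-point Gaussian setting.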
23

Gesture recognition using segmentation of dynamic hand images based on a mixture-of-Gaussians model and skin color

Hebert Luchetti Ribeiro 01 September 2006 (has links)
The purpose of this work is to develop a methodology able to recognize hand gestures from dynamic images in order to interact with systems. After image capture, segmentation takes place: pixels belonging to the hands are separated from the background using skin-color filtering and background subtraction. Image preprocessing can be applied before edge detection. The recognition algorithm uses edges only and is therefore fast enough for real time. The largest blob in the segmented image is considered the hand region. The detected regions are analyzed to determine the position and orientation of the hand in each frame. The position and other attributes of the hands are tracked frame by frame to distinguish hand movement from the background and from other moving objects, and to extract motion information for the recognition of dynamic gestures. Based on the collected position, motion and posture cues are computed to recognize a meaningful gesture.
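The two segmentation cues the abstract combines, background subtraction and a skin-color rule, can be sketched minimally as below. The thresholds and the RGB skin rule are illustrative assumptions; a real system would use a per-pixel Gaussian (or mixture-of-Gaussians) background model and a calibrated skin-color distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

h, w = 32, 32
background = np.full((h, w, 3), 60, dtype=float)          # static gray scene
frame = background + rng.normal(0, 2, size=(h, w, 3))     # camera noise

# Paint a synthetic "hand" patch with a skin-like RGB color.
frame[8:20, 10:22] = (190.0, 130.0, 100.0)

# Cue 1: background subtraction -- pixels far from the background model.
moving = np.abs(frame - background).sum(axis=2) > 30.0

# Cue 2: a crude skin-color rule in RGB (illustrative only).
r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (r - g > 15)

hand_mask = moving & skin
print("hand pixels found:", int(hand_mask.sum()))   # the 12x12 painted patch
```

The intersection of the two masks isolates the painted patch; in the thesis pipeline this mask would then feed the largest-blob selection and edge-based recognition steps.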
25

Structure adaptive stylization of images and video

Kyprianidis, Jan Eric January 2013 (has links)
In the early days of computer graphics, research was mainly driven by the goal of creating realistic synthetic imagery. By contrast, non-photorealistic computer graphics, established as its own branch of computer graphics in the early 1990s, is mainly motivated by concepts and principles found in traditional art forms, such as painting, illustration, and graphic design, and it investigates concepts and techniques that abstract from reality using expressive, stylized, or illustrative rendering. This thesis focuses on the artistic stylization of two-dimensional content and presents several novel automatic techniques for the creation of simplified stylistic illustrations from color images, video, and 3D renderings. The primary innovation of these techniques is that they use the smoothed structure tensor as a simple and efficient way to obtain information about the local structure of an image. More specifically, this thesis contributes to knowledge in this field in the following ways. First, a comprehensive review of the structure tensor is provided. In particular, different methods for integrating the minor eigenvector field of the smoothed structure tensor are developed, and the superiority of the smoothed structure tensor over the popular edge tangent flow is demonstrated. Second, separable implementations of the popular bilateral and difference-of-Gaussians filters that adapt to the local structure are presented. These filters avoid artifacts while being computationally highly efficient. Taken together, both provide an effective way to create a cartoon-style effect. Third, a generalization of the Kuwahara filter is presented that avoids artifacts by adapting the shape, scale, and orientation of the filter to the local structure. This causes directional image features to be better preserved and emphasized, resulting in overall sharper edges and a more feature-abiding painterly effect. In addition to the single-scale variant, a multi-scale variant is presented, which is capable of performing a highly aggressive abstraction. Fourth, a technique that builds upon the idea of combining flow-guided smoothing with shock filtering is presented, allowing for an aggressive exaggeration and an emphasis of directional image features. All presented techniques are suitable for temporally coherent per-frame filtering of video or dynamic 3D renderings, without requiring expensive extra processing such as optical flow. Moreover, they can be efficiently implemented to process content in real time on a GPU.
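The smoothed structure tensor at the heart of this thesis can be sketched compactly: per-pixel outer products of image gradients, spatially smoothed, after which the minor eigenvector gives the direction of least intensity change (the edge tangent). In the sketch below, a 3x3 box filter stands in for the Gaussian smoothing the thesis uses, and the test image is a simple step edge:

```python
import numpy as np

def smoothed_structure_tensor(img):
    # Gradients: np.gradient returns derivatives along axis 0 (rows) first.
    gy, gx = np.gradient(img.astype(float))

    def box3(a):  # crude 3x3 box smoothing, standing in for a Gaussian
        n, m = a.shape
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0

    exx, exy, eyy = box3(gx * gx), box3(gx * gy), box3(gy * gy)
    # Orientation of the major eigenvector (gradient direction); the minor
    # eigenvector (edge tangent) is perpendicular to it.
    theta_major = 0.5 * np.arctan2(2.0 * exy, exx - eyy)
    theta_minor = theta_major + np.pi / 2.0
    return exx, exy, eyy, theta_minor

# Vertical step edge: the minor eigenvector should point along the edge (90 deg).
img = np.zeros((9, 9))
img[:, 5:] = 1.0
_, _, _, theta_minor = smoothed_structure_tensor(img)
print(f"edge tangent at the step: {np.degrees(theta_minor[4, 5]):.1f} deg")
```

Integrating this minor eigenvector field yields the flow curves that guide the flow-based smoothing and shock-filtering steps described above.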
26

Regression Models of 3D Wakes for Propellers

Karlsson, Christian January 2018 (has links)
In this work, regression models for the wake field entering a propeller at a given axial and nominal position are proposed. Wakes are non-uniform flows that follow a body immersed in a viscous fluid. We propose models for the axial and tangential velocity distributions as functions of ship-hull and propeller measures. The regression models were built using Fourier series, with parameter estimates based on skewed-Gaussian and sine functions. The wake field is an important parameter in propeller design. The regression models are based on experimental data provided by the Rolls-Royce Hydrodynamic Research Center in Kristinehamn. We have also studied the flow in the axial velocity distribution in the propeller plane using the coherent structure coloring method, which studies coherent patterns by looking at fluid-particle kinematics. Using this type of analysis, we observed that the velocity distribution behaves kinematically similarly in the different regions of the wake distribution, which according to the coherent structure coloring indicates coherence.
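The Fourier-series building block used for these regression models can be sketched as a linear least-squares fit of a truncated harmonic expansion to a periodic wake profile (here, a synthetic axial velocity as a function of angular position in the propeller plane):

```python
import numpy as np

# Synthetic periodic wake profile standing in for measured axial velocity
# versus angular position (the real models use experimental data).
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
profile = 0.6 - 0.3 * np.cos(theta) + 0.1 * np.sin(2.0 * theta)

def fourier_design(theta, n_harmonics):
    # Design matrix [1, cos(k*theta), sin(k*theta)] for k = 1..n_harmonics.
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * theta), np.sin(k * theta)]
    return np.column_stack(cols)

A = fourier_design(theta, n_harmonics=3)
coeffs, *_ = np.linalg.lstsq(A, profile, rcond=None)
fit = A @ coeffs

print("mean (a0) coefficient:", round(float(coeffs[0]), 3))
print("max fit error:", float(np.max(np.abs(fit - profile))))
```

In the thesis, the Fourier coefficients themselves are then regressed on hull and propeller parameters, which is what makes the wake predictable for new designs.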
27

Efficient machine learning: theory and practice

Delalleau, Olivier 03 1900 (has links)
Despite constant progress in terms of available computational power, memory, and amount of data, machine learning algorithms need to be efficient in how they use these resources. Although minimizing cost is an obvious major concern, another motivation is to design algorithms that can learn as efficiently as intelligent species. This thesis tackles the problem of efficient learning through various papers dealing with a wide range of machine learning algorithms: the topic is seen both from the point of view of computational efficiency (processing power and memory required by the algorithms) and of statistical efficiency (number of samples necessary to solve a given learning task). The first contribution of this thesis is in shedding light on various statistical inefficiencies in existing algorithms. We show that decision trees do not generalize well on tasks with certain properties (chapter 3), and that a similar flaw affects typical graph-based semi-supervised learning algorithms (chapter 5); this flaw is a form of curse of dimensionality specific to each of these algorithms. For a subclass of neural networks, called sum-product networks, we prove that networks with a single hidden layer can be exponentially less efficient at representing certain functions than deep networks (chapter 4). Our analyses help better understand some inherent flaws of these algorithms and steer research towards approaches that may overcome them. We also exhibit computational inefficiencies in popular graph-based semi-supervised learning algorithms (chapter 5) as well as in the learning of mixtures of Gaussians with missing data (chapter 6). In both cases we propose new algorithms that make it possible to scale to much larger datasets. The last two chapters also deal with computational efficiency, but in different ways. Chapter 7 presents a new view on the contrastive divergence algorithm (which has been used for efficient training of restricted Boltzmann machines), providing additional insight into why this algorithm has been so successful. Finally, in chapter 8 we describe an application of machine learning to video games, where computational efficiency is tied to software and hardware engineering constraints which, although often ignored in research papers, are ubiquitous in practice.
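The contrastive divergence algorithm analyzed in chapter 7 admits a very short sketch: one CD-1 update for a tiny binary restricted Boltzmann machine. The sizes, learning rate, and data below are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 3
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

v0 = rng.integers(0, 2, size=n_visible).astype(float)   # one training vector

# Positive phase: hidden activations driven by the data.
p_h0 = sigmoid(v0 @ W + b_h)
h0 = (rng.random(n_hidden) < p_h0).astype(float)

# Negative phase: a single step of Gibbs sampling (the "1" in CD-1),
# instead of running the chain to equilibrium.
p_v1 = sigmoid(h0 @ W.T + b_v)
v1 = (rng.random(n_visible) < p_v1).astype(float)
p_h1 = sigmoid(v1 @ W + b_h)

# Update: difference between data-driven and model-driven statistics.
lr = 0.1
W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
b_v += lr * (v0 - v1)
b_h += lr * (p_h0 - p_h1)

print("updated W shape:", W.shape)
```

Truncating the Gibbs chain to one step is exactly the approximation whose surprising effectiveness the thesis sets out to explain.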
28

Analysis of cardiac arrest signals in emergency response with an automated defibrillator: peri-shock pause optimization and prediction of defibrillation efficacy

Ménétré, Sarah 02 November 2011 (has links)
Cardiac arrest is mainly of cardiovascular etiology. In the current context of out-of-hospital cardiac arrests, 20 to 25% of victims present with ventricular fibrillation, and only about 3 to 5% are saved without neurological damage. The chance of surviving an out-of-hospital cardiac arrest depends on early and fast support of the victim. First responders performing cardiopulmonary resuscitation combined with the use of a defibrillator are thus an important link in saving the victim. Our main objective is to improve the survival rate in out-of-hospital cardiac arrest cases. A first line of investigation is to propose an optimal defibrillator design that wisely combines the different embedded detection modules (ventricular fibrillation detection, chest compression detection, electromagnetic interference detection) in order to reduce the peri-shock pauses during the resuscitation procedure. During these pauses, known as "hands-off" pauses, no emergency action is provided to the patient, which correlates both with a drop in coronary perfusion pressure and with a decrease in the probability of successful defibrillation. That is why a second line of investigation is the prediction of shock efficacy. In this context, we propose to combine parameters extracted from the electrocardiogram in the time, frequency, and non-linear dynamics domains. A Bayesian classifier using a Gaussian mixture model was applied to the vectors of parameters most predictive of the defibrillation outcome, and the Expectation-Maximization algorithm was used to learn the parameters of the probabilistic model representing the class-conditional distributions. Together, the proposed methods reached promising results for both reducing peri-shock pauses and predicting defibrillation efficacy, in the hope of improving the survival rate after cardiac arrest.
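The Bayesian classifier described above can be sketched in simplified form: one Gaussian per class (a one-component mixture) fitted by maximum likelihood, with classification by the largest class-conditional log-density plus log-prior. The thesis fits full Gaussian mixtures with EM; the features and labels below are synthetic stand-ins for ECG-derived predictors of shock outcome.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two classes -- shock failure (0) vs success (1) -- with two synthetic features.
x0 = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
x1 = rng.normal([2.0, 1.5], 0.5, size=(200, 2))

def fit_gaussian(x):
    # Maximum-likelihood mean and covariance for one class.
    return x.mean(axis=0), np.cov(x, rowvar=False)

def log_density(x, mu, cov):
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(mu) * np.log(2 * np.pi))

params = [fit_gaussian(x0), fit_gaussian(x1)]
log_prior = np.log(0.5)   # equal class priors, an illustrative assumption

def classify(x):
    scores = [log_density(x, mu, cov) + log_prior for mu, cov in params]
    return int(np.argmax(scores))

print("point near the 'success' cluster ->", classify(np.array([2.1, 1.4])))
```

Replacing `fit_gaussian` with an EM-fitted mixture of several Gaussians per class gives the classifier the thesis actually uses, at the cost of an iterative fitting loop.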
