1 |
Intelligent Ad Resizing. Badali, Anthony Paul (15 December 2009)
Currently, online advertisements are created for specific dimensions and must be laboriously modified by advertisers to support different aspect ratios. In addition, publishers are constrained to design web pages to accommodate this limited set of sizes.
As an alternative, we present a framework for automatically generating visual banners at arbitrary sizes based on individual prototype ads. This technique can be used to create flexible visual ads that can be resized to accommodate various aspect ratios. In the proposed framework, image and text data are stored separately. Resizing involves selecting a sub-region of the original image and updating text parameters (size and position). This problem is posed within an optimization framework that encourages solutions that maintain important structural properties of the original ad. The method can be applied to advertisements containing a wide variety of imagery and provides significantly more flexibility than existing solutions.
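The crop-selection step of such an optimization lends itself to a compact sketch. The following is a minimal illustration only, assuming a hypothetical per-pixel importance map (e.g. from a saliency detector); the scoring function, scan step, and the omitted text-layout variables are all assumptions, not the author's actual objective.

```python
# Minimal sketch of the crop-selection step, assuming a hypothetical
# per-pixel importance map; NOT the author's actual objective.
import numpy as np

def select_crop(importance, target_w, target_h, step=8):
    """Scan crops matching the target aspect ratio and keep the one with
    the highest mean importance (a stand-in for the structural terms)."""
    H, W = importance.shape
    best, best_score = None, -np.inf
    for h in range(16, H + 1, step):
        w = int(round(h * target_w / target_h))  # enforce target aspect ratio
        if w > W:
            break
        for y in range(0, H - h + 1, step):
            for x in range(0, W - w + 1, step):
                score = importance[y:y + h, x:x + w].mean()
                if score > best_score:
                    best, best_score = (x, y, w, h), score
    return best  # text size/position would then be re-optimized inside this crop

crop = select_crop(np.random.rand(300, 600), target_w=728, target_h=90)
```

A full objective in the spirit of the abstract would add the text parameters as decision variables and penalty terms for the structural properties to be preserved.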
|
2 |
Development of Energy-based Damage and Plasticity Models for Asphalt Concrete Mixtures. Onifade, Ibrahim (January 2017)
Characterizing the full range of damage and plastic behaviour of asphalt mixtures under varying strain rates and stress states is a complex and challenging task. This is partly due to the strain-rate and temperature dependence of the material, as well as the variation in the properties of the constituent materials that make up the composite asphalt mixture. Existing stress-based models for asphalt concrete materials are developed from mechanics principles, but their application to actual pavement analysis and design is limited, since rate-dependency parameters are needed in the constitutive model to account for the influence of strain rate on the stress-based yield and evolution criteria. To date, there are no simple and comprehensive constitutive models that can describe the behaviour of asphalt mixtures over the wide range of strain rates experienced in actual pavement sections. The aim of this thesis is to develop an increased understanding of the strength and deformation mechanisms of asphalt mixtures through multi-scale modeling, and to develop simple and comprehensive continuum models that characterize the non-linear behaviour of the material under varying stress states and conditions. An analysis framework is developed for evaluating the influence of asphalt mixture morphology on its mechanical properties and response using X-ray CT and digital image processing techniques. The procedure developed in the analysis framework is then used to investigate the existence of an invariant critical energy threshold for meso-crack initiation, which serves as the basis for a theory of energy-based damage and plastic deformation in asphalt mixtures. A new energy-based viscoelastic damage model is proposed, grounded in continuum damage mechanics (CDM) and the thermodynamics of irreversible processes. A second-order damage tensor is introduced to account for the distributed damage in the material in the different principal damage directions. In this way, the material response in tension and compression can be decoupled, and the effects of both tension and compression stress states on the material behaviour can be accounted for adequately. Building on the findings from the energy-based damage model, an equivalent micro-crack stress approach is proposed for the damage and fracture characterization of asphalt mixtures. The effective micro-crack stress approach accounts for the material stiffness and a critical energy threshold for micro-crack initiation when characterizing the damage and fracture properties of the mixture. It is derived from fundamental mechanics principles, and for purely elastic materials it reduces to Griffith's energy balance criterion without requiring the surface energy or a crack size to determine the fracture stress. A new Continuum Plasticity Mechanics (CPM) model is developed within the framework of thermodynamics to describe the plastic behaviour of asphalt concrete, with energy-based criteria derived for the initiation and evolution of plastic deformation. An internal state variable, termed the "plasticity variable", is introduced to describe the distributed dislocation movement in the microstructure.
The CPM model unifies aspects of existing elasto-plastic and visco-plastic theories in a single theory and is particularly strong in modeling the rate-dependent plastic behaviour of materials without requiring rate-dependency parameters in the constitutive relationships. The CPM model is further extended to consider the reduction in stiffness under incremental loading and to develop a unified energy-based damage and plasticity model. The models are implemented in a finite element (FE) analysis program for validation. The results show that the energy-based damage and plastic deformation models are capable of predicting the behaviour of asphalt concrete mixtures under varying stress states and strain-rate conditions. The work in this thesis provides a basis for a more fundamental understanding of asphalt concrete response and for the application of sound solid-mechanics principles in the analysis and design of pavement structures.
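As a rough illustration of the kind of relations described above (the notation and specific forms are assumptions, not taken from the thesis), a CDM formulation with a second-order damage tensor typically couples an effective-stress map with an energy threshold for damage growth:

```latex
% Schematic only; symbols are assumed, not the thesis's notation.
% Effective stress under a second-order damage tensor D (I = identity):
\tilde{\boldsymbol{\sigma}} = (\mathbf{I} - \mathbf{D})^{-1} \boldsymbol{\sigma}
% Energy-based initiation criterion: damage (or plastic flow) evolves
% only once the stored energy density W reaches a critical threshold:
f = W - W_{\mathrm{cr}} \ge 0 \quad \Rightarrow \quad \dot{\mathbf{D}} \neq \mathbf{0}
```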
|
3 |
Are Particle-Based Methods the Future of Sampling in Joint Energy Models? A Deep Dive into SVGD and SGLD. Shah, Vedant Rajiv (19 August 2024)
This thesis investigates the integration of Stein Variational Gradient Descent (SVGD) with Joint Energy Models (JEMs), comparing its performance to Stochastic Gradient Langevin Dynamics (SGLD). We incorporated a generative loss term with an entropy component to enhance diversity, and a smoothing factor to mitigate the numerical instability commonly associated with the energy function in energy-based models. Experiments on the CIFAR-10 dataset demonstrate that SGLD, particularly with Sharpness-Aware Minimization (SAM), outperforms SVGD in classification accuracy. However, SVGD without SAM, despite its lower classification accuracy, exhibits lower calibration error, underscoring its potential for developing the well-calibrated classifiers required in safety-critical applications. Our results emphasize the importance of adaptively tuning the SVGD smoothing factor (α) to balance generative and classification objectives. This thesis highlights the trade-offs between computational cost and performance, with SVGD demanding significant resources. Our findings stress the need for adaptive scaling and robust optimization techniques to enhance the stability and efficacy of JEMs. This thesis lays the groundwork for exploring more efficient and robust sampling techniques within the JEM framework, offering insights into the integration of SVGD with JEMs. / Master of Science / This thesis explores advanced techniques for improving machine learning models, with a focus on developing well-calibrated and robust classifiers. We concentrated on two methods, Stein Variational Gradient Descent (SVGD) and Stochastic Gradient Langevin Dynamics (SGLD), and evaluated their effectiveness in enhancing classification accuracy and reliability. Our research introduced a new mathematical approach to improve the stability and performance of Joint Energy Models (JEMs). By leveraging the generative capabilities of SVGD, the model is guided to learn better data representations, which are crucial for robust classification. Using the CIFAR-10 image dataset, we confirmed prior research indicating that SGLD, particularly when combined with an optimization method called Sharpness-Aware Minimization (SAM), delivered the best results in terms of accuracy and stability. Notably, SVGD without SAM, despite yielding slightly lower classification accuracy, exhibited significantly lower calibration error, making it particularly valuable for safety-critical applications. However, SVGD required careful tuning of hyperparameters and substantial computational resources. This study lays the groundwork for future efforts to enhance the efficiency and reliability of these advanced sampling techniques, with the overarching goal of improving classifier calibration and robustness with JEMs.
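For reference, one SVGD update moves a set of particles along a kernelized transport direction. The sketch below (plain numpy, RBF kernel with the common median heuristic) is a generic SVGD step, not the thesis's implementation; `score_fn`, the step size, and where the smoothing factor α enters are assumptions. For a JEM, `score_fn(x)` would be the negative energy gradient, computed by autodiff.

```python
# Generic SVGD step (Liu & Wang, 2016); hyper-parameters are assumptions.
import numpy as np

def rbf_kernel(x):
    # Pairwise squared distances and the median-heuristic bandwidth.
    diff = x[:, None, :] - x[None, :, :]          # diff[j, i] = x_j - x_i
    sq = np.sum(diff ** 2, axis=-1)
    h = np.median(sq) / max(np.log(len(x) + 1), 1e-8)
    k = np.exp(-sq / (h + 1e-8))
    # Repulsive term: sum over j of grad_{x_j} k(x_j, x_i), one row per i.
    grad_k = -(2.0 / (h + 1e-8)) * (k[..., None] * diff).sum(axis=0)
    return k, grad_k

def svgd_step(x, score_fn, step=1e-2):
    """x: (n, d) particles; score_fn: grad of log-density (n, d) -> (n, d)."""
    k, grad_k = rbf_kernel(x)
    phi = (k @ score_fn(x) + grad_k) / len(x)     # driving + repulsive terms
    return x + step * phi
```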
|
4 |
Robot semantic place recognition based on deep belief networks and a direct use of tiny images. Hasasneh, Ahmad (23 November 2012)
Usually, human beings are able to quickly distinguish between different places solely from their visual appearance. This is because they can organize their space as composed of discrete units. These units, called "semantic places", are characterized by their spatial extent and their functional unity. Such a semantic category can thus be used as contextual information which fosters object detection and recognition. Recent works in semantic place recognition seek to endow the robot with similar capabilities. Contrary to classical localization and mapping work, this problem is usually addressed as a supervised learning problem. Semantic place recognition in robotics, the ability to recognize the semantic category of the place to which a scene belongs, is therefore a major requirement for the future of autonomous robotics. An autonomous service robot must indeed be able to recognize the environment in which it lives and to easily learn the organization of this environment in order to operate and interact successfully. To achieve that goal, different methods have already been proposed: some based on the identification of objects as a prerequisite to the recognition of scenes, and some based on a direct description of scene characteristics. If we make the hypothesis that objects are more easily recognized when the scene in which they appear is identified, the second approach seems more suitable. It is, however, strongly dependent on the nature of the image descriptors used, which are usually derived empirically from general considerations on image coding. Compared to these many proposals, another approach to image coding, based on a more theoretical point of view, has emerged in the last few years. Energy-based models of feature extraction, built on the principle of minimizing an energy function that measures the quality of the image reconstruction, have led to Restricted Boltzmann Machines (RBMs), which can code an image as the superposition of a limited number of features taken from a larger alphabet. It has also been shown that this process can be repeated in a deep architecture, leading to a sparse and efficient representation of the initial data in the feature space. A complex classification problem in the input space is thus transformed into an easier one in the feature space. This approach has been successfully applied to the identification of tiny images from MIT's database of 80 million images. In the present work, we demonstrate that semantic place recognition can be achieved on the basis of tiny images instead of conventional Bag-of-Words (BoW) methods, using Deep Belief Networks (DBNs) for image coding. We show that, after appropriate coding, a softmax regression in the projection space is sufficient to achieve promising classification results. To our knowledge, this approach has not yet been investigated for scene recognition in autonomous robotics. We compare our methods with state-of-the-art algorithms using a standard robot localization database, study the influence of system parameters, and compare different conditions on the same dataset. These experiments show that our proposed model, while being very simple, leads to state-of-the-art results on a semantic place recognition task.
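As a reference point for the image-coding layer, a binary RBM trained with one step of contrastive divergence looks roughly like the following; the layer sizes, learning rate, and the CD-1 choice are illustrative assumptions, not the thesis's configuration.

```python
# Minimal CD-1 update for a binary RBM, the building block of a DBN.
# Sizes and hyper-parameters are illustrative, not the thesis's setup.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b, c, lr=0.01):
    """One contrastive-divergence step on a batch of visible vectors v0."""
    h0 = sigmoid(v0 @ W + c)                     # P(h=1 | v0)
    h_sample = (rng.random(h0.shape) < h0) * 1.0
    v1 = sigmoid(h_sample @ W.T + b)             # mean-field reconstruction
    h1 = sigmoid(v1 @ W + c)
    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)  # positive minus negative phase
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)
    return W, b, c

# Tiny-image setup: e.g. 32x32x3 images flattened to 3072 visible units.
W = 0.01 * rng.standard_normal((3072, 256)); b = np.zeros(3072); c = np.zeros(256)
```

Stacking such layers greedily yields the DBN representation on which the softmax regression described above is trained.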
|
5 |
Algorithmes d'apprentissage pour la recommandation [Learning algorithms for recommendation]. Bisson, Valentin (09 1900)
The information age we have entered brings a whole new set of challenges across many fields. Making computers process this profuse information is one such challenge, and this thesis focuses on techniques for automatically filtering and recommending to users items that will fit their tastes, in the somewhat unusual context of an online multi-player game. Our objective is to predict players' ratings of the game's levels. We first introduce the machine learning concepts necessary to understand the two architectures we then describe, both of which take advantage of deep learning and unsupervised pre-training to solve the recommendation problem. The first architecture is a multi-layer neural network, for which we try to explain the varying performance we report across experiments with different settings of depth, training heuristics, and unsupervised pre-training methods, namely plain, denoising, and contractive auto-encoders. The second architecture takes its roots in energy-based models; we likewise give possible explanations for the various results it yields depending on the configurations we experimented with. Finally, we describe two improvements to this second architecture: a successful supervised fine-tuning phase following the unsupervised pre-training, and a further attempt in which this fine-tuning uses a semi-supervised, multi-task training criterion. Our experiments show promising results, especially with the architecture inspired by energy-based models, justifying the use of deep learning algorithms to solve the recommendation problem.
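For concreteness, the denoising pre-training compared above can be sketched as follows; the tied weights, masking noise, squared-error loss, and all sizes are assumptions for illustration, not the thesis's exact setup.

```python
# Minimal denoising auto-encoder step (tied weights, masking noise,
# squared error); an illustrative sketch, not the thesis's configuration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_step(x, W, b, c, lr=0.05, corruption=0.3):
    """One gradient step: encode a corrupted input, reconstruct the clean one."""
    n = len(x)
    x_tilde = x * (rng.random(x.shape) >= corruption)  # masking noise
    h = sigmoid(x_tilde @ W + b)                       # encode corrupted input
    y = sigmoid(h @ W.T + c)                           # decode
    dy = (y - x) * y * (1 - y)                         # loss targets the CLEAN x
    dh = (dy @ W) * h * (1 - h)
    W -= lr * (x_tilde.T @ dh + dy.T @ h) / n          # tied-weight gradient
    b -= lr * dh.mean(axis=0)
    c -= lr * dy.mean(axis=0)
    return W, b, c

W = 0.01 * rng.standard_normal((784, 128)); b = np.zeros(128); c = np.zeros(784)
```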
|
6 |
Improved training of energy-based models. Kumar, Rithesh (06 1900)
No description available.
|
7 |
Efficient and Scalable Subgraph Statistics using Regenerative Markov Chain Monte Carlo. Kakodkar, Mayank (26 April 2022)
In recent years there has been growing interest in data mining and graph machine learning in techniques that can obtain frequencies of k-node Connected Induced Subgraphs (k-CISs) contained in large real-world graphs. While recent work has shown that 5-CISs can be counted exactly, no exact polynomial-time algorithms are known that solve this task for k > 5. In the past, sampling-based algorithms have been proposed that work well in moderately-sized graphs for k ≤ 8. In this thesis I push this boundary up to k ≤ 16 for graphs containing up to 120M edges, and to k ≤ 25 for smaller graphs containing between one million and 20M edges. I do so by re-imagining two older but elegant and memory-efficient algorithms, FANMOD and PSRW, which have large estimation errors by modern standards: FANMOD produces highly correlated k-CIS samples, and sampling the PSRW Markov chain becomes prohibitively expensive for CIS sizes above k = 8.
In this thesis, I introduce:
(a) RTS: a novel regenerative Markov chain Monte Carlo (MCMC) sampling procedure on the tree generated on-the-fly by the FANMOD algorithm. RTS is able to run on multiple cores and multiple machines (embarrassingly parallel) and to compute confidence intervals of estimates, all while preserving the memory-efficient nature of FANMOD. RTS is thus able to estimate subgraph statistics for k ≤ 16 on larger graphs containing up to 120M edges, and for k ≤ 25 on smaller graphs containing between one million and 20M edges.
(b) R-PSRW: which scales the PSRW algorithm to larger CIS sizes using a rejection sampling procedure to efficiently sample transitions from the PSRW Markov chain. R-PSRW matches RTS in terms of scaling to larger CIS sizes.
(c) Ripple: which achieves unprecedented scalability by stratifying the R-PSRW Markov chain state space into ordered strata via a new technique that I call sequential stratified regeneration. I show that the Ripple estimator is consistent, highly parallelizable, and scales well. Ripple is able to count CISs of size up to k ≤ 12 in real-world graphs containing up to 120M edges.
My empirical results show that the proposed methods offer a considerable improvement over the state of the art. Moreover, my methods are able to run at a scale that has until now been considered unreachable, not only by prior MCMC-based methods but also by other sampling approaches.
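The common thread of RTS and Ripple, estimation from independent regenerative tours, can be illustrated generically. The toy chain, statistic, and confidence-interval construction below are a schematic of the regeneration idea, not the thesis's subgraph samplers: runs between returns to a regeneration state are i.i.d. "tours", so tour-level averages admit standard confidence intervals.

```python
# Schematic regenerative estimator: i.i.d. tours between returns to a
# regeneration state give a ratio estimate of E_pi[f] with a CI.
import numpy as np

rng = np.random.default_rng(0)

def run_tour(step_fn, f, start):
    """Run the chain from the regeneration state until it returns,
    accumulating the statistic f over the tour (which includes `start`)."""
    total, length = f(start), 1
    state = step_fn(start)
    while state != start:
        total += f(state)
        length += 1
        state = step_fn(state)
    return total, length

def regenerative_estimate(step_fn, f, start, n_tours=1000):
    sums, lens = zip(*(run_tour(step_fn, f, start) for _ in range(n_tours)))
    est = np.sum(sums) / np.sum(lens)              # ratio estimator of E_pi[f]
    d = np.array(sums) - est * np.array(lens)      # tour-level deltas
    se = d.std(ddof=1) * np.sqrt(n_tours) / np.sum(lens)
    return est, 1.96 * se                          # estimate, 95% CI half-width

# Toy usage: lazy random walk on {0,...,9}; estimate pi(5).
step = lambda s: min(9, max(0, s + rng.choice([-1, 0, 1])))
print(regenerative_estimate(step, lambda s: float(s == 5), start=0))
```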
Optimization of Restricted Boltzmann Machines. In addition, I propose a regenerative transformation of MCMC samplers for Restricted Boltzmann Machines (RBMs). My approach, Markov Chain Las Vegas (MCLV), gives statistical guarantees in exchange for random running times. MCLV uses a stopping set built from the training data and a maximum Markov-chain step count K (the method is referred to as MCLV-K). I present an MCLV-K gradient estimator (LVS-K) for RBMs and explore the correspondences and differences between LVS-K and Contrastive Divergence (CD-K). LVS-K significantly outperforms CD-K in the task of training RBMs on the MNIST dataset, indicating that MCLV is a promising direction for learning generative models.
|