About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Modeling Structured Data with Invertible Generative Models

Lu, You 01 February 2022 (has links)
Data is complex and comes in a variety of structures and formats. Modeling datasets is a core problem in modern artificial intelligence. Generative models are machine learning models that describe datasets with probability distributions. Deep generative models combine deep learning with probability theory so that they can model complicated datasets with flexible architectures. They have become some of the most popular models in machine learning and have been applied to many problems. Normalizing flows are a novel class of deep generative models that allow efficient exact likelihood calculation, exact latent-variable inference, and sampling. They are constructed from functions whose inverses and Jacobian determinants can be computed efficiently. In this dissertation, we develop normalizing-flow-based generative models for complex datasets. In general, data can be categorized into unlabeled, labeled, and weakly labeled data, and we develop models for each of these three types. First, we develop Woodbury transformations: flow layers for general unsupervised normalizing flows that improve the flexibility and scalability of current flow-based models. Woodbury transformations achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity. In contrast with other operations used in state-of-the-art normalizing flows, Woodbury transformations enable (1) high-dimensional interactions, (2) efficient sampling, and (3) efficient likelihood evaluation; other similar operations, such as 1x1 convolutions, emerging convolutions, or periodic convolutions, allow at most two of these three advantages. In our experiments on multiple image datasets, we find that Woodbury transformations allow learning of higher-likelihood models than other flow architectures while still enjoying their efficiency advantages.
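The efficiency claim in the abstract rests on two classical identities. A minimal NumPy sketch (the dimensions, rank, and scaling below are invented for illustration, not taken from the thesis) shows how a low-rank-plus-identity transformation can be inverted and its log-determinant computed without factorizing the full d x d matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                      # data dimension, low rank (r << d)
U = 0.1 * rng.standard_normal((d, r))
V = 0.1 * rng.standard_normal((r, d))
W = np.eye(d) + U @ V             # low-rank-plus-identity transformation

z = rng.standard_normal(d)
y = W @ z                         # forward pass of the flow layer

# Woodbury identity: (I_d + UV)^-1 = I_d - U (I_r + VU)^-1 V
small = np.eye(r) + V @ U                       # r x r, cheap to invert
W_inv = np.eye(d) - U @ np.linalg.inv(small) @ V
z_rec = W_inv @ y                               # efficient inverse

# Sylvester's determinant identity: det(I_d + UV) = det(I_r + VU)
logdet_cheap = np.linalg.slogdet(small)[1]
logdet_full = np.linalg.slogdet(W)[1]

assert np.allclose(z_rec, z)
assert np.isclose(logdet_cheap, logdet_full)
```

Both checks pass because the Woodbury identity reduces inversion to an r x r system and Sylvester's identity reduces the d x d determinant to an r x r one, which is what lets such layers model high-dimensional interactions cheaply.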
Second, we propose conditional Glow (c-Glow), a conditional generative flow for structured output learning, an advanced variant of supervised learning with structured labels. Traditional structured prediction models try to learn a conditional likelihood, i.e., p(y|x), to capture the relationship between the structured output y and the input features x. For many models, computing this likelihood is intractable, so the models are hard to train, requiring surrogate objectives or variational inference to approximate the likelihood. C-Glow benefits from the ability of flow-based models to compute p(y|x) exactly and efficiently: learning with c-Glow requires neither a surrogate objective nor inference during training. Once trained, the model can directly and efficiently generate conditional samples, and we develop a sample-based prediction method that uses this advantage to perform efficient and effective inference. In our experiments, we test c-Glow on five different tasks; it outperforms the state-of-the-art baselines on some tasks and predicts comparable outputs on the others, showing that c-Glow is applicable to many different structured prediction problems. Third, we develop label learning flows (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows: the main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by weak signals. We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once a model is trained, we can make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems, and experimental results show that our method outperforms many state-of-the-art alternatives. Our research shows the advantages and versatility of normalizing flows.
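The exact conditional likelihood that c-Glow exploits comes from the change-of-variables formula: log p(y|x) = log p_Z(f_x(y)) + log |det J_{f_x}(y)|, where f_x is an invertible map whose parameters depend on x. A toy sketch with a single elementwise affine flow (the conditioner below is a hypothetical stand-in, not the architecture from the dissertation):

```python
import numpy as np

def log_standard_normal(z):
    # log density of N(0, I), summed over dimensions
    return -0.5 * (z ** 2 + np.log(2 * np.pi)).sum()

def conditioner(x):
    # hypothetical conditioning network: maps input features x
    # to the parameters of an elementwise affine flow over y
    mu = np.tanh(x)
    log_sigma = 0.1 * x
    return mu, log_sigma

def log_p_y_given_x(y, x):
    mu, log_sigma = conditioner(x)
    z = (y - mu) * np.exp(-log_sigma)      # inverse flow z = f_x(y)
    # change of variables: log p(y|x) = log p_Z(z) + log|det dz/dy|
    return log_standard_normal(z) - log_sigma.sum()

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
y = rng.standard_normal(3)

# cross-check against the closed-form Gaussian density N(mu, sigma^2)
mu, log_sigma = conditioner(x)
sigma = np.exp(log_sigma)
direct = (-0.5 * ((y - mu) / sigma) ** 2
          - np.log(sigma) - 0.5 * np.log(2 * np.pi)).sum()
assert np.isclose(log_p_y_given_x(y, x), direct)
```

Because the Jacobian of an elementwise affine map is diagonal, its log-determinant is just the negative sum of the log scales, so the exact likelihood costs no more than a forward pass.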
/ Doctor of Philosophy / Data is now more affordable and accessible. At the same time, datasets are increasingly complicated. Modeling data is a key problem in modern artificial intelligence and data analysis. Deep generative models combine deep learning and probability theory and are now a major way to model complex datasets. In this dissertation, we focus on a novel class of deep generative models: normalizing flows. They are becoming popular because of their ability to efficiently compute exact likelihoods, infer exact latent variables, and draw samples. We develop flow-based generative models for different types of data, i.e., unlabeled data, labeled data, and weakly labeled data. First, we develop Woodbury transformations for unsupervised normalizing flows, which improve the flexibility and expressiveness of flow-based models. Second, we develop conditional generative flows for an advanced supervised learning problem, structured output learning, removing the need for approximations and surrogate objectives in traditional (deep) structured prediction models. Third, we develop label learning flows, a general framework for weakly supervised learning problems. Our research improves the performance of normalizing flows and extends their applications to many supervised and weakly supervised problems.
2

Deep generative models for natural language processing

Miao, Yishu January 2017 (has links)
Deep generative models are essential to Natural Language Processing (NLP) due to their outstanding ability to use unlabelled data, to incorporate abundant linguistic features, and to learn interpretable dependencies among data. As structures become deeper and more complex, an effective and efficient inference method becomes increasingly important. In this thesis, neural variational inference is applied to carry out inference for deep generative models. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. Powerful neural networks can approximate complicated non-linear distributions and open up possibilities for more interesting and complicated generative models. We therefore develop the potential of neural variational inference and apply it to a variety of models for NLP with continuous or discrete latent variables. This thesis is divided into three parts. Part I introduces a <b>generic variational inference framework</b> for generative and conditional models of text. For continuous or discrete latent variables, we apply a continuous reparameterisation trick or the REINFORCE algorithm to build low-variance gradient estimators. To further explore Bayesian non-parametrics in deep neural networks, we propose a family of neural networks that parameterise categorical distributions with continuous latent variables. Using the stick-breaking construction, an unbounded categorical distribution is incorporated into our deep generative models, which can be optimised by stochastic gradient back-propagation with a continuous reparameterisation. Part II explores <b>continuous latent variable models for NLP</b>.
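The continuous reparameterisation trick mentioned in Part I rewrites sampling z ~ N(mu, sigma^2) as a deterministic function of the parameters plus external noise, so gradients can flow through the sampling step. A minimal sketch (shapes and numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def reparameterise(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): the sample is now a
    # deterministic, differentiable function of (mu, log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.zeros(100_000)
log_var = np.log(4.0) * np.ones(100_000)   # sigma = 2 in every dimension

z = reparameterise(mu, log_var, rng)

# the empirical moments match the target distribution N(0, 4)
assert abs(z.mean()) < 0.05
assert abs(z.std() - 2.0) < 0.05
```

With discrete latent variables this trick is unavailable, which is why the thesis falls back on the REINFORCE score-function estimator there.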
Chapter 3 discusses the Neural Variational Document Model (NVDM): an unsupervised generative model of text which aims to extract a continuous semantic latent variable for each document. In Chapter 4, neural topic models modify the neural document models by parameterising categorical distributions with continuous latent variables, so that the topics are explicitly modelled by discrete latent variables. The models are further extended to neural unbounded topic models with the help of the stick-breaking construction, and a truncation-free variational inference method is proposed based on a Recurrent Stick-breaking construction (RSB). Chapter 5 describes the Neural Answer Selection Model (NASM), which learns a latent stochastic attention mechanism to model the semantics of question-answer pairs and predict their relatedness. Part III discusses <b>discrete latent variable models</b>. Chapter 6 introduces latent sentence compression models. The Auto-encoding Sentence Compression Model (ASC), a discrete variational auto-encoder, generates a sentence through a sequence of discrete latent variables representing explicit words. The Forced Attention Sentence Compression Model (FSC) incorporates a combined pointer network biased towards the usage of words from the source sentence, which significantly improves performance when jointly trained with the ASC model in a semi-supervised fashion. Chapter 7 describes the Latent Intention Dialogue Models (LIDM), which employ a discrete latent variable to learn underlying dialogue intentions. The latent intentions can additionally be interpreted as actions guiding the generation of machine responses, which can be further refined autonomously by reinforcement learning. Finally, Chapter 8 summarises our findings and directions for future work.
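The stick-breaking construction behind the unbounded topic models can be sketched in a few lines: each fraction v_k breaks off a share of the remaining stick, and the leftover mass accounts for all categories beyond the truncation level (the fractions below are illustrative, not values from the thesis):

```python
import numpy as np

def stick_breaking(v):
    """Map stick-breaking fractions v_k in (0, 1) to category weights.

    pi_k = v_k * prod_{j<k} (1 - v_j); the unbroken remainder is the
    total probability of all categories beyond the truncation level.
    """
    v = np.asarray(v, dtype=float)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)))
    pi = v * remaining[:-1]
    return pi, remaining[-1]

# in the models above, fractions like these would come from an
# inference network's sigmoid outputs (a hypothetical stand-in here)
v = np.array([0.5, 0.5, 0.5, 0.5])
pi, leftover = stick_breaking(v)

assert np.allclose(pi, [0.5, 0.25, 0.125, 0.0625])
assert np.isclose(pi.sum() + leftover, 1.0)
```

Because the weights always sum to one regardless of how many fractions are drawn, the construction supports the truncation-free inference the chapter describes.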
3

Interactive deep generative models for symbolic music

Hadjeres, Gaëtan 07 June 2018 (has links)
This thesis discusses the use of deep generative models for symbolic music generation. We focus on devising interactive generative models, i.e., models that establish a dialogue between a human composer and the machine during the creative process. Recent advances in artificial intelligence have led to the development of powerful generative models able to generate musical content without human intervention. I believe that this practice is sterile for artistic production, since human intervention and appreciation are essential pillars of it. By contrast, the need for powerful, flexible, and expressive tools that enhance content creators' creativity is patent; whether for pedagogical purposes or to stimulate artistic creativity, the development and potential of such novel AI-augmented computer music tools are promising. In this manuscript, I propose novel architectures that put artists back in the loop. The proposed models share the common characteristic that a user can control the generated musical content in a creative way. To make this interaction easy, user interfaces were developed; the possibilities for control take varied forms and suggest new compositional paradigms. To ground these advances in real musical practice, this thesis ends with the presentation of concrete musical realizations (scores, concerts) produced with these new creative tools.
4

An Overview of Probabilistic Latent Variable Models with an Application to the Deep Unsupervised Learning of Chromatin States

Farouni, Tarek 01 September 2017 (has links)
No description available.
5

GENERATIVE MODELS IN NATURAL LANGUAGE PROCESSING AND COMPUTER VISION

Talafha, Sameerah M 01 August 2022 (has links)
Generative models are broadly used in many subfields of deep learning (DL). Deep neural networks (DNNs) have recently become a core approach to solving data-centric problems in image classification, translation, etc. The latest developments in parameterizing these models using DNNs and stochastic optimization algorithms have allowed scalable modeling of complex, high-dimensional data, including speech, text, and images. This dissertation presents state-of-the-art probabilistic bases and DL algorithms for generative models, including VAEs, GANs, and RNN-based encoder-decoders, and discusses application areas that may benefit from deep generative models in both NLP and computer vision. In NLP, we proposed an Arabic poetry generation model with extended phonetic and semantic embeddings (Phonetic CNN_subword embeddings). Extensive quantitative experiments using BLEU scores and Hamming distance show notable enhancements over strong baselines, and a comprehensive human evaluation confirms that the poems generated by our model outperform the base models on criteria including meaning, coherence, fluency, and poeticness. In computer vision, we proposed a generative video model using a hybrid VAE-GAN. In addition, we integrate two attentional mechanisms with the GAN to capture the essential regions of interest in a video, focusing on enhancing the visual rendering of human motion in the generated output. We conducted quantitative and qualitative experiments, including comparisons with other state-of-the-art methods; the results indicate that our model enhances performance compared with other models and performs favorably under the quantitative metrics PSNR, SSIM, LPIPS, and FVD. Recently, mimicking biologically inspired learning in generative models based on spiking neural networks (SNNs) has shown its effectiveness in different applications. SNNs are the third generation of neural networks, in which neurons communicate through binary signals known as spikes, and they are more energy-efficient than DNNs. Moreover, DNN models are vulnerable to small adversarial perturbations that cause misclassification of legitimate images. This dissertation presents the proposed ``VAE-Sleep'', which combines ideas from the VAE and the sleep mechanism, leveraging the advantages of deep and spiking neural networks (DNN-SNN). On top of that, we present ``Defense-VAE-Sleep'', which extends the ``VAE-Sleep'' model to purge adversarial perturbations from contaminated images. We demonstrate the benefit of sleep in improving the generalization performance of the traditional VAE when the testing data differ from the training data in specific ways, even by a small amount. We conduct extensive experiments, including comparisons with the state of the art on different datasets.
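The VAE variants discussed above share the same Gaussian regulariser in their training objective: the closed-form divergence KL(N(mu, sigma^2) || N(0, I)) = 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2). A minimal sketch of that term (a generic illustration, not code from the dissertation):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # closed-form KL( N(mu, sigma^2) || N(0, I) ): the term that pulls
    # the approximate posterior of a VAE towards the standard prior
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

# sanity checks: zero when the posterior equals the prior,
# strictly positive as soon as the mean moves away from zero
assert gaussian_kl(np.zeros(4), np.zeros(4)) == 0.0
assert gaussian_kl(np.ones(4), np.zeros(4)) > 0.0
```

Minimising reconstruction error plus this KL term is what the ELBO objective amounts to for a Gaussian latent space, whichever encoder or decoder (DNN or SNN) sits around it.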
6

Deep geometric probabilistic models

Xu, Minkai 10 1900 (has links)
Molecular geometry, also known as conformation, is the most intrinsic and informative representation of molecules. However, predicting stable conformations from molecular graphs remains a challenging and fundamental problem in computational chemistry and biology. Traditional experimental and computational methods are usually expensive and time-consuming. Recently, we have witnessed considerable progress in using machine learning, especially generative models, to accelerate this procedure. However, current data-driven approaches usually lack the capacity to model complex distributions and fail to take important geometric features into account. In this thesis, we seek to build principled generative models for molecular conformation generation that can overcome the above problems. Specifically, we propose flow-based, energy-based, and denoising diffusion models for molecular structure generation. It is nontrivial to apply these models to this task, however, because the likelihood of the geometries should have the important property of rotational and translational invariance. Inspired by recent progress in geometric representation learning, we provide both theoretical justification and a practical implementation of how to impose this property on the models. Extensive experiments on common benchmark datasets demonstrate the effectiveness of our proposed approaches over existing baseline methods.
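One common route to the rotation- and translation-invariant likelihood the abstract calls for is to define the model on pairwise interatomic distances, which rigid motions leave unchanged. A small NumPy check of that invariance (a generic illustration, not the thesis's implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

def invariant_features(coords):
    # pairwise distances are unchanged by any rotation or translation,
    # so a likelihood defined on them has the required invariance
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal matrix
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.sign(np.diag(r))   # fix column signs

coords = rng.standard_normal((5, 3))     # a toy 5-atom conformation
R = random_rotation(rng)
t = rng.standard_normal(3)
moved = coords @ R.T + t                 # rigid motion of the molecule

assert np.allclose(invariant_features(coords), invariant_features(moved))
```

Any density parameterized through such features automatically assigns the same likelihood to every rigid transform of a conformation, which is the property the thesis imposes on its flow, energy, and diffusion models.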
7

Scene Reconstruction From 4D Radar Data with GAN and Diffusion: A Hybrid Method Combining GAN and Diffusion for Generating Video Frames from 4D Radar Data

Djadkin, Alexandr January 2023 (has links)
4D imaging radar is increasingly becoming a critical component in various industries due to advancements in beamforming technology and hardware. However, it does not replace visual data in the form of 2D images captured by an RGB camera. Instead, 4D radar point clouds are a complementary data source that captures spatial information and velocity in a Doppler dimension that cannot easily be captured by a camera's view alone. Some discriminative features of the scene captured by the two sensors are hypothesized to have a shared representation. Therefore, a more interpretable visualization of the radar output can be obtained by learning a mapping from the empirical distribution of the radar data to the distribution of images captured by the camera. To this end, the application of deep generative models to generate images conditioned on 4D radar data is explored. Two approaches that have become state-of-the-art in recent years are tested: generative adversarial networks and diffusion models. They are compared qualitatively through visual inspection and by two quantitative metrics: mean squared error and object detection count. It is found that it is easier to control the generative adversarial network's generative process through conditioning than a diffusion process. In contrast, the diffusion model produces samples of higher quality and is more stable to train. Furthermore, their combination results in a hybrid sampling method that achieves the best results while simultaneously speeding up the diffusion process.
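The abstract does not spell out the hybrid's mechanics, but a common way to combine the two families is to forward-noise a GAN sample to an intermediate diffusion step and then run only the remaining denoising steps. A sketch of just that scheduling idea (the step counts, noise schedule, and GAN stand-in are all assumptions, not details from the thesis):

```python
import numpy as np

rng = np.random.default_rng(4)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # a standard DDPM-style schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal retention

def forward_noise(x0, t, rng):
    # closed-form DDPM forward process: sample from q(x_t | x_0)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# hypothetical GAN output standing in for a coarse radar-conditioned image
gan_sample = rng.standard_normal((8, 8))

t0 = 250                                 # inject partway into the chain
x_t0 = forward_noise(gan_sample, t0, rng)

# the reverse (denoising) process now runs t0 steps instead of T,
# starting from a sample that already carries the GAN's structure
assert t0 < T
assert x_t0.shape == gan_sample.shape
```

Under this reading, the GAN supplies controllable coarse structure and the truncated diffusion refines it, which would explain both the quality gain and the speed-up reported above.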
8

Augmenting High-Dimensional Data with Deep Generative Models

Nilsson, Mårten January 2018 (has links)
Data augmentation is a technique that can be performed in various ways to improve the training of discriminative models. Recent developments in deep generative models offer new ways of augmenting existing data sets. In this thesis, a framework for augmenting annotated data sets with deep generative models is proposed, together with a method for quantitatively evaluating the quality of the generated data sets. Using this framework, two data sets for pupil localization were generated with different generative models, including both well-established models and a novel model proposed for this purpose. The novel model was shown, both qualitatively and quantitatively, to generate the best data sets. A set of smaller experiments on standard data sets also revealed cases where this generative model could improve the performance of an existing discriminative model. The results indicate that generative models can be used to augment or replace existing data sets when training discriminative models.
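The framework's core loop can be summarized as: draw labeled samples from a trained generative model and append them to the annotated set before training the discriminative model. A schematic sketch (the generator here is a hypothetical placeholder, not the thesis's model):

```python
import numpy as np

rng = np.random.default_rng(5)

def augment(real_x, real_y, generator, n_new, rng):
    """Extend an annotated data set with samples from a generative model.

    `generator` is any callable returning (x, y) pairs; here it is a
    hypothetical stand-in that lightly perturbs real examples.
    """
    idx = rng.integers(0, len(real_x), size=n_new)
    gen_x, gen_y = generator(real_x[idx], real_y[idx], rng)
    return np.concatenate([real_x, gen_x]), np.concatenate([real_y, gen_y])

def toy_generator(x, y, rng):
    # placeholder for a trained deep generative model
    return x + 0.01 * rng.standard_normal(x.shape), y

real_x = rng.standard_normal((100, 2))   # e.g. pupil-centre annotations
real_y = rng.integers(0, 2, size=100)

aug_x, aug_y = augment(real_x, real_y, toy_generator, n_new=50, rng=rng)
assert aug_x.shape == (150, 2) and aug_y.shape == (150,)
```

The thesis's quantitative evaluation then asks whether a discriminative model trained on `aug_x, aug_y` outperforms one trained on the real data alone.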
