51

A study about Active Semi-Supervised Learning for Generative Models / En studie om Aktivt Semi-Övervakat Lärande för Generativa Modeller

Fernandes de Almeida Quintino, Elisio January 2023 (has links)
In many practical scenarios there is an imbalance between abundant unlabeled data and scarce labeled data for training predictive models. Semi-Supervised Learning and Active Learning are two distinct approaches to this issue. The first uses the unlabeled data directly to improve model parameter learning, while the second makes an informed choice of which unlabeled points to send to an annotator, or oracle, who labels them and enlarges the labeled training set. In this context, generative models are highly appropriate, since they internally represent the data-generating process and naturally benefit from data samples whether or not labels are present. This thesis proposes Expectation-Maximization with Density-Weighted Entropy, a novel active semi-supervised learning framework tailored to generative models. The method is explored theoretically, and experiments evaluate its application to Gaussian Mixture Models and Multinomial Mixture Models. Based on its partial success, several questions are raised and discussed to identify possible improvements and to decide which shortcomings must be addressed before the method can be considered robust and generally applicable.
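
A minimal sketch of a density-weighted entropy acquisition step for a Gaussian Mixture Model, assuming scikit-learn. This is an illustration of the general idea, not the thesis' EM variant: component responsibilities stand in for class posteriors, and the point that is both uncertain and representative (high density) is sent to the oracle.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def density_weighted_entropy_query(gmm: GaussianMixture, X_unlabeled: np.ndarray) -> int:
    """Return the index of the unlabeled point to send to the oracle."""
    resp = gmm.predict_proba(X_unlabeled)                   # (n, k) component responsibilities
    entropy = -np.sum(resp * np.log(resp + 1e-12), axis=1)  # predictive uncertainty per point
    density = np.exp(gmm.score_samples(X_unlabeled))        # p(x) under the fitted mixture
    return int(np.argmax(entropy * density))                # uncertain AND representative

# Usage: fit the mixture on all available data, then query one point per round.
X = np.random.default_rng(0).normal(size=(500, 2))
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
query_idx = density_weighted_entropy_query(gmm, X)
```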
52

Neural probabilistic path prediction : skipping paths for acceleration

Peng, Bowen 10 1900 (has links)
Path tracing is one of the most popular Monte Carlo methods used in computer graphics to solve the problem of global illumination. A path-traced image is far more photorealistic than images produced by standard rendering methods such as rasterization, and even ray tracing. Unfortunately, path tracing is expensive to compute and slow to converge, resulting in noisy images when unconverged. Many methods aimed at accelerating path tracing have been developed, but each has its own drawbacks and limitations. Recent advances in deep learning, especially conditional generative models, have proven very capable of learning, modeling, and sampling from complex distributions. As path tracing also depends on sampling from complex distributions, we investigate the similarities between the two problems and model the path tracing process itself as a conditional generative process. This process can then be used to build an efficient neural estimator that accelerates rendering time with as few assumptions about the scene as possible. We show that our neural estimator (NPPP), used alongside path tracing, can improve rendering time considerably without compromising much in rendering quality. The estimator is also shown to be very flexible, allowing a user to control and prioritize quality or rendering time without any further training or modification of the neural network.
53

Controllable music performance synthesis via hierarchical modelling

Wu, Yusong 08 1900 (has links)
Musical expression requires control of both what notes are played and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical instruments that enables both realistic neural audio synthesis and detailed user control. Starting from interpretable Differentiable Digital Signal Processing (DDSP) synthesis parameters, we infer musical notes and high-level properties of their expressive performance (such as timbre, vibrato, dynamics, and articulation). This creates a three-level hierarchy (notes, performance, synthesis) that affords individuals the option to intervene at each level, or to utilize trained priors (performance given notes, synthesis given performance) for creative assistance. Through quantitative experiments and listening tests, we demonstrate that this hierarchy can reconstruct high-fidelity audio, accurately predict performance attributes for a note sequence, independently manipulate the attributes of a given performance, and, as a complete system, generate realistic audio from a novel note sequence. By utilizing an interpretable hierarchy with multiple levels of granularity, MIDI-DDSP opens the door to assistive tools that empower individuals across a diverse range of musical experience.
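
A conceptual sketch of the three-level hierarchy described above (notes, performance, synthesis), showing a user intervention at the performance level followed by resynthesis. Function names, attributes, and values are illustrative placeholders, not the released MIDI-DDSP API.

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI pitch number
    start: float     # onset in seconds
    duration: float  # length in seconds

def performance_prior(notes, overrides=None):
    """Predict expressive attributes given notes; the user may override any of them."""
    attrs = {"vibrato": 0.3, "dynamics": 0.7, "articulation": 0.5}  # placeholder prior means
    attrs.update(overrides or {})
    return attrs

def synthesis_prior(notes, performance):
    """Map notes plus performance attributes to low-level DDSP synthesis parameters."""
    return {"f0_hz": [440.0] * 100, "harmonic_amps": [0.5] * 60, "noise_mags": [0.1] * 65}

notes = [Note(pitch=69, start=0.0, duration=0.5)]
performance = performance_prior(notes, overrides={"vibrato": 0.9})  # intervene at this level
synth_params = synthesis_prior(notes, performance)                  # then resynthesize
```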
54

Efficient Adaptation of Deep Vision Models

Ze Wang (15354715) 27 April 2023 (has links)
Deep neural networks have made significant advances in computer vision. However, several challenges limit their real-world applications. For example, domain shifts in vision data degrade model performance; visual appearance variances affect model robustness; it is also non-trivial to extend a model trained on one task to novel tasks; and in many applications, large-scale labeled data are not even available for learning powerful deep models from scratch. This research focuses on improving the transferability of deep features and the efficiency of deep vision model adaptation, leading to enhanced generalization and new capabilities on computer vision tasks. Specifically, we approach these problems from the following two directions: architectural adaptation and label-efficient transferable feature learning. From an architectural perspective, we investigate various schemes that permit network adaptation to be parametrized by multiple copies of sub-structures, distributions of parameter subspaces, or functions that infer parameters from data. We also explore how model adaptation can bring new capabilities, such as continuous and stochastic image modeling, fast transfer to new tasks, and dynamic computation allocation based on sample complexity. From the perspective of feature learning, we show how transferable features emerge from generative modeling with massive unlabeled or weakly labeled data. Such features enable both image generation under complex conditions and downstream applications like image recognition and segmentation. By combining both perspectives, we achieve improved performance on computer vision tasks with limited labeled data, enhanced transferability of deep features, and novel capabilities beyond standard deep learning models.
55

Generating Synthetic CT Images Using Diffusion Models / Generering av sCT bilder med en generativ diffusionsmodell

Saleh, Salih January 2023 (has links)
Magnetic resonance (MR) images together with computed tomography (CT) images are used in many medical practices, such as radiation therapy. To capture those images, patients have to undergo two separate scans: one for the MR image, which involves strong magnetic fields, and one for the CT image, which involves ionizing radiation (x-rays). An alternative approach is to generate synthetic CT (sCT) images from MR images, so that patients only need one scan (the MR scan), making the whole process easier and more efficient. One way of generating sCT images is with generative diffusion models, a relatively new class of generative models. To this end, this project investigates whether generative diffusion models are capable of generating viable and realistic sCT images from MR images. First, a denoising diffusion probabilistic model (DDPM) with a U-Net backbone is implemented and tested on the MNIST dataset. It is then applied to a pelvis dataset consisting of 41600 image pairs, each made up of an MR image and its corresponding CT image. The MR image is added at each sampling step in order to condition the sampled sCT image on the MR image. After successful implementation and training, the developed diffusion model achieved a Fréchet inception distance (FID) of 14.45 and performed as well as the current state-of-the-art model without any major optimization of the hyperparameters or the model itself. The results are very promising and demonstrate the capabilities of this new generative modelling framework.
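
A minimal PyTorch sketch of the conditioning scheme described above: at every reverse-diffusion step the MR image is concatenated to the current noisy sCT estimate before being passed to the U-Net. The `unet`, noise schedules, and shapes are placeholders, not the thesis implementation.

```python
import torch

@torch.no_grad()
def sample_sct(unet, mr_image, timesteps, alphas, alphas_bar, betas):
    x = torch.randn_like(mr_image)                      # start from pure noise
    for t in reversed(range(timesteps)):
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        model_in = torch.cat([x, mr_image], dim=1)      # condition on the MR image
        eps = unet(model_in, t_batch)                   # predict the added noise
        coef = (1 - alphas[t]) / torch.sqrt(1 - alphas_bar[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])    # DDPM posterior mean
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                                            # sampled sCT image
```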
56

Generating Synthetic Training Data with Stable Diffusion

Rynell, Rasmus, Melin, Oscar January 2023 (has links)
The use of image classification in various industries has grown significantly in recent years. There are, however, challenges concerning the data used to train such models: in many cases the training data is difficult and expensive to obtain, and working with image data may raise additional problems such as privacy concerns. In recent years, synthetic image generation models such as Stable Diffusion have seen significant improvement. From a textual description alone, Stable Diffusion can generate a wide variety of photorealistic images. In addition to textual descriptions, conditioning models such as ControlNet have enabled additional grounding information, such as canny-edge and segmentation images. This thesis investigates whether synthetic images generated by Stable Diffusion can be used effectively to train an image classifier. To find the most effective method for generating training data, multiple conditioning methods are investigated and evaluated. The results show that high-quality training data can be generated with several conditioning techniques. The best-performing method used canny-edge-grounded images to augment existing data. Extending two classes with additional synthetic data generated by this method achieved the highest average F1-score increase, 0.85 percentage points, compared with a baseline trained solely on real images.
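
A hedged sketch of the best-performing setup described above: generating new training images with Stable Diffusion grounded on canny edges extracted from an existing real image. It assumes the Hugging Face diffusers library and the public canny ControlNet checkpoint; the model identifiers, file names, and prompt are illustrative and may differ from the thesis setup.

```python
import cv2
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)

source = np.array(Image.open("real_sample.png").convert("RGB"))
gray = cv2.cvtColor(source, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)                        # canny-edge grounding image
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel edge map for the pipeline

synthetic = pipe("a photo of the same object, studio lighting", image=edges).images[0]
synthetic.save("synthetic_sample.png")                   # add to the classifier's training set
```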
57

Latent data augmentation and modular structure for improved generalization

Lamb, Alexander 08 1900 (has links)
This thesis explores the nature of generalization in deep learning and several settings in which it fails. In particular, deep neural networks can struggle to generalize in settings with limited data, insufficient supervision, challenging long-range dependencies, or complex structure and subsystems. The thesis examines the nature of these challenges and presents several algorithms that seek to address them. In the first article, we show how training with interpolated hidden states can improve generalization and calibration in deep learning. We also introduce a theory showing how our algorithm, which we call Manifold Mixup, leads to a flattening of the per-class hidden representations, which can be seen as a compression of the information in the hidden states. The second article is related to the first and shows how interpolated examples can be used for semi-supervised learning. In addition to interpolating the input examples, the model's interpolated predictions are used as targets for these examples. This improves results on standard benchmarks as well as classic 2D toy problems for semi-supervised learning. The third article studies how a recurrent neural network can be divided into multiple modules with different parameters and well-separated hidden states, together with a competition mechanism that restricts updating of the hidden states to a subset of the most relevant modules at each time step. This improves systematic generalization when the pattern distribution changes between the training and evaluation phases, and it also improves generalization in reinforcement learning. In the fourth article, we show that attention can be used to control the flow of information between successive layers in deep networks. This allows each layer to process only the most relevant subset of the outputs of previously computed layers, which improves generalization on relational reasoning tasks as well as standard benchmark classification tasks.
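
A minimal PyTorch sketch of the interpolated-hidden-state idea (Manifold Mixup) from the first article: pick a random layer, mix two examples' hidden representations with a Beta-sampled coefficient, and mix the targets with the same coefficient. This is an illustration of the technique, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def manifold_mixup_loss(layers, classifier_head, x, y, alpha=2.0):
    k = torch.randint(len(layers) + 1, (1,)).item()    # layer at which to mix (0 = input)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))

    h = x
    for i, layer in enumerate(layers):
        if i == k:
            h = lam * h + (1 - lam) * h[perm]          # interpolate hidden states
        h = layer(h)
    if k == len(layers):
        h = lam * h + (1 - lam) * h[perm]              # mixing after the last layer

    logits = classifier_head(h)
    # Mix the targets with the same coefficient as the hidden states.
    return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])
```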
58

Believable and Manipulable Facial Behaviour in a Robotic Platform using Normalizing Flows / Trovärda och Manipulerbara Ansiktsuttryck i en Robotplattform med Normaliserande Flöde

Alias, Kildo January 2021 (has links)
Implicit communication is important in interaction because it plays a role in conveying an individual's internal mental states. For example, emotional expressions shown through unintended facial gestures can communicate underlying affective states. People infer mental states from implicit cues and have strong expectations of what those cues mean; this holds for human-human interaction as well as human-robot interaction. A normalizing flow model is used as a generative model that produces facial gestures and head movements. The invertible nature of the normalizing flow makes it possible to manipulate attributes of the generated gestures; the output is manipulated along the two dimensions commonly used to describe affective state, valence and arousal. The model in this work generates facial expressions that look real and human-like, and it can manipulate the generated output to change the perceived affective state of the expressions.
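
A small PyTorch sketch of why an invertible (normalizing-flow) model allows this kind of manipulation: map a gesture vector to latent space, shift it along a direction associated with the target affect (e.g. valence), and map it back. The single affine coupling layer, the feature dimension, and the attribute direction are illustrative only; the thesis uses a trained flow over gesture sequences.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):                          # data -> latent
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * log_s.exp() + t], dim=-1)

    def inverse(self, z):                          # latent -> data (exact inverse)
        z1, z2 = z.chunk(2, dim=-1)
        log_s, t = self.net(z1).chunk(2, dim=-1)
        return torch.cat([z1, (z2 - t) * (-log_s).exp()], dim=-1)

flow = AffineCoupling(dim=8)                       # 8 hypothetical facial-gesture features
gesture = torch.randn(1, 8)
z = flow(gesture)
valence_direction = torch.randn(1, 8)              # placeholder attribute direction
edited = flow.inverse(z + 0.5 * valence_direction) # manipulated gesture, still decodable
```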
59

Synthetic Data Generation for the Financial Industry Using Generative Adversarial Networks / Generering av Syntetisk Data för Finansbranchen med Generativa Motstridande Nätverk

Ljung, Mikael January 2021 (has links)
Following the introduction of new data protection laws and regulations such as GDPR and PIPEDA, interest in technologies that protect data privacy has increased. A promising research direction in this area is Generative Adversarial Networks (GANs), an architecture trained to produce data that reflects the statistical properties of its underlying dataset without compromising the integrity of the data subjects. Despite the technology's young age, prior research has made significant progress on generating so-called synthetic data, and current models can generate high-quality images. Owing to the architecture's success with images, it has been adapted to new domains, and this study examines its potential to synthesize financial tabular data. The study investigates a state-of-the-art tabular GAN, CTGAN, together with two proposed ideas for enhancing its generative ability. The results indicate that a modified training dynamic and a novel early-stopping strategy improve the architecture's capacity to synthesize data. The generated data exhibits realistic features with clear influences from its underlying dataset, and conclusions drawn from subsequent analyses are similar to those based on the original data. The conclusion is thus that GANs have great potential to generate tabular data that can serve as a substitute for sensitive data, which could enable organizations to adopt more generous data-sharing policies.
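
A hedged sketch of the kind of pipeline examined above, assuming the open-source ctgan package (class and argument names may differ between versions). The thesis' modified training dynamic and early-stopping strategy are not part of this stock usage, and the file and column names are hypothetical.

```python
import pandas as pd
from ctgan import CTGAN

real = pd.read_csv("transactions.csv")                 # hypothetical financial table
discrete_columns = ["currency", "merchant_category"]   # hypothetical categorical columns

model = CTGAN(epochs=300)
model.fit(real, discrete_columns)
synthetic = model.sample(len(real))                    # synthetic table of the same size

# Downstream analyses can then be run on `synthetic` and compared with `real`.
synthetic.to_csv("synthetic_transactions.csv", index=False)
```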
60

Generating Geospatial Trip Data Using Deep Neural Networks

Alhasan, Ahmed January 2022 (has links)
Synthetic data provides a good alternative to real data when the latter is insufficient or limited by privacy requirements. In spatio-temporal applications, generating synthetic data is generally more complex due to the presence of both spatial and temporal dependencies. Recently, with the advent of deep generative models such as Generative Adversarial Networks (GANs), synthetic data generation has seen considerable development and success. This thesis uses a GAN based on two Recurrent Neural Networks (RNNs), one as the generator and one as the discriminator, to generate new trip data for transport vehicles, where each trip is represented as a time series. This model is compared with a standalone RNN that has no adversarial counterpart. The results show that the standalone RNN performed better than the GAN, owing to the difficulty involved in training and tuning GAN models.
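
A minimal PyTorch sketch of the compared GAN components: an RNN generator that unrolls a synthetic trip (a sequence of coordinate points) from a noise vector, and an RNN discriminator that scores a whole sequence. Dimensions and the sequence length are illustrative, not the thesis configuration.

```python
import torch
import torch.nn as nn

class TripGenerator(nn.Module):
    def __init__(self, noise_dim=16, hidden=64, out_dim=2, seq_len=50):
        super().__init__()
        self.seq_len = seq_len
        self.rnn = nn.GRU(noise_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)           # e.g. lat/lon offset per step

    def forward(self, z):                                 # z: (batch, noise_dim)
        z_seq = z.unsqueeze(1).repeat(1, self.seq_len, 1) # feed the noise at every step
        h, _ = self.rnn(z_seq)
        return self.head(h)                               # (batch, seq_len, out_dim)

class TripDiscriminator(nn.Module):
    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, trips):                             # trips: (batch, seq_len, in_dim)
        _, h = self.rnn(trips)
        return self.head(h[-1])                           # real/fake logit per trip
```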
