  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Disocclusion Inpainting using Generative Adversarial Networks

Aftab, Nadeem January 2020 (has links)
Older methods for image inpainting in the Depth Image Based Rendering (DIBR) process are inefficient at producing high-quality virtual views from captured data. From the viewpoint of the original image, the structure of the generated data appears only slightly distorted in a virtual view obtained by translation, but when the virtual view involves rotation, gaps and missing regions become visible in the DIBR-generated data. Typical approaches for filling these disocclusions tend to be slow, inefficient, and inaccurate. In this project, a modern technique, the Generative Adversarial Network (GAN), is used to fill the disocclusions. A GAN consists of two or more neural networks that are trained by competing against each other. The results of this study show that a GAN can inpaint disocclusions while preserving structural consistency. Additionally, another method (filling) is used to enhance the quality of the GAN and DIBR images. Statistical evaluation of the results shows that the GAN and the filling method enhance the quality of DIBR images.
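Before any inpainting can run, the disoccluded pixels have to be located. A minimal sketch of that first step, assuming the warped view is a NumPy array in which unfilled pixels were initialized to zero; the function name `find_disocclusion_mask` is illustrative, not from the thesis:

```python
import numpy as np

def find_disocclusion_mask(warped: np.ndarray) -> np.ndarray:
    """Return a boolean mask that is True at disoccluded (empty) pixels."""
    # A pixel is treated as a hole when every channel is exactly zero,
    # i.e. it received no contribution during forward warping.
    return np.all(warped == 0, axis=-1)

# Toy 4x4 RGB "warped" view with a 2x2 hole in one corner.
view = np.ones((4, 4, 3))
view[0:2, 0:2, :] = 0.0
mask = find_disocclusion_mask(view)
```

The resulting mask is what an inpainting network (GAN-based or otherwise) would receive alongside the image to know which pixels to synthesize.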
2

TwinLossGAN: Domain Adaptation Learning for Semantic Segmentation

Song, Yuehua 19 August 2022 (has links)
Most semantic segmentation methods based on Convolutional Neural Networks (CNNs) rely on supervised pixel-level labelling. Because pixel-level labelling is time-consuming and laborious, synthetic images can be generated by software with the label information already embedded in the data, so labelling is automatic. This advantage makes synthetic datasets widely used for training deep learning models for real-world cases. Still, compared to supervised learning on real-world labelled images, models trained on synthetic datasets achieve lower accuracy when applied to real-world data. Researchers have therefore turned to Unsupervised Domain Adaptation (UDA), an essential part of transfer learning, which aims to make the feature distributions of two domains as close as possible: the knowledge and distribution learned in the source-domain feature space are migrated to the target space to improve prediction accuracy in the target domain. A model can thus be trained on synthetic data and then apply what it has learned to real-world problems. However, compared with traditional supervised models, UDA achieves low accuracy when used for scene segmentation of real images. The reason is that the domain gap between the source and target domains is too large: the image distribution information the model learns from the source domain cannot be applied to the target domain, which limits the development of UDA. We therefore propose a new UDA model called TwinLossGAN, which reduces the domain gap in two steps.
The first step mixes images from the source and target domains, so that the model learns image features from both domains well. Mixing is performed by selecting a synthetic image from the source domain and a real-world image from the target domain. The two selected images are fed to the segmenter to obtain separate semantic segmentation results, which are then passed to the mixing module. The mixing module uses the ClassMix method to copy and paste segmented objects from one image into the other using segmentation masks, generating inter-domain composite images and the corresponding pseudo-labels. In the second step, we modify a Generative Adversarial Network (GAN) to further reduce the gap between domains. The original GAN has two main parts: a generator and a discriminator. In the proposed TwinLossGAN, the generator performs semantic segmentation on the source-domain and target-domain images separately, and the segmentations are trained in parallel. The source-domain synthetic images are segmented and their loss is computed against the synthetic labels. At the same time, the generated inter-domain composite images are fed to the segmentation module, which compares its results with the pseudo-labels and computes a second loss. These twin losses serve as the generator loss over the GAN training iterations. The GAN discriminator examines whether a semantic segmentation result originates from the source or the target domain. We used data from GTA5 and SYNTHIA as the source-domain data and images from CityScapes as the target-domain data; the accuracy of the proposed TwinLossGAN was much higher than that of the base UDA models.
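The ClassMix-style copy-and-paste step described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the thesis implementation: `classmix` and its arguments are hypothetical names, and real ClassMix selects half of the predicted classes at random rather than taking them as a parameter.

```python
import numpy as np

def classmix(src_img, tgt_img, src_seg, tgt_pseudo, classes):
    """Paste pixels of the selected classes from the source image into the
    target image, using the source segmentation as the copy mask, and build
    the matching mixed label map from source labels and target pseudo-labels."""
    mask = np.isin(src_seg, classes)                  # True where a chosen class sits
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_seg, tgt_pseudo)   # pseudo-label for the composite
    return mixed_img, mixed_lbl

# Toy 2x2 example: white source image pasted onto a black target image.
src_img = np.full((2, 2, 3), 1.0)
tgt_img = np.zeros((2, 2, 3))
src_seg = np.array([[0, 1], [1, 2]])   # source ground-truth classes
tgt_pseudo = np.full((2, 2), 9)        # segmenter's pseudo-labels for the target
mixed_img, mixed_lbl = classmix(src_img, tgt_img, src_seg, tgt_pseudo, classes=[1])
```

The composite image and its label map are exactly the pair fed to the segmentation module to compute the second of the twin losses.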
3

Scenario Generation for Stress Testing Using Generative Adversarial Networks : Deep Learning Approach to Generate Extreme but Plausible Scenarios

Gustafsson, Jonas, Jonsson, Conrad January 2023 (has links)
Central Clearing Counterparties play a crucial role in financial markets, requiring robust risk management practices to ensure operational stability. A growing regulatory emphasis on risk analysis and stress testing has created the need for sophisticated tools that can model extreme but plausible market scenarios. This thesis presents a method leveraging Wasserstein Generative Adversarial Networks with Gradient Penalty (WGAN-GP) to construct an independent scenario generator capable of modeling and generating return distributions for financial markets. The method has two primary components: the WGAN-GP model and a novel scenario selection strategy. The WGAN-GP model approximates the multivariate return distribution of stocks, generating plausible return scenarios. The scenario selection strategy applies lower and upper bounds on the Euclidean distance of the return vector to identify and select extreme scenarios suitable for stress testing clearing members' portfolios, enabling the extraction of extreme yet plausible returns. The method was evaluated on 25 years of historical stock return data from the S&P 500. Results demonstrate that the WGAN-GP model effectively approximates the multivariate return distribution of several stocks, facilitating the generation of new plausible returns, although the model requires extensive training to fully capture the tails of the distribution. The Euclidean distance-based scenario selection strategy shows promise in identifying extreme scenarios, with the generated scenarios demonstrating portfolio impact comparable to historical scenarios. These results suggest that the proposed method offers valuable tools for Central Clearing Counterparties to enhance their risk management.
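The Euclidean-distance selection rule is simple enough to sketch directly. The snippet below stands in for the pipeline's final step, with random normal draws replacing actual WGAN-GP output; the function name and the bound values are illustrative, not taken from the thesis.

```python
import numpy as np

def select_extreme_scenarios(returns, lower, upper):
    """Keep generated return vectors whose Euclidean norm lies between the
    lower and upper bounds: extreme enough to stress a portfolio, but not
    so large as to be implausible."""
    norms = np.linalg.norm(returns, axis=1)
    keep = (norms >= lower) & (norms <= upper)
    return returns[keep], norms[keep]

# Stand-in for WGAN-GP output: 1000 scenarios of daily returns for 5 assets.
rng = np.random.default_rng(0)
scenarios = rng.normal(0.0, 0.02, size=(1000, 5))
extreme, norms = select_extreme_scenarios(scenarios, lower=0.06, upper=0.20)
```

Each surviving row is one candidate stress scenario; a clearing member's portfolio would then be revalued under it.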
4

Geospatial Trip Data Generation Using Deep Neural Networks / Generering av Geospatiala Resedata med Hjälp av Djupa Neurala Nätverk

Deepak Udapudi, Aditya January 2022 (has links)
Development of deep learning methods depends largely on the availability of large amounts of high-quality data. One workaround for data scarcity is to generate synthetic data using deep learning methods. Trajectory data brings additional challenges: strong dependencies between the spatial and temporal components, geographical context sensitivity, and privacy laws that protect individuals from being traced through their mobility patterns. This project attempts to overcome these challenges by exploring the capability of Generative Adversarial Networks (GANs) to generate synthetic trajectories whose characteristics are close to those of the original trajectories. A naive model is designed as a baseline for comparison with a Long Short-Term Memory (LSTM) based GAN. GANs are generally associated with image data, which is why Convolutional Neural Network (CNN) based GANs are popular in recent studies; an LSTM-based GAN was chosen here to explore its strength in handling long-term dependencies in sequential data. The methods are evaluated qualitatively, by visually inspecting the trajectories on a real-world map, and quantitatively, by calculating the statistical distance between the underlying data distributions of the original and synthetic trajectories. Results indicate that the baseline method performed better than the GAN model: the baseline generated trajectories with feasible spatial and temporal components, whereas the GAN model learned the spatial component of the data well but not the temporal component. Adding conditional map information during training of the networks is a possible research question for future work.
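The quantitative evaluation mentioned above — a statistical distance between the distributions of original and synthetic trajectories — can be sketched as follows. This is an assumed illustration, not the thesis's exact metric: it compares step-length histograms of two trajectories with the Jensen-Shannon divergence, and all names (`js_divergence`, `step_length_hist`) are made up for the example.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions
    (natural log, so the value is bounded by ln 2)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def step_length_hist(traj, bins):
    """Histogram of step lengths of a (T, 2) x/y trajectory."""
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    hist, _ = np.histogram(steps, bins=bins)
    return hist.astype(float)

# Two toy random-walk "trajectories" standing in for real vs. generated data.
rng = np.random.default_rng(1)
real = np.cumsum(rng.normal(0, 1.0, size=(500, 2)), axis=0)
fake = np.cumsum(rng.normal(0, 1.0, size=(500, 2)), axis=0)
bins = np.linspace(0, 5, 21)
d = js_divergence(step_length_hist(real, bins), step_length_hist(fake, bins))
```

A small divergence indicates that the generator reproduces at least this marginal statistic of the real trajectories; richer joint statistics would be needed to check the temporal component.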
5

GENERATIVE MODELS IN NATURAL LANGUAGE PROCESSING AND COMPUTER VISION

Talafha, Sameerah M 01 August 2022 (has links)
Generative models are broadly used in many subfields of deep learning (DL). Deep neural networks (DNNs) have recently become a core approach to solving data-centric problems in image classification, translation, and related tasks. The latest developments in parameterizing these models using DNNs and stochastic optimization algorithms allow scalable modeling of complex, high-dimensional data, including speech, text, and images. This dissertation presents state-of-the-art probabilistic foundations and DL algorithms for generative models, including VAEs, GANs, and RNN-based encoder-decoders, and discusses application areas that may benefit from deep generative models in both NLP and computer vision. In NLP, we propose an Arabic poetry generation model with extended phonetic and semantic embeddings (phonetic CNN_subword embeddings). Extensive quantitative experiments using BLEU scores and Hamming distance show notable enhancements over strong baselines, and a comprehensive human evaluation confirms that the poems generated by our model outperform the base models in criteria including meaning, coherence, fluency, and poeticness. In computer vision, we propose a generative video model using a hybrid VAE-GAN, integrating two attentional mechanisms with the GAN to capture the essential regions of interest in a video and to enhance the visual rendering of human motion in the generated output. Quantitative and qualitative experiments, including comparisons with other state-of-the-art models, indicate that our model enhances performance and performs favorably under the quantitative metrics PSNR, SSIM, LPIPS, and FVD. Recently, mimicking biologically inspired learning in generative models based on spiking neural networks (SNNs) has shown effectiveness in different applications. SNNs are the third generation of neural networks, in which neurons communicate through binary signals known as spikes, and they are more energy-efficient than DNNs.
Moreover, DNN models are vulnerable to small adversarial perturbations that cause misclassification of legitimate images. This dissertation proposes "VAE-Sleep", which combines ideas from the VAE and the sleep mechanism, leveraging the advantages of both deep and spiking neural networks (DNN-SNN). On top of that, we present "Defense-VAE-Sleep", which extends the "VAE-Sleep" model to purge adversarial perturbations from contaminated images. We demonstrate the benefit of sleep in improving the generalization performance of the traditional VAE when the testing data differ from the training data in specific ways, even by a small amount. We conduct extensive experiments, including comparisons with the state of the art on different datasets.
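Common to all the VAE variants above is the reparameterization trick and the closed-form KL term of the VAE loss. A minimal NumPy sketch of just those two ingredients (generic VAE machinery, not the dissertation's VAE-Sleep architecture):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I).
    Writing the sample this way keeps it differentiable w.r.t. mu and log_var,
    which is what makes backpropagation through the sampling step possible."""
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)) term of the VAE loss."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

rng = np.random.default_rng(0)
mu = np.zeros(4)
log_var = np.zeros(4)          # sigma = 1 in every dimension
z = reparameterize(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)   # 0 when the posterior equals the prior
```

In a full VAE this KL term is added to a reconstruction loss; the sleep-mechanism variants change the networks around these pieces, not the pieces themselves.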
6

Data driven approach to detection of quantum phase transitions

Contessi, Daniele 19 July 2023 (has links)
Phase transitions are fundamental phenomena in (quantum) many-body systems. They are associated with changes in the macroscopic physical properties of the system in response to an alteration of the conditions controlled by one or more parameters, such as temperature or coupling constants. Quantum phase transitions are particularly intriguing, as they reveal new insights into the fundamental nature of matter and the laws of physics. The study of phase transitions in such systems is crucial for understanding how materials behave in extreme conditions, which are difficult to replicate in the laboratory, and for understanding exotic states of matter with unique and potentially useful properties, such as superconductors and superfluids. This understanding also has practical applications and can lead to the development of new materials with specific properties or to more efficient technologies, such as quantum computers. Hence, detecting the transition point from one phase of matter to another and constructing the corresponding phase diagram is of great importance for examining many-body systems and predicting their response to external perturbations. Traditionally, phase transitions have been identified either through analytical methods like mean-field theory or through numerical simulations. Pinpointing the critical value normally involves measuring specific quantities, such as local observables, correlation functions, and energy gaps, that reflect how the physics changes through the transition. However, this approach requires prior knowledge of the system in order to calculate the order parameter of the transition, which is uniquely associated with its universality class. Recently, another method has gained more and more attention in the physics community: using raw and very general representative data of the system, one can resort to machine learning techniques to distinguish patterns within the data belonging to different phases.
The relevance of these techniques is rooted in the ability of a properly trained machine to efficiently process complex data for classification tasks, pattern recognition, the generation of brand-new data, and even the development of decision processes. The aim of this thesis is to explore phase transitions from this new and promising data-centric perspective. On the one hand, our work focuses on the development of new machine learning architectures using state-of-the-art, interpretable models. On the other hand, we are interested in studying the various possible kinds of data which can be fed to the artificial intelligence model for mapping the phase diagram of a quantum many-body system. Our analysis is supported by numerical examples obtained via matrix-product-states (MPS) simulations for several one-dimensional zero-temperature lattice systems: the XXZ model, the Extended Bose-Hubbard model (EBH), and the two-species Bose-Hubbard model (BH2S). In Part I, we provide a general introduction to the background concepts needed to understand the physics and the numerical methods used for the simulations and for the deep learning analysis. In Part II, we first present the models of the quantum many-body systems that we study. Then, we discuss the machine learning protocol used to identify phase transitions, namely the anomaly detection technique, which involves training a model on a dataset of normal behavior and using it to recognize deviations from this behavior on test data. For our purposes, it can be applied by training in a known phase so that, at test time, all the other phases of the system are marked as anomalies. Our method is based on Generative Adversarial Networks (GANs) and improves on the networks adopted in previous works on the anomaly detection scheme by taking advantage of the adversarial training procedure.
Specifically, we train the GAN on a dataset composed of bipartite entanglement spectra (ES) obtained from Tensor Network simulations of the three aforementioned quantum systems. We focus our study on the detection of the elusive Berezinskii-Kosterlitz-Thouless (BKT) transition, which has been the object of intense theoretical and experimental study since its first prediction for the classical two-dimensional XY model. The absence of an explicit symmetry breaking and the gapless-to-gapped nature of the transition make it very subtle to detect, hence providing a challenging testing ground for the machine-driven method. We train the GAN architecture on the ES data on the gapless side of the BKT transition and show that the GAN is able to automatically distinguish between data from the same phase and data from beyond the BKT point. The protocol that we develop is not meant to substitute for the traditional methods of phase transition detection, but it allows one to obtain a qualitative map of a phase diagram with almost no prior knowledge about the nature and arrangement of the phases -- in this sense we refer to it as agnostic -- in an automatic fashion. Furthermore, it is very general and can in principle be applied to any kind of representative data of the system, coming from experiments or from numerics, as long as the data have different patterns (even hidden to the eye) in different phases. Since the kind of data is crucially linked to the success of the detection, together with the ES we investigate another candidate: the probability density function (PDF) of a globally U(1)-conserved charge in an extensive sub-portion of the system. The full PDF is one of the possible reductions of the ES, which is known to exhibit relations and degeneracies reflecting very peculiar aspects of the physics and the symmetries of the system.
Its patterns are often used to tell different kinds of phases apart and embed information about non-local quantum correlations. Moreover, the PDF is measurable, e.g. in quantum gas microscope experiments, and it is quite general, so it can be considered not only in the cases studied here but also in other systems with different symmetries and dimensionalities. Both the ES and the PDF can be extracted from the simulation of the ground state by dividing the one-dimensional chain into two complementary sub-portions. For the EBH we calculate the PDF of the bosonic occupation number over a wide range of couplings, and we are able to reproduce the very rich phase diagram containing several phases (superfluid, Mott insulator, charge density wave, phase separation of supersolid and superfluid, and the topological Haldane insulator) with just an educated Gaussian fit of the PDF. Even without resorting to machine learning, this analysis is instrumental in showing the importance of the experimentally accessible PDF for the task. Moreover, we highlight some of its properties, according to the gapless or gapped nature of the ground state, which call for further investigation and extension beyond zero-temperature regimes and one-dimensional systems. The last chapter of the results describes another architecture, the Concrete Autoencoder (CAE), which can be used to detect phase transitions within the anomaly detection scheme while automatically learning the most relevant components of the input data. We show that the CAE can recognize the important eigenvalues out of the entire ES for the EBH model in order to characterize the gapless phase. This architecture can therefore provide not only a more compact version of the input data (dimensionality reduction), which can improve training, but also meaningful insights in the spirit of machine learning interpretability.
In conclusion, this thesis describes two advances toward solving the problem of phase recognition in quantum many-body systems. On one side, we improve the standard anomaly detection protocol in the literature for automatic and agnostic identification of phases by employing a GAN, and we implement and test an explainable model which makes the results easier to interpret. On the other side, we put the focus on the PDF as a new candidate quantity for discerning phases of matter, showing that it contains a great deal of information about the many-body state while being very general and experimentally accessible.
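The train-on-one-phase, flag-deviations logic of the anomaly detection scheme can be illustrated with a drastically simplified stand-in: a Gaussian z-score detector in place of the thesis's GAN, and random vectors in place of entanglement spectra. Everything here (names, dimensions, the 95% threshold) is an assumption for the sake of the sketch.

```python
import numpy as np

def fit_normal_phase(train):
    """Fit per-feature mean and spread of the 'normal' (training) phase."""
    return train.mean(axis=0), train.std(axis=0) + 1e-8

def anomaly_score(x, mu, sd):
    """Mean squared z-score: large when x deviates from the training phase."""
    return np.mean(((x - mu) / sd) ** 2, axis=-1)

rng = np.random.default_rng(2)
gapless = rng.normal(0.0, 1.0, size=(200, 8))      # training data: one known phase
same_phase = rng.normal(0.0, 1.0, size=(50, 8))    # test data from the same phase
other_phase = rng.normal(3.0, 1.0, size=(50, 8))   # test data from across the transition

mu, sd = fit_normal_phase(gapless)
# Threshold: the 95th percentile of the scores seen on the training phase.
threshold = np.quantile(anomaly_score(gapless, mu, sd), 0.95)
flags_same = anomaly_score(same_phase, mu, sd) > threshold
flags_other = anomaly_score(other_phase, mu, sd) > threshold
```

Sweeping a control parameter and watching where the anomaly flags switch on is, schematically, how the phase boundary is mapped; the GAN replaces the Gaussian score with a learned one.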
7

Apprentissage profond pour la description sémantique des traits visuels humains / Deep learning for semantic description of visual human traits

Antipov, Grigory 15 December 2017 (has links)
The recent progress in artificial neural networks (rebranded as deep learning) has significantly boosted the state-of-the-art in numerous domains of computer vision. In this PhD study, we explore how deep learning techniques can help in the analysis of gender and age from a human face.
In particular, two complementary problem settings are considered: (1) gender/age prediction from given face images, and (2) synthesis and editing of human faces with the required gender/age attributes. Firstly, we conduct a comprehensive study which results in an empirical formulation of a set of principles for the optimal design and training of gender recognition and age estimation Convolutional Neural Networks (CNNs). As a result, we obtain the state-of-the-art CNNs for gender/age prediction according to the three most popular benchmarks, and win an international competition on apparent age estimation. On a very challenging internal dataset, our best models reach 98.7% gender classification accuracy and an average age estimation error of 4.26 years. In order to address the problem of synthesis and editing of human faces, we design and train GA-cGAN, the first Generative Adversarial Network (GAN) which can generate synthetic faces of high visual fidelity within required gender and age categories. Moreover, we propose a novel method which allows employing GA-cGAN for gender swapping and aging/rejuvenation without losing the original identity in synthetic faces. Finally, in order to show the practical interest of the designed face editing method, we apply it to improve the accuracy of an off-the-shelf face verification software in a cross-age evaluation scenario.
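A conditional GAN like GA-cGAN receives its target attributes alongside the latent noise; the standard mechanism is to concatenate one-hot label codes to the latent vector. A hedged sketch of that conditioning step only (the function name, latent size, and six age bins are illustrative assumptions, not details from the thesis):

```python
import numpy as np

def condition_latent(z, gender, age_bin, n_ages=6):
    """Concatenate one-hot gender and age codes to the latent vector,
    the usual way a conditional GAN generator receives its target labels."""
    g = np.eye(2)[gender]          # one-hot gender code (2 classes)
    a = np.eye(n_ages)[age_bin]    # one-hot age-category code
    return np.concatenate([z, g, a])

z = np.random.default_rng(0).standard_normal(100)   # latent noise vector
zc = condition_latent(z, gender=1, age_bin=3)       # ask for gender 1, age bin 3
```

Feeding the same `z` with different label codes is what lets such a model edit age or gender while the identity-preserving method keeps the rest of `z` (and hence the face) fixed.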
