11

Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains

Ackerman, Wesley 15 September 2020 (has links)
We expand the scope of image-to-image translation to include more distinct image domains, where the image sets have analogous structures, but may not share object types between them. Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains (SUNIT) is built to more successfully translate images in this setting, where content from one domain is not found in the other. Our method trains an image translation model by learning encodings for semantic segmentations of images. These segmentations are translated between image domains to learn meaningful mappings between the structures in the two domains. The translated segmentations are then used as the basis for image generation. Beginning image generation with encoded segmentation information helps maintain the original structure of the image. We qualitatively and quantitatively show that SUNIT improves image translation outcomes, especially for image translation tasks where the image domains are very distinct.
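The abstract describes a three-step pipeline: encode a semantic segmentation, translate the segmentation between domains, then generate the image from the translated segmentation. As a rough illustration of that structure (not the actual SUNIT model), here is a toy NumPy sketch in which all three components are hypothetical stand-ins for learned networks; the class permutation and the grayscale decoder are invented for the example:

```python
import numpy as np

# Toy stand-ins for SUNIT's three learned components.
def encode_segmentation(seg, n_classes=3):
    """One-hot encode a (H, W) label map into (n_classes, H, W)."""
    return np.eye(n_classes, dtype=np.float32)[seg].transpose(2, 0, 1)

def translate_segmentation(enc, perm=(1, 2, 0)):
    """Hypothetical class mapping between domains: a channel
    permutation standing in for the learned structure mapping."""
    return enc[list(perm)]

def generate_image(enc):
    """Hypothetical decoder: collapse class channels to grayscale.
    The spatial class layout (structure) is preserved by construction."""
    weights = np.array([0.2, 0.5, 0.8], dtype=np.float32)
    return np.tensordot(weights, enc, axes=1)

seg = np.random.default_rng(0).integers(0, 3, size=(8, 8))
out = generate_image(translate_segmentation(encode_segmentation(seg)))
assert out.shape == seg.shape
```

Because generation starts from the (translated) segmentation, every pixel's output value depends only on its region label, which is the sense in which the original structure is maintained.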
12

GAN-based Automatic Segmentation of Thoracic Aorta from Non-contrast-Enhanced CT Images

Xu, Libo January 2021 (has links)
Deep learning-based automatic segmentation methods have developed rapidly in recent years and now deliver promising performance on medical image segmentation tasks, providing clinical medicine with accurate and fast computer-aided diagnosis. Generative adversarial networks and their extended frameworks have achieved encouraging results on image-to-image translation problems. In this report, the proposed hybrid network combined a cycle-consistent adversarial network, which translated contrast-enhanced computed tomography angiography images into conventional low-contrast CT scans, with a segmentation network, and trained the two simultaneously in an end-to-end manner. The trained segmentation network was then tested on non-contrast-enhanced CT images. The synthesis and segmentation processes were also implemented in a two-stage manner. The two-stage process achieved a higher Dice similarity coefficient on the test data than the baseline U-Net did, but the proposed hybrid network did not outperform the baseline, owing to the difference in field of view between the two training data sets.
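The cycle-consistent component mentioned above rests on a simple idea: translating to the other domain and back should recover the input. A minimal sketch of that cycle-consistency loss, with two invertible pixel-wise maps standing in for the learned CT-to-CTA and CTA-to-CT generators (the maps are invented for illustration):

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss ||F(G(x)) - x||_1, averaged over pixels."""
    return float(np.mean(np.abs(F(G(x)) - x)))

# Toy "generators": an invertible pixel-wise map and its inverse,
# standing in for the two translation networks.
G = lambda x: 2.0 * x + 1.0    # hypothetical CT -> CTA mapping
F = lambda y: (y - 1.0) / 2.0  # hypothetical inverse mapping

x = np.random.default_rng(0).random((4, 64, 64))
assert np.isclose(cycle_consistency_loss(x, G, F), 0.0)
```

In training, this term is minimized alongside the adversarial losses so that unpaired data can still constrain the mapping.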
13

Representation learning in unsupervised domain translation

Lavoie-Marchildon, Samuel 12 1900 (has links)
This thesis is concerned with the problem of unsupervised domain translation. Unsupervised domain translation is the task of transferring one domain, the source domain, to a target domain without supervision. We first study this problem using the formalism of optimal transport. Next, we study the problem of high-level semantic image-to-image translation using advances in representation learning and transfer learning. The first chapter is devoted to reviewing the background concepts used in this work. We first describe representation learning, including a description of neural networks and supervised and unsupervised learning. We then introduce generative models and optimal transport. We finish with the relevant notions of transfer learning that will be used in chapter 3. The second chapter presents Neural Wasserstein Flow. In this work, we build on the theory of optimal transport and show that deep neural networks can be used to learn a Wasserstein barycenter of distributions. We further show how a neural network can amortize any barycenter, yielding a continuous interpolation. We also show how this idea can be used in the generative model framework. Finally, we show results on shape interpolation and colour interpolation. In the third chapter, we tackle the task of high-level semantic image-to-image translation. We show that it can be achieved by simply learning a conditional GAN with the representation learned by a neural network. We further show that we can make this process unsupervised if the learned representation is a clustering. Finally, we show that our approach works on the task of MNIST-to-SVHN transfer. We conclude by relating the two contributions and proposing future work in that direction.
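For intuition about the barycenters the second chapter's network learns, the one-dimensional Gaussian case has a known closed form: the Wasserstein-2 barycenter of Gaussians N(mu_i, sigma_i^2) with weights w_i is the Gaussian with mean sum(w_i mu_i) and standard deviation sum(w_i sigma_i). A sketch of that closed form, which an amortized network would reproduce for any weight vector in a single forward pass (this is the textbook result, not the thesis's implementation):

```python
import numpy as np

def gaussian_w2_barycenter(means, stds, weights):
    """Closed-form W2 barycenter of 1D Gaussians: a Gaussian with
    mean = sum(w_i * mu_i) and std = sum(w_i * sigma_i)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(means)), float(w @ np.asarray(stds))

# Sweeping the weight t traces a continuous interpolation between
# the two endpoint distributions, the kind of path an amortized
# barycenter network outputs.
for t in (0.0, 0.5, 1.0):
    mu, sigma = gaussian_w2_barycenter([0.0, 4.0], [1.0, 3.0], [1 - t, t])
```

At t = 0.5 this gives the midpoint Gaussian with mean 2 and standard deviation 2, exactly halfway along the W2 geodesic.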
14

DATA ASSIMILATION INTEGRATED WITH IMAGE-TO-IMAGE TRANSLATION NETWORKS APPLIED TO RESERVOIR MODELS

VITOR HESPANHOL CORTES 22 June 2023 (has links)
Reservoir model data assimilation is a key step to properly estimate the final recovery of an oil field, and in the last decade the ensemble smoother with multiple data assimilation (ES-MDA) method has stood out among the available strategies for this task. However, this method achieves better results when model parameters are described by an approximately Gaussian distribution, and hence shows reduced performance when dealing with categorical parameters such as geological facies. An alternative is to adopt a deep learning-based approach, particularly image-to-image translation (I2I) networks, exploiting the analogy between the matrix representation of images and the grid properties of a reservoir model. It is thus possible to adapt I2I network architectures and train them to generate the categorical parameter (facies) from its correlated continuous properties modified by the ES-MDA method (such as porosity and permeability), similar to semantic segmentation tasks in an image translation context. The categorical parameter would therefore be updated indirectly by the ES-MDA method, with its reconstruction carried out by the I2I network.
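To make the ES-MDA half of this pipeline concrete, here is a minimal single-step sketch of the ensemble update on a toy linear forward model; the inflation factor alpha, the forward model, and all sizes are invented for the example, and the I2I facies reconstruction the thesis couples to this update is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def es_mda_step(M, D, d_obs, C_d, alpha):
    """One ES-MDA update of a parameter ensemble.
    M: (n_param, n_ens) parameters; D: (n_data, n_ens) predicted data;
    C_d: (n_data, n_data) measurement-error covariance;
    alpha: inflation factor for this step (the 1/alpha_i over all
    assimilation steps should sum to one)."""
    n = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n - 1)   # cross-covariance params vs data
    C_dd = dD @ dD.T / (n - 1)   # data auto-covariance
    # Perturb observations with inflated noise, then apply the
    # Kalman-like gain to every ensemble member.
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_d, size=n).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_d)
    return M + K @ (d_obs[:, None] + E - D)

# Toy linear "reservoir simulator" g(m) = 2m with true parameter 3.0.
n_ens = 200
M = rng.normal(0.0, 1.0, size=(1, n_ens))   # prior ensemble
D = 2.0 * M                                  # predicted data
d_obs = np.array([6.0])                      # observation of the truth
M_post = es_mda_step(M, D, d_obs, np.array([[0.1]]), alpha=1.0)
```

After the update the ensemble mean moves toward the true parameter and the ensemble spread shrinks, which is the behaviour the multiple-assimilation schedule repeats with successive alpha values.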
15

Generation of layouts for living spaces using conditional generative adversarial networks : Designing floor plans that respect both a boundary and high-level requirements

Chen, Anton January 2022 (has links)
Architectural design is a complex subject involving many different aspects that need to be considered. Drafting a floor plan from a blank slate can require iterating over several designs in the early phases of planning, and it is likely an even more daunting task for non-architects to tackle. This thesis investigates the opportunities of using conditional generative adversarial networks to generate floor plans for living spaces. The pix2pixHD method is used to learn a mapping between building boundaries and color-mapped floor plan layouts from the RPLAN dataset consisting of over 80k images. Previous work has mainly focused on either preserving an input boundary or generating layouts based on a set of conditions. To give potential users more control over the generation process, it would be useful to generate floor plans that respect both an input boundary and some high-level client requirements. By encoding requirements about desired room types and their locations in colored centroids, and stacking this image with the boundary input, we are able to train a model to synthesize visually plausible floor plan images that adapt to the given conditions. This model is compared to another model trained on only the building boundary images that acts as a baseline. Results from visual inspection, image properties, and expert evaluation show that the model trained with centroid conditions generates samples with superior image quality to the baseline model. Feeding additional information to the networks is therefore not only a way to involve the user in the design process, but it also has positive effects on the model training. The results from this thesis demonstrate that floor plan generation with generative adversarial networks can respect different kinds of conditions simultaneously, and can be a source of inspiration for future work seeking to make computer-aided design a more collaborative process between users and models. 
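The conditioning scheme described above, stacking the boundary image with encoded room-type requirements, can be sketched as follows. The thesis encodes the centroids as colored dots in an image; this sketch uses one binary channel per room type instead, a hypothetical variant, with all sizes, room types, and locations invented:

```python
import numpy as np

H = W = 64
boundary = np.zeros((1, H, W), dtype=np.float32)  # building outline mask
boundary[0, 8:56, 8:56] = 1.0

# Hypothetical room-type requirements: one channel per requested room
# type, with a small disc marking its desired location.
n_room_types = 4
centroids = np.zeros((n_room_types, H, W), dtype=np.float32)
yy, xx = np.mgrid[0:H, 0:W]
centroids[0][(yy - 20) ** 2 + (xx - 20) ** 2 <= 9] = 1.0  # e.g. living room

# The conditional generator receives boundary and requirements as one
# stacked input tensor, analogous to pix2pixHD's label-map input.
x = np.concatenate([boundary, centroids], axis=0)
assert x.shape == (1 + n_room_types, H, W)
```

Because the requirements enter as extra input channels, the same generator architecture can be trained with or without them, which is what makes the baseline comparison in the thesis straightforward.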
16

Generative Image-to-Image Translation with Applications in Computational Pathology

Fangda Li (17272816) 24 October 2023 (has links)
<p dir="ltr">Generative Image-to-Image Translation (I2IT) involves transforming an input image from one domain to another. Typically, this transformation retains the content of the input image while adjusting the domain-dependent style elements. Generative I2IT finds utility in a wide range of applications, yet its effectiveness hinges on adapting to the unique characteristics of the data at hand. This dissertation pushes the boundaries of I2IT by applying it to stain-related problems in computational pathology. The main contributions span two major applications of stain translation, H&E-to-H&E and H&E-to-IHC, each with its own requirements and challenges. More specifically, the first contribution addresses the generalization challenge that the high variability of H&E stain appearance poses to any task-specific machine learning model. To this end, the Generative Stain Augmentation Network (G-SAN) is introduced to augment the training images of any downstream task with random and diverse H&E stain appearances. Experimental results demonstrate G-SAN's ability to enhance model generalization across stain variations in downstream tasks. The second key contribution focuses on H&E-to-IHC stain translation. The major challenge in learning accurate H&E-to-IHC stain translation is the frequent, and sometimes severe, inconsistency in the ground-truth H&E-IHC image pairs. To make training more robust to these inconsistencies, a novel contrastive-learning-based loss, the Adaptive Supervised PatchNCE (ASP) loss, is presented. Experimental results suggest that the proposed ASP-based framework outperforms the state of the art in H&E-to-IHC stain translation by significant margins. Additionally, a new dataset for H&E-to-IHC translation, the Multi-IHC Stain Translation (MIST) dataset, is released to the public, featuring paired images from H&E to four different IHC stains.
For future directions of generative I2IT in stain translation problems, a proof-of-concept study of applying the latest diffusion model based I2IT methods to the problem of virtual H&E staining is presented.</p>
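The ASP loss builds on the PatchNCE family, whose core term is the InfoNCE contrast between a query patch embedding, its positive counterpart, and negative patches. A minimal sketch of that underlying term (the adaptive weighting that ASP adds to down-weight inconsistent pairs is not reproduced, and the embeddings here are random stand-ins):

```python
import numpy as np

def info_nce(q, k_pos, k_negs, tau=0.07):
    """InfoNCE loss for one query patch embedding, the building block
    behind PatchNCE-style losses.
    q: (d,) query; k_pos: (d,) positive; k_negs: (n, d) negatives."""
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    k_negs = k_negs / np.linalg.norm(k_negs, axis=1, keepdims=True)
    logits = np.concatenate(([q @ k_pos], k_negs @ q)) / tau
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(p[0]))

rng = np.random.default_rng(0)
q = rng.normal(size=16)
negs = rng.normal(size=(8, 16))
loss_aligned = info_nce(q, q.copy(), negs)        # consistent pair
loss_random = info_nce(q, rng.normal(size=16), negs)
assert loss_aligned < loss_random
```

A pair whose positive patch actually matches the query yields a much lower loss than a mismatched pair, which is precisely the signal an adaptive weighting scheme can exploit to detect inconsistent ground-truth pairs.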
17

Generative Adversarial Networks for Image-to-Image Translation on Street View and MR Images

Karlsson, Simon, Welander, Per January 2018 (has links)
Generative Adversarial Networks (GANs) are a deep learning method developed for synthesizing data. One application is image-to-image translation, which could prove valuable when training deep neural networks for image classification tasks. Two areas where deep learning methods are used are automotive vision systems and medical imaging. Automotive vision systems are expected to handle a broad range of scenarios, which demands training data with high diversity. The scenarios in the medical field are fewer, but the problem there is instead that collecting training data is difficult, time-consuming, and expensive. This thesis evaluates different GAN models by comparing the synthetic MR images they produce against ground-truth images. A perceptual study is also performed by an expert in the field. The study shows that the implemented GAN models can synthesize visually realistic MR images. It also shows that models producing more visually realistic synthetic images do not necessarily achieve better results in quantitative error measurements when compared to ground-truth data. Along with the investigations on medical images, the thesis explores the possibility of generating synthetic street-view images of different resolutions, light, and weather conditions. Different GAN models have been compared, implemented with our own adjustments, and evaluated. The results show that it is possible to create visually realistic images for different translations and image resolutions.
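The gap the abstract notes between visual realism and quantitative error can be made concrete with two common pixel-wise measures, mean absolute error and peak signal-to-noise ratio. These are illustrative choices, not necessarily the thesis's exact metrics, and the "synthetic" images here are just a ground-truth image with two levels of added noise:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images in [0, 1] (lower is better)."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = float(np.mean((a - b) ** 2))
    return float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(0)
gt = rng.random((64, 64))                              # ground-truth image
mild = np.clip(gt + rng.normal(0, 0.05, gt.shape), 0, 1)
severe = np.clip(gt + rng.normal(0, 0.20, gt.shape), 0, 1)
assert mae(gt, mild) < mae(gt, severe)
assert psnr(gt, mild) > psnr(gt, severe)
```

Pixel-wise measures like these reward average closeness to the reference, so a slightly blurry output can score better than a sharper, more realistic-looking one, which is one way visually superior samples can lose on quantitative error.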
