1 |
Effects of Transfer Learning on Data Augmentation with Generative Adversarial Networks. Berglöf, Olle; Jacobs, Adam. January 2019.
Data augmentation is a technique that creates additional training data by transforming available samples, where the training data is used to fit model parameters. It is used when training data is scarce in a domain and to reduce overfitting. Augmenting a training dataset for image classification with a Generative Adversarial Network (GAN) has been shown to increase classification accuracy. This report investigates whether transfer learning within a GAN can further increase classification accuracy when the augmented training dataset is used. The method section describes the specific GAN architecture used in the experiments, which includes a label condition. When transfer learning is used within this GAN architecture, a statistical analysis shows a statistically significant increase in classification accuracy on a classification problem with the EMNIST dataset, which consists of images of handwritten alphanumeric characters. In the discussion section, the authors analyze the results and motivate other use cases for the proposed GAN architecture.
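Although the abstract does not include code, the label-conditioned setup it describes can be sketched roughly as follows: a minimal PyTorch class-conditional generator for 28x28 grayscale EMNIST-style images. The layer sizes, the 47-class assumption (EMNIST Balanced), and the embedding scheme are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a label-conditioned GAN generator for 28x28 grayscale
# images (e.g. EMNIST). Layer sizes and class count are assumptions, not
# the thesis' exact architecture.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=100, num_classes=47, img_size=28):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, img_size * img_size),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized training images
        )
        self.img_size = img_size

    def forward(self, noise, labels):
        # Concatenate the latent vector with the class-label embedding so each
        # generated image is conditioned on a target character class.
        x = torch.cat([noise, self.label_emb(labels)], dim=1)
        return self.net(x).view(-1, 1, self.img_size, self.img_size)

# Augment the training set: sample noise plus the desired class labels and
# add the generated images to the real EMNIST training data.
gen = ConditionalGenerator()
z = torch.randn(64, 100)
labels = torch.randint(0, 47, (64,))
synthetic_batch = gen(z, labels)  # shape: (64, 1, 28, 28)
```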
2 |
Generating Synthetic Training Data with Stable Diffusion. Rynell, Rasmus; Melin, Oscar. January 2023.
The usage of image classification in various industries has grown significantly in recent years. There are, however, challenges concerning the data used to train such models. In many cases the training data is difficult and expensive to obtain. Furthermore, working with image data may come with additional problems such as privacy concerns. In recent years, synthetic image generation models such as Stable Diffusion have seen significant improvement. Using only a textual description, Stable Diffusion can generate a wide variety of photorealistic images. In addition to textual descriptions, conditioning models such as ControlNet have enabled additional grounding information, such as canny edge and segmentation images. This thesis investigates whether synthetic images generated by Stable Diffusion can be used effectively to train an image classifier. To find the most effective method for generating training data, multiple conditioning methods are investigated and evaluated. The results show that it is possible to generate high-quality training data using several conditioning techniques. The best performing method was using canny edge grounded images to augment already existing data. Extending two classes with additional synthetic data generated by the best performing method achieved the highest average F1-score increase of 0.85 percentage points compared with a baseline trained solely on real images.
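For illustration, a rough sketch of canny-edge-grounded generation as described above, written against the Hugging Face diffusers library. The model identifiers, prompt, and Canny thresholds are assumptions for demonstration, not the thesis' exact configuration.

```python
# Sketch of generating a synthetic training image with Stable Diffusion
# conditioned on canny edges via ControlNet. File names, model IDs, prompt,
# and thresholds are illustrative assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Extract canny edges from an existing real image to use as grounding.
real = cv2.imread("real_sample.jpg")
gray = cv2.cvtColor(real, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The text prompt describes the target class; the canny edges keep the
# synthetic image structurally similar to the real sample it augments.
synthetic = pipe(
    "a photo of a <target class>",
    image=edge_image,
    num_inference_steps=30,
).images[0]
synthetic.save("synthetic_sample.png")
```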