Segmentation in Tomography Data: Exploring Data Augmentation for Supervised and Unsupervised Voxel Classification with Neural Networks
Wagner, Franz, 23 September 2024
Computed Tomography (CT) imaging provides invaluable insight into internal structures of objects and organisms, which is critical for applications ranging from materials science to medical diagnostics. In CT data, an object is represented by a 3D reconstruction that is generated by combining multiple 2D X-ray images taken from various angles around the object. Each voxel, a volumetric pixel, within the reconstructed volume represents a small cubic element, allowing for detailed spatial representation. To extract meaningful information from CT imaging data and facilitate analysis and interpretation, accurate segmentation of internal structures is essential. However, this can be challenging due to various artifacts introduced by the physics of a CT scan and the properties of the object being imaged.
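To make the voxel representation concrete, a reconstructed volume can be modeled as a 3D array in which each element holds a reconstructed value and segmentation assigns each voxel a class label. This is a minimal sketch with synthetic data; real CT intensities, shapes, and thresholds differ, and the thesis replaces the naive threshold below with learned CNN classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A reconstructed volume as a 3D array of voxels: (depth, height, width).
# The values here merely stand in for reconstructed attenuation coefficients.
volume = rng.normal(loc=0.0, scale=1.0, size=(32, 64, 64))

# Segmentation assigns each voxel a class label. The simplest baseline is a
# global threshold, which fails in the presence of CT artifacts; that failure
# motivates the learned approaches discussed in this dissertation.
threshold = 1.0
mask = (volume > threshold).astype(np.uint8)  # 1 = structure, 0 = background
```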
This dissertation addresses this challenge directly with deep learning techniques. Specifically, Convolutional Neural Networks (CNNs) are used for segmentation; however, they face the problem of limited training data. Data scarcity is addressed through the unsupervised generation of synthetic training data and through 2D and 3D data augmentation methods. Combining these strategies streamlines segmentation in voxel data and effectively counters data scarcity. Essentially, the work aims to simplify the training of CNNs using minimal or no labeled data. To make the results of this thesis more accessible, two user-friendly software solutions, unpAIred and AiSeg, have been developed. These platforms support the generation of training data, data augmentation, and the training, analysis, and application of CNNs.
This cumulative work first examines simple but efficient conventional data augmentation methods, such as radiometric and geometric image manipulations, which are already widely used in the literature. These methods, however, are usually applied at random and follow no specific order. The primary focus of the first paper is to investigate this practice and to develop both online and offline data augmentation pipelines that allow these operations to be sequenced systematically. Offline augmentation augments the training data stored on a drive, while online augmentation is performed dynamically at runtime, just before images are fed to the CNN. It is shown that randomly applied augmentation methods are inferior to the new pipelines.
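As an illustration of the online/offline distinction and of a systematically ordered pipeline, a minimal sketch might look as follows. The operation names and probabilities are hypothetical and not the actual unpAIred/AiSeg API.

```python
import numpy as np

# Hypothetical augmentation operations; illustrative only.
def flip_horizontal(img, rng):
    return img[:, ::-1]

def adjust_brightness(img, rng):
    return np.clip(img + rng.uniform(-0.1, 0.1), 0.0, 1.0)

def add_gaussian_noise(img, rng):
    return np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 1.0)

class OrderedPipeline:
    """Applies operations in a fixed, systematic sequence (e.g. geometric
    before radiometric), each with its own probability, instead of in a
    random order."""
    def __init__(self, steps, seed=0):
        self.steps = steps  # list of (operation, probability) in fixed order
        self.rng = np.random.default_rng(seed)

    def __call__(self, img):
        for op, p in self.steps:
            if self.rng.random() < p:
                img = op(img, self.rng)
        return img

# Online augmentation: transform each image at runtime, just before it is
# fed to the CNN. Offline augmentation would instead write the results of
# pipeline(img) back to disk once, before training starts.
pipeline = OrderedPipeline([(flip_horizontal, 0.5),
                            (adjust_brightness, 0.8),
                            (add_gaussian_noise, 0.3)])
batch = [pipeline(np.zeros((64, 64))) for _ in range(4)]
```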
A careful comparison of 3D CNNs is then performed to identify optimal models for specific segmentation tasks, such as carbon and pore segmentation in CT scans of Carbon Reinforced Concrete (CRC). Through an evaluation of eight 3D CNN models on six datasets, tailored recommendations are provided for selecting the most effective model based on dataset characteristics. The analysis highlights the consistently strong performance of the 3D U-Net and its residual variant, which excel at segmenting rovings (bundles of carbon fibers) and pores.
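Such model comparisons hinge on a voxel-overlap quality metric. A standard choice for binary voxel masks is the Dice coefficient, shown here purely as an illustration; the chapter's own metric definitions appear in its Methods section.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary voxel masks:
    2*|A & B| / (|A| + |B|), in [0, 1], higher is better."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Comparing two hypothetical model outputs against a ground-truth mask:
truth = np.zeros((8, 8, 8), dtype=bool)
truth[2:6, 2:6, 2:6] = True
model_a = np.roll(truth, 1, axis=0)   # slightly shifted prediction
model_b = truth.copy()                # perfect prediction
print(dice(model_a, truth), dice(model_b, truth))
```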
Based on the augmentation pipelines and the results of the 3D CNN comparison, the pipelines are extended to 3D, specifically targeting the segmentation of carbon in CT scans of CRC. A comparative analysis of different 3D augmentation strategies, including both offline and online variants, provides insight into their effectiveness. While offline augmentation produces fewer artifacts, it can only segment rovings already present in the training data; online augmentation proves essential for effectively segmenting roving types not represented there. However, constraints such as the limited diversity of the dataset and overly aggressive augmentation, which caused segmentation artifacts, require further work to address data scarcity.
Recognizing the need for a larger and more diverse dataset, this thesis extends the results of the three preceding papers by introducing a deep learning-based augmentation that uses a Generative Adversarial Network (GAN), Contrastive Unpaired Translation (CUT), to generate synthetic training data. By combining the GAN with the augmentation pipelines, semi-supervised and unsupervised end-to-end training methods are introduced, and the successful generation of training data for 2D pore segmentation is demonstrated. However, challenges remain in achieving a stable 3D CUT implementation, which warrants further research and development.
In summary, the results of this dissertation address the challenges of accurate CT data segmentation in materials science through deep learning techniques and novel 2D and 3D online and offline augmentation pipelines. The evaluation of different 3D CNN models yields tailored recommendations for specific segmentation tasks. Furthermore, the exploration of deep learning-based augmentation using CUT shows promising results in generating synthetic training data.
Future work includes developing a stable 3D CUT implementation, exploring new model architectures, and developing sub-voxel accurate segmentation techniques, all of which hold potential for significant advances in the segmentation of tomography data.

Abstract IV
Zusammenfassung VI
1 Introduction 1
1.1 Thesis Structure 2
1.2 Scientific Context 3
1.2.1 Developments in the Segmentation in Tomography Data 3
1.2.2 3D Semantic Segmentation using Machine Learning 5
1.2.3 Data Augmentation 6
2 Developed Software Solutions: AiSeg and unpAIred 9
2.1 Software Design 10
2.2 Installation 11
2.3 AiSeg 11
2.4 unpAIred 12
2.5 Limitations 12
3 Factors Affecting Image Quality in Computed Tomography 13
3.1 From CT Scan to Reconstruction 13
3.2 X-ray Tube and Focal Spot 14
3.3 Beam Hardening 14
3.4 Absorption, Scattering and Pairing 15
3.5 X-ray Detector 16
3.6 Geometric Calibration 17
3.7 Reconstruction Algorithm 17
3.8 Artifact Corrections 18
4 On the Development of Augmentation Pipelines for Image Segmentation 19
4.0 Abstract 20
4.1 Introduction 20
4.2 Methods 21
4.2.1 Data Preparation 21
4.2.2 Augmentation 21
4.2.3 Networks 24
4.2.4 Training and Metrics 25
4.3 Experimental Design 26
4.3.1 Hardware 26
4.3.2 Workflow 26
4.3.3 Test on Cityscapes 26
4.4 Results and Discussion 26
4.4.1 Stage 1: Creating a Baseline 27
4.4.2 Stage 2: Using Offline Augmentation 27
4.4.3 Stage 3: Using Online Augmentation 27
4.4.4 Test on Cityscapes 29
4.4.5 Future Work – A New Online Augmentation 30
4.5 Conclusion 31
4.6 Appendix 31
4.6.1 Appendix A. List of All Networks 31
4.6.2 Appendix B. Augmentation Methods 32
4.6.3 Appendix C. Used RIWA Online Augmentation Parameters 36
4.6.4 Appendix D. Used Cityscapes Online Augmentation Parameters 36
4.6.5 Appendix E. Comparison of CNNs with best Backbones on RIWA 37
4.6.6 Appendix F. Segmentation Results 38
4.7 References 39
5 Comparison of 3D CNNs for Volume Segmentation 43
5.0 Abstract 44
5.1 Introduction 44
5.2 Datasets 44
5.2.1 Carbon Rovings 45
5.2.2 Concrete Pores 45
5.2.3 Polyethylene Fibers 45
5.2.4 Brain Mitochondria 45
5.2.5 Brain Tumor Segmentation Challenge (BraTS) 46
5.2.6 Head and Neck Cancer 46
5.3 Methods 46
5.3.1 Data Preprocessing 46
5.3.2 Hyperparameters 46
5.3.3 Metrics 47
5.3.4 Experimental Design 48
5.4 Results and Discussion 48
5.4.1 Impact of Initial Random States (Head and Neck Cancer Dataset) 48
5.4.2 Carbon Rovings 48
5.4.3 Concrete Pores 49
5.4.4 Polyethylene Fibers 49
5.4.5 Brain Mitochondria 50
5.4.6 BraTS 51
5.5 Conclusion 51
5.6 References 52
6 Segmentation of Carbon in CRC Using 3D Augmentation 55
6.0 Abstract 56
6.1 Introduction 56
6.2 Materials and Methods 58
6.2.1 Specimens 58
6.2.2 Microtomography 59
6.2.3 AI-Based Segmentation 60
6.2.4 Roving Extraction 64
6.2.5 Multiscale Modeling 65
6.2.6 Scaled Boundary Isogeometric Analysis 66
6.2.7 Parameterized RVE and Definition of Characteristic Geometric Properties 67
6.3 Results and Discussion 70
6.3.1 Microtomography 70
6.3.2 Deep Learning 71
6.3.3 Roving Extraction 74
6.3.4 Parameterized RVE and Definition of Characteristic Geometric Properties 75
6.4 Conclusion 79
6.5 References 80
7 Image-to-Image Translation for Semi-Supervised Semantic Segmentation 85
7.1 Introduction 85
7.2 Methods 86
7.2.1 Generative Adversarial Networks 87
7.2.2 Contrastive Unpaired Translation 87
7.2.3 Fréchet Inception Distance 89
7.2.4 Datasets 89
7.3 Experimental Design 92
7.4 Results and Discussion 94
7.4.1 Training and Inference of CUT 94
7.4.2 End-to-End Training for Semantic Segmentation 99
7.5 Conclusion 104
7.5.1 Future Work 104
8 Synthesis 107
8.1 Research Summary 107
8.1.1 Augmentation Pipelines 107
8.1.2 3D CNN Comparison 108
8.1.3 3D Data Augmentation for the Segmentation of Carbon Rovings 108
8.1.4 Synthetic Training Data Generation 109
8.2 Future Developments 109
8.2.1 Augmentation 109
8.2.2 Pre-trained 3D Encoder 111
8.2.3 On the Quality Control of Carbon Reinforced Concrete 111
8.2.4 Subvoxel Accurate Segmentation 113
8.2.5 Towards Volume-to-Volume Translation 114
8.3 Conclusion 114
References 117
List of Tables 125
List of Figures 127
List of Abbreviations 131
Training deep convolutional architectures for vision
Desjardins, Guillaume, 08 1900
High-level vision tasks such as generic object recognition remain out of reach for modern Artificial Intelligence systems. A promising approach involves learning algorithms, such as the Artificial Neural Network (ANN), which automatically learn to extract useful features for the task at hand. For ANNs, however, this represents a difficult optimization problem. Deep Belief Networks have thus been proposed as a way to guide the discovery of intermediate representations, through a greedy unsupervised training of stacked Restricted Boltzmann Machines (RBM). The articles presented herein represent contributions to this field of research.
The first article introduces the convolutional RBM. By mimicking local receptive fields and tying the parameters of hidden units within the same feature map, we considerably reduce the number of parameters to learn and enforce local, shift-equivariant feature detectors. This translates to better likelihood scores, compared to RBMs trained on small image patches.
In the second article, recent discoveries in neuroscience motivate an investigation into the impact of higher-order units on visual classification, along with the evaluation of a novel activation function. We show that ANNs with quadratic units using the softsign activation function offer better generalization error across several tasks. Finally, the third article gives a critical look at recently proposed RBM training algorithms. We show that Contrastive Divergence (CD) and Persistent CD are brittle in that they require the energy landscape to be smooth in order for their negative chain to mix well. PCD with fast-weights addresses the issue by performing small model perturbations, but may result in spurious samples. We propose using simulated tempering to draw negative samples. This leads to better generative models and increased robustness to various hyperparameters.
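The CD/PCD discussion can be made concrete with a minimal Bernoulli RBM trained by CD-1. This is an illustrative sketch, not the models used in the article; PCD would differ only in carrying the negative sample over as a persistent chain between updates rather than restarting from the data each time.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BernoulliRBM:
    def __init__(self, n_vis, n_hid):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)  # visible biases
        self.c = np.zeros(n_hid)  # hidden biases

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_update(self, v0, lr=0.1):
        ph0, h0 = self.sample_h(v0)   # positive phase statistics
        _, v1 = self.sample_v(h0)     # one Gibbs step: negative phase
        ph1, _ = self.sample_h(v1)
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

# Train on a toy binary dataset.
data = (rng.random((32, 6)) < 0.5).astype(float)
rbm = BernoulliRBM(6, 4)
for _ in range(100):
    rbm.cd1_update(data)
```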
Apprentissage de représentations sur-complètes par entraînement d'auto-encodeurs
Lajoie, Isabelle, 12 1900
Progress in the machine learning domain allows computational systems to address more and more complex tasks associated with vision, audio signals, or natural language processing. Among the existing models, we find the Artificial Neural Network (ANN), whose popularity increased suddenly with the recent breakthrough of Hinton et al. [22], which consists in using Restricted Boltzmann Machines (RBM) for an unsupervised, layer-by-layer pre-training initialization of a Deep Belief Network (DBN), enabling the subsequent successful supervised training of such an architecture. Since this discovery, researchers have studied the efficiency of other similar pre-training strategies, such as the stacking of traditional auto-encoders (SAE) [5, 38] and the stacking of denoising auto-encoders (SDAE) [44]. This is the context in which the present study started. After a brief introduction to the basic machine learning principles and to the pre-training methods used so far with RBM, AE, and DAE modules, we performed a series of experiments to deepen our understanding of pre-training with SDAE, explored its different properties, and explored variations on the DAE algorithm as alternative strategies to initialize deep networks. We evaluated the sensitivity to the noise level and the influence of the number of layers and the number of hidden units on the generalization error obtained with SDAE. We experimented with other noise types and saw improved performance on the supervised task with the use of pepper-and-salt noise (PS) or Gaussian noise (GS), noise types that are better justified than the one used until now, namely masking noise (MN). Moreover, modifying the algorithm by imposing an emphasis on the reconstruction of the corrupted components during the unsupervised training of each DAE showed encouraging performance improvements. Our work also revealed that the DAE was capable of learning, on natural images, filters similar to those found in the V1 cells of the visual cortex, which are in essence edge detectors. In addition, we verified that the learned representations of the SDAE are very good characteristics to feed to a linear or Gaussian Support Vector Machine (SVM), considerably enhancing its generalization performance. Also, we observed that, like the DBN and unlike the SAE, the SDAE has the potential to be used as a good generative model. Finally, we opened the door to novel pre-training strategies and discovered the potential of one of them: the stacking of renoising auto-encoders (SRAE).
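The three corruption types compared in this work, masking noise (MN), pepper-and-salt noise (PS), and Gaussian noise (GS), can be sketched as simple NumPy functions on inputs scaled to [0, 1]. This is an illustration only; the corruption levels used in the thesis are not reproduced here.

```python
import numpy as np

def masking_noise(x, rng, frac=0.25):
    """MN: force a random fraction of components to zero."""
    out = x.copy()
    out[rng.random(x.shape) < frac] = 0.0
    return out

def salt_pepper_noise(x, rng, frac=0.25):
    """PS: force a random fraction of components to their min (0) or max (1)."""
    out = x.copy()
    mask = rng.random(x.shape) < frac
    out[mask] = (rng.random(mask.sum()) < 0.5).astype(x.dtype)
    return out

def gaussian_noise(x, rng, sigma=0.1):
    """GS: add isotropic Gaussian noise to every component."""
    return x + rng.normal(0.0, sigma, x.shape)

rng = np.random.default_rng(0)
clean = np.full((16, 16), 0.5)
corrupted = {f.__name__: f(clean, rng)
             for f in (masking_noise, salt_pepper_noise, gaussian_noise)}
```

A denoising auto-encoder is then trained to reconstruct `clean` from any of these corrupted versions.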
Moranapho : apprentissage non supervisé de la morphologie d'une langue par généralisation de relations analogiques
Lavallée, Jean-François, 08 1900
Recently, we have witnessed a growing interest in applying the concept of formal analogy to unsupervised morphology acquisition. The attractiveness of this concept lies in its parallels with the mental process involved in the creation of new words based on morphological relations existing in the language. However, the use of formal analogy remains marginal, partly due to its high computational cost. In this document, we present Moranapho, a graph-based system founded on the concept of formal analogy. Our participation in the 2009 Morpho Challenge (Kurimo:10) and our subsequent experiments demonstrate that the performance of Moranapho compares favorably to the state of the art. We also studied the influence of some of its components on the quality of the morphological analyses produced.
Finally, we will discuss our findings based on well-established theories in the field of linguistics. This allows us to provide some predictions on the successes and failures of our system when applied to languages other than those tested in our experiments.
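Moranapho's graph-based generalization of analogical relations is not reproduced here, but the underlying notion of a formal analogy [a : b :: c : d] can be illustrated with a naive suffix-swap solver, a toy sketch far simpler than the actual system.

```python
def solve_analogy(a, b, c):
    """Solve [a : b :: c : d] when a and b differ only by a suffix swap:
    a = p + s_a, b = p + s_b, c = q + s_a  =>  d = q + s_b."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1  # longest common prefix of a and b
    suffix_a, suffix_b = a[i:], b[i:]
    if suffix_a and not c.endswith(suffix_a):
        return None  # the analogy does not apply to c
    stem = c[: len(c) - len(suffix_a)] if suffix_a else c
    return stem + suffix_b

# Morphological relations generalize by analogy:
print(solve_analogy("walking", "walk", "singing"))  # sing
print(solve_analogy("reader", "read", "singer"))    # sing
```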
Understanding deep architectures and the effect of unsupervised pre-training
Erhan, Dumitru, 10 1900
This thesis studies a class of learning algorithms called deep architectures. Existing results indicate that shallow, local representations are not sufficient for modelling functions with many factors of variation. We are particularly interested in this kind of data because we hope that an intelligent agent will be able to learn to model it automatically; the hypothesis is that deep architectures are better suited to modelling it.
The work of Hinton (2006) was a true breakthrough: the idea of using an unsupervised learning algorithm, Restricted Boltzmann Machines, to initialize the weights of a supervised neural network proved crucial for training the most popular deep architecture, fully connected artificial neural networks. This idea has since been taken up and reproduced successfully in several contexts and with a variety of models.
In this thesis, we consider deep architectures as inductive biases, represented not only by the models themselves but also by the training methods often used in conjunction with them. We seek to identify the reasons why this class of functions generalizes well, the situations in which these functions can be applied, and qualitative descriptions of such functions.
The objective of this thesis is a better understanding of the success of deep architectures. In the first article, we test the agreement between our intuition, that deep networks are necessary to learn well from data with many factors of variation, and empirical results. The second article is an in-depth study of the question: why does unsupervised learning help a deep network generalize better? We explore and evaluate several hypotheses attempting to elucidate how these models work. Finally, the third article seeks to characterize qualitatively the functions modelled by a deep network; the resulting visualizations facilitate the interpretation of the representations and invariances modelled by a deep architecture.
hypothesized to be a step in the right direction, as they are compositions of nonlinearities and can learn compact
distributed representations of data with many factors of variation.
Training fully-connected artificial neural networks---the most common form of a
deep architecture---was not possible before Hinton (2006) showed that one can
use stacks of unsupervised Restricted Boltzmann Machines to initialize or
pre-train a supervised multi-layer network. This breakthrough has been
influential, as the basic idea of using unsupervised learning to improve
generalization in deep networks has been reproduced in a multitude of other
settings and models.
In this thesis, we cast the deep learning ideas and techniques as defining a
special kind of inductive bias. This bias is defined not only by the kind of
functions that are eventually represented by such deep models, but also by the
learning process that is commonly used for them. This work is a study of the
reasons for why this class of functions generalizes well, the situations where
they should work well, and the qualitative statements that one could make about
such functions.
This thesis is thus an attempt to understand why deep architectures work.
In the first of the articles presented we study the question of how well our
intuitions about the need for deep models correspond to functions that they can
actually model well. In the second article we perform an in-depth study of why
unsupervised pre-training helps deep learning and explore a variety of
hypotheses that give us an intuition for the dynamics of learning in such
architectures. Finally, in the third article, we want to better understand what
a deep architecture models, qualitatively speaking. Our visualization approach
enables us to understand the representations and invariances modelled and
learned by deeper layers.
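The greedy layer-wise pre-training idea described above can be sketched as follows. This is a hypothetical, minimal NumPy illustration of stacking Restricted Boltzmann Machines trained with one-step contrastive divergence (CD-1), not the experimental code of the thesis; all names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        # Positive phase: hidden activations given the data.
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one-step reconstruction of the visible units.
        v1 = sigmoid(h_sample @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        # CD-1 approximation to the log-likelihood gradient.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def pretrain_stack(data, layer_sizes, epochs=20):
    """Greedy layer-wise pre-training: each RBM is trained on the
    hidden representation produced by the previous one."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # input to the next layer
    return rbms
```

After pre-training, the learned weights would initialize the corresponding layers of a supervised multi-layer network, which is then fine-tuned with backpropagation, as in Hinton (2006).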
|
226 |
Using unsupervised machine learning for fault identification in virtual machinesSchneider, C. January 2015 (has links)
Self-healing systems promise operating cost reductions in large-scale computing environments through the automated detection of, and recovery from, faults. However, at present there appears to be little empirical evidence comparing the different approaches, or demonstrating that such implementations reduce costs. This thesis compares previous and current self-healing approaches before demonstrating a new, unsupervised approach that combines artificial neural networks with performance tests to perform fault identification in an automated fashion, i.e. the correct and accurate determination of which computer features are associated with a given performance test failure. Several key contributions are made in the course of this research, including an analysis of the different types of self-healing approaches based on their contextual use, a baseline for future comparisons between self-healing frameworks that use artificial neural networks, and a successful, automated fault identification in cloud infrastructure, more specifically in virtual machines. This approach uses three established machine learning techniques: Naïve Bayes, Baum-Welch, and Contrastive Divergence Learning. The latter minimises human interaction beyond previous implementations by producing a list of potential root causes (i.e. fault hypotheses) in decreasing order of likelihood, which brings the state of the art one step closer toward fully self-healing systems. This thesis also examines the impact that different types of faults have on their respective identification. This helps in understanding the validity of the data being presented and how the field is progressing, whilst examining the differences in identification between emulated thread crashes and errant user changes – a contribution believed to be unique to this research.
Lastly, future research avenues and conclusions in automated fault identification are described along with lessons learned throughout this endeavor. This includes the progression of artificial neural networks, how learning algorithms are being developed and understood, and possibilities for automatically generating feature locality data.
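The idea of emitting fault hypotheses in decreasing order of likelihood can be illustrated with a small, hypothetical sketch. It ranks candidate root causes with a Gaussian Naïve Bayes score over per-fault metric distributions; the fault labels, metrics, and function names are invented for illustration and do not reproduce the thesis's implementation (which also employs Baum-Welch and Contrastive Divergence Learning).

```python
import numpy as np

rng = np.random.default_rng(1)

def rank_fault_hypotheses(train_x, train_y, observation):
    """Gaussian Naive Bayes over per-fault metric distributions,
    returning fault labels sorted by decreasing log-likelihood
    of the observed VM metrics (uniform prior over faults, so
    the prior term cancels)."""
    train_y = np.array(train_y)
    scores = {}
    for fault in set(train_y):
        x = train_x[train_y == fault]
        mu, var = x.mean(axis=0), x.var(axis=0) + 1e-6
        # Log-density of the observation under this fault's model.
        ll = -0.5 * np.sum(np.log(2 * np.pi * var)
                           + (observation - mu) ** 2 / var)
        scores[fault] = ll
    return sorted(scores, key=scores.get, reverse=True)
```

An operator (or an automated recovery policy) would then try the hypotheses in ranked order, which is the sense in which such a list minimises human interaction.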
|
227 |
Unsupervised representation learning for anomaly detection on neuroimaging. Application to epilepsy lesion detection on brain MRI / Apprentissage de représentations non supervisé pour la détection d'anomalies en neuro-imagerie. Application à la détection de lésions d’épilepsie en IRMAlaverdyan, Zaruhi 18 January 2019 (has links)
Cette étude vise à développer un système d’aide au diagnostic (CAD) pour la détection de lésions épileptogènes, reposant sur l’analyse de données de neuroimagerie, notamment, l’IRM T1 et FLAIR. L’approche adoptée, introduite précédemment par Azami et al., 2016, consiste à placer la tâche de détection dans le cadre de la détection de changement à l'échelle du voxel, basée sur l’apprentissage d’un modèle one-class SVM pour chaque voxel dans le cerveau. L'objectif principal de ce travail est de développer des mécanismes d’apprentissage de représentations, qui capturent les informations les plus discriminantes à partir de l’imagerie multimodale. Les caractéristiques manuelles ne sont pas forcément les plus pertinentes pour la tâche visée. Notre première contribution porte sur l'intégration de différents réseaux profonds non-supervisés, pour extraire des caractéristiques dans le cadre du problème de détection de changement. Nous introduisons une nouvelle configuration des réseaux siamois, mieux adaptée à ce contexte. Le système CAD proposé a été évalué sur l’ensemble d’images IRM T1 des patients atteints d'épilepsie. Afin d'améliorer la performance obtenue, nous avons proposé d'étendre le système pour intégrer des données multimodales qui possèdent des informations complémentaires sur la pathologie. Notre deuxième contribution consiste donc à proposer des stratégies de combinaison des différentes modalités d’imagerie dans un système pour la détection de changement. Ce système multimodal a montré une amélioration importante sur la tâche de détection de lésions épileptogènes sur les IRM T1 et FLAIR. Notre dernière contribution se focalise sur l'intégration des données TEP dans le système proposé. Etant donné le nombre limité des images TEP, nous envisageons de synthétiser les données manquantes à partir des images IRM disponibles. 
Nous démontrons que le système entraîné sur les données réelles et synthétiques présente une amélioration importante par rapport au système entraîné sur les images réelles uniquement. / This work represents one attempt to develop a computer aided diagnosis system for epilepsy lesion detection based on neuroimaging data, in particular T1-weighted and FLAIR MR sequences. Given the complexity of the task and the lack of a representative voxel-level labeled data set, the adopted approach, first introduced in Azami et al., 2016, consists in casting the lesion detection task as a per-voxel outlier detection problem. The system is based on training a one-class SVM model for each voxel in the brain on a set of healthy controls, so as to model the normality of the voxel. The main focus of this work is to design representation learning mechanisms that capture the most discriminant information from multimodality imaging. Manual features, designed to mimic the characteristics of certain epilepsy lesions, such as focal cortical dysplasia (FCD), on neuroimaging data, are tailored to individual pathologies and cannot discriminate a large range of epilepsy lesions. Such features reflect the known characteristics of lesion appearance; however, they might not be optimal for the task at hand. Our first contribution consists in proposing various unsupervised neural architectures as potential feature extraction mechanisms and, ultimately, introducing a novel configuration of siamese networks, to be plugged into the outlier detection context. The proposed system, evaluated on a set of T1-weighted MRIs of epilepsy patients, showed a promising performance but also room for improvement. To this end, we considered extending the CAD system so as to accommodate multimodality data, which offers complementary information on the problem at hand.
Our second contribution, therefore, consists in proposing strategies to combine representations of different imaging modalities into a single framework for anomaly detection. The extended system showed a significant improvement on the task of epilepsy lesion detection on T1-weighted and FLAIR MR images. Our last contribution focuses on the integration of PET data into the system. Given the small number of available PET images, we attempt to synthesize PET data from the corresponding MRI acquisitions. We ultimately show an improved performance of the system when it is trained on a mixture of synthesized and real images.
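The per-voxel outlier detection scheme described above (one one-class SVM per voxel, trained on healthy controls) can be sketched as follows. This is a toy illustration using scikit-learn's `OneClassSVM` on synthetic features, not the actual CAD pipeline; the array shapes, parameter values, and function names are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

def fit_voxel_models(controls, nu=0.1):
    """Fit one one-class SVM per voxel on the healthy controls.
    `controls` has shape (n_subjects, n_voxels, n_features), where
    the features would come from a learned representation."""
    models = []
    for v in range(controls.shape[1]):
        m = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
        m.fit(controls[:, v, :])
        models.append(m)
    return models

def detect_outlier_voxels(models, patient):
    """Score each voxel of one patient; -1 marks a voxel whose
    features fall outside the learned normal region."""
    return np.array([m.predict(patient[v:v + 1])[0]
                     for v, m in enumerate(models)])
```

In the actual system the per-voxel features are produced by the unsupervised networks (e.g. the siamese configuration), and the per-voxel decision scores are aggregated into a cluster-level detection map rather than thresholded independently.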
|
228 |
Obstacle detection and emergency exit sign recognition for autonomous navigation using camera phoneMohammed, Abdulmalik January 2017 (has links)
In this research work, we develop an obstacle detection and emergency exit sign recognition system on a mobile phone by extending the Features from Accelerated Segment Test (FAST) detector with a Harris corner filter. The first step often required for many vision based applications is the detection of objects of interest in an image. Hence, in this research work, we introduce an emergency exit sign detection method using a colour histogram. The hue and saturation components of an HSV colour model are processed into features to build a 2D colour histogram. We backproject the 2D colour histogram to detect an emergency exit sign in a captured image as the first task required before performing emergency exit sign recognition. The classification results show that the 2D histogram is fast and can discriminate between objects and background accurately. One of the challenges confronting object recognition methods is the type of image feature to compute. In this work, therefore, we present two feature detector and descriptor methods based on the FAST detector with a Harris corner filter. The first method is called Upright FAST-Harris and Binary detector (U-FaHB), while the second is called Scale Interpolated FAST-Harris and Binary (SIFaHB). In both methods, feature points are extracted using the accelerated segment test detector and a Harris filter to return the strongest corner points as features. However, in the case of SIFaHB, the extraction of feature points is done across the image plane and along the scale-space. The modular design of these detectors allows for the integration of descriptors of any kind. Therefore, we combine these detectors with a binary test descriptor such as BRIEF to compute feature regions. These detectors and the combined descriptor are evaluated using different images observed under various geometric and photometric transformations, and their performance is compared with that of other detectors and descriptors.
The results obtained show that our proposed feature detector and descriptor method is fast and performs better than other methods such as SIFT, SURF, ORB, BRISK, and CenSurE. Based on the potential of the U-FaHB detector and descriptor, we extended it for use in optical flow computation, in a method we termed the Nearest-flow method. This method has the potential of computing flow vectors for use in obstacle detection. As with any new method, we evaluated the Nearest-flow method using real and synthetic image sequences. We compare the performance of the Nearest-flow with that of other methods such as Lucas-Kanade, Farnebäck, and SIFT-flow. The results obtained show that our Nearest-flow method is faster to compute and performs better on real scene images than the other methods. In the final part of this research, we demonstrate the application potential of our proposed methods by developing an obstacle detection and exit sign recognition system on a camera phone; the results obtained show that the methods have the potential to solve this vision-based object detection and recognition problem.
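The 2D hue-saturation histogram backprojection step can be sketched as follows. This is a minimal NumPy illustration assuming OpenCV-style value ranges (hue in [0, 180), saturation in [0, 256)); the bin counts and function names are assumptions, not the thesis's implementation.

```python
import numpy as np

def hs_histogram(hue, sat, bins=(30, 32)):
    """2D hue-saturation histogram of a model region (e.g. a patch of
    an exit sign), normalised so backprojected values lie in [0, 1]."""
    hist, _, _ = np.histogram2d(hue.ravel(), sat.ravel(),
                                bins=bins, range=[[0, 180], [0, 256]])
    return hist / hist.max()

def backproject(hist, hue, sat, bins=(30, 32)):
    """Replace every pixel by the histogram value of its (H, S) bin:
    high values mark pixels whose colour matches the model region."""
    h_idx = np.clip(hue * bins[0] // 180, 0, bins[0] - 1).astype(int)
    s_idx = np.clip(sat * bins[1] // 256, 0, bins[1] - 1).astype(int)
    return hist[h_idx, s_idx]
```

Thresholding the backprojection yields candidate sign regions, which are then passed to the feature-based recognition stage.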
|
229 |
Improved training of energy-based modelsKumar, Rithesh 06 1900 (has links)
No description available.
|
230 |
非監督式新細胞認知機神經網路之研究 / Studies on the Unsupervised Neocognitron陳彥勳, Chen, Yen-Shiun Unknown Date (has links)
本論文使用非監督式新細胞認知機(Unsupervised neocognitron)神經網路來辨識印刷體中文字。
關於非監督式新細胞認知機，本論文提出兩項修改。第一項，Us1子層的節點不進行學習，而是直接套用人為方式所指定的12個區域特徵，而Us1之後的S子層仍然使用非監督式學習的方式決定其所要偵測的區域特徵。第二項修改則是，在學習過程中設定一個上限值來限制代表節點(representative)產生的個數。如此設計的目的是為了避免模板(cell-planes)分配不均的問題。在本研究，採用這兩項修改的新細胞認知機稱為模式一，而僅使用第二項修改的新細胞認知機稱為模式二。
本論文裡的所有實驗分為兩部分。在第一部分有四個實驗，這些實驗都使用相同的訓練範例與測試範例。訓練範例有兩組，第一組包含“川”，“三”，“大”，“人”，“台”等五個中文字。而第二組包含“零”，“壹”，“貳”，“參”，“肆”等中文字。訓練範例都是採用細明體，而測試範例則是採用其他九種不同字體。第一個實驗的主要目的是測試模式一的績效。實驗結果顯示，模式一很容易學習成功而且辨識率可以接受。另外三個實驗的目的是想要了解某些參數值與系統績效的關係。這些參數包含S-欄的大小(the size of S-column)，模板數(the number of cell-planes)，以及節點的接收場大小(the size of cells’ receptive field)。這三個實驗所使用的網路系統是模式一。
第二部分有二個實驗,主要的目的是比較模式一與模式二的系統績效。在第一個實驗,所使用的訓練範例與第一部分實驗相同。實驗結果顯示模式一比較容易成功地學習,而且系統有不錯的表現。第二個實驗,使用17個中文字做為訓練範例。這17個字包括“零”,“壹”,“貳”,“參”,“肆”,“伍”,“陸”,“柒”,“捌”,“玖”,“拾”,“佰”,“仟”,“萬”,“億”,“圓”,“角”。實驗結果顯示,模式一仍然是一個不錯的系統。 / In this study, we are investigating the feasibility of applying the unsupervised neocognitron to the recognition of printed Chinese characters.
Two propositions for the unsupervised neocognitron are made. The first one proposes that the input connections of the first layer are manually given, while all subsequent layers are trained unsupervised. The second one concerns the selection of representatives: during the learning process, the number of cell-planes that send representatives for each training pattern has an upper bound. The unsupervised neocognitron implementing these two propositions is named Model 1, and the one implementing only the second proposition is named Model 2.
Experiments in this study are grouped into two parts, called Part I and Part II. In Part I, four experiments are conducted. For each experiment, two sets of training patterns are used. The first one, called the simple training set, consists of the five printed Chinese characters “川”, “三”, “大”, “人”, and “台” with a size of 25*25 in MingLight font. The second one, called the complex training set, contains another five printed Chinese characters, “零”, “壹”, “貳”, “參”, and “肆”, in the same font and size. After training, these characters in nine other fonts are presented to test the generalization of the network.
The objective of the first experiment of Part I is to investigate the performance of Model 1. Simulation results show that Model 1 demonstrates a good ability to achieve successful learning. In the other three experiments, the effect of choosing different values for some parameters is investigated. The parameters include the size of the S-column, the number of cell-planes, and the receptive field of the cells.
In Part II, a comparison of the performance of Model 1 and Model 2 is made. In the first experiment, Model 1 and Model 2 are trained to recognize the simple and complex training sets described above. Experimental results show that Model 1 demonstrates a higher ability to achieve successful learning, and its performance is acceptable. In the second experiment, 17 training patterns are presented during the learning process. These training patterns include “零”,“壹”,“貳”,“參”,“肆”,“伍”,“陸”,“柒”,“捌”,“玖”,“拾”,“佰”,“仟”,“萬”,“億”,“圓”, and “角”. From the simulation results, Model 1 is a promising approach for the recognition of printed Chinese characters.
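The second proposition (capping the number of cell-planes that may send a representative for each training pattern) can be illustrated with a small sketch. In the real neocognitron, representatives are seed cells chosen through winner-take-all competition within S-columns; the version below reduces this to ranking each cell-plane's peak response for one pattern, and is purely illustrative.

```python
import numpy as np

def select_representatives(responses, cap):
    """For one training pattern, pick at most `cap` cell-planes to send
    a representative (the seed cell whose input weights get reinforced).
    `responses` holds the peak S-cell response of each cell-plane; planes
    with zero response never send a representative."""
    order = np.argsort(responses)[::-1]  # strongest planes first
    return [int(p) for p in order if responses[p] > 0][:cap]
```

Without the cap, a few cell-planes can capture representatives for many training patterns while others learn nothing, which is the uneven cell-plane allocation problem the second proposition is designed to avoid.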
|