81

Segmentation in Tomography Data: Exploring Data Augmentation for Supervised and Unsupervised Voxel Classification with Neural Networks

Wagner, Franz 23 September 2024 (has links)
Computed Tomography (CT) imaging provides invaluable insight into the internal structures of objects and organisms, which is critical for applications ranging from materials science to medical diagnostics. In CT data, an object is represented by a 3D reconstruction generated by combining multiple 2D X-ray images taken from various angles around the object. Each voxel (a volumetric pixel) within the reconstructed volume represents a small cubic element, allowing for detailed spatial representation. To extract meaningful information from CT imaging data and facilitate analysis and interpretation, accurate segmentation of internal structures is essential. However, this can be challenging due to various artifacts introduced by the physics of a CT scan and the properties of the object being imaged. This dissertation addresses this challenge using deep learning techniques. Specifically, Convolutional Neural Networks (CNNs) are used for segmentation, but they face the problem of limited training data. Data scarcity is addressed through the unsupervised generation of synthetic training data and through 2D and 3D data augmentation methods. Combining these augmentation strategies streamlines segmentation in voxel data and effectively addresses data scarcity. Essentially, the work aims to simplify the training of CNNs using minimal or no labeled data. To make the results of this thesis more accessible to researchers, two user-friendly software solutions, unpAIred and AiSeg, have been developed. These platforms enable the generation of training data, data augmentation, and the training, analysis, and application of CNNs.

This cumulative work first examines simple but efficient conventional data augmentation methods, such as radiometric and geometric image manipulations, which are already widely used in the literature. However, these methods are usually applied in random order. The primary focus of the first paper is to investigate this approach and to develop both online and offline data augmentation pipelines that allow for systematic sequencing of these operations. Offline augmentation augments training data stored on a drive, while online augmentation is performed dynamically at runtime, just before images are fed to the CNN. It is shown that randomly applied data augmentation is inferior to the new pipelines. A careful comparison of 3D CNNs is then performed to identify optimal models for specific segmentation tasks, such as carbon and pore segmentation in CT scans of Carbon Reinforced Concrete (CRC). Through an evaluation of eight 3D CNN models on six datasets, tailored recommendations are provided for selecting the most effective model based on dataset characteristics. The analysis highlights the consistent performance of the 3D U-Net and its residual variant, which excel at roving (a bundle of carbon fibers) and pore segmentation tasks.

Based on the augmentation pipelines and the results of the 3D CNN comparison, the pipelines are extended to 3D, specifically targeting the segmentation of carbon in CT scans of CRC. A comparative analysis of different 3D augmentation strategies, including both offline and online variants, provides insight into their effectiveness. While offline augmentation results in fewer artifacts, it can only segment roving types already present in the training data; online augmentation proves essential for effectively segmenting roving types not contained in the training data. However, constraints such as the limited diversity of the dataset and overly aggressive augmentation, which caused segmentation artifacts, require further investigation to address data scarcity. Recognizing the need for a larger and more diverse dataset, this thesis extends the results of the three former papers by introducing a deep learning-based augmentation using a Generative Adversarial Network (GAN), called Contrastive Unpaired Translation (CUT), for synthetic training data generation. By combining the GAN with the augmentation pipelines, semi-supervised and unsupervised end-to-end training methods are introduced, and the successful generation of training data for 2D pore segmentation is demonstrated. However, challenges remain in achieving a stable 3D CUT implementation, which warrants further research and development.

In summary, the results of this dissertation address the challenges of accurate CT data segmentation in materials science through deep learning techniques and novel 2D and 3D online and offline augmentation pipelines. By evaluating different 3D CNN models, tailored recommendations for specific segmentation tasks are provided. Furthermore, the exploration of deep learning-based augmentation using CUT shows promising results in generating synthetic training data. Future work will include a stable implementation of a 3D CUT version, the exploration of new model architectures, and the development of sub-voxel accurate segmentation techniques. These have the potential to enable significant advances in the segmentation of tomography data.

Table of contents:
Abstract IV
Zusammenfassung VI
1 Introduction 1
1.1 Thesis Structure 2
1.2 Scientific Context 3
1.2.1 Developments in the Segmentation in Tomography Data 3
1.2.2 3D Semantic Segmentation using Machine Learning 5
1.2.3 Data Augmentation 6
2 Developed Software Solutions: AiSeg and unpAIred 9
2.1 Software Design 10
2.2 Installation 11
2.3 AiSeg 11
2.4 unpAIred 12
2.5 Limitations 12
3 Factors Affecting Image Quality in Computed Tomography 13
3.1 From CT Scan to Reconstruction 13
3.2 X-ray Tube and Focal Spot 14
3.3 Beam Hardening 14
3.4 Absorption, Scattering and Pairing 15
3.5 X-ray Detector 16
3.6 Geometric Calibration 17
3.7 Reconstruction Algorithm 17
3.8 Artifact Corrections 18
4 On the Development of Augmentation Pipelines for Image Segmentation 19
4.0 Abstract 20
4.1 Introduction 20
4.2 Methods 21
4.2.1 Data Preparation 21
4.2.2 Augmentation 21
4.2.3 Networks 24
4.2.4 Training and Metrics 25
4.3 Experimental Design 26
4.3.1 Hardware 26
4.3.2 Workflow 26
4.3.3 Test on Cityscapes 26
4.4 Results and Discussion 26
4.4.1 Stage 1: Creating a Baseline 27
4.4.2 Stage 2: Using Offline Augmentation 27
4.4.3 Stage 3: Using Online Augmentation 27
4.4.4 Test on Cityscapes 29
4.4.5 Future Work – A New Online Augmentation 30
4.5 Conclusion 31
4.6 Appendix 31
4.6.1 Appendix A. List of All Networks 31
4.6.2 Appendix B. Augmentation Methods 32
4.6.3 Appendix C. Used RIWA Online Augmentation Parameters 36
4.6.4 Appendix D. Used Cityscapes Online Augmentation Parameters 36
4.6.5 Appendix E. Comparison of CNNs with best Backbones on RIWA 37
4.6.6 Appendix F. Segmentation Results 38
4.7 References 39
5 Comparison of 3D CNNs for Volume Segmentation 43
5.0 Abstract 44
5.1 Introduction 44
5.2 Datasets 44
5.2.1 Carbon Rovings 45
5.2.2 Concrete Pores 45
5.2.3 Polyethylene Fibers 45
5.2.4 Brain Mitochondria 45
5.2.5 Brain Tumor Segmentation Challenge (BraTS) 46
5.2.6 Head and Neck Cancer 46
5.3 Methods 46
5.3.1 Data Preprocessing 46
5.3.2 Hyperparameters 46
5.3.3 Metrics 47
5.3.4 Experimental Design 48
5.4 Results and Discussion 48
5.4.1 Impact of Initial Random States (Head and Neck Cancer Dataset) 48
5.4.2 Carbon Rovings 48
5.4.3 Concrete Pores 49
5.4.4 Polyethylene Fibers 49
5.4.5 Brain Mitochondria 50
5.4.6 BraTS 51
5.5 Conclusion 51
5.6 References 52
6 Segmentation of Carbon in CRC Using 3D Augmentation 55
6.0 Abstract 56
6.1 Introduction 56
6.2 Materials and Methods 58
6.2.1 Specimens 58
6.2.2 Microtomography 59
6.2.3 AI-Based Segmentation 60
6.2.4 Roving Extraction 64
6.2.5 Multiscale Modeling 65
6.2.6 Scaled Boundary Isogeometric Analysis 66
6.2.7 Parameterized RVE and Definition of Characteristic Geometric Properties 67
6.3 Results and Discussion 70
6.3.1 Microtomography 70
6.3.2 Deep Learning 71
6.3.3 Roving Extraction 74
6.3.4 Parameterized RVE and Definition of Characteristic Geometric Properties 75
6.4 Conclusion 79
6.5 References 80
7 Image-to-Image Translation for Semi-Supervised Semantic Segmentation 85
7.1 Introduction 85
7.2 Methods 86
7.2.1 Generative Adversarial Networks 87
7.2.2 Contrastive Unpaired Translation 87
7.2.3 Fréchet Inception Distance 89
7.2.4 Datasets 89
7.3 Experimental Design 92
7.4 Results and Discussion 94
7.4.1 Training and Inference of CUT 94
7.4.2 End-to-End Training for Semantic Segmentation 99
7.5 Conclusion 104
7.5.1 Future Work 104
8 Synthesis 107
8.1 Research Summary 107
8.1.1 Augmentation Pipelines 107
8.1.2 3D CNN Comparison 108
8.1.3 3D Data Augmentation for the Segmentation of Carbon Rovings 108
8.1.4 Synthetic Training Data Generation 109
8.2 Future Developments 109
8.2.1 Augmentation 109
8.2.2 Pre-trained 3D Encoder 111
8.2.3 On the Quality Control of Carbon Reinforced Concrete 111
8.2.4 Subvoxel Accurate Segmentation 113
8.2.5 Towards Volume-to-Volume Translation 114
8.3 Conclusion 114
References 117
List of Tables 125
List of Figures 127
List of Abbreviations 131
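As a concrete illustration of the systematic sequencing idea from the first paper above, the following is a minimal sketch of an ordered online augmentation pipeline applied just before images are fed to the CNN. The operations and parameter ranges are illustrative assumptions, not the pipelines developed in the thesis.

```python
# Minimal sketch of an ordered online augmentation pipeline (illustrative;
# the thesis's actual operations and parameters differ).
import numpy as np

def flip_lr(img, rng):       # geometric: random horizontal flip
    return img[:, ::-1] if rng.random() < 0.5 else img

def rotate90(img, rng):      # geometric: random 90-degree rotation
    return np.rot90(img, k=rng.integers(0, 4))

def adjust_gamma(img, rng):  # radiometric: random gamma shift
    return np.clip(img, 0.0, 1.0) ** rng.uniform(0.8, 1.25)

def add_noise(img, rng):     # radiometric: additive Gaussian noise
    return img + rng.normal(0.0, 0.02, size=img.shape)

# Fixed order (geometric first, radiometric last) instead of a random
# sequence of operations -- the "systematic sequencing" described above.
ONLINE_PIPELINE = [flip_lr, rotate90, adjust_gamma, add_noise]

def augment(img, rng):
    for op in ONLINE_PIPELINE:
        img = op(img, rng)
    return img

rng = np.random.default_rng(0)
batch = [augment(np.random.rand(64, 64), rng) for _ in range(8)]
```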
82

Transformer-Based Networks for Fault Detection and Diagnostics of Rotating Machinery

Wong, Jonathan January 2024 (has links)
Machine health and condition monitoring are billion-dollar concerns for industry. Quality control and continuous improvement are among the most important factors manufacturers must consider in order to maintain a successful business. When work-floor interruptions occur, engineers frequently employ "Band-Aid" fixes due to resource, timing, or technical constraints, without solving for the root cause. Quick, reliable, and accurate fault detection and diagnosis methods are therefore required. Within complex rotating machinery, a basic yet crucial component accounts for a large share of downtime and failures: the rolling-element bearing. A worn-out bearing causes some of the most drastic failures in any mechanical system, second only to electrical failures associated with stator windings. The bearing's cyclical motion allows measurements to be taken via vibration sensors and analyzed through signal processing techniques. Methods are discussed for transforming these acquired signals into usable input data for neural network training, in order to classify the type of fault present within the system. With the widespread adoption of neural networks, we turn our attention to the growing field of sequence-to-sequence deep learning architectures. Language-based models have since been adapted to a multitude of tasks beyond text translation and word prediction, and powerful Transformers are now used for generative modeling, computer vision, and anomaly detection across all industries. This research aims to determine the efficacy of the Transformer neural network for the detection and classification of faults within 3-phase induction motors for the automotive industry. A quick turnaround is required, often leading to small datasets, so methods such as data augmentation are employed to improve the training process for our time-series signals. / Thesis / Master of Applied Science (MASc)
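As a rough illustration of how windowed vibration signals can be fed to a Transformer for fault classification, here is a generic sketch under assumed dimensions and class counts; it is not the architecture developed in the thesis.

```python
# Hedged sketch: a tiny Transformer encoder classifying fixed-length
# windows of a 1-D vibration signal into fault classes. All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class VibrationTransformer(nn.Module):
    def __init__(self, n_classes=4, d_model=64, patch=32, n_patches=32):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch, d_model)          # patchify raw signal
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                               # x: (B, n_patches*patch)
        x = x.view(x.shape[0], -1, self.patch)          # split into patches
        z = self.encoder(self.embed(x) + self.pos)      # (B, n_patches, d_model)
        return self.head(z.mean(dim=1))                 # mean-pool, then classify

model = VibrationTransformer()
logits = model(torch.randn(8, 32 * 32))                 # 8 windows of 1024 samples
```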
83

Mélanges bayésiens de modèles d'extrêmes multivariés : application à la prédétermination régionale des crues avec données incomplètes / Bayesian model mergings for multivariate extremes : application to regional predetermination of floods with incomplete data

Sabourin, Anne 24 September 2013 (has links)
Univariate extreme value theory extends to the multivariate case, but the absence of a natural parametric framework for the joint distribution of extremes complicates inference. Available non-parametric estimators of the dependence structure do not come with tractable uncertainty intervals for problems of dimension three or higher. However, quantifying uncertainty is all the more important for applications because the scarcity of extreme data is a recurrent issue, particularly in hydrology. The purpose of this thesis is to develop models for the dependence structure between extremes, in a Bayesian framework that allows uncertainty assessment. Chapter 2 explores the properties of models obtained by combining existing parametric models through Bayesian Model Averaging (BMA). A semi-parametric Dirichlet mixture model is studied next: a new parametrization is introduced in order to relax a moment constraint that characterizes the dependence structure and to facilitate sampling from the posterior. The re-parametrization significantly improves the convergence and mixing properties of the reversible-jump algorithm used to sample the posterior. Chapter 4 is motivated by a hydrological application: estimating the spatial dependence structure of extreme floods in the Cévennes region ('Gardons' catchments, southern France) using historical data recorded at four neighboring stations. The historical data increase the sample size, but most of them are censored. A data augmentation method is introduced within the Dirichlet mixture framework to handle the lack of an explicit expression for the censored likelihood. Conclusions and perspectives are discussed in Chapter 5.
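As context for the combination scheme studied in Chapter 2, the standard Bayesian Model Averaging identity combines candidate parametric dependence models as follows (textbook notation, not taken from the thesis):

```latex
% Posterior predictive density under Bayesian Model Averaging: each
% parametric model M_k contributes in proportion to its posterior
% probability given the observed extremes D.
p(y \mid D) = \sum_{k=1}^{K} p(y \mid M_k, D)\, p(M_k \mid D),
\qquad
p(M_k \mid D) = \frac{p(D \mid M_k)\, p(M_k)}
                     {\sum_{j=1}^{K} p(D \mid M_j)\, p(M_j)}
```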
84

Inferência em distribuições discretas bivariadas / Inference in bivariate discrete distributions

Chire, Verônica Amparo Quispe 26 November 2013 (has links)
The analysis of bivariate data arises in several areas of knowledge where the data of interest are obtained in pairs and the counts are correlated. In this work, the Holgate bivariate Poisson, bivariate generalized Poisson, and bivariate zero-inflated Poisson models are presented, which are useful for modeling correlated bivariate count data. Illustrative applications are presented for these models, and they are compared using the AIC and BIC model selection criteria as well as the asymptotic likelihood ratio test. In particular, we propose a Bayesian approach to the Holgate bivariate Poisson and bivariate zero-inflated Poisson models, based on the Gibbs sampling algorithm with data augmentation.
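For reference, the Holgate bivariate Poisson model mentioned above is the classical trivariate-reduction construction; the following summary is standard and not specific to this thesis:

```latex
% Trivariate reduction: X = X_1 + X_3, Y = X_2 + X_3 with independent
% X_i ~ Poisson(lambda_i), which induces Cov(X, Y) = lambda_3 >= 0.
P(X = x,\, Y = y) = e^{-(\lambda_1 + \lambda_2 + \lambda_3)}
\sum_{k=0}^{\min(x,\,y)}
\frac{\lambda_1^{\,x-k}}{(x-k)!}\,
\frac{\lambda_2^{\,y-k}}{(y-k)!}\,
\frac{\lambda_3^{\,k}}{k!}
```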
85

Odhad hloubky pomocí konvolučních neuronových sítí / Depth Estimation by Convolutional Neural Networks

Ivanecký, Ján January 2016 (has links)
This thesis deals with depth estimation using convolutional neural networks. I propose a three-part model as a solution to this problem. The model contains a global context network, which estimates the coarse depth structure of the scene; a gradient network, which estimates depth gradients; and a refining network, which combines the outputs of the previous two networks to produce the final depth map. Additionally, I present a normalized loss function for training neural networks. Applying the normalized loss function results in better estimates of the scene's relative depth structure, at the cost of losing information about the absolute scale of the scene.
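The abstract does not give the exact form of the normalized loss, but one plausible formulation consistent with its description (better relative structure, lost absolute scale) normalizes each depth map by its own mean before comparing. This is a hedged sketch, not necessarily the thesis's definition:

```latex
% Illustrative per-image normalization (an assumption, not the thesis's
% exact loss): dividing predicted and ground-truth depths by their means
% makes the loss invariant to the absolute scale of the scene.
L(\hat{d}, d) = \frac{1}{N} \sum_{i=1}^{N}
\left( \frac{\hat{d}_i}{\tfrac{1}{N}\sum_{j} \hat{d}_j}
     - \frac{d_i}{\tfrac{1}{N}\sum_{j} d_j} \right)^{2}
```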
86

Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation

Miguel Villarreal-Vasquez (9034049) 27 June 2020 (has links)
Advances in Artificial Intelligence (AI), or more precisely in Neural Networks (NNs), together with fast processing technologies (e.g., Graphics Processing Units, or GPUs), have in recent years positioned NNs as one of the main machine learning algorithms used to solve a diversity of problems in both academia and industry. While NNs have proved effective in solving many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and in cybersecurity-related applications in particular. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models to anomaly detection in enterprise networks.

This work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model-hardening method that removes any trojan (i.e., misbehavior) inserted into an NN model at training time. We carefully evaluate our method and establish the correct metrics for testing the effectiveness of defenses against these attacks: (1) accuracy on benign data, (2) attack success rate, and (3) accuracy on adversarial data. Prior work evaluates its solutions using the first two metrics only, which do not suffice to guarantee robustness against untargeted attacks. Our method is compared with the state of the art, and the results show that it outperforms prior methods. Second, we propose a novel approach to detecting anomalies using LSTM-based models. Our method analyzes, at runtime, the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. The new detection method is compared with the EDR system itself, and the results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that reacts upon the detection of anomalies in order to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, rendering ongoing attacks on the system ineffective.
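The LSTM-based detection described above can be pictured as next-event prediction: flag events the model finds improbable. The vocabulary size, dimensions, and threshold below are assumptions, and the model is illustrative rather than the dissertation's actual system.

```python
# Hedged sketch of LSTM-based anomaly detection over event sequences:
# train to predict the next event ID, then flag events whose predicted
# probability falls below a threshold. Sizes are illustrative.
import torch
import torch.nn as nn

class NextEventLSTM(nn.Module):
    def __init__(self, n_events=500, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_events, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_events)

    def forward(self, seq):                     # seq: (B, T) event IDs
        h, _ = self.lstm(self.emb(seq))
        return self.out(h)                      # logits for the next event

model = NextEventLSTM()
seq = torch.randint(0, 500, (1, 20))            # one sequence of 20 events
with torch.no_grad():
    probs = model(seq[:, :-1]).softmax(-1)      # predictions for positions 1..19
    p_next = probs.gather(-1, seq[:, 1:, None]).squeeze(-1)
anomalous = p_next < 1e-3                       # improbable events -> anomalies
```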
87

Comparative Study of Classification Methods for the Mitigation of Class Imbalance Issues in Medical Imaging Applications

Kueterman, Nathan 22 June 2020 (has links)
No description available.
88

Myaamia Translator: Using Neural Machine Translation With Attention to Translate a Low-resource Language

Baaniya, Bishal 06 April 2023 (has links)
No description available.
89

Exploring State-of-the-Art Machine Learning Methods for Quantifying Exercise-induced Muscle Fatigue

Afram, Abboud; Sarab Fard Sabet, Danial January 2023 (has links)
Muscle fatigue is a severe problem for elite athletes because of the long and variable rest times it demands. Various mechanisms can cause muscle fatigue, which signifies that the muscle has reached its maximum force and cannot continue the task. This thesis surveys state-of-the-art methods and systematically, theoretically, and practically tests the applicability and performance of recent machine learning methods on an existing EMG-to-muscle-fatigue pipeline. Several challenges exist within the EMG domain, such as inadequate data and finding the most suitable model, and they must be addressed to achieve reliable prediction. These problems are tackled by combining and comparing various state-of-the-art methodologies: data augmentation techniques for upsampling, spectrogram methods for signal processing, and transfer learning with various pre-trained CNN models. The study conducts seven experiments on a classification task that predicts muscle fatigue stages, divided into seven classes (0-6), where higher classes represent a more fatigued muscle. In the tabular experiments, Decision Tree, Random Forest, and Support Vector Machine (SVM) classifiers were trained and their accuracy determined. In the spectrogram experiments, the signals were converted to spectrogram images, and the limited dataset was enlarged with a combination of traditional and intelligent data augmentation techniques, such as added noise and a DCGAN. The performance of the pre-trained CNN models AlexNet, VGG16, DenseNet, and InceptionV3 was compared for predicting differences in jump heights. The results were evaluated by implementing baseline classifiers on tabular data and pre-trained CNN classifiers on CWT and STFT spectrograms, with and without data augmentation. The evaluation showed that DenseNet and VGG16 achieved a reliable accuracy of 89.8% on intelligently augmented CWT images. The intelligent data augmentation applied to CWT images allows the pre-trained CNN models to learn features that generalize to unseen data, showing that a combination of state-of-the-art methods can address the challenges within the EMG domain.
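As an illustration of the transfer-learning setup described above (a pre-trained CNN fine-tuned on spectrogram images for the seven fatigue classes), here is a minimal sketch; the layer freezing and hyperparameters are assumptions, not the thesis's settings.

```python
# Hedged sketch: DenseNet pre-trained on ImageNet, classifier replaced
# for 7 fatigue classes (0-6) and fine-tuned on CWT spectrogram images.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                   # freeze pre-trained features
model.classifier = nn.Linear(model.classifier.in_features, 7)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a batch of spectrogram "images".
x = torch.randn(8, 3, 224, 224)               # CWT spectrograms as RGB tensors
y = torch.randint(0, 7, (8,))                 # fatigue stage labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```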
90

Point Cloud Data Augmentation for 4D Panoptic Segmentation / Punktmolndataförstärkning för 4D-panoptisk Segmentering

Jin, Wangkang January 2022 (has links)
4D panoptic segmentation is an emerging topic in the field of autonomous driving that jointly tackles 3D semantic segmentation, 3D instance segmentation, and 3D multi-object tracking based on point cloud data. However, the difficulty of data collection limits the size of existing point cloud datasets. Therefore, data augmentation is employed to expand the amount of existing data for better generalization and prediction ability. In this thesis, we built a new point cloud dataset, the VCE dataset, from scratch. In addition, we adopted a neural network model for the 4D panoptic segmentation task and proposed a simple geometric augmentation method based on a translation operation. Compared to the baseline model, better results were obtained after augmentation, with an increase of 2.15% in LSTQ.
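The translation-based augmentation can be sketched in a few lines: shift an entire point cloud by a small random offset, leaving labels untouched. The offset range below is an assumption, not the value used in the thesis.

```python
# Hedged sketch of translation-based point cloud augmentation.
import numpy as np

def translate_cloud(points, rng, max_shift=2.0):
    """points: (N, 3) array of x, y, z coordinates."""
    offset = rng.uniform(-max_shift, max_shift, size=3)
    return points + offset                    # rigid shift of the whole scene

rng = np.random.default_rng(42)
cloud = rng.uniform(-50, 50, size=(100_000, 3))
augmented = translate_cloud(cloud, rng)       # labels are unchanged
```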
