1. Learning Hyperparameters for Inverse Problems by Deep Neural Networks
McDonald, Ashlyn Grace, 08 May 2023
Inverse problems arise in a wide variety of applications, including biomedicine, environmental sciences, and astronomy. Computing reliable solutions to these problems requires the inclusion of prior knowledge in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters. In this work, we describe new approaches that use deep neural networks (DNNs) to estimate these regularization parameters. We train multiple networks to approximate mappings from observation data to individual regularization parameters in a supervised learning approach. Once the networks are trained, we can efficiently compute regularization parameters for newly obtained data by forward propagation through the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. Numerical results for tomography demonstrate the potential benefits of using DNNs to learn regularization parameters.

Master of Science

Inverse problems arise in a wide variety of applications, including biomedicine, environmental sciences, and astronomy. With these types of problems, the goal is to reconstruct an approximation of the original input when we can only observe the output. However, the output often includes some sort of noise or error, which means that computing reliable solutions to these problems is difficult. To combat this problem, we can include prior knowledge about the solution in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters. In this work, we describe new approaches that use deep neural networks (DNNs) to obtain these parameters. We train multiple networks to approximate mappings from observation data to individual regularization parameters in a supervised learning approach. Once the networks are trained, we can efficiently compute regularization parameters for newly obtained data by forward propagation through the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. Numerical results for tomography demonstrate the potential of using DNNs to learn regularization parameters.
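A minimal sketch of the supervised approach the abstract describes, assuming a Tikhonov-regularized linear inverse problem: a small network regresses from the observation vector to a single regularization parameter. The forward operator, network architecture, noise level, and grid-search supervision targets are all illustrative assumptions, not the thesis code.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 64                      # length of the observation vector b
A = torch.randn(n, n) / n   # stand-in forward operator (e.g., a tomography matrix)

def tikhonov_solve(b, lam):
    """Solve min_x ||A x - b||^2 + lam ||x||^2 in closed form."""
    AtA = A.T @ A
    return torch.linalg.solve(AtA + lam * torch.eye(n), A.T @ b)

def make_pair():
    """Synthetic training pair: observation b and a 'good' lambda.
    In practice the target lambda would come from ground-truth solutions."""
    x_true = torch.randn(n)
    b = A @ x_true + 0.05 * torch.randn(n)   # noisy observation
    grid = torch.logspace(-4, 1, 30)         # brute-force a supervision target
    errs = torch.stack([torch.norm(tikhonov_solve(b, l.item()) - x_true)
                        for l in grid])
    return b, grid[torch.argmin(errs)]

# Small DNN mapping observations to a positive regularization parameter.
net = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    b, lam_star = make_pair()
    loss = (net(b).squeeze() - lam_star) ** 2   # regress toward the target lambda
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference time, a single forward pass yields lambda for new data.
b_new, _ = make_pair()
print("predicted lambda:", net(b_new).item())
```

The appeal of this design is that the expensive work (generating training pairs, fitting the network) happens offline; for each newly obtained observation, the parameter then costs only one forward pass rather than a fresh parameter search.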
2. Towards Scalable Machine Learning with Privacy Protection
Fay, Dominik, January 2023
The increasing size and complexity of datasets have accelerated the development of machine learning models and exposed the need for more scalable solutions. This thesis explores three challenges associated with large-scale machine learning under data privacy constraints. With the growth of machine learning models, traditional privacy methods such as data anonymization are becoming insufficient. Thus, we delve into alternative approaches, such as differential privacy. Our research addresses the following core areas in the context of scalable privacy-preserving machine learning: First, we examine the implications of data dimensionality on privacy for the application of medical image analysis. We extend the classification algorithm Private Aggregation of Teacher Ensembles (PATE) to deal with high-dimensional labels, and demonstrate that dimensionality reduction can be used to improve privacy. Second, we consider the impact of hyperparameter selection on privacy. Here, we propose a novel adaptive technique for hyperparameter selection in differentially private gradient-based optimization. Third, we investigate sampling-based solutions to scale differentially private machine learning to datasets with a large number of records. We study the privacy-enhancing properties of importance sampling, highlighting that it can outperform uniform sub-sampling not only in terms of sample efficiency but also in terms of privacy. The three techniques developed in this thesis improve the scalability of machine learning while ensuring robust privacy protection, and they aim to offer solutions for the effective and safe application of machine learning to large datasets.
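Purely for illustration, here is a sketch of the noisy-vote aggregation at the core of PATE, the algorithm the thesis extends to high-dimensional labels. The teacher count, random votes, and Laplace noise scale below are toy assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_teachers, num_classes = 250, 10

# Each teacher, trained on a disjoint partition of the private data,
# votes for a class on the query point (random votes stand in here).
teacher_votes = rng.integers(0, num_classes, size=num_teachers)
vote_counts = np.bincount(teacher_votes, minlength=num_classes)

# Laplace noise on the vote histogram yields a differentially private label;
# a smaller scale means less noise but a weaker privacy guarantee.
scale = 20.0  # assumed noise scale for illustration
noisy_counts = vote_counts + rng.laplace(0.0, scale, size=num_classes)
private_label = int(np.argmax(noisy_counts))
print("private label:", private_label)
```

With a standard one-of-K label this is a single noisy histogram; one way to read the thesis contribution is that high-dimensional label spaces make such direct aggregation costly in privacy terms, which is why dimensionality reduction before aggregation can help.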