11

Uncertainty Estimation in Radiation Dose Prediction U-Net / Osäkerhetsskattning för stråldospredicerande U-Nets

Skarf, Frida January 2023
The ability to quantify uncertainties associated with neural network predictions is crucial when they are relied upon in decision-making processes, especially in safety-critical applications like radiation therapy. In this paper, a single-model estimator of both epistemic and aleatoric uncertainties in a regression 3D U-net used for radiation dose prediction is presented. To capture epistemic uncertainty, Monte Carlo Dropout is employed, leveraging dropout during test-time inference to obtain a distribution of predictions. The variability among these predictions is used to estimate the model's epistemic uncertainty. For quantifying aleatoric uncertainty, quantile regression, which models conditional quantiles of the output distribution, is used. The method enables the estimation of prediction intervals at a user-specified significance level, where the difference between the upper and lower bound of the interval quantifies the aleatoric uncertainty. The proposed approach is evaluated on two datasets of prostate and breast cancer patient geometries and corresponding radiation doses. Results demonstrate that the quantile regression method provides well-calibrated prediction intervals, allowing for reliable aleatoric uncertainty estimation. Furthermore, the epistemic uncertainty obtained through Monte Carlo Dropout proves effective in identifying out-of-distribution examples, highlighting its usefulness for detecting anomalous cases where the model makes uncertain predictions.
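To make the two techniques in this abstract concrete, here is a minimal, self-contained sketch (an illustrative stand-in, not the thesis's 3D U-net, data, or quantile levels): pinball-loss quantile regression yields an aleatoric prediction interval, and Monte Carlo Dropout yields an epistemic spread across stochastic forward passes.

```python
import torch
import torch.nn as nn

def pinball_loss(pred, target, q):
    # Quantile (pinball) loss: minimizing it drives `pred` toward the
    # q-th conditional quantile of the target distribution.
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

class TinyQuantileRegressor(nn.Module):
    # Stand-in for the thesis's 3D U-net: a small MLP with dropout and two
    # quantile heads (e.g. 5th and 95th percentiles for a 90% interval).
    def __init__(self, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p_drop))
        self.head_lo = nn.Linear(64, 1)
        self.head_hi = nn.Linear(64, 1)

    def forward(self, x):
        h = self.body(x)
        return self.head_lo(h), self.head_hi(h)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    # Dropout stays active at test time: the spread across stochastic forward
    # passes estimates epistemic uncertainty, while the mean width of the
    # (lo, hi) interval quantifies aleatoric uncertainty.
    model.train()  # keeps dropout layers sampling
    lo, hi = zip(*(model(x) for _ in range(n_samples)))
    lo, hi = torch.stack(lo), torch.stack(hi)
    return lo.mean(0), lo.std(0), (hi - lo).mean(0)

model = TinyQuantileRegressor()
x, y = torch.randn(16, 8), torch.randn(16, 1)
lo, hi = model(x)
loss = pinball_loss(lo, y, q=0.05) + pinball_loss(hi, y, q=0.95)
mean, epistemic, aleatoric = mc_dropout_predict(model, x)
```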
12

Neural Network Approximations to Solution Operators for Partial Differential Equations

Nickolas D Winovich (11192079) 28 July 2021
In this work, we introduce a framework for constructing light-weight neural network approximations to the solution operators for partial differential equations (PDEs). Using a data-driven offline training procedure, the resulting operator network models are able to effectively reduce the computational demands of traditional numerical methods into a single forward-pass of a neural network. Importantly, the network models can be calibrated to specific distributions of input data in order to reflect properties of real-world data encountered in practice. The networks thus provide specialized solvers tailored to specific use-cases, and while being more restrictive in scope when compared to more generally-applicable numerical methods (e.g. procedures valid for entire function spaces), the operator networks are capable of producing approximations significantly faster as a result of their specialization.

In addition, the network architectures are designed to place pointwise posterior distributions over the observed solutions; this setup facilitates simultaneous training and uncertainty quantification for the network solutions, allowing the models to provide pointwise uncertainties along with their predictions. An analysis of the predictive uncertainties is presented with experimental evidence establishing the validity of the uncertainty quantification schema for a collection of linear and nonlinear PDE systems. The reliability of the uncertainty estimates is also validated in the context of both in-distribution and out-of-distribution test data.

The proposed neural network training procedure is assessed using a novel convolutional encoder-decoder model, ConvPDE-UQ, in addition to an existing fully-connected approach, DeepONet. The convolutional framework is shown to provide accurate approximations to PDE solutions on varying domains, but is restricted by assumptions of uniform observation data and homogeneous boundary conditions. The fully-connected DeepONet framework provides a method for handling unstructured observation data and is also shown to provide accurate approximations for PDE systems with inhomogeneous boundary conditions; however, the resulting networks are constrained to a fixed domain due to the unstructured nature of the observation data which they accommodate. These two approaches thus provide complementary frameworks for constructing PDE-based operator networks which facilitate the real-time approximation of solutions to PDE systems for a broad range of target applications.
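The "pointwise posterior" setup described above is commonly realized with a heteroscedastic Gaussian negative log-likelihood; the sketch below illustrates only that general idea and is not the ConvPDE-UQ or DeepONet implementation.

```python
import torch

def pointwise_gaussian_nll(mean, log_var, target):
    # Heteroscedastic Gaussian NLL (constant terms dropped): the network
    # predicts a mean and a log-variance at every solution point, so a single
    # training run fits the solution and its pointwise uncertainty together.
    return torch.mean(0.5 * (log_var + (target - mean) ** 2 / torch.exp(log_var)))

# Hypothetical usage: in an operator network, `mean` and `log_var` would be
# two output channels evaluated on the solution grid.
mean = torch.zeros(4, 32, 32, requires_grad=True)
log_var = torch.zeros(4, 32, 32, requires_grad=True)
target = torch.randn(4, 32, 32)
loss = pointwise_gaussian_nll(mean, log_var, target)
loss.backward()
```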
13

Evaluation of Uncertainty in Hydrodynamic Modeling

Camacho Rincon, Rene Alexander 17 August 2013
Uncertainty analysis in hydrodynamic modeling is useful to identify and report the limitations of a model caused by different sources of error. In practice, the main sources of error are divided into model structure errors, errors in the input data due to measurement imprecision among others, and parametric errors resulting from the difficulty of identifying physically representative parameter values valid at the temporal and spatial scale of the models. This investigation identifies, implements, evaluates, and recommends a set of methods for the evaluation of model structure uncertainty, parametric uncertainty, and input data uncertainty in hydrodynamic modeling studies. A comprehensive review of uncertainty analysis methods is provided, and a set of widely applied methods is selected and implemented in real case studies, identifying the main limitations and benefits of their use in hydrodynamic studies. In particular, the following methods are investigated: the First Order Variance Analysis (FOVA) method, the Monte Carlo Uncertainty Analysis (MCUA) method, the Bayesian Monte Carlo (BMC) method, the Markov Chain Monte Carlo (MCMC) method and the Generalized Likelihood Uncertainty Estimation (GLUE) method. The results of this investigation indicate that the uncertainty estimates computed with FOVA are consistent with the results obtained by MCUA. In addition, the comparison of BMC, MCMC and GLUE indicates that BMC and MCMC provide similar estimations of the posterior parameter probability distributions, single-point parameter values, and uncertainty bounds, mainly due to the use of the same likelihood function and the low number of parameters involved in the inference process. However, the implementation of MCMC is substantially more complex than the implementation of BMC, given that its sampling algorithm requires a careful definition of auxiliary proposal probability distributions along with their variances to obtain parameter samples that effectively belong to the posterior parameter distribution. The analysis also suggests that the results of GLUE are inconsistent with the results of BMC and MCMC. It is concluded that BMC is a powerful and parsimonious strategy for evaluating all the sources of uncertainty in hydrodynamic modeling. Despite the computational requirements of BMC, the method can be easily implemented in most practical applications.
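As a worked illustration of how FOVA and MCUA relate, the sketch below propagates parameter uncertainty through Manning's equation as a toy hydrodynamic model; the parameter values and standard deviations are invented for illustration, not taken from the case studies.

```python
import numpy as np

# Toy model: Manning's equation for mean flow velocity, v = R^(2/3) * S^(1/2) / n
def manning_velocity(n, R, S):
    return (R ** (2.0 / 3.0)) * np.sqrt(S) / n

# Nominal parameter values and standard deviations (illustrative only)
n0, R0, S0 = 0.03, 2.0, 0.001
sn, sR, sS = 0.005, 0.2, 0.0002
v0 = manning_velocity(n0, R0, S0)

# FOVA: first-order variance propagation through the partial derivatives
var_fova = ((-v0 / n0) * sn) ** 2 + (((2 / 3) * v0 / R0) * sR) ** 2 \
         + ((0.5 * v0 / S0) * sS) ** 2

# MCUA: sample the parameters and re-evaluate the model many times
rng = np.random.default_rng(0)
v_mc = manning_velocity(rng.normal(n0, sn, 100_000),
                        rng.normal(R0, sR, 100_000),
                        rng.normal(S0, sS, 100_000))
print(f"FOVA std: {var_fova ** 0.5:.4f}  MCUA std: {v_mc.std():.4f}")
```

The two estimates agree closely here because the model is smooth and the parameter uncertainties are small, consistent with the FOVA/MCUA consistency reported above.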
14

An investigation into enabling industrial machine tools as traceable measurement systems

Verma, Mayank January 2016
On-machine inspection (OMI) via on-machine probing (OMP) is a technology that has the potential to provide a step change in the manufacturing of high precision products. Bringing product inspection closer to the machining process is a very attractive proposition for many manufacturers who demand ever better quality, process control and efficiency from their manufacturing systems. However, there is a shortage of understanding, experience, and knowledge with regard to efficiently implementing OMI on industrially-based multi-axis machine tools. Coupled with the risks associated with this disruptive technology, these are major obstacles preventing OMI from being confidently adopted in many high precision manufacturing environments. The research pursued in this thesis investigates the concept of enabling high precision machine tools as measurement devices and focuses upon the question: "How can traceable on-machine inspection be enabled and sustained in an industrial environment?" As highlighted by the literature and state-of-the-art review, much research and development focuses on the technology surrounding particular aspects of machine tool metrology and measurement, whether this is theory, hardware, software, or simulation. Little research has been performed in terms of confirming the viability of industrial OMI and the systematic and holistic application of existing and new technology to enable optimal intervention. This EngD research has contributed towards the use of industrial machine tools as traceable measurement systems. Through the test cases performed, the novel concepts proposed, and the solutions tested, a series of fundamental questions have been addressed, providing new knowledge of use to future researchers, engineers, consultants and manufacturing professionals.
15

Bayesian networks for uncertainty estimation in the response of dynamic structures

Calanni Fraccone, Giorgio M. 07 July 2008
The dissertation focuses on estimating the uncertainty associated with stress/strain prediction procedures from dynamic test data used in turbine blade analysis. An accurate prediction of the maximum response levels for physical components during in-field operating conditions is essential for evaluating their performance and life characteristics, as well as for investigating how their behavior critically impacts system design and reliability assessment. Currently, stress/strain inference for a dynamic system is based on the combination of experimental data and results from the analytical/numerical model of the component under consideration. Both modeling challenges and testing limitations, however, contribute to the introduction of various sources of uncertainty within the given estimation procedure, and lead ultimately to diminished accuracy and reduced confidence in the predicted response. The objective of this work is to characterize the uncertainties present in the current response estimation process and provide a means to assess them quantitatively. More specifically, proposed in this research is a statistical methodology based on a Bayesian-network representation of the modeling process which allows for a statistically rigorous synthesis of modeling assumptions and information from experimental data. Such a framework addresses the problem of multi-directional uncertainty propagation, for which standard techniques for unidirectional propagation from inputs' uncertainty to outputs' variability are not suited. Furthermore, it allows for the inclusion within the analysis of newly available test data that can provide indirect evidence on the parameters of the structure's analytical model, as well as lead to a reduction of the residual uncertainty in the estimated quantities. As part of this work, key uncertainty sources (i.e., material and geometric properties, sensor measurement and placement, as well as noise due to data processing limitations) are investigated, and their impact upon the system response estimates is assessed through sensitivity studies. The results are utilized for the identification of the most significant contributors to uncertainty to be modeled within the developed Bayesian inference scheme. Simulated experimentation, statistically equivalent to specified real tests, is also constructed to generate the data necessary to build the appropriate Bayesian network, which is then infused with actual experimental information for the purpose of explaining the uncertainty embedded in the response predictions and quantifying their inherent accuracy.
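A one-parameter toy version of the Bayesian synthesis described above, in which a prior from modeling assumptions is combined with noisy test data to reduce residual uncertainty; a full Bayesian network couples many such variables, and every number here is hypothetical.

```python
import numpy as np

# Grid-based Bayesian update for a single model parameter theta
# (e.g. a stiffness scaling factor in the blade's analytical model).
theta = np.linspace(0.5, 1.5, 501)
prior = np.exp(-0.5 * ((theta - 1.0) / 0.15) ** 2)    # belief before testing
prior /= prior.sum()

def model_response(t):
    return 10.0 * t                                    # simplistic forward model

measurements = np.array([10.4, 10.1, 10.6])            # hypothetical test data
sigma = 0.3                                            # measurement noise std
likelihood = np.ones_like(theta)
for y in measurements:
    likelihood *= np.exp(-0.5 * ((y - model_response(theta)) / sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()
mean = (theta * posterior).sum()
std = np.sqrt(((theta - mean) ** 2 * posterior).sum())
print(f"posterior mean {mean:.3f}, std {std:.3f}")     # tighter than the prior
```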
16

Predictive Techniques and Methods for Decision Support in Situations with Poor Data Quality

König, Rikard January 2009
Today, decision support systems based on predictive modeling are becoming more common, since organizations often collect more data than decision makers can handle manually. Predictive models are used to find potentially valuable patterns in the data, or to predict the outcome of some event. There are numerous predictive techniques, ranging from simple techniques such as linear regression to complex powerful ones like artificial neural networks. Complex models usually obtain better predictive performance, but are opaque and thus cannot be used to explain predictions or discovered patterns. The design choice of which predictive technique to use becomes even harder since no technique outperforms all others over a large set of problems. It is even difficult to find the best parameter values for a specific technique, since these settings are also problem dependent. One way to simplify this vital decision is to combine several models, possibly created with different settings and techniques, into an ensemble. Ensembles are known to be more robust and powerful than individual models, and ensemble diversity can be used to estimate the uncertainty associated with each prediction. In real-world data mining projects, data is often imprecise, contains uncertainties or is missing important values, making it impossible to create models with sufficient performance for fully automated systems. In these cases, predictions need to be manually analyzed and adjusted. Here, opaque models like ensembles have a disadvantage, since the analysis requires understandable models. To overcome this deficiency of opaque models, researchers have developed rule extraction techniques that try to extract comprehensible rules from opaque models, while retaining sufficient accuracy. This thesis suggests a straightforward but comprehensive method for predictive modeling in situations with poor data quality. First, ensembles are used for the actual modeling, since they are powerful, robust and require few design choices. Next, ensemble uncertainty estimations pinpoint predictions that need special attention from a decision maker. Finally, rule extraction is performed to support the analysis of uncertain predictions. Using this method, ensembles can be used for predictive modeling, in spite of their opacity and sometimes insufficient global performance, while the involvement of a decision maker is minimized. The main contributions of this thesis are three novel techniques that enhance the performance of the proposed method. The first technique deals with ensemble uncertainty estimation and is based on a successful approach often used in weather forecasting. The other two are improvements of a rule extraction technique, resulting in increased comprehensibility and more accurate uncertainty estimations. / Sponsorship: This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.
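A minimal sketch of the uncertainty-pinpointing step, assuming predictions are already available from a trained ensemble; member disagreement flags the cases that need a decision maker's attention. The arrays below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
member_preds = rng.normal(0.7, 0.08, size=(10, 5))  # 10 members x 5 cases

mean_pred = member_preds.mean(axis=0)       # the ensemble's prediction
uncertainty = member_preds.std(axis=0)      # spread = per-case uncertainty

threshold = np.percentile(uncertainty, 80)  # e.g. review the 20% most uncertain
needs_review = uncertainty > threshold      # route these to manual analysis
print(mean_pred.round(3), needs_review)
```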
17

Using Deep Learning to Segment Cardiovascular 4D Flow MRI : 3D U-Net for cardiovascular 4D flow MRI segmentation and Bayesian 3D U-Net for uncertainty estimation

Bhutra, Omkar January 2021
Deep convolutional neural networks (CNNs) have achieved state-of-the-art accuracies for multi-class segmentation in biomedical image science. In this thesis, a 3D U-Net is used to segment 4D flow Magnetic Resonance Images that include the heart and its large vessels. The 4-dimensional flow MRI dataset has been segmented and validated using a multi-atlas based registration technique. This multi-atlas based technique resulted in high quality segmentations, with the disadvantage of the long computation times typically required by three-dimensional registration techniques. The 3D U-Net framework learns to classify voxels by transforming the information about the segmentation into a latent feature space in a contracting path and upsampling it to a semantic segmentation in an expanding path. A CNN trained using a sufficiently diverse set of volumes at different time intervals of the diastole and systole should be able to handle more extreme morphological differences between subjects. Evaluation of the results is based on metrics for segmentation evaluation such as the Dice coefficient. Uncertainty is estimated using a Bayesian implementation of the 3D U-Net of similar architecture. / The presentation was held online over Zoom due to COVID-19 restrictions.
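For reference, the Dice coefficient used in the evaluation can be computed on binary masks as below; this is the generic definition, not the thesis's code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks.
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D masks standing in for a predicted and a reference segmentation
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True
print(dice_coefficient(a, b))  # 0.5 for this half-overlapping pair
```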
18

Dataset Drift in Radar Warning Receivers : Out-of-Distribution Detection for Radar Emitter Classification using an RNN-based Deep Ensemble

Coleman, Kevin January 2023
Changes to the signal environment of a radar warning receiver (RWR) over time through dataset drift can negatively affect a machine learning (ML) model deployed for radar emitter classification (REC). The training data comes from a simulator at Saab AB, in the form of pulsed radar signals in a time series. In order to investigate this phenomenon on a neural network (NN), this study first implements an underlying classifier (UC) in the form of a deep ensemble (DE), where each ensemble member consists of an NN with two independently trained bidirectional LSTM channels for each of the signal features pulse repetition interval (PRI), pulse width (PW) and carrier frequency (CF). From tests, the UC performs best for REC when using all three features. Because dataset drift can be treated as the appearance of out-of-distribution (OOD) samples over time, the aim is to reduce NN overconfidence on data from unseen radar emitters in order to enable OOD detection. The method estimates uncertainty with predictive entropy and classifies samples whose entropy exceeds a threshold as OOD. In the first set of tests, OOD is defined by holding out one feature modulation from the training dataset and using it as the only modulation in the OOD dataset during testing. With this definition, Stagger and Jitter are the most difficult to detect as OOD. Moreover, using DEs with 6 ensemble members and adding LogitNorm to the architecture improves the OOD detection performance. Furthermore, the OOD detection method performs well for up to 300 emitter classes, and predictive entropy outperforms the baseline for almost all tests. Finally, the model performs worse when OOD is simply defined as signals from unseen emitters, because of a precision decrease. In conclusion, the implemented changes managed to reduce the overconfidence for this particular NN and improve OOD detection for REC.
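A small sketch of the entropy-thresholding idea with made-up softmax outputs and a hypothetical threshold; the thesis tunes its threshold and uses bidirectional-LSTM ensemble members rather than raw arrays.

```python
import numpy as np

def predictive_entropy(member_probs):
    # member_probs: (n_members, n_classes) softmax outputs from the ensemble.
    # Average the members first, then take the entropy of the mean distribution.
    p = member_probs.mean(axis=0)
    return -np.sum(p * np.log(p + 1e-12))

threshold = 1.0  # hypothetical, tuned on held-out in-distribution data

in_dist = np.array([[0.90, 0.05, 0.05], [0.85, 0.10, 0.05]])  # members agree
ood     = np.array([[0.60, 0.30, 0.10], [0.10, 0.20, 0.70]])  # members disagree
for name, probs in (("in-dist", in_dist), ("unseen emitter", ood)):
    h = predictive_entropy(probs)
    print(name, round(h, 3), "-> OOD" if h > threshold else "-> in-distribution")
```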
19

Maskininlärning med konform förutsägelse för prediktiva underhållsuppgifter i industri 4.0 / Machine Learning with Conformal Prediction for Predictive Maintenance tasks in Industry 4.0 : Data-driven Approach

Liu, Shuzhou, Mulahuko, Mpova January 2023
This thesis is a cooperation with Knowit, Östrand & Hansen, and Orkla. It aimed to explore the application of Machine Learning and Deep Learning models with Conformal Prediction for a predictive maintenance situation at Orkla. Predictive maintenance is essential in numerous industrial manufacturing scenarios. It can help to reduce machine downtime, improve equipment reliability, and save unnecessary costs.  In this thesis, various Machine Learning and Deep Learning models, including Decision Tree, Random Forest, Support Vector Regression, Gradient Boosting, and Long short-term memory (LSTM), are applied to a real-world predictive maintenance dataset. The Orkla dataset was originally planned to be used in this thesis project. However, due to challenges encountered and time limitations, a NASA C-MAPSS dataset with a similar data structure was chosen to study how Machine Learning models could be applied to predict the remaining useful lifetime (RUL) in manufacturing. Besides, conformal prediction, a recently developed framework for measuring the prediction uncertainty of Machine Learning models, is also integrated into the models for more reliable RUL prediction.  The thesis project results show that both the Machine Learning and Deep Learning models with conformal prediction could predict RUL close to the true RUL, with LSTM outperforming the Machine Learning models. Also, the conformal prediction intervals provide informative and reliable information about the uncertainty of the predictions, which can help inform personnel at factories in advance to take necessary maintenance actions.  Overall, this thesis demonstrates the effectiveness of utilizing Machine Learning and Deep Learning models with Conformal Prediction for predictive maintenance situations. Moreover, based on the modeling results on the NASA dataset, some insights are discussed on how to transfer these experiences to Orkla data for RUL prediction in the future.
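The conformal intervals described above follow the standard split-conformal recipe, sketched here on synthetic stand-in data rather than the C-MAPSS or Orkla datasets.

```python
import numpy as np

rng = np.random.default_rng(42)
y_cal = rng.uniform(0, 300, 500)            # true RUL on a calibration split
pred_cal = y_cal + rng.normal(0, 15, 500)   # any point model's predictions

alpha = 0.1                                 # target 90% coverage
scores = np.abs(y_cal - pred_cal)           # nonconformity scores
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]                  # conformal quantile of the residuals

pred_test = np.array([120.0, 45.0])         # new point predictions
intervals = np.stack([pred_test - q, pred_test + q], axis=1)
print(intervals)  # each interval covers the true RUL with probability >= 0.9
```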
20

Calibrated uncertainty estimation for SLAM

Bansal, Dishank 04 1900
The focus of this Master's thesis is the analysis of uncertainty calibration for Simultaneous Localization and Mapping (SLAM) using neural network-based measurement models. SLAM is a fundamental problem in robotics and computer vision, with numerous applications ranging from self-driving cars to augmented reality. At its core, SLAM involves estimating the pose (i.e., position and orientation) of a robot or camera as it moves through an unknown environment and constructing a map of the surrounding environment simultaneously. Visual SLAM, which uses images as input, is a commonly used SLAM framework. However, traditional Visual SLAM methods rely on handcrafted features and can be vulnerable to challenges such as poor lighting and occlusion. Deep learning has emerged as a more scalable and robust approach, with Convolutional Neural Networks (CNNs) becoming the de facto perception system in robotics. To integrate CNN-based methods with SLAM systems, it is necessary to estimate the uncertainty or noise in the perception measurements. Bayesian deep learning has provided various methods for estimating uncertainty in neural networks, including ensembles, distributions over network parameters, and adding variance heads for direct uncertainty prediction. However, it is also essential to ensure that these uncertainty estimates are well-calibrated, i.e., that they accurately reflect the error in the prediction. In this Master's thesis, we address this challenge by developing a system for SLAM that incorporates a neural network as the measurement model and calibrated uncertainty estimates. We show that this system performs better than approaches that use the traditional uncertainty estimation method, where uncertainty estimates are just considered hyperparameters that are tuned manually. Our results demonstrate the importance of accurately accounting for uncertainty in the SLAM problem, particularly when using a neural network as the measurement model, in order to achieve reliable and robust localization and mapping.
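A quick way to see what "well-calibrated" means in practice is to check the empirical coverage of the predicted uncertainty against its nominal level; the Gaussian errors below are synthetic, whereas a SLAM system would use its measurement residuals.

```python
import numpy as np

rng = np.random.default_rng(7)
errors = rng.normal(0.0, 0.5, 10_000)   # actual prediction errors (true std 0.5)

# If predicted sigmas are calibrated, |error| <= 1.96*sigma should hold ~95%
# of the time; under- and over-estimated sigmas miss that target.
for predicted_sigma in (0.25, 0.5, 1.0):
    coverage = np.mean(np.abs(errors) <= 1.96 * predicted_sigma)
    print(f"sigma={predicted_sigma:.2f}: empirical 95% coverage = {coverage:.3f}")
```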
