
Advancing Deep Learning-based Driver Intention Recognition : Towards a safe integration framework of high-risk AI systems

Vellenga, Koen January 2024
Progress in artificial intelligence (AI), onboard computation capabilities, and the integration of advanced sensors in cars have facilitated the development of Advanced Driver Assistance Systems (ADAS). These systems aim to continuously minimize human driving errors. An example application of an ADAS is to support a human driver by indicating whether an intended driving maneuver is safe to pursue given the current state of the driving environment. One of the components enabling such an ADAS is recognition of the driver's intentions. Driver intention recognition (DIR) concerns identifying which driving maneuver a driver aspires to perform in the near future, commonly spanning a few seconds. A challenging aspect of integrating such a system into a car is the ability of the ADAS to handle unseen scenarios. Any AI-based system deployed in an environment where mistakes can cause harm to human beings is considered a high-risk AI system. Upcoming AI regulations require a car manufacturer to motivate the design, the performance-complexity trade-off, and the understanding of potential blind spots of a high-risk AI system. Therefore, this licentiate thesis focuses on AI-based DIR systems and presents an overview of the current state of the DIR research field. Additionally, experimental results are included that demonstrate the process of empirically motivating and evaluating the design of deep neural networks for DIR. To avoid reliance on sequential Monte Carlo sampling techniques to produce an uncertainty estimate, we evaluated a surrogate model that reproduces uncertainty estimations learned from probabilistic deep-learning models. Lastly, to contextualize the results within the broader scope of safely integrating future high-risk AI-based systems into a car, we propose a foundational conceptual framework. / One of three constituent papers (for the others, see the heading Delarbeten/List of papers): Vellenga, Koen, H. Joe Steinhauer et al. (2024). "Designing deep neural networks for driver intention recognition". Under submission.
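The surrogate idea described above can be illustrated with a minimal sketch (not code from the thesis; the names and architecture choices here are illustrative assumptions): a deterministic student network is trained to reproduce the predictive mean and variance that a Monte Carlo dropout teacher produces through repeated sampling, so that at deployment a single forward pass yields both a prediction and an uncertainty estimate.

```python
import torch
import torch.nn as nn

def mc_dropout_stats(teacher, x, n_samples=50):
    """Monte Carlo dropout: keep dropout active at inference time and
    summarize the resulting distribution of predictions."""
    teacher.train()  # keeps nn.Dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([teacher(x) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)

class Surrogate(nn.Module):
    """Deterministic stand-in that regresses the teacher's predictive
    mean and (log-)variance in one forward pass."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, out_dim)
        self.logvar_head = nn.Linear(hidden, out_dim)  # log-variance for numerical stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def surrogate_loss(surrogate, teacher, x):
    """Match the teacher's sampled statistics without sampling at test time."""
    mu_t, var_t = mc_dropout_stats(teacher, x)
    mu_s, logvar_s = surrogate(x)
    return nn.functional.mse_loss(mu_s, mu_t) + \
           nn.functional.mse_loss(logvar_s, var_t.clamp_min(1e-8).log())
```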

Uncertainty Estimation in Radiation Dose Prediction U-Net

Skarf, Frida January 2023
The ability to quantify uncertainties associated with neural network predictions is crucial when they are relied upon in decision-making processes, especially in safety-critical applications like radiation therapy. In this paper, a single-model estimator of both epistemic and aleatoric uncertainties in a regression 3D U-net used for radiation dose prediction is presented. To capture epistemic uncertainty, Monte Carlo Dropout is employed, leveraging dropout during test-time inference to obtain a distribution of predictions. The variability among these predictions is used to estimate the model's epistemic uncertainty. Aleatoric uncertainty is quantified using quantile regression, which models conditional quantiles of the output distribution. The method enables the estimation of prediction intervals at a user-specified significance level, where the difference between the upper and lower bounds of the interval quantifies the aleatoric uncertainty. The proposed approach is evaluated on two datasets of prostate and breast cancer patient geometries and corresponding radiation doses. Results demonstrate that the quantile regression method provides well-calibrated prediction intervals, allowing for reliable aleatoric uncertainty estimation. Furthermore, the epistemic uncertainty obtained through Monte Carlo Dropout proves effective in identifying out-of-distribution examples, highlighting its usefulness for detecting anomalous cases where the model makes uncertain predictions.
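For readers unfamiliar with the two mechanisms combined above, a minimal sketch follows (illustrative only, not the thesis code): quantile regression is trained with the pinball loss, and the width of the resulting prediction interval captures aleatoric uncertainty, while the spread across stochastic MC-dropout passes captures the epistemic part.

```python
import torch

def pinball_loss(pred, target, q):
    """Quantile (pinball) loss: minimized when `pred` equals the
    q-quantile of the conditional target distribution."""
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

def interval_loss(lower, upper, target, alpha=0.10):
    """Train two output channels as the alpha/2 and 1-alpha/2 quantiles;
    (upper - lower) is then a (1 - alpha) prediction interval whose
    width quantifies aleatoric uncertainty."""
    return pinball_loss(lower, target, alpha / 2) + \
           pinball_loss(upper, target, 1 - alpha / 2)
```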

Neural Network Approximations to Solution Operators for Partial Differential Equations

Nickolas D Winovich 28 July 2021
In this work, we introduce a framework for constructing light-weight neural network approximations to the solution operators for partial differential equations (PDEs). Using a data-driven offline training procedure, the resulting operator network models are able to effectively reduce the computational demands of traditional numerical methods into a single forward-pass of a neural network. Importantly, the network models can be calibrated to specific distributions of input data in order to reflect properties of real-world data encountered in practice. The networks thus provide specialized solvers tailored to specific use-cases, and while being more restrictive in scope when compared to more generally-applicable numerical methods (e.g. procedures valid for entire function spaces), the operator networks are capable of producing approximations significantly faster as a result of their specialization.

In addition, the network architectures are designed to place pointwise posterior distributions over the observed solutions; this setup facilitates simultaneous training and uncertainty quantification for the network solutions, allowing the models to provide pointwise uncertainties along with their predictions. An analysis of the predictive uncertainties is presented with experimental evidence establishing the validity of the uncertainty quantification schema for a collection of linear and nonlinear PDE systems. The reliability of the uncertainty estimates is also validated in the context of both in-distribution and out-of-distribution test data.

The proposed neural network training procedure is assessed using a novel convolutional encoder-decoder model, ConvPDE-UQ, in addition to an existing fully-connected approach, DeepONet. The convolutional framework is shown to provide accurate approximations to PDE solutions on varying domains, but is restricted by assumptions of uniform observation data and homogeneous boundary conditions. The fully-connected DeepONet framework provides a method for handling unstructured observation data and is also shown to provide accurate approximations for PDE systems with inhomogeneous boundary conditions; however, the resulting networks are constrained to a fixed domain due to the unstructured nature of the observation data which they accommodate. These two approaches thus provide complementary frameworks for constructing PDE-based operator networks which facilitate the real-time approximation of solutions to PDE systems for a broad range of target applications.
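One common way to realize such pointwise posteriors — consistent with, though not necessarily identical to, the setup above — is a heteroscedastic Gaussian likelihood, where the network predicts a mean and log-variance per solution point and both are trained jointly (a minimal sketch under that assumption):

```python
import torch

def gaussian_nll(mean, logvar, target):
    """Pointwise Gaussian negative log-likelihood. Minimizing this jointly
    trains the solution estimate (mean) and a per-point predictive
    variance, so one forward pass yields prediction and uncertainty."""
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()
```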

Evaluation of Uncertainty in Hydrodynamic Modeling

Camacho Rincon, Rene Alexander 17 August 2013
Uncertainty analysis in hydrodynamic modeling is useful to identify and report the limitations of a model caused by different sources of error. In practice, the main sources of error are divided into model structure errors, errors in the input data due to measurement imprecision among other causes, and parametric errors resulting from the difficulty of identifying physically representative parameter values valid at the temporal and spatial scale of the models. This investigation identifies, implements, evaluates, and recommends a set of methods for the evaluation of model structure uncertainty, parametric uncertainty, and input data uncertainty in hydrodynamic modeling studies. A comprehensive review of uncertainty analysis methods is provided, and a set of widely applied methods is selected and implemented in real case studies, identifying the main limitations and benefits of their use in hydrodynamic studies. In particular, the following methods are investigated: the First Order Variance Analysis (FOVA) method, the Monte Carlo Uncertainty Analysis (MCUA) method, the Bayesian Monte Carlo (BMC) method, the Markov Chain Monte Carlo (MCMC) method, and the Generalized Likelihood Uncertainty Estimation (GLUE) method. The results of this investigation indicate that the uncertainty estimates computed with FOVA are consistent with the results obtained by MCUA. In addition, the comparison of BMC, MCMC, and GLUE indicates that BMC and MCMC provide similar estimations of the posterior parameter probability distributions, single-point parameter values, and uncertainty bounds, mainly due to the use of the same likelihood function and the low number of parameters involved in the inference process. However, the implementation of MCMC is substantially more complex than that of BMC, given that its sampling algorithm requires a careful definition of auxiliary proposal probability distributions along with their variances to obtain parameter samples that effectively belong to the posterior parameter distribution. The analysis also suggests that the results of GLUE are inconsistent with the results of BMC and MCMC. It is concluded that BMC is a powerful and parsimonious strategy for evaluating all the sources of uncertainty in hydrodynamic modeling. Despite the computational requirements of BMC, the method can be easily implemented in most practical applications.
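To make the difference between the first two methods concrete, here is a small sketch (with made-up numbers and a deliberately simplified response function, not a model from the study): FOVA propagates input variances through first-order derivatives, while MCUA samples the inputs directly; for mildly nonlinear models the two estimates typically agree, as the study observes.

```python
import numpy as np

def fova_variance(f, x0, var_x, eps=1e-6):
    """First Order Variance Analysis: Var[f(X)] ~ sum_i (df/dx_i)^2 Var[x_i],
    with derivatives approximated by central finite differences."""
    x0, var_x = np.asarray(x0, float), np.asarray(var_x, float)
    grads = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                      for e in np.eye(len(x0))])
    return np.sum(grads ** 2 * var_x)

def mc_variance(f, x0, var_x, n=20_000, seed=0):
    """Monte Carlo Uncertainty Analysis: sample the inputs, propagate each
    sample through the model, and take the variance of the outputs."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(x0, np.sqrt(var_x), size=(n, len(x0)))
    return np.var([f(s) for s in samples])

# Toy response: a velocity-like quantity from roughness x[0] and slope x[1]
f = lambda x: np.sqrt(x[1]) / x[0]
x0, var_x = [0.03, 1e-3], [1e-6, 1e-8]
print(f"FOVA: {fova_variance(f, x0, var_x):.2e}")
print(f"MCUA: {mc_variance(f, x0, var_x):.2e}")
```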

Uncertainty Estimation on Natural Language Processing

He, Jianfeng 15 May 2024
Text plays a pivotal role in our daily lives, encompassing various forms such as social media posts, news articles, books, reports, and more. Consequently, Natural Language Processing (NLP) has garnered widespread attention. This technology empowers us to undertake tasks like text classification, entity recognition, and even crafting responses within a dialogue context. However, despite the expansive utility of NLP, it frequently necessitates a critical decision: whether to place trust in a model's predictions. To illustrate, consider a state-of-the-art (SOTA) model entrusted with diagnosing a disease or assessing the veracity of a rumor. An incorrect prediction in such scenarios can have dire consequences, impacting individuals' health or tarnishing their reputation. Consequently, it becomes imperative to establish a reliable method for evaluating the reliability of an NLP model's predictions, which is our focus: uncertainty estimation on NLP. Though many works have researched uncertainty estimation and NLP separately, the combination of these two domains is rare. This is because most NLP research emphasizes model prediction performance but tends to overlook the reliability of NLP model predictions. Additionally, current uncertainty estimation models may not be suitable for NLP due to the unique characteristics of NLP tasks, such as the need for more fine-grained information in named entity recognition. Therefore, this dissertation proposes novel uncertainty estimation methods for different NLP tasks by considering each NLP task's distinct characteristics. The NLP tasks are categorized into natural language understanding (NLU) and natural language generation (NLG, such as text summarization). Among the NLU tasks, the understanding can take two views: a global view (e.g., text classification at the document level) and a local view (e.g., natural language inference at the sentence level and named entity recognition at the token level). As a result, we research uncertainty estimation on three tasks: text classification, named entity recognition, and text summarization. Besides, because few-shot text classification has captured much attention recently, we also research uncertainty estimation on few-shot text classification.

For the first topic, uncertainty estimation on text classification, few uncertainty models focus on improving the performance of text classification where human resources are involved. In response to this gap, our research focuses on enhancing the accuracy of uncertainty scores by bolstering the confidence associated with winning scores. We introduce MSD, a novel model comprising three distinct components: 'mix-up,' 'self-ensembling,' and 'distinctiveness score.' The primary objective of MSD is to refine the accuracy of uncertainty scores by mitigating the issue of overconfidence in winning scores while simultaneously considering various categories of uncertainty. MSD can seamlessly integrate with different deep neural networks. Extensive experiments with ablation settings are conducted on four real-world datasets, resulting in consistently competitive improvements.

Our second topic focuses on uncertainty estimation on few-shot text classification (UEFTC), which has few or even only one available support sample for each class. UEFTC represents an underexplored research domain where, due to limited data samples, a UEFTC model predicts an uncertainty score to assess the likelihood of classification errors.
However, traditional uncertainty estimation models in text classification are ill-suited for UEFTC since they demand extensive training data, while UEFTC operates in a few-shot scenario, typically providing just a few support samples, or even just one, per class. To tackle this challenge, we introduce Contrastive Learning from Uncertainty Relations (CLUR) as a solution tailored for UEFTC. CLUR exhibits the unique capability to be effectively trained with only one support sample per class, aided by pseudo uncertainty scores. A distinguishing feature of CLUR is its autonomous learning of these pseudo uncertainty scores, in contrast to previous approaches that relied on manual specification. Our investigation of CLUR encompasses four model structures, allowing us to evaluate the performance of three commonly employed contrastive learning components in the context of UEFTC. Our findings highlight the effectiveness of two of these components.

Our third topic focuses on uncertainty estimation on sequential labeling. Sequential labeling involves the task of assigning labels to individual tokens in a sequence, exemplified by Named Entity Recognition (NER). Despite significant advancements in enhancing NER performance in prior research, the realm of uncertainty estimation for NER (UE-NER) remains relatively uncharted but is of paramount importance. This topic focuses on UE-NER, seeking to gauge uncertainty scores for NER predictions. Previous models for uncertainty estimation often overlook two distinctive attributes of NER: the interrelation among entities (where the learning of one entity's embedding depends on others) and the challenges posed by incorrect span predictions in entity extraction. To address these issues, we introduce the Sequential Labeling Posterior Network (SLPN), designed to estimate uncertainty scores for the extracted entities while considering uncertainty propagation from other tokens. Additionally, we have devised an evaluation methodology tailored to the specific nuances of wrong-span cases.

Our fourth topic focuses on an overlooked question that persists regarding the evaluation reliability of uncertainty estimation in text summarization (UE-TS). Text summarization, a key task in natural language generation (NLG), holds significant importance, particularly in domains where inaccuracies can have serious consequences, such as healthcare. UE-TS has garnered attention due to the potential risks associated with erroneous summaries. However, the reliability of evaluating UE-TS methods raises concerns, stemming from the interdependence between uncertainty model metrics and the wide array of NLG metrics. To address these concerns, we introduce a comprehensive UE-TS benchmark incorporating twenty-six NLG metrics across four dimensions. This benchmark evaluates the uncertainty estimation capabilities of two large language models and one pre-trained language model across two datasets. Additionally, it assesses the effectiveness of fourteen common uncertainty estimation methods. Our study underscores the necessity of utilizing diverse, uncorrelated NLG metrics and uncertainty estimation techniques for a robust evaluation of UE-TS methods. / Doctor of Philosophy / Text is integral to our daily activities, appearing in various forms such as social media posts, news articles, books, and reports. We rely on text for communication, information dissemination, and decision-making.
Given its ubiquity, the ability to process and understand text through Natural Language Processing (NLP) has become increasingly important. NLP technology enables us to perform tasks like text classification, which involves categorizing text into predefined labels, and named entity recognition (NER), which identifies specific entities such as names, dates, and locations within text. Additionally, NLP facilitates generating coherent and contextually appropriate responses in conversational agents, enhancing human-computer interaction. However, the reliability of NLP models is crucial, especially in sensitive applications like medical diagnoses, where errors can have severe consequences. This dissertation focuses on uncertainty estimation in NLP, a less explored but essential area. Uncertainty estimation helps evaluate the confidence of NLP model predictions. We propose new methods tailored to various NLP tasks, acknowledging their unique needs. NLP tasks are divided into natural language understanding (NLU) and natural language generation (NLG). Within NLU, we look at tasks from two perspectives: a global view (e.g., document-level text classification) and a local view (e.g., sentence-level inference and token-level entity recognition). Our research spans text classification, named entity recognition (NER), and text summarization, with a special focus on few-shot text classification due to its recent prominence. For text classification, we introduce the MSD model, which includes three components to enhance uncertainty score accuracy and address overconfidence issues. This model integrates seamlessly with different neural networks and shows consistent improvements in experiments. For few-shot text classification, we develop Contrastive Learning from Uncertainty Relations (CLUR), designed to work effectively with minimal support samples per class. CLUR autonomously learns pseudo uncertainty scores, demonstrating effectiveness with various contrastive learning components. In NER, we address the unique challenges of entity interrelation and span prediction errors. We propose the Sequential Labeling Posterior Network (SLPN) to estimate uncertainty scores while considering uncertainty propagation from other tokens. For text summarization, we create a benchmark with tens of metrics to evaluate uncertainty estimation methods across two datasets. This benchmark helps assess the reliability of these methods, highlighting the need for diverse, uncorrelated metrics. Overall, our work advances the understanding and implementation of uncertainty estimation in NLP, providing more reliable and accurate predictions across different tasks.
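Of the three MSD components named in the abstract, mix-up is the simplest to show in isolation; the sketch below is a generic version of that standard augmentation technique (an assumption for illustration, not the dissertation's implementation):

```python
import torch

def mixup(x, y, alpha=0.2):
    """Mix-up: train on convex combinations of example pairs and their
    (one-hot, float) labels. Interpolated targets discourage
    overconfident winning scores between classes."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]
```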

An investigation into enabling industrial machine tools as traceable measurement systems

Verma, Mayank January 2016
On-machine inspection (OMI) via on-machine probing (OMP) is a technology that has the potential to provide a step change in the manufacturing of high precision products. Bringing product inspection closer to the machining process is a very attractive proposition for many manufacturers who demand ever better quality, process control, and efficiency from their manufacturing systems. However, there is a shortage of understanding, experience, and knowledge with regard to efficiently implementing OMI on industrially-based multi-axis machine tools. Coupled with the risks associated with this disruptive technology, these are major obstacles preventing OMI from being confidently adopted in many high precision manufacturing environments. The research pursued in this thesis investigates the concept of enabling high precision machine tools as measurement devices and focuses upon the question: "How can traceable on-machine inspection be enabled and sustained in an industrial environment?" As highlighted by the literature and state-of-the-art review, much research and development focuses on the technology surrounding particular aspects of machine tool metrology and measurement, whether theory, hardware, software, or simulation. Little research has been performed in terms of confirming the viability of industrial OMI and the systematic and holistic application of existing and new technology to enable optimal intervention. This EngD research has contributed towards the use of industrial machine tools as traceable measurement systems. Through the test cases performed, the novel concepts proposed, and the solutions tested, a series of fundamental questions have been addressed, providing new knowledge of use to future researchers, engineers, consultants, and manufacturing professionals.

Bayesian networks for uncertainty estimation in the response of dynamic structures

Calanni Fraccone, Giorgio M. 07 July 2008
The dissertation focuses on estimating the uncertainty associated with stress/strain prediction procedures from dynamic test data used in turbine blade analysis. An accurate prediction of the maximum response levels for physical components during in-field operating conditions is essential for evaluating their performance and life characteristics, as well as for investigating how their behavior critically impacts system design and reliability assessment. Currently, stress/strain inference for a dynamic system is based on the combination of experimental data and results from the analytical/numerical model of the component under consideration. Both modeling challenges and testing limitations, however, contribute to the introduction of various sources of uncertainty within the given estimation procedure, and ultimately lead to diminished accuracy and reduced confidence in the predicted response. The objective of this work is to characterize the uncertainties present in the current response estimation process and provide a means to assess them quantitatively. More specifically, this research proposes a statistical methodology based on a Bayesian-network representation of the modeling process which allows for a statistically rigorous synthesis of modeling assumptions and information from experimental data. Such a framework addresses the problem of multi-directional uncertainty propagation, for which standard techniques for unidirectional propagation from input uncertainty to output variability are not suited. Furthermore, it allows for the inclusion in the analysis of newly available test data that can provide indirect evidence on the parameters of the structure's analytical model, as well as lead to a reduction of the residual uncertainty in the estimated quantities. As part of this work, key uncertainty sources (i.e., material and geometric properties, sensor measurement and placement, as well as noise due to data processing limitations) are investigated, and their impact upon the system response estimates is assessed through sensitivity studies. The results are utilized to identify the most significant contributors to uncertainty to be modeled within the developed Bayesian inference scheme. Simulated experimentation, statistically equivalent to specified real tests, is also constructed to generate the data necessary to build the appropriate Bayesian network, which is then infused with actual experimental information for the purpose of explaining the uncertainty embedded in the response predictions and quantifying their inherent accuracy.
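At the heart of the Bayesian-network formulation is the conditioning of model parameters on test evidence; the following self-contained sketch (toy numbers and a single scalar parameter rather than the full network) shows that update on a grid:

```python
import numpy as np

theta = np.linspace(0.5, 1.5, 501)                  # candidate parameter values
prior = np.exp(-0.5 * ((theta - 1.0) / 0.2) ** 2)   # Gaussian prior belief

def likelihood(grid, data, sigma=0.05):
    """Gaussian measurement model: each observation = parameter + noise."""
    return np.prod(np.exp(-0.5 * ((data[:, None] - grid) / sigma) ** 2), axis=0)

data = np.array([1.08, 1.11, 1.05])                 # simulated test responses
posterior = prior * likelihood(theta, data)
posterior /= np.trapz(posterior, theta)             # normalize to a density

mean = np.trapz(theta * posterior, theta)
std = np.sqrt(np.trapz((theta - mean) ** 2 * posterior, theta))
print(f"posterior mean {mean:.3f}, std {std:.3f}")  # residual uncertainty shrinks with data
```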

Predictive Techniques and Methods for Decision Support in Situations with Poor Data Quality

König, Rikard January 2009
Today, decision support systems based on predictive modeling are becoming more common, since organizations often collect more data than decision makers can handle manually. Predictive models are used to find potentially valuable patterns in the data, or to predict the outcome of some event. There are numerous predictive techniques, ranging from simple techniques such as linear regression, to complex powerful ones like artificial neural networks. Complex models usually obtain better predictive performance, but are opaque and thus cannot be used to explain predictions or discovered patterns. The design choice of which predictive technique to use becomes even harder since no technique outperforms all others over a large set of problems. It is even difficult to find the best parameter values for a specific technique, since these settings also are problem dependent. One way to simplify this vital decision is to combine several models, possibly created with different settings and techniques, into an ensemble. Ensembles are known to be more robust and powerful than individual models, and ensemble diversity can be used to estimate the uncertainty associated with each prediction.

In real-world data mining projects, data is often imprecise, contains uncertainties or is missing important values, making it impossible to create models with sufficient performance for fully automated systems. In these cases, predictions need to be manually analyzed and adjusted. Here, opaque models like ensembles have a disadvantage, since the analysis requires understandable models. To overcome this deficiency of opaque models, researchers have developed rule extraction techniques that try to extract comprehensible rules from opaque models, while retaining sufficient accuracy.

This thesis suggests a straightforward but comprehensive method for predictive modeling in situations with poor data quality. First, ensembles are used for the actual modeling, since they are powerful, robust and require few design choices. Next, ensemble uncertainty estimations pinpoint predictions that need special attention from a decision maker. Finally, rule extraction is performed to support the analysis of uncertain predictions. Using this method, ensembles can be used for predictive modeling, in spite of their opacity and sometimes insufficient global performance, while the involvement of a decision maker is minimized.

The main contributions of this thesis are three novel techniques that enhance the performance of the proposed method. The first technique deals with ensemble uncertainty estimation and is based on a successful approach often used in weather forecasting. The other two are improvements of a rule extraction technique, resulting in increased comprehensibility and more accurate uncertainty estimations. / Sponsorship: This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.
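The first two steps of the suggested method — ensemble modeling and uncertainty-based triage — can be sketched as follows (a toy regression with assumed data and a bootstrap ensemble; the thesis's actual techniques differ in detail):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

# Bootstrap ensemble: member disagreement serves as a per-prediction
# uncertainty estimate that flags cases for a decision maker's review.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 500)

members = [GradientBoostingRegressor(random_state=s).fit(*resample(X, y, random_state=s))
           for s in range(10)]

X_new = np.linspace(-4, 4, 9).reshape(-1, 1)        # includes extrapolation
preds = np.stack([m.predict(X_new) for m in members])
mean, spread = preds.mean(0), preds.std(0)

for x, mu, s in zip(X_new[:, 0], mean, spread):
    flag = "  <- needs manual analysis" if s > 2 * spread.mean() else ""
    print(f"x={x:+.1f}  pred={mu:+.2f}  uncertainty={s:.2f}{flag}")
```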

Using Deep Learning to SegmentCardiovascular 4D Flow MRI : 3D U-Net for cardiovascular 4D flow MRI segmentation and Bayesian 3D U-Net for uncertainty estimation

Bhutra, Omkar January 2021
Deep convolutional neural networks (CNNs) have achieved state-of-the-art accuracies for multi-class segmentation in biomedical image science. In this thesis, a 3D U-Net is used to segment 4D flow magnetic resonance images that include the heart and its large vessels. The 4D flow MRI dataset has been segmented and validated using a multi-atlas based registration technique. This multi-atlas based technique resulted in high-quality segmentations, with the disadvantage of the long computation times typically required by three-dimensional registration techniques. The 3D U-Net framework learns to classify voxels by transforming the information about the segmentation into a latent feature space in a contracting path and upsampling it to a semantic segmentation in an expanding path. A CNN trained using a sufficiently diverse set of volumes at different time intervals of the diastole and systole should be able to handle more extreme morphological differences between subjects. Evaluation of the results is based on metrics for segmentation evaluation such as the Dice coefficient. Uncertainty is estimated using a Bayesian implementation of a 3D U-Net of similar architecture. / The presentation was held online over Zoom due to COVID-19 restrictions.
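The evaluation metric mentioned above is easy to state precisely; a minimal version for binary masks follows (illustrative, not the thesis code):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient: 2|A n B| / (|A| + |B|), ranging from 0 (no
    overlap) to 1 (perfect agreement between the two masks)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```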

Dataset Drift in Radar Warning Receivers : Out-of-Distribution Detection for Radar Emitter Classification using an RNN-based Deep Ensemble

Coleman, Kevin January 2023
Changes to the signal environment of a radar warning receiver (RWR) over time through dataset drift can negatively affect a machine learning (ML) model deployed for radar emitter classification (REC). The training data comes from a simulator at Saab AB, in the form of pulsed radar signals in a time series. In order to investigate this phenomenon on a neural network (NN), this study first implements an underlying classifier (UC) in the form of a deep ensemble (DE), where each ensemble member consists of an NN with two independently trained bidirectional LSTM channels for each of the signal features: pulse repetition interval (PRI), pulse width (PW), and carrier frequency (CF). In tests, the UC performs best for REC when using all three features. Because dataset drift can be treated as detecting out-of-distribution (OOD) samples over time, the aim is to reduce NN overconfidence on data from unseen radar emitters in order to enable OOD detection. The method estimates uncertainty with predictive entropy and classifies samples whose entropy exceeds a threshold as OOD. In the first set of tests, OOD is defined by holding out one feature modulation from the training dataset and choosing it as the only modulation in the OOD dataset used during testing. With this definition, Stagger and Jitter are the most difficult to detect as OOD. Moreover, using DEs with six ensemble members and adding LogitNorm to the architecture improves the OOD detection performance. Furthermore, the OOD detection method performs well for up to 300 emitter classes, and predictive entropy outperforms the baseline in almost all tests. Finally, the model performs worse when OOD is simply defined as signals from unseen emitters, because of a decrease in precision. In conclusion, the implemented changes managed to reduce the overconfidence of this particular NN and improve OOD detection for REC.
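The entropy-thresholding step described above amounts to a few lines; the sketch below (illustrative, with assumed array shapes) averages the ensemble members' class probabilities and flags high-entropy inputs as OOD:

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the ensemble-averaged class distribution.
    member_probs: array of shape (n_members, n_samples, n_classes)."""
    p = member_probs.mean(axis=0)
    return -np.sum(p * np.log(p + 1e-12), axis=-1)

def flag_ood(member_probs, threshold):
    """Mark samples whose predictive entropy exceeds a threshold chosen
    on in-distribution validation data as out-of-distribution."""
    return predictive_entropy(member_probs) > threshold
```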
