
Enabling Digital Twinning via Information-Theoretic Machine Learning-Based Inference Intelligence

<p dir="ltr">Nuclear energy, renowned for its clean, carbon-free attributes and cost-effectiveness, stands as a pivotal pillar in the global quest for sustainable energy sources. Additionally, nuclear power, being a spatially high-concentrated industry, offers an unparalleled energy density compared to other sources of energy. Despite its numerous advantages, if a nuclear power plant (NPP) is not operated safely, it can lead to long-term shutdowns, radiation exposure to workers, radiation contamination of surrounding areas, or even a national-scale disaster, as witnessed in the Chernobyl incident of 1986. Therefore, ensuring the safe operation of nuclear reactors is considered the most important factor in their operation. Recognizing the intricate tradeoff between safety and economy, economic considerations are often sacrificed in favor of safety.</p><p dir="ltr">Given this context, it becomes crucial to develop technologies that ensure NPPs’ safety while optimizing their operational efficiency, thereby minimizing the sacrifice of economic benefits. In response to this critical need, scientists introduced the term “digital twin (DT)”, derived from the concept of product lifecycle management. As the first instance of the term, the DT model comprises the physical product, its digital representation, data flowing from the physical to the DT, and information flowing from the digital to the physical twin. In this regard, various nuclear stakeholders such as reactor designers, researchers, operators, and regulators in the nuclear sector, are pursuing the DT technologies which are expected to enable NPPs to be monitored and operated/controlled in an automated and reliable manner. DT is now being actively sought given its wide potential, including increased operational effectiveness, enhanced safety and reliability, uncertainty reduction, etc.</p><p dir="ltr">While a number of technical challenges must be overcome to successfully implement DT technology, this Ph.D. work limits its focus on one of the DT’s top challenges, i.e., model validation, which ensures that model predictions can be trusted for a given application, e.g., the domain envisaged for code usage. Model validation is also a key regulatory requirement in support of the various developmental stages starting from conceptual design to deployment, licensing, operation, and safety. To ensure a given model to be validated, the regulatory process requires the consolidation of two independent sources of knowledge, one from measurements collected from experimental conditions, and the other from code predictions that model the same experimental conditions.</p><p dir="ltr">and computational domains in an optimal manner, considering the characteristics of predictor and target responses. Successful model validation necessitates a complete data analytics pipeline, generally including data preprocessing, data analysis (model training), and result interpretation. Therefore, this Ph.D. work begins by revisiting fundamental concepts such as uncertainty classification, sensitivity analysis (SA), similarity/representativity metrics, and outlier rejection techniques, which serve as robust cornerstones of validation analysis.</p><p dir="ltr">The ultimate goal of this Ph.D. work is to develop an intelligent inference framework that infers/predicts given responses, adaptively handling various levels of data complexities, i.e., residual shape, nonlinearity, heteroscedasticity, etc. These Ph.D. 
These Ph.D. studies are expected to significantly advance DT technology, enabling support for various levels of operational autonomy in both existing and first-of-a-kind reactor designs. This extends to critical aspects such as nuclear criticality safety, nuclear fuel depletion dynamics, spent nuclear fuel (SNF) analysis, and the introduction of new fuel designs, such as high-burnup fuel and high-assay low-enriched uranium (HALEU) fuel. These advancements are crucial in scenarios where constructing new experiments is costly, time-consuming, or infeasible, as with new reactor systems or high-consequence events like criticality accidents.
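For the similarity/representativity metrics mentioned in the abstract, the following is a minimal numpy sketch of the classical representativity factor used in nuclear validation studies: the correlation between an experiment's and an application's responses induced by shared nuclear-data uncertainties. The sensitivity vectors and covariance matrix here are hypothetical toy values, not thesis data.

```python
# Minimal sketch of a representativity factor between an experiment and a
# target application, assuming first-order sensitivity vectors s_e, s_a and
# a nuclear-data covariance matrix C. All values below are hypothetical.
import numpy as np

def representativity(s_e: np.ndarray, s_a: np.ndarray, C: np.ndarray) -> float:
    """Correlation between experiment and application responses induced by
    shared data uncertainties: r = s_e^T C s_a / sqrt((s_e^T C s_e)(s_a^T C s_a))."""
    num = s_e @ C @ s_a
    den = np.sqrt((s_e @ C @ s_e) * (s_a @ C @ s_a))
    return float(num / den)

# Toy example with three shared uncertain parameters.
C = np.diag([0.02, 0.01, 0.05])      # illustrative covariance matrix
s_exp = np.array([0.8, 0.1, 0.3])    # experiment sensitivities
s_app = np.array([0.7, 0.2, 0.4])    # application sensitivities
print(f"r = {representativity(s_exp, s_app, C):.3f}")
```

A value of r near 1 indicates that the experiment exercises the same uncertainty sources as the application, making it a strong candidate for validating predictions in that application domain.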

  1. 10.25394/pgs.24653775.v1
Identifier oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/24653775
Date 30 November 2023
Creators Jeongwon Seo (8458191)
Source Sets Purdue University
Detected Language English
Type Text, Thesis
Rights CC BY 4.0
Relation https://figshare.com/articles/thesis/Enabling_Digital_Twinning_via_Information-Theoretic_Machine_Learning-Based_Inference_Intelligence/24653775
