41

PHYSICS-INFORMED NEURAL NETWORK SOLUTION OF POINT KINETICS EQUATIONS FOR PUR-1 DIGITAL TWIN

Konstantinos Prantikos (14196773), 01 December 2022

A digital twin (DT), which keeps track of nuclear reactor history to provide real-time predictions, has recently been proposed for nuclear reactor monitoring. A digital twin can be implemented using either a differential-equations-based physics model or a data-driven machine learning model. The principal challenge in a physics-model-based DT is achieving sufficient model fidelity to represent a complex experimental system, while the main challenge in a data-driven DT lies in the extensive training requirements and potential lack of predictive ability.

In this thesis, we investigate the performance of a hybrid approach based on physics-informed neural networks (PINNs), which encode fundamental physical laws into the loss function of the neural network. In this way, PINNs establish theoretical constraints and biases to supplement measurement data and provide solutions to several limitations of purely data-driven machine learning (ML) models. We develop a PINN model to solve the point kinetics equations (PKEs), a system of time-dependent stiff nonlinear ordinary differential equations that constitutes a nuclear reactor reduced-order model under the approximation of ignoring the spatial dependence of the neutron flux. PKEs portray the kinetic behavior of the system, and this kind of approach is the basis for most analyses of reactor systems, except in cases where flux shapes are known to vary with time. The system describes nuclear parameters such as the neutron density, the delayed neutron precursor density, and the reactivity. Both the neutron density and the delayed neutron precursor density are vital parameters for safety and for the transient behavior of the reactor power.

The PINN solution of the PKEs is developed to monitor a start-up transient of the Purdue University Reactor Number One (PUR-1), using experimental parameters for the reactivity feedback schedule and the neutron source. The modeled facility, PUR-1, is a pool-type small research reactor located in West Lafayette, Indiana. It is an all-digital light water reactor (LWR) submerged in a deep water pool, with a power output of 10 kW. The results demonstrate strong agreement between the PINN solution and a finite difference numerical solution of the PKEs. We investigate PINN performance in both data interpolation and extrapolation.

The findings of this thesis indicate that the PINN model achieved its highest performance and lowest errors in data interpolation. For extrapolation, three test cases were considered, in which the extrapolation was performed over five-second, 10-second, and 15-second intervals. The extrapolation errors are comparable to those of the interpolation predictions, with accuracy decreasing as the time interval increases.
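To make the PINN construction concrete, the sketch below assembles a loss that encodes the PKEs, assuming a single delayed-neutron group, an illustrative step-reactivity schedule, and placeholder kinetics parameters rather than the PUR-1 values used in the thesis:

```python
# A minimal PINN sketch for one-delayed-group point kinetics equations.
# Kinetics parameters, the reactivity schedule, and the network shape are
# illustrative assumptions, not the PUR-1 values used in the thesis.
import torch
import torch.nn as nn

beta, lam, Lam = 0.0065, 0.08, 1e-4     # delayed fraction, decay const (1/s), generation time (s)

def rho(t):                              # assumed step reactivity insertion at t = 1 s
    return 0.003 * (t > 1.0).float()

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))    # outputs: neutron density n(t), precursor density C(t)

def physics_loss(t):
    t = t.requires_grad_(True)
    out = net(t)
    n, C = out[:, :1], out[:, 1:]
    dn = torch.autograd.grad(n, t, torch.ones_like(n), create_graph=True)[0]
    dC = torch.autograd.grad(C, t, torch.ones_like(C), create_graph=True)[0]
    # PKE residuals: dn/dt = (rho - beta)/Lambda * n + lambda * C
    #                dC/dt = beta/Lambda * n - lambda * C
    r_n = dn - ((rho(t) - beta) / Lam * n + lam * C)
    r_C = dC - (beta / Lam * n - lam * C)
    return (r_n ** 2).mean() + (r_C ** 2).mean()

def ic_loss():                           # equilibrium initial condition at t = 0
    out0 = net(torch.zeros(1, 1))
    n0, C0 = 1.0, beta / (lam * Lam)     # C0 = beta * n0 / (lambda * Lambda)
    return (out0[0, 0] - n0) ** 2 + ((out0[0, 1] - C0) / C0) ** 2  # C scaled for balance

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t_coll = torch.rand(256, 1) * 5.0        # collocation points on a 5 s window
for step in range(5000):
    opt.zero_grad()
    loss = physics_loss(t_coll) + ic_loss()
    loss.backward()
    opt.step()
```

In the thesis setting, the residuals would use six delayed-neutron groups and the measured PUR-1 reactivity schedule, and the stiffness of the PKEs typically calls for careful collocation sampling and loss weighting.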
42

Graph Matching Based on a Few Seeds: Theoretical Algorithms and Graph Neural Network Approaches

Liren Yu (17329693), 03 November 2023

Since graphs are natural representations for encoding relational data, graph matching is an emerging task that has attracted increasing attention and could impact domains such as social network de-anonymization and computer vision. Our main interest is designing polynomial-time algorithms for seeded graph matching problems, where a subset of pre-matched vertex pairs (seeds) is revealed.

However, existing work does not fully investigate the pivotal role of seeds and falls short of making the best use of them. Notably, most existing hand-crafted algorithms only use "witnesses" in the 1-hop neighborhood. Although some advanced algorithms use multi-hop witnesses, their theoretical analysis applies only to Erdős–Rényi random graphs and requires the seeds to be all correct, conditions which often do not hold in real applications. Furthermore, a parallel line of research, Graph Neural Network (GNN) approaches, typically employs a semi-supervised approach, which requires a large number of seeds and lacks the capacity to distill knowledge transferable to unseen graphs.

In my dissertation, I take two approaches to address these limitations. In the first approach, we design hand-crafted algorithms that can properly use multi-hop witnesses to match graphs. We first study graph matching using multi-hop neighborhoods when partially correct seeds are provided. Specifically, consider two correlated graphs whose edges are sampled independently from a parent Erdős–Rényi graph $\mathcal{G}(n,p)$. A mapping between the vertices of the two graphs is provided as seeds, of which an unknown fraction is correct. We first analyze a simple algorithm that matches vertices based on the number of common seeds in the 1-hop neighborhoods, and then propose a new algorithm that uses seeds in the $D$-hop neighborhoods. We establish non-asymptotic performance guarantees of perfect matching for both the 1-hop and 2-hop algorithms, showing that the new 2-hop algorithm requires substantially fewer correct seeds than the 1-hop algorithm when the graphs are sparse. Moreover, by combining the new performance guarantees for the 1-hop and 2-hop algorithms, we attain the best-known results (in terms of the required fraction of correct seeds) across the entire range of graph sparsity, significantly improving on previous results. We then study the role of multi-hop neighborhoods in matching power-law graphs. Assume that two edge-correlated graphs are independently edge-sampled from a common parent graph with a power-law degree distribution, and that a set of correctly matched vertex pairs is chosen at random and revealed as initial seeds. Our goal is to use the seeds to recover the remaining latent vertex correspondence between the two graphs. Departing from existing approaches that focus on the use of high-degree seeds in 1-hop neighborhoods, we develop an efficient algorithm that exploits low-degree seeds in suitably defined $D$-hop neighborhoods. Our result achieves an exponential reduction in the seed size requirement compared to the best previously known results.

In the second approach, we study GNNs for seeded graph matching. We propose a new supervised approach that can learn from a training set how to match unseen graphs with only a few seeds. Our SeedGNN architecture incorporates several novel designs inspired by our theoretical studies of seeded graph matching: 1) it can learn to compute and use witness-like information from different hops, in a way that generalizes to graphs of different sizes; and 2) it can use easily matched node pairs as new seeds to improve the matching in subsequent layers. We evaluate SeedGNN on synthetic and real-world graphs and demonstrate significant performance improvements over both non-learning and learning algorithms in the existing literature. Furthermore, our experiments confirm that the knowledge learned by SeedGNN from training graphs can be generalized to test graphs of different sizes and categories.
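As a concrete illustration of the 1-hop idea described above, the sketch below counts seed witnesses for each candidate vertex pair and matches greedily by count. It assumes NetworkX graphs and is a deliberate simplification of the thesis algorithms, without their correctness guarantees:

```python
# A sketch of 1-hop witness counting for seeded graph matching. Greedy
# matching by witness count is a simplified stand-in for the analyzed
# algorithms; the thesis versions extend this idea to D-hop neighborhoods.
import networkx as nx
import numpy as np

def one_hop_match(G1, G2, seeds):
    """seeds: dict mapping G1 vertices to their (possibly noisy) G2 matches."""
    u_cands = [u for u in G1 if u not in seeds]
    v_cands = [v for v in G2 if v not in seeds.values()]
    W = np.zeros((len(u_cands), len(v_cands)))
    for i, u in enumerate(u_cands):
        for j, v in enumerate(v_cands):
            # a seed (w -> w') is a witness for (u, v) if w ~ u in G1 and w' ~ v in G2
            W[i, j] = sum(1 for w, wp in seeds.items()
                          if G1.has_edge(u, w) and G2.has_edge(v, wp))
    matches = dict(seeds)
    used_i, used_j = set(), set()
    for k in np.argsort(-W, axis=None):        # greedily take the largest counts
        i, j = divmod(int(k), W.shape[1])
        if i not in used_i and j not in used_j and W[i, j] > 0:
            matches[u_cands[i]] = v_cands[j]
            used_i.add(i); used_j.add(j)
    return matches
```

The 2-hop and $D$-hop algorithms replace the direct adjacency test with membership in suitably defined multi-hop neighborhoods, which is what reduces the required number of correct seeds on sparse graphs.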
43

Machine Learning of Heater Zone Sensors in Liquid Sodium Facility

Maria Pantopoulou (16494174), 06 July 2023

Advanced high-temperature fluid reactors (ARs), such as sodium fast reactors (SFRs) and molten salt cooled reactors (MSCRs), are promising nuclear energy options that offer lower levelized electricity costs than existing light water reactors (LWRs). Increasing the economic competitiveness of ARs in the open market involves developing strategies to reduce operation and maintenance (O&M) costs. Digitization of ARs allows a continuous on-line monitoring paradigm to be implemented, achieving early detection of incipient problems and thus reducing O&M costs. Machine learning (ML) algorithms offer a number of advantages for reactor monitoring through anticipation of key performance variables using data-driven process models. An ML model does not require detailed knowledge of the system, which may be difficult to obtain or unavailable because of commercial privacy restrictions. In addition, any data obtained from sensors or through ML models need to be securely transmitted under all possible conditions, including cyber-attacks. Quantum information processing offers promising solutions to these threats by establishing secure communications, owing to the unique quantum properties of entanglement and superposition. More specifically, quantum key distribution (QKD) algorithms can be used to generate and transmit keys between the reactor and a remote user. In one popular QKD protocol, BB84, the symmetric keys are paired with the Advanced Encryption Standard (AES) to protect the information. Another challenge in sensor measurements is noise, which can affect the accuracy and reliability of the measured values. The presence of noise in sensor measurements can lead to incorrect interpretations of the data, so it is crucial to develop effective signal processing techniques to improve the quality of the measurements.

In this study, we develop several variations of recurrent neural networks (RNNs) and test their ability to predict future values of thermocouple measurements. Data obtained from a heat-up experiment conducted in a liquid sodium experimental facility are used for training and testing the RNNs. Extrapolation is also explored, using measurements from different sensors to train and test a network. We then examine through computer simulations the potential for secure real-time communication of monitoring information using the BB84 protocol. Finally, signal analysis is performed on the sensor signals with the Discrete Fourier Transform (DFT), to correlate the prediction results with the analysis of the time series in the frequency domain. Using information from the frequency analysis, we apply cutoff filters to the original time series and test the performance of the networks again. Results show that the ML models developed in this work can be used efficiently for forecasting thermocouple measurements, as they provide root mean square error (RMSE) values lower than the measurement uncertainty of the thermocouples. Extrapolation produces good results, with performance related to the Euclidean distance between the sets of time series. Moreover, the results from using the BB84 protocol to securely transmit the measurements demonstrate the feasibility of secure real-time communication of monitoring information. The application of the cutoff filters provided more accurate predictions of the thermocouple measurements than the unfiltered signals.

The suite of computational tools developed in this work is shown to be efficient and promises to have a positive impact on improving the performance of an AR.
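A minimal sketch of this forecasting setup follows: a DFT low-pass filter applied to a thermocouple series, then one RNN variant (an LSTM) trained to predict the next sample from a sliding window. The window length, cutoff frequency, layer sizes, and the synthetic stand-in series are all assumptions, not the facility data:

```python
# Sketch: FFT low-pass filtering of a thermocouple series, then an LSTM
# one-step-ahead forecaster. All parameters and the toy series are assumed.
import numpy as np
import torch
import torch.nn as nn

def lowpass(x, cutoff_hz, fs):
    """Zero out DFT components above cutoff_hz (fs = sampling rate in Hz)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, n=len(x))

def windows(x, w):
    """Sliding windows of length w as inputs, the following sample as target."""
    xs = np.stack([x[i:i + w] for i in range(len(x) - w)])
    ys = x[w:]
    return (torch.tensor(xs, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(ys, dtype=torch.float32))

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

# toy stand-in for a measured heat-up transient (ramp plus sensor noise)
t = np.linspace(0, 600, 3000)
temp = 200 + 0.5 * t / 60 + np.random.normal(0, 0.3, t.size)
xs, ys = windows(lowpass(temp, cutoff_hz=0.5, fs=5.0), w=50)

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xs), ys)
    loss.backward()
    opt.step()
rmse = loss.sqrt().item()   # compare against the thermocouple measurement uncertainty
```

The comparison at the end mirrors the study's acceptance criterion: the forecaster is considered useful when its RMSE falls below the sensor's measurement uncertainty.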
44

Machine Learning-Based Predictive Methods for Polyphase Motor Condition Monitoring

David Matthew LeClerc (13048125), 29 July 2022

This paper explored the application of three machine learning models to predictive motor maintenance: Logistic Regression, Sequential Minimal Optimization (SMO), and Naïve Bayes. A comparative analysis of these models illustrated that, while each had an accuracy greater than 95% in this study, the Logistic Regression model exhibited the most reliable operation.
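A hedged sketch of this kind of three-model comparison using scikit-learn is shown below; the feature matrix and fault labels are synthetic placeholders rather than the paper's motor data, and SMO corresponds to the algorithm used internally by SVC (libsvm):

```python
# Sketch of a three-way model comparison; X and y are placeholders for
# motor-condition features and healthy/faulty labels, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # placeholder motor-current features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # placeholder condition label

for name, clf in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                  ("SMO (SVC)", SVC()),
                  ("Naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```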
45

TEMPORAL EVENT MODELING OF SOCIAL HARM WITH HIGH DIMENSIONAL AND LATENT COVARIATES

Xueying Liu (13118850), 09 September 2022

The counting process is fundamental to many real-world problems with event data. The Poisson process, used as the background intensity of the Hawkes process, is the most commonly used point process. The Hawkes process, a self-exciting point process, fits temporal event data, spatio-temporal event data, and event data with covariates. We study a Hawkes process fitted to heterogeneous drug overdose data via a novel semi-parametric approach. The counting process is also related to survival data, since both study the occurrence of events over time. We fit a Cox model to temporal event data with a large corpus that is processed into high-dimensional covariates, and study the significant features that influence the intensity of events.
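As a concrete reference point, a Hawkes process with constant Poisson background rate mu and exponential kernel has intensity lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)). The sketch below simulates such a process by Ogata's thinning; the parameters are illustrative, not the fitted overdose-data values, and the thesis's semi-parametric estimation is not shown:

```python
# Sketch: Hawkes process with Poisson background rate mu and exponential
# self-excitation kernel, simulated via Ogata's thinning. Parameters are
# illustrative (stability requires alpha/beta < 1).
import numpy as np

def intensity(t, events, mu, alpha, beta):
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    rng = np.random.default_rng(seed)
    events, t = np.array([]), 0.0
    while t < T:
        # +alpha covers the intensity jump if an event just occurred at t,
        # and intensity only decays between events, so lam_bar dominates.
        lam_bar = intensity(t, events, mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.uniform() < intensity(t, events, mu, alpha, beta) / lam_bar:
            events = np.append(events, t)   # accept: self-excitation kicks in
    return events

ts = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)
```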
46

Towards Building a High-Performance Intelligent Radio Network through Deep Learning: Addressing Data Privacy, Adversarial Robustness, Network Structure, and Latency Requirements.

Abu Shafin Moham Mahdee Jameel (18424200), 26 April 2024

With the increasing availability of inexpensive computing power in wireless radio network nodes, machine learning based models are being deployed in operations that traditionally relied on rule-based or statistical methods. Contemporary high-bandwidth networks make significant amounts of training data available in a comparatively short time, aiding the development of better deep learning models. Specialized deep learning models developed for wireless networks have been shown to consistently outperform traditional methods in a variety of wireless network applications.

We aim to address some of the unique challenges inherent in the wireless radio communication domain. Firstly, as data is transmitted over the air, data privacy and adversarial attacks pose heightened risks. Secondly, due to the volume of data and the time-sensitive nature of the required processing, the speed of the machine learning model becomes a significant factor, often necessitating operation within a latency constraint. Thirdly, the impact of diverse and time-varying wireless environments means that any machine learning model also needs to be generalizable. The increasing computing power present in wireless nodes provides an opportunity to offload some of the deep learning to the edge, which also impacts data privacy.

Towards this goal, we work on deep learning methods that operate on different aspects of a wireless network (network packets, error prediction, modulation classification, and channel estimation) and are able to operate within the latency constraint, while simultaneously providing better privacy and security. After proposing solutions that work in a traditional centralized learning environment, we explore edge learning paradigms where the learning happens in distributed nodes.
47

LEARNING OBJECTIVE FUNCTIONS FOR AUTONOMOUS SYSTEMS

Zihao Liang (18966976), 03 July 2024

In recent years, advancements in robotics and computing power have enabled robots to master complex tasks. Nevertheless, merely executing tasks is not sufficient for robots: achieving higher robot autonomy requires learning the objective function. Autonomous systems can effectively eliminate the need for explicit programming by autonomously learning the control objective and deriving their control policy through the observation of task demonstrations. Hence, there is a need for a method by which robots can learn the desired objective functions. In this thesis, we address several challenges in objective learning for autonomous systems, enhancing the applicability of our method in real-world scenarios. The ultimate objective of the thesis is to create a universal objective learning approach capable of addressing a range of existing challenges in the field while emphasizing data efficiency and robustness. Building on this intuition, we present a framework for autonomous systems that addresses a variety of objective learning tasks in real time, even in the presence of noisy data. In addition to objective learning, the framework is capable of handling various other learning and control tasks.

The first part of this thesis concentrates on objective learning methods, specifically inverse optimal control (IOC). Within this domain, we make three significant contributions aimed at three existing challenges in IOC: 1) learning from minimal data, 2) learning without prior knowledge of system dynamics, and 3) learning with system outputs.

The second part of this thesis develops a unified IOC framework that addresses all of the challenges mentioned above. It introduces a new paradigm for autonomous systems, referred to as Online Control-Informed Learning, which tackles various learning and control tasks online, with data efficiency and robustness to noisy data. Integrating optimal control theory, online state estimation techniques, and machine learning methods, the proposed paradigm offers an online learning framework capable of tackling a diverse array of learning and control tasks, including online imitation learning, online system identification, and policy tuning on the fly, all with efficient use of data and computational resources while ensuring robust performance.
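To make the IOC idea concrete, here is a toy sketch under strong assumptions (known linear dynamics and a diagonal quadratic cost, exactly the assumptions the thesis methods work to remove): it recovers cost weights from a demonstration by searching for the weights whose LQR policy reproduces the demonstrated controls. All matrices and weights are illustrative:

```python
# Toy inverse optimal control sketch: recover quadratic cost weights from a
# demonstration by matching the induced LQR policy. Assumes known linear
# dynamics; weights are identifiable only up to a common scale factor.
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator dynamics
B = np.array([[0.0], [0.1]])

def lqr_gain(q, r):
    Q, R = np.diag(q), np.array([[r]])
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# demonstration generated by the (hidden) true objective
K_true = lqr_gain(q=[2.0, 0.5], r=0.1)
x = np.array([1.0, 0.0]); xs, us = [], []
for _ in range(50):
    u = -K_true @ x
    xs.append(x); us.append(u)
    x = A @ x + B @ u
xs, us = np.array(xs), np.array(us)

def loss(theta):                          # theta = log-weights, keeps them positive
    q0, q1, r = np.exp(theta)
    K = lqr_gain([q0, q1], r)
    return np.sum((us + xs @ K.T) ** 2)   # policy-matching residual

theta_hat = minimize(loss, x0=np.zeros(3), method="Nelder-Mead").x
print(np.exp(theta_hat))                  # recovered up to a common scale
```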
48

Unraveling Complexity: Panoptic Segmentation in Cellular and Space Imagery

Emanuele Plebani (18403245), 03 June 2024

Advancements in machine learning, especially deep learning, have facilitated the creation of models capable of performing tasks previously thought impossible. This progress has opened new possibilities across diverse fields such as medical imaging and remote sensing. However, the performance of these models relies heavily on the availability of extensive labeled datasets. Collecting large amounts of labeled data poses a significant financial burden, particularly in specialized fields like medical imaging and remote sensing, where annotation requires expert knowledge. To address this challenge, various methods have been developed to mitigate the need for labeled data or to leverage the information contained in unlabeled data; these include self-supervised learning, few-shot learning, and semi-supervised learning. This dissertation centers on the application of semi-supervised learning to segmentation tasks.

We focus on panoptic segmentation, a task that combines semantic segmentation (assigning a class to each pixel) and instance segmentation (grouping pixels into different object instances). We choose two segmentation tasks in different domains: nerve segmentation in microscopic imaging and hyperspectral segmentation in satellite images from Mars. Our study reveals that, while the direct application of methods developed for natural images may yield low performance, targeted modifications or the development of robust models can provide satisfactory results, thereby unlocking new applications such as machine-assisted annotation of new data.

The dissertation begins with a challenging panoptic segmentation problem in microscopic imaging, systematically exploring model architectures to improve generalization. Subsequently, it investigates how semi-supervised learning may mitigate the need for annotated data. It then moves to hyperspectral imaging, introducing a hierarchical Bayesian model (HBM) to robustly classify single pixels. Key contributions include developing a state-of-the-art U-Net model for nerve segmentation, improving the model's ability to segment different cellular structures, evaluating semi-supervised learning methods in the same setting, and proposing the HBM for hyperspectral segmentation. The dissertation also provides a dataset of labeled CRISM pixels and mineral detections, together with a software toolbox implementing the full HBM pipeline, to facilitate the development of new models.
49

Accelerating AI-driven scientific discovery with end-to-end learning and random projection

Md Nasim (19471057), 23 August 2024

Scientific discovery of new knowledge from data can enhance our understanding of the physical world and lead to the innovation of new technologies. AI-driven methods can greatly accelerate scientific discovery and are essential for analyzing and identifying patterns in huge volumes of experimental data. However, the current AI-driven scientific discovery pipeline suffers from several inefficiencies, including a lack of precise modeling, a lack of efficient learning methods, and a lack of human-in-the-loop integrated frameworks in the scientific discovery loop. These inefficiencies increase resource requirements, such as expensive computing infrastructure and significant human expert effort, and consequently slow down scientific discovery.

In this thesis, I introduce a collection of methods to address these three gaps in the AI-driven scientific discovery workflow: automatic physics model learning from partially annotated noisy video data, accelerated learning of partial differential equation (PDE) physics models, and an integrated AI-driven platform for rapid analysis of experimental video data. My research has led to the discovery of a new size fluctuation property of material defects exposed to high-temperature, high-irradiation environments such as the inside of nuclear reactors. Such discoveries are essential for designing the strong materials that are critical for energy applications.

To address the lack of precise modeling in physics learning tasks, I developed NeuraDiff, an end-to-end method for learning phase field physics models from noisy video data. In previous learning approaches involving multiple disjoint steps, errors in one step can propagate to another, affecting the accuracy of the learned physics models. Trial-and-error simulation methods for learning physics model parameters are inefficient, depend heavily on expert intuition, and may not yield reasonably accurate physics models even after many trial iterations. By encoding the physics model equations directly into the learning, the end-to-end NeuraDiff framework provides ~100% accurate tracking of material defects and yields correct physics model parameters.

To address the lack of efficient methods for PDE physics model learning, I developed Rapid-PDE and Reel. The key idea behind these methods is random-projection-based compression of the system change signals, which are sparse in either the value domain (Rapid-PDE) or both the value and frequency domains (Reel). Experiments show that PDE model training times can be reduced significantly using Rapid-PDE (50-70%) and Reel (70-98%).

To address the lack of human-in-the-loop integrated frameworks for high-volume experimental data analysis, I developed an integrated framework with an easy-to-use annotation tool. Our interactive AI-driven annotation tool can reduce video annotation times by 50-75% and enables material scientists to scale up the analysis of experimental videos. Our framework for analyzing experimental data has been deployed in the real world to scale up in-situ irradiation experiment video analysis, and it played a crucial role in the discovery of the size fluctuation of material defects under extreme heat and irradiation.
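The compression idea can be illustrated with a toy linear model: because the per-step change signal is sparse, a short Gaussian sketch of it retains enough information to fit the model parameters by least squares entirely in the compressed domain. The one-dimensional stencil model and all sizes below are illustrative assumptions, not the Rapid-PDE implementation:

```python
# Toy illustration of random-projection compression in the spirit of
# Rapid-PDE: a sparse system-change signal is sketched with a Gaussian
# matrix and the model parameters are fit in the compressed domain.
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 200                        # full grid size, sketch size

theta_true = np.array([0.7, -0.3])        # unknown stencil coefficients
u = np.zeros(n)
hot = rng.choice(n, 50, replace=False)
u[hot] = rng.normal(size=50)              # sparse state change on the grid
du = theta_true[0] * np.roll(u, 1) + theta_true[1] * np.roll(u, -1)  # also sparse

P = rng.normal(size=(k, n)) / np.sqrt(k)  # Johnson-Lindenstrauss random projection

# Fit theta from sketched signals only:  P @ du ~= A @ theta
y = P @ du
A = np.column_stack([P @ np.roll(u, 1), P @ np.roll(u, -1)])
theta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(theta_hat)                          # recovers [0.7, -0.3] despite 50x compression
```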
50

Multi-fidelity Machine Learning for Perovskite Band Gap Predictions

Panayotis Thalis Manganaris (16384500), 16 June 2023

A wide range of optoelectronic applications demand semiconductors optimized for purpose. My research focused on data-driven identification of ABX3 halide perovskite compositions for optimal photovoltaic absorption in solar cells. I trained machine learning models on previously reported datasets of halide perovskite band gaps based on first-principles computations performed at different fidelities. Using these models, I identified mixtures of candidate constituents at the A, B, or X sites of the perovskite supercell, exploiting the way mixed perovskite band gaps deviate from the linear interpolations predicted by Vegard's law of mixing, to obtain a selection of stable perovskites with band gaps in the ideal range of 1 to 2 eV for absorption of the visible light spectrum. These models predict the perovskite band gap using the composition and inherent elemental properties as descriptors, which enables accurate, high-fidelity prediction and screening of the much larger chemical space from which the data samples were drawn.

I used a recently published density functional theory (DFT) dataset of more than 1300 perovskite band gaps from four different levels of theory, together with an experimental perovskite band gap dataset of ~100 points, to train random forest regression (RFR), Gaussian process regression (GPR), and Sure Independence Screening and Sparsifying Operator (SISSO) regression models, with the data fidelity added as one-hot encoded features. I found that RFR yields the best model, with a band gap root mean square error of 0.12 eV on the total dataset and 0.15 eV on the experimental points. SISSO provided compound features and functions for direct prediction of the band gap, but its errors were larger than those of RFR and GPR. Additional insights gained from Pearson correlation and Shapley additive explanation (SHAP) analysis of the learned descriptors suggest that the RFR models performed best because of (a) their focus on identifying and capturing relevant feature interactions and (b) their flexibility in representing nonlinear relationships between such interactions and the band gap. The best model was deployed to predict the experimental band gap of 37,785 hypothetical compounds. Based on this, we identified 1,251 stable compounds with band gaps predicted to lie between 1 and 2 eV at experimental accuracy, successfully narrowing the candidates to about 3% of the screened compositions.
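A hedged sketch of the multi-fidelity training setup is shown below: composition-derived descriptors are augmented with a one-hot fidelity feature before fitting a random forest. The descriptor names, fidelity labels, and the tiny in-line dataset are placeholders, not the thesis data:

```python
# Sketch of multi-fidelity band gap regression with one-hot fidelity
# features; descriptors and data values are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "mean_electronegativity": [2.1, 2.3, 1.9, 2.0, 2.2, 2.4],
    "mean_ionic_radius":      [1.4, 1.2, 1.5, 1.3, 1.1, 1.6],
    "fidelity": ["PBE", "HSE", "PBE", "experiment", "HSE", "experiment"],
    "band_gap": [1.3, 1.8, 1.1, 1.6, 2.0, 1.4],     # eV
})
X = pd.get_dummies(df.drop(columns="band_gap"), columns=["fidelity"])  # one-hot fidelity
y = df["band_gap"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
rfr = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, rfr.predict(X_te)))
print(f"RMSE: {rmse:.2f} eV")

# To screen new compositions at the highest fidelity, set the one-hot
# columns to the "experiment" level before calling rfr.predict(...).
```

Encoding fidelity as a feature lets a single model pool the abundant low-fidelity DFT data with the scarce experimental points, then predict at experimental accuracy by fixing the fidelity columns at screening time.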
