  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Physics Informed Neural Networks for Engineering Systems

Sukirt (8828960) 13 May 2020 (has links)
This thesis explores the application of deep learning techniques to problems in fluid mechanics, with particular focus on physics informed neural networks. Physics informed neural networks leverage the information gathered over centuries in the form of physical laws mathematically represented in the form of partial differential equations to make up for the dearth of data associated with engineering and physical systems. To demonstrate the capability of physics informed neural networks, an inverse and a forward problem are considered. The inverse problem involves discovering a spatially varying concentration field from the observations of concentration of a passive scalar. A forward problem involving conjugate heat transfer is solved as well, where the boundary conditions on velocity and temperature are used to discover the velocity, pressure and temperature fields in the entire domain. The predictions of the physics informed neural networks are compared against simulated data generated using OpenFOAM.
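The generic recipe described in this abstract, a neural network fit to sparse data while a PDE residual computed by automatic differentiation is penalized in the loss, can be illustrated with a short sketch. The problem below is a hypothetical 1D steady advection-diffusion equation, not the thesis's conjugate heat transfer or passive-scalar setup, and all parameter values are placeholders.

```python
# Minimal PINN sketch (hypothetical 1D steady advection-diffusion example,
# not the exact problems solved in the thesis above).
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Sparse "measurements" (placeholder data) and collocation points for the PDE residual.
x_data = torch.tensor([[0.0], [1.0]])          # boundary observations
u_data = torch.tensor([[0.0], [0.0]])
x_col = torch.linspace(0, 1, 100).reshape(-1, 1).requires_grad_(True)

nu, c = 0.1, 1.0                               # assumed diffusivity and advection speed
for step in range(5000):
    opt.zero_grad()
    # Data-fit term
    loss_data = ((model(x_data) - u_data) ** 2).mean()
    # Physics term: residual of c*u_x - nu*u_xx - 1 = 0 at collocation points
    u = model(x_col)
    u_x = torch.autograd.grad(u, x_col, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x_col, torch.ones_like(u_x), create_graph=True)[0]
    loss_pde = ((c * u_x - nu * u_xx - 1.0) ** 2).mean()
    (loss_data + loss_pde).backward()
    opt.step()
```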
2

Physics-informed Neural Networks for Biopharma Applications

Cedergren, Linnéa January 2021 (has links)
Physics-Informed Neural Networks (PINNs) are hybrid models that incorporate differential equations into the training of neural networks, with the aim of bringing the best of both worlds. This project used a mathematical model describing a Continuous Stirred-Tank Reactor (CSTR) to test two possible applications of PINNs. The first type of PINN was trained to predict an unknown reaction rate law, based only on the differential equation and a time series of the reactor state. The resulting model was used inside a multi-step solver to simulate the system state over time. The results showed that the PINN could accurately model the behaviour of the missing physics, even for new initial conditions. However, the model suffered from extrapolation error when tested on a larger reactor with a much lower reaction rate. Comparisons between using a numerical derivative or automatic differentiation in the loss equation indicated that the latter had a higher robustness to noise, making it likely the best choice for real applications. A second type of PINN was trained to forecast the system state one step ahead based on previous states and other known model parameters. An ordinary feed-forward neural network with an equal architecture was used as a baseline. The second type of PINN did not outperform the baseline network. Further studies are needed to conclude if or when a physics-informed loss should be used in autoregressive applications.
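As a rough illustration of the first PINN type, and of placing the automatic-differentiation derivative in the loss, the sketch below trains a state network u(t) jointly with a small network for the unknown rate law r(C) so that du/dt matches a CSTR mass balance. The reactor parameters, observations, and network sizes are placeholder assumptions, not the values used in the thesis.

```python
# Sketch: state network u(t) plus learned rate law r(C), coupled through an
# assumed CSTR mass balance dC/dt = (F/V)*(C_in - C) - r(C). Placeholder data.
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # concentration vs time
r_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # unknown rate law r(C)
opt = torch.optim.Adam(list(u_net.parameters()) + list(r_net.parameters()), lr=1e-3)

t_obs = torch.linspace(0, 1, 20).reshape(-1, 1)      # measured time series (placeholder)
c_obs = torch.exp(-t_obs)                            # placeholder observations
t_col = torch.linspace(0, 1, 200).reshape(-1, 1).requires_grad_(True)
F, V, C_in = 1.0, 10.0, 2.0                          # assumed reactor parameters

for step in range(5000):
    opt.zero_grad()
    loss_data = ((u_net(t_obs) - c_obs) ** 2).mean()
    C = u_net(t_col)
    dCdt = torch.autograd.grad(C, t_col, torch.ones_like(C), create_graph=True)[0]
    rhs = (F / V) * (C_in - C) - r_net(C)            # mass balance with learned rate
    loss_phys = ((dCdt - rhs) ** 2).mean()           # autodiff derivative in the loss
    (loss_data + loss_phys).backward()
    opt.step()
```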
3

Solving Navier-Stokes equations in protoplanetary disk using physics-informed neural networks

Mao, Shunyuan 07 January 2022 (has links)
We show how physics-informed neural networks can be used to solve the compressible Navier-Stokes equations in protoplanetary disks. Young planets form in protoplanetary disks, but because of the limitations of current techniques, direct observations of them are challenging. Instead, existing methods infer the presence and properties of planets from the disk structures created by disk-planet interactions. Hydrodynamic and radiative transfer simulations play essential roles in this process. Currently, the lack of computer resources for these expensive simulations has become one of the field's main bottlenecks. To solve this problem, we explore the possibility of using physics-informed neural networks, a machine learning method that trains neural networks using physical laws, to substitute for the simulations. We identify three main bottlenecks that prevent physics-informed neural networks from achieving this goal, which we overcome by hard-constraining initial conditions, scaling outputs and balancing gradients. With these improvements, we reduce the relative L2 errors of predicted solutions by 97%–99% compared to vanilla PINNs when solving the compressible Navier-Stokes equations in protoplanetary disks. / Graduate / 2022-12-10
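One of the fixes named above, hard-constraining the initial condition, can be expressed as an output transform so the network satisfies the initial state exactly rather than being penalized for violating it, with output scaling applied as a multiplicative factor. The sketch below is a generic construction with a placeholder initial-condition function, not the thesis's disk model.

```python
# Sketch of hard-constraining an initial condition via an output transform.
# The IC function u0 and the scale factor are placeholders.
import torch
import torch.nn as nn

class HardICPINN(nn.Module):
    def __init__(self, u0, scale=1.0, hidden=64):
        super().__init__()
        self.u0 = u0            # callable giving the known initial condition u0(x)
        self.scale = scale      # output scaling factor, chosen per variable
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, x, t):
        raw = self.net(torch.cat([x, t], dim=1))
        # At t = 0 the network term vanishes, so u(x, 0) = u0(x) exactly.
        return self.u0(x) + t * self.scale * raw

model = HardICPINN(u0=lambda x: torch.sin(x), scale=0.1)
x = torch.rand(8, 1); t = torch.zeros(8, 1)
print(model(x, t) - torch.sin(x))   # zero: the initial condition holds by construction
```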
4

Integrating Machine Learning Into Process-Based Modeling to Predict Ammonia Losses From Stored Liquid Dairy Manure

Genedy, Rana Ahmed Kheir 16 June 2023 (has links)
Storing manure on dairy farms is essential for maximizing its fertilizer value, reducing management costs, and minimizing potential environmental pollution challenges. However, ammonia loss through volatilization during storage remains a challenge. Quantifying these losses is necessary to inform decision-making processes to improve manure management and to design ammonia mitigation strategies. In 2003, the National Research Council recommended using process-based models to estimate emissions of pollutants, such as ammonia, from animal feeding operations. While much progress has been made to meet this call, the accuracy of these models is still limited because of inadequate values of manure properties such as heat and mass transfer coefficients. Additionally, process-based models lack realistic estimations of manure temperature; they use ambient air temperature as a surrogate, which was found to underestimate atmospheric emissions during storage. This study uses the unique abilities of machine learning algorithms to address some of the challenges of process-based modeling. First, ammonia concentrations, manure temperature, and local meteorological factors were measured at three dairy farms with different manure management practices and storage types. These data were used to estimate the influence of manure characteristics and meteorological factors on the trend of ammonia emissions. Second, the data were subjected to four data-driven machine learning algorithms and a physics-informed neural network (PINN) to predict manure temperature. Finally, a deep-learning approach that combines process-based modeling and long short-term memory (LSTM) recurrent neural networks was introduced to estimate ammonia loss from dairy manure during storage. This method involves inverse problem-solving to estimate the heat and mass transfer coefficients for ammonia transport and emission from stored manure using the hyperparameter optimization tool Optuna. Results show that ammonia flux patterns mirrored manure temperature more closely than ambient air temperature, with wind speed and crust thickness significantly influencing ammonia emissions. The data-driven machine learning models used to estimate ammonia emissions had high predictive ability, but their generalization accuracy was poor. The PINN model, in contrast, had superior generalization accuracy, with R2 exceeding 0.70 during the testing phase, compared to -0.03 and 0.66 for the finite-element heat transfer model and the data-driven neural network, respectively. In addition, optimizing the process-based model parameters significantly improved performance. Finally, the physics-informed LSTM has the potential to replace conventional process-based models because of its computational efficiency and because it does not require extensive data collection. The outcomes of this study contribute to precision agriculture, specifically to designing suitable on-farm strategies to minimize nutrient loss and greenhouse gas emissions during manure storage periods. / Doctor of Philosophy / Dairy farming is critical for meeting the global demand for animal protein products; however, it generates a lot of manure that must be appropriately managed. Manure can only be applied to crop or pasture lands during growing seasons. Typically, manure is stored on farms until time permits for land application. During storage, microbial processes occur in the manure, releasing gases such as ammonia.
Ammonia emitted contributes to the degradation of ambient air quality, human and animal health problems, biodiversity loss, and soil health deterioration. Furthermore, releasing ammonia from stored manure reduces its nitrogen fertilizer value. Implementing control measures to mitigate ammonia emission is necessary to reduce nitrogen loss from stored manure. Deciding on and applying appropriate control measures requires knowledge of the rate of ammonia emission and when it occurs. Process-based models are a less expensive and more reliable method for estimating ammonia emissions from stored liquid dairy manure. A process-based model is a mathematical model that simulates the processes related to ammonia production and emission from stored manure. However, process-based models have limitations because they require estimates of manure properties, which vary depending on the manure management. Additionally, these models use air temperature instead of manure temperature, underestimating the ammonia lost during storage. Therefore, this study used machine learning algorithms to develop more accurate models for predicting manure temperature and estimating ammonia emissions. First, we collected manure temperature, ammonia emissions, and weather data from three dairy farms with different manure management practices and storage structures. We used these data to estimate the factors that affect ammonia emissions. The data were then used to develop four machine-learning models and one integrated machine-learning-based model to assess their ability to predict manure temperature. Finally, a different machine learning approach that combines process-based modeling and neural networks was used to directly estimate ammonia loss from dairy manure during storage. The results show that manure temperature is closely related to the amount of ammonia lost, and factors like wind speed and crust thickness also influence the amount of ammonia lost. Machine learning algorithms offer a more accurate way to predict manure temperature than traditional methods. Finally, combining machine learning and process-based modeling improved the ammonia emission estimates. This study contributes to precision agriculture by designing suitable on-farm strategies to minimize nutrient loss during manure storage periods. It provides valuable information for dairy farmers and policymakers on managing manure storage more effectively and sustainably.
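The coefficient-estimation step described above (inverse problem-solving with Optuna) can be sketched generically as a search over candidate heat and mass transfer coefficients that minimizes the misfit between a process-based model's predicted ammonia flux and measurements. The process model and data below are crude stand-ins, not the study's actual model or measurements.

```python
# Sketch: Optuna searching for transfer coefficients that fit measured fluxes.
# `process_model`, its parameters, and the "measurements" are placeholders.
import numpy as np
import optuna

t = np.linspace(0, 24, 48)                       # hours (placeholder)
flux_obs = 2.0 * np.exp(-t / 12.0)               # placeholder measured ammonia flux

def process_model(t, k_heat, k_mass):
    # Stand-in for the process-based emission model.
    return (k_mass * 10.0) * np.exp(-k_heat * t)

def objective(trial):
    k_heat = trial.suggest_float("k_heat", 1e-3, 1.0, log=True)
    k_mass = trial.suggest_float("k_mass", 1e-3, 1.0, log=True)
    pred = process_model(t, k_heat, k_mass)
    return float(np.mean((pred - flux_obs) ** 2))  # misfit to minimize

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```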
5

Physics-informed Machine Learning with Uncertainty Quantification

Daw, Arka 12 February 2024 (has links)
Physics Informed Machine Learning (PIML) has emerged as the forefront of research in scientific machine learning, with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge, often available in the form of physics supervision. Uncertainty quantification (UQ) is an important goal in many scientific use-cases, where obtaining reliable ML model predictions and assessing the potential risks associated with them is crucial. In this thesis, we propose novel methodologies in three key areas for improving uncertainty quantification for PIML. First, we propose to explicitly infuse the physics prior, in the form of monotonicity constraints, through architectural modifications in neural networks for quantifying uncertainty. Second, we demonstrate a more general framework for quantifying uncertainty with PIML that is compatible with generic forms of physics supervision such as PDEs and closed-form equations. Lastly, we study the limitations of physics-based loss in the context of Physics-informed Neural Networks (PINNs), and develop an efficient sampling strategy to mitigate the failure modes. / Doctor of Philosophy / Owing to the success of deep learning in computer vision and natural language processing, there is growing interest in using deep learning in scientific applications. In scientific applications, knowledge is available in the form of closed-form equations, partial differential equations, etc., along with labeled data. My work focuses on developing deep learning methods that integrate these forms of supervision. In particular, my work focuses on building methods that can quantify uncertainty in deep learning models, which is an important goal for high-stakes applications.
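An architectural monotonicity constraint of the kind mentioned in the first contribution can be built by keeping all weights along the constrained input path non-negative, so that with monotone activations the output is non-decreasing in that input. The sketch below is a generic construction of that idea, not the thesis's specific architecture.

```python
# Sketch: a linear layer whose weights are forced positive via softplus, so a
# network of such layers with monotone activations is monotone in its inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        # Softplus keeps every effective weight positive, so with monotone
        # activations the output is non-decreasing in each input.
        return F.linear(x, F.softplus(self.raw_weight), self.bias)

net = nn.Sequential(MonotoneLinear(1, 16), nn.Tanh(), MonotoneLinear(16, 1))
x = torch.linspace(-1, 1, 5).reshape(-1, 1)
print(net(x).flatten())   # values are non-decreasing in x
```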
6

Quantifying implicit and explicit constraints on physics-informed neural processes

Haoyang Zheng (10141679) 30 April 2021 (has links)
Due to strong interactions among the various phases and between the phases and fluid motions, multiphase flows (MPFs) are so complex that considerable effort is required to predict their sequential patterns of phases and motions. The present work takes the physical constraints inherent in MPFs and enforces them in a physics-informed neural network (PINN) model, either explicitly or implicitly, depending on the type of constraint. To predict the unobserved order parameters (OPs), which locate the phases, in future steps, conditional neural processes (CNPs) combined with long short-term memory (LSTM), referred to as CNP-LSTM, are applied to quickly infer the dynamics of the phases after encoding only a few observations. After that, the multiphase consistent and conservative boundedness mapping (MCBOM) algorithm is applied to correct the predicted OPs from the CNP-LSTM so that mass conservation, the summation of the volume fractions of the phases to unity, the consistency of reduction, and the boundedness of the OPs are strictly satisfied. Next, the density of the fluid mixture is computed from the corrected OPs. The observed velocity and density of the fluid mixture are then encoded in a physics-informed conditional neural process and long short-term memory model (PICNP-LSTM), where the constraint of momentum conservation is included in the loss function. Finally, the unobserved velocity in future steps is predicted from the PICNP-LSTM. The proposed physics-informed neural processes (PINPs) model (CNP-LSTM-MCBOM-PICNP-LSTM) for MPFs avoids unphysical behaviors of the OPs, accelerates convergence, and requires fewer data. The proposed model successfully predicts several canonical MPF problems, i.e., the horizontal shear layer (HSL) and dam break (DB) problems, and its performance is validated.
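The boundedness and unity-sum requirements on the order parameters mentioned above can be illustrated with a much simplified correction step: clipping predicted volume fractions to [0, 1] and renormalizing so each cell sums to one. The actual MCBOM algorithm enforces additional consistency and conservation properties; the sketch below only shows the basic idea.

```python
# Simplified correction of predicted volume fractions: boundedness plus
# summation to unity. Not the full MCBOM algorithm.
import numpy as np

def correct_order_parameters(vol_frac, eps=1e-12):
    """vol_frac: array of shape (n_phases, n_cells) of predicted volume fractions."""
    vf = np.clip(vol_frac, 0.0, 1.0)                 # boundedness
    total = vf.sum(axis=0, keepdims=True)
    return vf / np.maximum(total, eps)               # summation to unity

pred = np.array([[1.05, 0.40, -0.02],
                 [0.10, 0.55,  0.90]])
print(correct_order_parameters(pred).sum(axis=0))    # [1. 1. 1.]
```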
7

Solving Partial Differential Equations With Neural Networks

Karlsson Faronius, Håkan January 2023 (has links)
In this thesis three different approaches for solving partial differential equations with neural networks will be explored, namely Physics-Informed Neural Networks, Fourier Neural Operators and the Deep Ritz method. Physics-Informed Neural Networks and the Deep Ritz method are unsupervised machine learning methods, while the Fourier Neural Operator is a supervised method. The Physics-Informed Neural Network is implemented on Burgers' equation, while the Fourier Neural Operator is implemented on Poisson's equation and Darcy's law, and the Deep Ritz method is applied to several variational problems. The Physics-Informed Neural Network is also used for the inverse problem: given some data on a solution, the neural network is trained to determine the underlying partial differential equation whose solution is given by the data. Apart from this, importance sampling is also implemented to accelerate the training of physics-informed neural networks. The contributions of this thesis are to implement a slightly different form of importance sampling in the physics-informed neural network, to show that the Deep Ritz method can be used for a larger class of variational problems than the original publication suggests, and to apply the Fourier Neural Operator to an application in geophysics involving Darcy's law where the coefficient factor is given by exponentiated two-dimensional pink noise.
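Residual-based importance sampling for PINN training, in the general spirit of the acceleration mentioned above (the thesis's exact variant may differ), can be sketched as drawing collocation points with probability proportional to the magnitude of the current PDE residual, so training concentrates where the equation is violated most.

```python
# Sketch: sample a training batch of collocation points in proportion to the
# current PDE residual magnitude. `residual_fn` is any callable returning the
# PDE residual at candidate points; all names here are illustrative.
import torch

def sample_collocation(residual_fn, candidates, n_sample):
    """Draw n_sample points from `candidates` with probability ~ |residual|."""
    r = residual_fn(candidates).detach().abs().flatten()
    probs = r / (r.sum() + 1e-12)
    idx = torch.multinomial(probs, n_sample, replacement=True)
    return candidates[idx].detach()

# Usage: candidates = torch.rand(10000, 2); batch = sample_collocation(residual, candidates, 256)
```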
8

Uncertainty Quantification Using Simulation-based and Simulation-free methods with Active Learning Approaches

Zhang, Chi January 2022 (has links)
No description available.
9

Predicting Digital Porous Media Properties Using Machine Learning Methods

Elmorsy, Mohamed January 2023 (has links)
Subsurface porous media, like aquifers, petroleum reservoirs, and geothermal systems, are vital for natural resources and environmental management. Extensive research has been conducted to understand flow and transport in these media, addressing challenges in hydrocarbon extraction, carbon storage and waste management. Classifying the type of porous media (e.g., sandstone, carbonate) is often the first step in the rock characterization process, and it provides critical information regarding the physical properties of the porous media. Therefore, we utilize multivariate statistical methods with discriminant analysis to categorize porous media samples; this approach proved efficient, achieving excellent classification accuracy on testing datasets, and served as a surrogate tool for studying key porous media characteristics. While recent advances in three-dimensional (3D) imaging of core samples have enabled digital subsurface characterization, the exorbitant computational cost associated with direct numerical simulation in 3D remains a persistent challenge. In contrast, machine learning (ML) models are much more efficient, though their use in subsurface characterization is still in its infancy. Therefore, we introduce a novel 3D convolutional neural network (CNN) for end-to-end prediction of permeability. By increasing dataset size and diversity, and optimizing the network architecture, our model surpasses the accuracy of existing 3D CNN models for permeability prediction. It demonstrates excellent generalizability, accurately predicting permeability in previously unseen samples. However, despite the efficiency of the developed 3D CNN model for accurate and fast permeability prediction, its utility remains limited to small subdomains of the digital rock samples. Therefore, we introduce an upscaling technique using a new analytical solution to calculate effective permeability in a 3D digital rock composed of 2 × 2 × 2 anisotropic cells. By incorporating this solution into physics-informed neural network (PINN) models, we achieve highly accurate results. Even when upscaling previously unseen samples at multiple levels, the PINN with the physics-informed module maintains excellent accuracy. This advancement enhances the capability of ML models, like the 3D CNN, for efficient and accurate digital rock analysis at the core scale. After successfully applying ML models to permeability prediction, we extend their application to another important parameter in subsurface engineering projects: effective thermal conductivity, which is a key parameter in engineering projects like radioactive waste repositories, geothermal energy production, and underground energy storage. To address the need for large training data and processing power in ML models, we propose a novel framework based on transfer learning. This approach allows prior knowledge from previous applications to be transferred, resulting in faster and more efficient implementation of new relevant applications. We introduce CNN models trained on various porous media samples that leverage transfer learning to predict porous media sample thermal conductivity accurately. Our approach reduces training time, processing power, and data requirements, enabling effective prediction and analysis of porous media properties such as permeability and thermal conductivity. It also facilitates the application of ML to other properties, improving efficiency and accuracy. / Thesis / Doctor of Philosophy (PhD)
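A minimal sketch of a 3D CNN regressor of the kind described above, mapping a binarized pore-space volume to a single permeability value, is shown below. The layer sizes, input resolution, and output target are illustrative assumptions, not the thesis's architecture.

```python
# Sketch: 3D CNN mapping a (1, D, H, W) binary pore-space image to a scalar
# (log-)permeability. Architecture details are placeholders.
import torch
import torch.nn as nn

class Perm3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):                       # x: (batch, 1, D, H, W)
        return self.head(self.features(x))      # predicted (log-)permeability

model = Perm3DCNN()
print(model(torch.rand(2, 1, 64, 64, 64)).shape)   # torch.Size([2, 1])
```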
10

PHYSICS-INFORMED NEURAL NETWORK SOLUTION OF POINT KINETICS EQUATIONS FOR PUR-1 DIGITAL TWIN

Konstantinos Prantikos (14196773) 01 December 2022 (has links)
A digital twin (DT), which keeps track of nuclear reactor history to provide real-time predictions, has been recently proposed for nuclear reactor monitoring. A digital twin can be implemented using either a differential equations-based physics model or a data-driven machine learning model. The principal challenge in a physics model-based DT consists of achieving sufficient model fidelity to represent a complex experimental system, while the main challenge in a data-driven DT appears in the extensive training requirements and potential lack of predictive ability.

In this thesis, we investigate the performance of a hybrid approach based on physics-informed neural networks (PINNs), which encode fundamental physical laws into the loss function of the neural network. In this way, PINNs establish theoretical constraints and biases to supplement measurement data and provide a solution to several limitations of purely data-driven machine learning (ML) models. We develop a PINN model to solve the point kinetics equations (PKEs), which are time-dependent, stiff, nonlinear ordinary differential equations that constitute a nuclear reactor reduced-order model under the approximation of ignoring the spatial dependence of the neutron flux. PKEs portray the kinetic behavior of the system, and this kind of approach is the basis for most analyses of reactor systems, except in cases where flux shapes are known to vary with time. This system describes nuclear parameters such as the neutron density concentration, the delayed neutron precursor density concentration and the reactivity. Both the neutron density and the delayed neutron precursor density concentrations are vital parameters for the safety and transient behavior of the reactor power.

The PINN model solution of the PKEs is developed to monitor a start-up transient of the Purdue University Reactor Number One (PUR-1), using experimental parameters for the reactivity feedback schedule and the neutron source. The modeled facility, PUR-1, is a pool-type small research reactor located in West Lafayette, Indiana. It is an all-digital light water reactor (LWR) submerged in a deep-water pool and has a power output of 10 kW. The results demonstrate strong agreement between the PINN solution and a finite difference numerical solution of the PKEs. We investigate the PINN's performance in both data interpolation and extrapolation.

The findings of this thesis research indicate that the PINN model achieved its highest performance and lowest errors in data interpolation. For extrapolation, three different test cases were considered: the first where the extrapolation is performed over a five-second interval, the second over a 10-second interval, and the third over a 15-second interval. The extrapolation errors are comparable to those of the interpolation predictions, and extrapolation accuracy decreases with increasing time interval.
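The PKE residuals can be enforced in a PINN loss much like any other ODE system. The sketch below uses a single delayed neutron precursor group with placeholder kinetics parameters and a constant reactivity insertion; it is not the PUR-1 model or its experimental schedule.

```python
# Sketch: PINN loss for point kinetics with one delayed precursor group.
# beta, Lambda, lam, rho(t), and the initial conditions are assumed placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 2))           # outputs: n(t), C(t)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

beta, Lambda, lam = 0.0065, 1e-4, 0.08          # assumed kinetics parameters
rho = lambda t: 0.001 * torch.ones_like(t)      # assumed constant reactivity insertion
t = torch.linspace(0, 10, 500).reshape(-1, 1).requires_grad_(True)

for step in range(5000):
    opt.zero_grad()
    out = net(t)
    n, C = out[:, :1], out[:, 1:]
    dn = torch.autograd.grad(n, t, torch.ones_like(n), create_graph=True)[0]
    dC = torch.autograd.grad(C, t, torch.ones_like(C), create_graph=True)[0]
    res_n = dn - ((rho(t) - beta) / Lambda * n + lam * C)      # neutron density balance
    res_C = dC - (beta / Lambda * n - lam * C)                 # precursor balance
    ic = (n[0] - 1.0) ** 2 + (C[0] - beta / (Lambda * lam)) ** 2  # assumed equilibrium ICs
    loss = (res_n ** 2).mean() + (res_C ** 2).mean() + ic.mean()
    loss.backward()
    opt.step()
```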
