About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Physics Informed Neural Networks for Engineering Systems

Sukirt (8828960) 13 May 2020 (has links)
This thesis explores the application of deep learning techniques to problems in fluid mechanics, with particular focus on physics-informed neural networks. Physics-informed neural networks leverage the information gathered over centuries in the form of physical laws, mathematically represented as partial differential equations, to make up for the dearth of data associated with engineering and physical systems. To demonstrate the capability of physics-informed neural networks, an inverse and a forward problem are considered. The inverse problem involves discovering a spatially varying concentration field from observations of the concentration of a passive scalar. A forward problem involving conjugate heat transfer is solved as well, where the boundary conditions on velocity and temperature are used to discover the velocity, pressure and temperature fields in the entire domain. The predictions of the physics-informed neural networks are compared against simulated data generated using OpenFOAM.
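As a concrete illustration of the kind of inverse problem this abstract describes, the sketch below sets up a PINN that recovers a spatially varying coefficient from sparse scalar observations. The network sizes, the stand-in steady-diffusion equation, and all data here are placeholder assumptions for illustration only, not the thesis code.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers[:-1])  # drop the final activation

c_net = mlp([2, 64, 64, 1])   # approximates the scalar field c(x, y)
d_net = mlp([2, 64, 64, 1])   # approximates the unknown coefficient D(x, y)

def pde_residual(xy):
    # Residual of the stand-in equation div(D * grad c) = 0, via automatic differentiation.
    xy = xy.requires_grad_(True)
    c = c_net(xy)
    D = torch.nn.functional.softplus(d_net(xy))        # keep the coefficient positive
    grad_c = torch.autograd.grad(c.sum(), xy, create_graph=True)[0]
    flux = D * grad_c
    div = torch.zeros_like(c)
    for i in range(2):                                 # divergence of the flux
        div = div + torch.autograd.grad(flux[:, i].sum(), xy, create_graph=True)[0][:, i:i + 1]
    return div

xy_obs, c_obs = torch.rand(200, 2), torch.rand(200, 1)  # placeholder observations
xy_col = torch.rand(2000, 2)                             # collocation points
opt = torch.optim.Adam(list(c_net.parameters()) + list(d_net.parameters()), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = ((c_net(xy_obs) - c_obs) ** 2).mean() + (pde_residual(xy_col) ** 2).mean()
    loss.backward()
    opt.step()
```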
2

Physics-informed Neural Networks for Biopharma Applications

Cedergren, Linnéa January 2021 (has links)
Physics-Informed Neural Networks (PINNs) are hybrid models that incorporate differential equations into the training of neural networks, with the aim of combining the best of both worlds. This project used a mathematical model describing a Continuous Stirred-Tank Reactor (CSTR) to test two possible applications of PINNs. The first type of PINN was trained to predict an unknown reaction rate law, based only on the differential equation and a time series of the reactor state. The resulting model was used inside a multi-step solver to simulate the system state over time. The results showed that the PINN could accurately model the behaviour of the missing physics, even for new initial conditions. However, the model suffered from extrapolation error when tested on a larger reactor with a much lower reaction rate. Comparisons between using a numerical derivative and automatic differentiation in the loss equation indicated that the latter was more robust to noise and is thus likely the better choice for real applications. A second type of PINN was trained to forecast the system state one step ahead based on previous states and other known model parameters. An ordinary feed-forward neural network with an identical architecture was used as a baseline. The second type of PINN did not outperform the baseline network. Further studies are needed to conclude if or when a physics-informed loss should be used in autoregressive applications.
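A minimal sketch of the first application described above, under assumed CSTR parameters (q_over_V, C_in) and a placeholder time series: one network represents the reactor state and a second network represents the unknown reaction rate law, with the ODE residual evaluated by automatic differentiation as the abstract recommends.

```python
import torch
import torch.nn as nn

q_over_V, C_in = 0.1, 1.0                      # assumed known operating parameters

state_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
rate_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

t_obs = torch.linspace(0.0, 10.0, 50).reshape(-1, 1)
C_obs = torch.exp(-0.3 * t_obs)                # placeholder reactor-state measurements

def ode_residual(t):
    # dC/dt - [(q/V)(C_in - C) - r(C)], with dC/dt from automatic differentiation.
    t = t.requires_grad_(True)
    C = state_net(t)
    dCdt = torch.autograd.grad(C.sum(), t, create_graph=True)[0]
    return dCdt - (q_over_V * (C_in - C) - rate_net(C))

opt = torch.optim.Adam(list(state_net.parameters()) + list(rate_net.parameters()), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    data_loss = ((state_net(t_obs) - C_obs) ** 2).mean()
    phys_loss = (ode_residual(torch.rand(200, 1) * 10.0) ** 2).mean()
    (data_loss + phys_loss).backward()
    opt.step()

# After training, rate_net is the learned rate law; it can be handed to a
# conventional multi-step ODE solver to simulate the reactor from new initial conditions.
```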
3

Solving Navier-Stokes equations in protoplanetary disk using physics-informed neural networks

Mao, Shunyuan 07 January 2022 (has links)
We show how physics-informed neural networks can be used to solve the compressible Navier-Stokes equations in protoplanetary disks. Although young planets form in protoplanetary disks, direct observations of them are challenging because of the limitations of current techniques. Instead, existing methods infer the presence and properties of planets from the disk structures created by disk-planet interactions. Hydrodynamic and radiative transfer simulations play essential roles in this process. Currently, the lack of computing resources for these expensive simulations has become one of the field's main bottlenecks. To address this problem, we explore the possibility of using physics-informed neural networks, a machine learning method that trains neural networks using physical laws, to substitute for the simulations. We identify three main bottlenecks that prevent physics-informed neural networks from achieving this goal, which we overcome by hard-constraining initial conditions, scaling outputs, and balancing gradients. With these improvements, we reduce the relative L2 errors of predicted solutions by 97% to 99% compared with vanilla PINNs when solving the compressible Navier-Stokes equations in protoplanetary disks. / Graduate / 2022-12-10
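The sketch below illustrates two of the three fixes named in this abstract, hard-constrained initial conditions and gradient balancing, in a generic one-dimensional setting. The initial condition u0, the output scale, and the weighting rule are illustrative assumptions; they are not the thesis implementation for the compressible Navier-Stokes equations.

```python
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
scale = 10.0                                   # assumed magnitude of the solution (output scaling)

def u0(x):                                     # assumed initial condition
    return torch.sin(math.pi * x)

def u(x, t):
    # Output transform: u(x, 0) = u0(x) exactly for any network weights, so no
    # initial-condition loss term is needed; the raw network output is rescaled.
    return u0(x) + t * scale * net(torch.cat([x, t], dim=1))

def balance_weight(loss_a, loss_b, params):
    # Weight loss_b so its gradient norm matches that of loss_a.
    grads = lambda loss: torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    norm = lambda gs: torch.sqrt(sum((g ** 2).sum() for g in gs if g is not None))
    return (norm(grads(loss_a)) / (norm(grads(loss_b)) + 1e-12)).detach()

# Toy illustration of the weighting (placeholder losses, not a real PDE residual):
x, t = torch.rand(128, 1), torch.rand(128, 1)
loss_pde = (u(x, t) ** 2).mean()
loss_bc = (u(torch.zeros_like(x), t) ** 2).mean()
w = balance_weight(loss_pde, loss_bc, list(net.parameters()))
total_loss = loss_pde + w * loss_bc
```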
4

Integrating Machine Learning Into Process-Based Modeling to Predict Ammonia Losses From Stored Liquid Dairy Manure

Genedy, Rana Ahmed Kheir 16 June 2023 (has links)
Storing manure on dairy farms is essential for maximizing its fertilizer value, reducing management costs, and minimizing potential environmental pollution challenges. However, ammonia loss through volatilization during storage remains a challenge. Quantifying these losses is necessary to inform decision-making processes to improve manure management and to design ammonia mitigation strategies. In 2003, the National Research Council recommended using process-based models to estimate emissions of pollutants, such as ammonia, from animal feeding operations. While much progress has been made to meet this call, model accuracy is still limited by inadequate values of manure properties such as heat and mass transfer coefficients. Additionally, process-based models lack realistic estimations of manure temperature; they use ambient air temperature as a surrogate, which was found to underestimate the atmospheric emissions during storage. This study uses the unique abilities of machine learning algorithms to address some of the challenges of process-based modeling. Firstly, ammonia concentrations, manure temperature, and local meteorological factors were measured at three dairy farms with different manure management practices and storage types. These data were used to estimate the influence of manure characteristics and meteorological factors on the trend of ammonia emissions. Secondly, the data were subjected to four data-driven machine learning algorithms and a physics-informed neural network (PINN) to predict manure temperature. Finally, a deep-learning approach that combines process-based modeling and recurrent neural networks (LSTM) was introduced to estimate ammonia loss from dairy manure during storage. This method involves inverse problem-solving to estimate the heat and mass transfer coefficients for ammonia transport and emission from stored manure, using the hyperparameter optimization tool Optuna. Results show that ammonia flux patterns mirrored manure temperature more closely than ambient air temperature, with wind speed and crust thickness significantly influencing ammonia emissions. The data-driven machine learning models used to estimate the ammonia emissions had high predictive ability, but their generalization accuracy was poor. In contrast, the PINN model had superior generalization accuracy, with R2 during the testing phase exceeding 0.70, in contrast to -0.03 and 0.66 for the finite-element heat transfer model and the data-driven neural network, respectively. In addition, optimizing the process-based model parameters significantly improved performance. Finally, the physics-informed LSTM has the potential to replace conventional process-based models because it is computationally efficient and does not require extensive data collection. The outcomes of this study contribute to precision agriculture, specifically to designing suitable on-farm strategies to minimize nutrient loss and greenhouse gas emissions during manure storage periods. / Doctor of Philosophy / Dairy farming is critical for meeting the global demand for animal protein products; however, it generates a lot of manure that must be appropriately managed. Manure can only be applied to crop or pasture lands during growing seasons. Typically, manure is stored on farms until time permits for land application. During storage, microbial processes occur in the manure, releasing gases such as ammonia.
Emitted ammonia contributes to the degradation of ambient air quality, human and animal health problems, biodiversity loss, and soil health deterioration. Furthermore, releasing ammonia from stored manure reduces its nitrogen fertilizer value. Implementing control measures to mitigate ammonia emission is necessary to reduce nitrogen loss from stored manure. Deciding on and applying appropriate control measures requires knowledge of the rate of ammonia emission and when it occurs. Process-based models are a less expensive and more reliable method for estimating ammonia emissions from stored liquid dairy manure. A process-based model is a mathematical model that simulates processes related to ammonia production and emission from stored manure. However, process-based models have limitations because they require estimates of manure properties, which vary depending on the manure management. Additionally, these models use air temperature instead of manure temperature, underestimating the ammonia lost during storage. Therefore, this study used machine learning algorithms to develop more accurate models for predicting manure temperature and estimating ammonia emissions. First, we collected manure temperature, ammonia emissions, and weather data from three dairy farms with different manure management practices and storage structures. We used these data to estimate the factors that affect ammonia emissions. The data were then used to develop four machine-learning models and one integrated machine-learning-based model to assess their ability to predict manure temperature. Finally, a different machine learning approach that combines process-based modeling and neural networks was used to directly estimate ammonia loss from dairy manure during storage. The results show that manure temperature is closely related to the amount of ammonia lost, and that factors like wind speed and crust thickness also influence these losses. Machine learning algorithms offer a more accurate way to predict manure temperature than traditional methods. Ultimately, combining machine learning and process-based modeling improved the ammonia emission estimates. This study contributes to precision agriculture by designing suitable on-farm strategies to minimize nutrient loss during manure storage periods. It provides valuable information for dairy farmers and policymakers on managing manure storage more effectively and sustainably.
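To make the inverse-problem step concrete, the sketch below shows how Optuna can be used to fit transfer coefficients of a toy lumped heat balance to measured manure temperature. The stand-in model, parameter names, and data are placeholder assumptions; the thesis couples Optuna to a far more detailed process-based model.

```python
import numpy as np
import optuna

t = np.linspace(0, 72, 200)                             # hours
T_air = 20.0 + 5.0 * np.sin(2 * np.pi * t / 24)          # placeholder air temperature
T_meas = 22.0 + 3.0 * np.sin(2 * np.pi * (t - 3) / 24)   # placeholder manure temperature measurements

def simulate_manure_temperature(h, k):
    # Toy lumped heat balance: dT/dt = h * (T_air - T) - k * (T - T_ground), explicit Euler.
    T = np.empty_like(t)
    T[0] = T_meas[0]
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        T[i] = T[i - 1] + dt * (h * (T_air[i - 1] - T[i - 1]) - k * (T[i - 1] - 10.0))
    return T

def objective(trial):
    h = trial.suggest_float("heat_transfer_coeff", 1e-3, 1.0, log=True)
    k = trial.suggest_float("ground_loss_coeff", 1e-4, 0.5, log=True)
    return float(np.mean((simulate_manure_temperature(h, k) - T_meas) ** 2))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=200)
print(study.best_params)
```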
5

Physics-informed Machine Learning for Digital Twins of Metal Additive Manufacturing

Gnanasambandam, Raghav 07 May 2024 (has links)
Metal additive manufacturing (AM) is an emerging technology for producing parts with virtually no constraint on the geometry. AM builds a part by depositing materials in a layer-by-layer fashion. Despite the benefits in several critical applications, quality issues are one of the primary concerns for the widespread adoption of metal AM. Addressing these issues starts with a better understanding of the underlying physics and includes monitoring and controlling the process in a real-world manufacturing environment. Digital Twins (DTs) are virtual representations of physical systems that enable fast and accurate decision-making. DTs rely on Artificial Intelligence (AI) to process complex information from multiple sources in a manufacturing system at multiple levels. This information typically comes from partially known process physics, in-situ sensor data, and ex-situ quality measurements for a metal AM process. Most current AI models cannot handle ill-structured information from metal AM. Thus, this work proposes three novel machine-learning methods for improving the quality of metal AM processes. These methods enable DTs to control quality in several processes, including laser powder bed fusion (LPBF) and additive friction stir deposition (AFSD). The three proposed methods are as follows:

1. Process improvement requires mapping the process parameters with ex-situ quality measurements. These mappings often tend to be non-stationary, with limited experimental data. This work utilizes a novel Deep Gaussian Process-based Bayesian optimization (DGP-SI-BO) method for sequential process design. A DGP can model non-stationarity better than a traditional Gaussian Process (GP), but it is challenging for BO. The proposed DGP-SI-BO provides a bagging procedure for the acquisition function with a DGP surrogate model inferred via Stochastic Imputation (SI). For a fixed time budget, the proposed method gives 10% better quality for the LPBF process than the widely used BO method while being three times faster than the state-of-the-art method.

2. For metal AM, the process physics information is usually in the form of Partial Differential Equations (PDEs). Though the PDEs, along with in-situ data, can be handled through Physics-informed Neural Networks (PINNs), the activation function in NNs is traditionally not designed to handle multi-scale PDEs. This work proposes a novel activation function, the Self-scalable tanh (Stan) function, for PINNs. The proposed activation function modifies the traditional tanh function. The Stan function is smooth, non-saturating, and has a trainable parameter. It allows an easy flow of gradients and enables systematic scaling of the input-output mapping during training. Apart from solving the heat transfer equations for LPBF and AFSD, this work provides applications in areas including quantum physics and solid and fluid mechanics. The Stan function also accelerates the notoriously hard and ill-posed inverse discovery of process physics.

3. PDE-based simulations typically need to be much faster for in-situ process control. This work proposes to use a Fourier Neural Operator (FNO) for instantaneous predictions (a 1000-fold speed-up) of quality in metal AM. The FNO is a data-driven method that maps the process parameters to a high-dimensional quality tensor (like the thermal distribution in LPBF). Training the FNO with simulated data from the PINN ensures a quick response to alter the course of the manufacturing process. Once trained, a DT can readily deploy the model for real-time process monitoring.
The proposed methods combine complex information to provide reliable machine-learning models and improve understanding of metal AM processes. Though these models can be used independently, they complement each other to build DTs and achieve quality assurance in metal AM. / Doctor of Philosophy / Metal 3D printing, technically known as metal additive manufacturing (AM), is an emerging technology for making virtually any physical part with a click of a button. For instance, one of the most common AM processes, Laser Powder Bed Fusion (L-PBF), melts metal powder using a laser to build any desired shape. Despite its attractiveness, the quality of the built part is often not satisfactory for its intended use. For example, a metal plate built for a fractured bone may not adhere to the required dimensions. Improving the quality of metal AM parts starts with a better understanding of the underlying mechanisms at a fine length scale (the size of the powder or even smaller). Collecting data during the process and leveraging the known physics can help adjust the AM process to improve quality. Digital Twins (DTs) are exactly suited for this task, as they combine the process physics and the data obtained from sensors on metal AM machines to inform an AM machine on process settings and adjustments. This work develops three specific methods to utilize the known information from metal AM to improve the quality of the parts built by metal AM machines. These methods combine different types of known information to alter the process settings for metal AM machines so that they produce high-quality parts.
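A short sketch of a self-scalable tanh (Stan) activation as it might be dropped into a PINN backbone. The exact functional form used here, tanh(x) + beta * x * tanh(x) with one trainable beta per neuron, is an assumption based on the description in the abstract (smooth, non-saturating, one trainable parameter); consult the thesis for the precise definition.

```python
import torch
import torch.nn as nn

class Stan(nn.Module):
    """Self-scalable tanh: tanh(x) + beta * x * tanh(x), beta trainable per neuron (assumed form)."""
    def __init__(self, width: int):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(width))

    def forward(self, x):
        return torch.tanh(x) + self.beta * x * torch.tanh(x)

# Drop-in use inside a PINN backbone:
pinn = nn.Sequential(
    nn.Linear(2, 64), Stan(64),
    nn.Linear(64, 64), Stan(64),
    nn.Linear(64, 1),
)
print(pinn(torch.rand(8, 2)).shape)   # torch.Size([8, 1])
```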
6

Physics-informed Machine Learning with Uncertainty Quantification

Daw, Arka 12 February 2024 (has links)
Physics Informed Machine Learning (PIML) has emerged as the forefront of research in scientific machine learning, with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge, often available in the form of physics supervision. Uncertainty quantification (UQ) is an important goal in many scientific use cases, where obtaining reliable ML model predictions and assessing the potential risks associated with them is crucial. In this thesis, we propose novel methodologies in three key areas for improving uncertainty quantification for PIML. First, we propose to explicitly infuse the physics prior in the form of monotonicity constraints through architectural modifications in neural networks for quantifying uncertainty. Second, we demonstrate a more general framework for quantifying uncertainty with PIML that is compatible with generic forms of physics supervision such as PDEs and closed-form equations. Lastly, we study the limitations of physics-based loss in the context of Physics-informed Neural Networks (PINNs), and develop an efficient sampling strategy to mitigate the failure modes. / Doctor of Philosophy / Owing to the success of deep learning in computer vision and natural language processing, there is growing interest in using deep learning in scientific applications. In scientific applications, knowledge is available in the form of closed-form equations, partial differential equations, and the like, along with labeled data. My work focuses on developing deep learning methods that integrate these forms of supervision, and in particular on building methods that can quantify uncertainty in deep learning models, which is an important goal for high-stakes applications.
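As a generic illustration of infusing a monotonicity prior through the architecture, the sketch below constrains a small network to be non-decreasing in its input by passing its weights through a softplus. This is one standard construction, not necessarily the architectural modification proposed in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneLinear(nn.Module):
    """Linear layer whose effective weights are forced positive via softplus."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return F.linear(x, F.softplus(self.raw_weight), self.bias)

# Positive weights combined with monotone activations give a monotone network.
monotone_net = nn.Sequential(
    MonotoneLinear(1, 32), nn.Tanh(),
    MonotoneLinear(32, 1),
)

x = torch.linspace(-2, 2, 100).reshape(-1, 1)
y = monotone_net(x)
print(torch.all(y[1:] >= y[:-1]))   # True: output is non-decreasing in x
```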
7

Quantifying implicit and explicit constraints on physics-informed neural processes

Haoyang Zheng (10141679) 30 April 2021 (has links)
Due to strong interactions among various phases and between the phases and fluid motions, multiphase flows (MPFs) are so complex that considerable effort is required to predict their sequential patterns of phases and motions. The present work applies the physical constraints inherent in MPFs and enforces them in a physics-informed neural network (PINN) model either explicitly or implicitly, depending on the type of constraint. To predict the unobserved order parameters (OPs), which locate the phases, in future steps, conditional neural processes (CNPs) with long short-term memory (combined as CNP-LSTM) are applied to quickly infer the dynamics of the phases after encoding only a few observations. After that, the multiphase consistent and conservative boundedness mapping (MCBOM) algorithm is implemented to correct the predicted OPs from the CNP-LSTM so that mass conservation, the summation of the volume fractions of the phases to unity, the consistency of reduction, and the boundedness of the OPs are strictly satisfied. Next, the density of the fluid mixture is computed from the corrected OPs. The observed velocity and the density of the fluid mixture are then encoded in a physics-informed conditional neural process with long short-term memory (PICNP-LSTM), where the constraint of momentum conservation is included in the loss function. Finally, the unobserved velocity in future steps is predicted from the PICNP-LSTM. The proposed physics-informed neural processes (PINPs) model (CNP-LSTM-MCBOM-PICNP-LSTM) for MPFs avoids unphysical behaviors of the OPs, accelerates convergence, and requires less data. The proposed model successfully predicts several canonical MPF problems, i.e., the horizontal shear layer (HSL) and dam break (DB) problems, and its performance is validated.
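The sketch below is a heavily simplified stand-in for the correction step described above: it enforces boundedness of predicted order parameters and the summation-to-unity constraint before computing the mixture density. It is not the MCBOM algorithm, which also enforces mass conservation and the consistency of reduction.

```python
import torch

def correct_order_parameters(op_pred: torch.Tensor) -> torch.Tensor:
    """op_pred: (n_points, n_phases) raw network predictions of volume fractions."""
    op = op_pred.clamp(0.0, 1.0)                                  # boundedness of the OPs
    return op / op.sum(dim=1, keepdim=True).clamp_min(1e-12)      # fractions sum to unity

def mixture_density(op: torch.Tensor, rho_phases: torch.Tensor) -> torch.Tensor:
    """Density of the fluid mixture as a fraction-weighted sum of phase densities."""
    return op @ rho_phases

op_raw = torch.tensor([[1.1, -0.2], [0.4, 0.5]])     # toy two-phase predictions
op = correct_order_parameters(op_raw)
rho = mixture_density(op, torch.tensor([1000.0, 1.2]))
print(op, rho)
```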
8

PHYSICS-INFORMED NEURAL NETWORKS FOR NON-NEWTONIAN FLUIDS

Sukirt (8828960) 25 July 2024 (has links)
<p dir="ltr">Machine learning and deep learning techniques now provide innovative tools for addressing problems in biological, engineering, and physical systems. Physics-informed neural networks (PINNs) are a type of neural network that incorporate physical laws described by partial differential equations (PDEs) into their supervised learning tasks. This dissertation aims to enhance PINNs with improved training techniques and loss functions to tackle the complex physics of viscoelastic flow and rheology more effectively. The focus areas of the dissertation are listed as follows: i) Assigning relative weights to loss terms in training physics-informed neural networks (PINNs) is complex. We propose a solution using numerical integration via backward Euler discretization to leverage statistical properties of data for determining loss weights. Our study focuses on two and three-dimensional Navier-Stokes equations, using spatio-temporal velocity and pressure data to ascertain kinematic viscosity. We examine two-dimensional flow past a cylinder and three-dimensional flow within an aneurysm. Our method, tested for sensitivity and robustness against various factors, converges faster and more accurately than traditional PINNs, especially for three-dimensional Navier-Stokes equations. We validated our approach with experimental data, using the velocity field from PIV channel flow measurements to generate a reference pressure field and determine water viscosity at room temperature. Results showed strong performance with experimental datasets. Our proposed method is a promising solution for ’stiff’ PDEs and scenarios requiring numerous constraints where traditional PINNs struggle. ii) Machine learning algorithms are valuable for fluid mechanics, but high data costs limit their practicality. To address this, we present viscoelasticNet, a Physics-Informed Neural Network (PINN) framework that selects the appropriate viscoelastic constitutive model and learns the stress field from a given velocity flow field. We incorporate three non-linear viscoelastic models: Oldroyd-B, Giesekus, and Linear PTT. Our framework uses neural networks to represent velocity, pressure, and stress fields and employs the backward Euler method to construct PINNs for the viscoelastic model. The approach is multistage: first, it solves for stress, then uses stress and velocity fields to solve for pressure. ViscoelasticNet effectively learned the parameters of the viscoelastic constitutive model on noisy and sparse datasets. Applied to a two-dimensional stenosis geometry and cross-slot flow, our framework accurately learned constitutive equation parameters, though it struggled with peak stress at cross-slot corners. We suggest addressing this by exploring smaller domains. ViscoelasticNet can extend to other rheological models like FENE-P and extended Pom-Pom and learn entire equations, not just parameters. Future research could explore more complex geometries and three-dimensional cases. Complementing Particle Image Velocimetry (PIV), our method can determine pressure and stress fields once the constitutive equation is learned, allowing the modeling of future fluid applications. iii) Physics-Informed Neural Networks (PINNs) are widely used for solving inverse and forward problems in various scientific and engineering fields. However, most PINNs frameworks operate within the Eulerian domain, where physical quantities are described at fixed points in space. We explore coupling Eulerian and Lagrangian domains using PINNs. 
By tracking particles in the Lagrangian domain, we aim to learn the velocity field in the Eulerian domain. We begin with a sensitivity analysis, focusing on the time-step size of particle data and the number of particles. Initial tests with external flow past a cylinder show that smaller time-step sizes yield better results, while the number of particles has little effect on accuracy. We then extend our analysis to a real-world scenario: the interior of an airplane cabin. Here, we successfully reconstruct the velocity field by tracking passive particles. Our findings suggest that this coupled Eulerian-Lagrangian PINNs framework is a promising tool for enhancing traditional experimental techniques like particle tracking. It can be extended to learn additional flow properties, such as the pressure field for three-dimensional internal flows, and infer viscosity from passive particle tracking, providing deeper insights into complex fluids and their constitutive models. iv) Time-fractional differential equations are widely used across various fields but often present computational and stability challenges, especially in inverse problems. Leveraging Physics-Informed Neural Networks (PINNs) offers a promising solution for these issues. PINNs efficiently compute fractional time derivatives using finite differences and handle other derivatives via automatic differentiation. This study addresses two inverse problems: (1) anomalous diffusion and (2) fractional viscoelasticity. Our approach defines residual loss scaled with the standard deviation of observed data, using numerically generated and experimental datasets to learn fractional coefficients and calibrate parameters for the fractional Maxwell model. Our framework demonstrated robust performance for anomalous diffusion, maintaining less than 10% relative error in predicting the generalized diffusion coefficient and the fractional derivative order, even with 25% Gaussian noise added to the dataset. This highlights the framework’s resilience and accuracy in noisy conditions. We also validated our approach by predicting relaxation moduli for pig tissue samples, achieving relative errors below 10% compared to literature values. This underscores the efficacy of our fractional model with fewer parameters. Our method can be extended to model non-linear fractional viscoelasticity, incorporate experimental data for anomalous diffusion, and apply it to three-dimensional scenarios, broadening its practical applications.</p>
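To ground item iv), the sketch below evaluates a Caputo fractional time derivative with the standard L1 finite-difference scheme, the kind of finite-difference treatment of fractional derivatives the abstract mentions (the specific scheme used in the dissertation is an assumption here; other derivatives would still be handled by automatic differentiation).

```python
import math
import torch

def caputo_l1(u: torch.Tensor, dt: float, alpha: float) -> torch.Tensor:
    """u: (n_steps,) values of u(t) on a uniform grid; returns D^alpha u at t_1 ... t_{n-1}."""
    n = u.shape[0]
    j = torch.arange(n - 1, dtype=u.dtype)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)          # L1 weights
    du = u[1:] - u[:-1]                                    # backward differences
    coeff = dt ** (-alpha) / math.gamma(2 - alpha)
    out = torch.empty(n - 1, dtype=u.dtype)
    for i in range(1, n):
        out[i - 1] = coeff * (b[:i].flip(0) * du[:i]).sum()
    return out

# Sanity check: for u(t) = t the Caputo derivative is t^(1 - alpha) / Gamma(2 - alpha).
t = torch.linspace(0.0, 1.0, 101)
alpha = 0.5
approx = caputo_l1(t, dt=0.01, alpha=alpha)
exact = t[1:] ** (1 - alpha) / math.gamma(2 - alpha)
print(torch.max(torch.abs(approx - exact)))                # ~0 (the scheme is exact for linear u)
```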
9

Modeling and Experimental Validation of Mission-Specific Prognosis of Li-Ion Batteries with Hybrid Physics-Informed Neural Networks

Fricke, Kajetan 01 January 2023 (has links) (PDF)
While the second half of the 20th century was dominated by combustion-engine-powered vehicles, climate change and limited oil resources have been forcing car manufacturers and other companies in the mobility sector to switch to renewable energy sources. Electric engines supplied by Li-ion battery cells are at the forefront of this revolution in the mobility sector. A challenging but very important task here is the precise forecasting of the degradation of battery state-of-health (SOH) and state-of-charge (SOC). Hence, there is high demand for models that can predict the SOH and SOC while considering the specifics of a certain kind of battery cell and the usage profile of the battery. While traditional physics-based and data-driven approaches are used to monitor the SOH and SOC, both have limitations, whether related to computational cost or to the need for engineers to continually update their prediction models as new battery cells are developed and put into use in battery-powered vehicle fleets. In this dissertation, we enhance a hybrid physics-informed machine learning version of a battery SOC model to predict voltage drop during discharge. The enhanced model captures the effect of wide variation in load levels, in the form of input current, which causes large thermal stress cycles. The cell temperature build-up during a discharge cycle is used to identify temperature-sensitive model parameters. Additionally, we enhance an aging model built upon cumulative energy drawn by introducing the effect of the load level. We then map cumulative energy and load level to battery capacity with a Gaussian process model. To validate our approach, we use a battery aging dataset collected on a self-developed testbed, where we used a wide current-level range to age battery packs in an accelerated fashion. Prediction results show that our model can be successfully calibrated and generalizes across all applied load levels.
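A minimal sketch of the final modeling step described above: a Gaussian process regression mapping cumulative energy drawn and load level to battery capacity. The synthetic fade law, kernel choice, and value ranges are placeholder assumptions, not the dissertation's data or model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
cum_energy = rng.uniform(0.0, 500.0, size=200)          # Wh drawn over life (placeholder)
load_level = rng.uniform(0.5, 4.0, size=200)            # discharge C-rate (placeholder)
# Placeholder fade law: capacity falls with energy throughput, faster at high load.
capacity = 2.5 - 0.002 * cum_energy * (1.0 + 0.15 * load_level) + rng.normal(0, 0.02, 200)

X = np.column_stack([cum_energy, load_level])
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[100.0, 1.0]) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(X, capacity)

# Predict capacity (with uncertainty) for a new usage point.
mean, std = gp.predict(np.array([[300.0, 2.0]]), return_std=True)
print(mean, std)
```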
10

Solving Partial Differential Equations With Neural Networks

Karlsson Faronius, Håkan January 2023 (has links)
In this thesis, three different approaches for solving partial differential equations with neural networks will be explored: namely, Physics-Informed Neural Networks, Fourier Neural Operators, and the Deep Ritz method. Physics-Informed Neural Networks and the Deep Ritz method are unsupervised machine learning methods, while the Fourier Neural Operator is a supervised method. The Physics-Informed Neural Network is implemented on Burgers' equation, while the Fourier Neural Operator is implemented on Poisson's equation and Darcy's law, and the Deep Ritz method is applied to several variational problems. The Physics-Informed Neural Network is also used for the inverse problem: given some data on a solution, the neural network is trained to determine the underlying partial differential equation whose solution is given by the data. Apart from this, importance sampling is also implemented to accelerate the training of physics-informed neural networks. The contributions of this thesis are to implement a slightly different form of importance sampling on the physics-informed neural network, to show that the Deep Ritz method can be used for a larger class of variational problems than the original publication suggests, and to apply the Fourier Neural Operator to an application in geophysics involving Darcy's law where the coefficient factor is given by exponentiated two-dimensional pink noise.
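The sketch below shows one generic form of importance sampling for PINN training on Burgers' equation: collocation points are drawn from a candidate pool with probability proportional to the current PDE-residual magnitude. The thesis implements a slightly different scheme; this is only an illustration of the underlying idea.

```python
import math
import torch
import torch.nn as nn

nu = 0.01 / math.pi
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def burgers_residual(xt):
    # Residual of u_t + u * u_x - nu * u_xx for inputs xt = (x, t).
    xt = xt.requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t + u * u_x - nu * u_xx

# Candidate pool: x in [-1, 1], t in [0, 1].
pool = torch.rand(10000, 2) * torch.tensor([2.0, 1.0]) - torch.tensor([1.0, 0.0])
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    if step % 100 == 0:   # periodically re-weight the pool by the current residual magnitude
        weights = burgers_residual(pool.clone()).detach().abs().squeeze() + 1e-6
    idx = torch.multinomial(weights, 512, replacement=True)   # residual-weighted draw
    opt.zero_grad()
    loss = (burgers_residual(pool[idx]) ** 2).mean()          # boundary/initial losses omitted
    loss.backward()
    opt.step()
```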
