1. Discrete Event Simulation of Operating Rooms Using Data-Driven Modeling (Malik, Mandvi, January 2018)
No description available.
2. Data-Driven Modeling and Control of Batch and Continuous Processes using Subspace Methods (Patel, Nikesh, January 2022)
This thesis focuses on subspace-based data-driven modeling and control techniques for batch and continuous processes. Motivated by the increasing amount of process data, data-driven modeling approaches have become more popular. Compared with first-principles models, these approaches better capture the true process dynamics. However, data-driven models rely solely on mathematical correlations and are subject to overfitting. As such, applying first-principles-based constraints to the subspace model can lead to better predictions and subsequently better control. This thesis demonstrates that the addition of process gain constraints leads to a more accurate constrained model. In addition, this thesis shows that using the constrained model in a model predictive control (MPC) algorithm allows the system to reach desired setpoints faster. The novel MPC algorithm described in this thesis is specially designed as a quadratic program to include a feedthrough matrix. This matrix is traditionally ignored in industry; however, this thesis shows that its inclusion leads to more accurate process control.
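To make the feedthrough point concrete, below is a minimal sketch of how a feedthrough matrix D enters the MPC prediction equations and the resulting quadratic program; the state-space matrices, horizon, and weights are illustrative placeholders, not the identified subspace model from the thesis.

```python
# Sketch: MPC prediction with a feedthrough (D) matrix, solved as an
# unconstrained least-squares QP. A, B, C, D are illustrative, not the
# identified subspace model from the thesis.
import numpy as np

# toy identified model: x+ = A x + B u, y = C x + D u
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])           # feedthrough: u_k acts on y_k instantly

N = 10                          # prediction horizon
x0 = np.array([1.0, 0.0])
ysp = 2.0                       # setpoint

# Stack predictions: y_k = C A^k x0 + sum_j C A^(k-1-j) B u_j + D u_k
Phi = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(N)])
Gamma = np.zeros((N, N))
for k in range(N):
    Gamma[k, k] = D[0, 0]       # diagonal entries are D, not zero
    for j in range(k):
        Gamma[k, j] = (C @ np.linalg.matrix_power(A, k - 1 - j) @ B)[0, 0]

# QP: min ||Phi x0 + Gamma U - ysp||^2 + lam ||U||^2
lam = 0.1
H = Gamma.T @ Gamma + lam * np.eye(N)
f = Gamma.T @ (Phi @ x0 - ysp)
U = np.linalg.solve(H, -f)      # receding horizon: only U[0] is applied
print("first control move:", U[0])
```

The detail worth noting is the diagonal of Gamma: with a feedthrough matrix present, the current input acts on the current output, so the diagonal entries are D rather than zero.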
Given the importance of accurate process data during model identification, the missing-data problem is another area that needs improvement. There are two main missing-data scenarios: infrequent sampling/sensor errors and quality variables. In the infrequent sampling case, data points are missing at set intervals, so correlating between different batches is not possible because the data are missing in the same places everywhere. The quality-variable case is different in that quality measurements require additional expensive tests, making them unavailable for over 90% of the observations at the regular sampling frequency. This thesis presents a novel subspace approach using partial least squares and principal component analysis to identify a subspace model. This algorithm is used to solve each case of missing data in both simulation (polymethyl methacrylate) and industrial (bioreactor) processes with improved performance. / Dissertation / Doctor of Philosophy (PhD) / An important consideration in chemical processes is the maximization of production and product quality. To that end, developing an accurate controller is necessary to avoid wasting resources and producing off-spec products. All advanced process control approaches rely on the accuracy of the process model; therefore, it is important to identify the best model. This thesis presents two novel subspace-based modeling approaches: the first uses first-principles-based constraints, and the second handles missing data. These models are then applied in a modified state-space model with a predictive control strategy to show that the improved models lead to improved control. The approaches in this work are tested on both simulation (polymethyl methacrylate) and industrial (bioreactor) processes.
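As a rough illustration of the quality-variable scenario, the sketch below compresses dense process data with PCA and regresses the sparsely measured quality variable on the resulting scores with PLS. The synthetic data and the roughly 10% measurement rate are stand-ins mimicking the setting described above, not the thesis's actual algorithm.

```python
# Sketch: relating dense process measurements to sparse quality data
# with PCA + PLS, in the spirit of the subspace approach described
# above. All data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # dense process variables
y = X[:, :2] @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=500)

measured = rng.random(500) < 0.10        # quality known for ~10% of rows

scores = PCA(n_components=4).fit_transform(X)   # latent process "states"
pls = PLSRegression(n_components=2)
pls.fit(scores[measured], y[measured])          # train only where y exists

y_hat = pls.predict(scores).ravel()             # infer missing quality values
print("RMSE on unmeasured rows:",
      np.sqrt(np.mean((y_hat[~measured] - y[~measured]) ** 2)))
```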
3. Process and Quality Modeling in Cyber Additive Manufacturing Networks with Data Analytics (Wang, Lening, 16 August 2021)
A cyber manufacturing system (CMS) is a concept derived from the cyber-physical system (CPS) that provides adequate data and computation resources to support efficient and optimal decision making. Examples of these decisions include production control, variation reduction, and cost optimization. A CMS integrates the physical manufacturing equipment and computation resources via the Industrial Internet, which provides low-cost Internet connections and control capability in manufacturing networks. Traditional quality engineering methodologies, however, typically focus on statistical process control or run-to-run quality control through modeling and optimization of an individual process, which makes them less effective in a CMS where many manufacturing systems are connected. In addition, increasing personalization in manufacturing yields only limited samples for any one product design, material, and specification, which prohibits the use of many effective data-driven modeling methods. Motivated by additive manufacturing (AM), which has the potential to manufacture products with a one-of-a-kind design, material, and specification, this dissertation addresses the following three research questions:
(1) How can in situ data be used to model multiple similar AM processes connected in a CMS (Chapter 3)?
(2) How can the accuracy of low-fidelity first-principles simulations (e.g., finite element analysis, FEA) be improved in time to validate the product and process designs of personalized AM products (Chapter 4)?
(3) How can void defects (i.e., unmeasurable quality variables) be predicted from the in situ quality variables (Chapter 5)?
By answering the above three research questions, the proposed methodology will effectively generate in situ process and quality data for modeling multiple connected AM processes in a CMS. Quantifying the uncertainty of the simulated in situ process data and its impact on the overall AM modeling is out of the scope of this research. The proposed methodologies will be validated on fused deposition modeling (FDM) and selective laser melting (SLM) processes. Moreover, by comparing with the corresponding benchmark methods, the merits of the proposed methods are demonstrated in this dissertation. In addition, the proposed methods are developed within a general data-driven framework; therefore, they can potentially be extended to other applications and manufacturing processes. / Doctor of Philosophy / Additive manufacturing (AM) is a promising advanced manufacturing process that can realize personalized products in complex shapes with unprecedented materials. However, many quality issues, such as voids, porosity, and cracking, can restrict the wide deployment of AM in practice. To effectively model and further mitigate these quality issues, the cyber manufacturing system (CMS) is adopted. The CMS provides data acquisition functionality to collect real-time process data that are directly or indirectly related to product quality in AM. Moreover, the CMS provides the computation capability to analyze the AM data and support decision making to optimize the AM process. However, due to the characteristics of the AM process, there are several challenges in effectively and efficiently modeling the AM data. First, AM produces many one-of-a-kind products, which leads to limited observations for each product with which to estimate an accurate model. Therefore, in Chapter 3, I discuss how to jointly model personalized products by sharing information among similar-but-non-identical AM processes with limited observations. Second, for personalized product realization in AM, it is essential to quickly validate the product and process designs before fabrication. Usually, finite element analysis (FEA) is employed to simulate the manufacturing process based on a first-principles model. However, due to its complexity, high-fidelity simulation is very time-consuming and delays product realization in AM. Therefore, in Chapter 4, I study how to predict the high-fidelity simulation result from a low-fidelity simulation with fast computation speed but limited capability. Third, AM defects usually lie inside the product and can be identified from X-ray computed tomography (CT) images after the AM product is built. However, limited by sensor technology, CT images are difficult to obtain for online (i.e., layer-wise) defect detection to mitigate the defects. Therefore, as an alternative, in Chapter 5 I investigate how to predict the CT image from the optical layer-wise image, which can be obtained during the AM process. The proposed methodologies will be validated on two types of AM processes: fused deposition modeling (FDM) and selective laser melting (SLM).
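A hedged sketch of the multi-fidelity idea behind the Chapter 4 question: learn an additive correction from a handful of expensive high-fidelity runs and apply it to the cheap low-fidelity output. The analytic functions below are synthetic stand-ins for FEA results, and the Gaussian-process correction is one common choice, not necessarily the dissertation's.

```python
# Sketch: predicting a high-fidelity response from a cheap low-fidelity
# one by learning an additive correction (discrepancy) model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_high(x):   # expensive "high-fidelity" response (synthetic stand-in)
    return np.sin(8 * x) + x

def f_low(x):    # fast, biased "low-fidelity" response (synthetic stand-in)
    return 0.8 * np.sin(8 * x) + 0.3

x_train = np.linspace(0, 1, 8).reshape(-1, 1)   # few high-fidelity runs
delta = f_high(x_train) - f_low(x_train)        # observed discrepancy

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
gp.fit(x_train, delta)                          # learn the correction

x_test = np.linspace(0, 1, 50).reshape(-1, 1)
y_pred = f_low(x_test) + gp.predict(x_test).reshape(-1, 1)
print("max abs error:", np.abs(y_pred - f_high(x_test)).max())
```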
4. Predictive Simulations of the Impedance-Matched Multi-Axis Test Method Using Data-Driven Modeling (Moreno, Kevin Joel, 2 October 2020)
Environmental testing is essential to certify that systems can withstand the harsh dynamic loads they may experience in their service environment or during transport. For example, satellites are subjected to large vibration and acoustic loads when transported into orbit and need to be certified with tests that are representative of the anticipated loads. However, traditional certification testing specifications can consist of sequential uniaxial vibration tests, which have been found to severely over- and under-test systems needing certification. The recently developed Impedance-Matched Multi-Axis Test (IMMAT) has been shown in the literature to improve upon traditional environmental testing practices through the use of multi-input multi-output testing and impedance matching. Additionally, with the use of numerical models, predictive simulations can be performed to determine optimal testing parameters. Developing an accurate numerical model, however, requires precise knowledge of the system's dynamic characteristics, such as boundary conditions or material properties. These characteristics are not always available and would require additional testing for verification. Furthermore, some systems may be extremely difficult to model numerically because they contain millions of finite elements requiring impractical time scales to simulate, or because they were fabricated before the mainstream use of computer-aided drafting and finite element analysis but are still in service. An alternative to numerical modeling is data-driven modeling, which does not require knowledge of a system's dynamic characteristics. The Continuous Residue Interpolation (CRI) method has recently been developed as a novel approach for building data-driven models of dynamical systems. CRI builds data-driven models by fitting smooth, continuous basis functions to a subset of frequency response function (FRF) measurements from a dynamical system. The resulting fitted basis functions can be sampled at any geometric location to approximate the expected FRF at that location. The research presented in this thesis explores the use of CRI-derived data-driven models in predictive simulations of the IMMAT performed on an Euler-Bernoulli beam. The results of the simulations reveal that CRI-derived data-driven models of an Euler-Bernoulli beam achieve performance similar to a finite element model and make similar decisions when choosing the excitation locations in an IMMAT. / Master of Science / In the field of vibration testing, environmental tests are used to ensure that critical devices or structures can withstand harsh vibration environments. For example, satellites experience harsh vibrations and damaging acoustics transferred from their rocket transport vehicle. Traditional environmental tests would require that the satellite be placed on a vibration table and sequentially vibrated in multiple orientations for a specified duration and intensity. However, these traditional environmental tests do not always produce vibrations that are representative of the anticipated transport or operational environment. Newly developed methods, such as the Impedance-Matched Multi-Axis Test (IMMAT), achieve representative test results by matching the mounting characteristics of the structure in its transport or operational environment and vibrating the structure in multiple directions simultaneously.
An IMMAT can also be optimized by using finite element models (FEMs), which approximate the device to be tested with a discrete number of small volumes whose physics are described by fundamental equations of motion. However, an FEM can only be used if its dynamic characteristics are sufficiently similar to those of the structure undergoing testing. This can only be achieved with precise knowledge of the dynamical properties of the structure, which is not always available. An alternative approach to an FEM is to use a data-driven model. Because data-driven models are built from data measured on the system they are supposed to describe, the dynamical properties of the device are built into the model, and it is not necessary to approximate them. Continuous Residue Interpolation (CRI) is a recently developed data-driven modeling scheme that approximates a structure's dynamic properties with smooth, continuous functions updated with measurements of the input-output response dynamics of the device. This thesis presents the performance of data-driven models generated using CRI when used in predictive simulations of an IMMAT. The results show that CRI-derived data-driven models perform similarly to FEMs and make similar predictions for optimal input vibration locations.
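The sketch below illustrates the flavor of residue interpolation on a toy single-mode beam: estimate a modal residue from FRFs measured at a few drive points, fit a smooth spatial function through the residues, and synthesize the FRF at an unmeasured location. The single-mode model and the quadratic spatial basis are assumptions for illustration, not CRI's actual basis functions.

```python
# Sketch of the idea behind CRI on a toy pinned-pinned beam with one
# mode: fit a smooth spatial function to modal residues estimated from
# measured FRFs, then evaluate it at an unmeasured location.
import numpy as np

wn, zeta = 2 * np.pi * 40.0, 0.02          # one mode: 40 Hz, 2% damping
w = 2 * np.pi * np.linspace(10, 80, 300)   # frequency axis (rad/s)

def residue(x):                            # drive-point residue ~ mode shape^2
    return np.sin(np.pi * x) ** 2          # pinned-pinned first mode

x_meas = np.array([0.2, 0.35, 0.6, 0.8])   # measured drive points
H_meas = [residue(x) / (wn**2 - w**2 + 2j * zeta * wn * w) for x in x_meas]

# At resonance |H| = r / (2 zeta wn^2), so recover r from the FRF peak,
# then fit a smooth (here: quadratic) spatial function through the residues.
r_est = np.array([np.abs(H[np.argmax(np.abs(H))]) * (2 * zeta * wn**2)
                  for H in H_meas])
coef = np.polyfit(x_meas, r_est, 2)

x_new = 0.5                                # unmeasured location
H_new = np.polyval(coef, x_new) / (wn**2 - w**2 + 2j * zeta * wn * w)
print("predicted vs true residue at x=0.5:",
      np.polyval(coef, x_new), residue(x_new))
```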
5. Analyzing the effects of Ca2+ dynamics on mitochondrial function in health and disease (Toglia, Patrick, 4 April 2018)
Mitochondria play a crucial role in cells by maintaining energy metabolism and directing cell death mechanisms through buffering calcium (Ca2+) from the cytosol. Therefore, Ca2+ overload of mitochondria due to the upregulated cytosolic Ca2+ observed in many neurological disorders is hypothesized to be a key pathway leading to mitochondrial dysfunction and cell death. In particular, disruptions of Ca2+ homeostasis due to Alzheimer's disease (AD)-causing presenilins (PS1/PS2) and the oligomeric forms of β-amyloid peptides (Aβ) commonly found in AD patients are presumed to have detrimental effects on mitochondria and their ability to function properly. We begin by showing that familial Alzheimer's disease (FAD)-causing PS mutants affect intracellular Ca2+ ([Ca2+]i) homeostasis by enhancing the gating of inositol 1,4,5-trisphosphate (IP3) receptor (IP3R) Ca2+ channels on the endoplasmic reticulum (ER), leading to exaggerated Ca2+ release into the cytoplasm. Using experimental IP3R-mediated Ca2+ release data in conjunction with a computational model of mitochondrial bioenergetics, we explore how the differences in mitochondrial Ca2+ uptake between control cells and cells expressing FAD-causing PS mutants affect key variables such as ATP, reactive oxygen species (ROS), NADH, and mitochondrial Ca2+ ([Ca2+]m). We find that, as a result of the exaggerated [Ca2+]i in FAD-causing mutant PS-expressing cells, the rate of oxygen consumption increases dramatically and outpaces the Ca2+-dependent enzymes that stimulate NADH production. This leads to decreased rates of proton pumping due to the diminished membrane potential (Ψm), along with less ATP and enhanced ROS production. These results show that, through Ca2+ signaling disruption, mutant PS leads to mitochondrial dysfunction and potentially cell death.
Next, the mitochondrial model is expanded to include the mitochondrial Ca2+ uniporter (MCU), which senses Ca2+ in the microdomain formed by the close proximity of mitochondria and ER. The Ca2+ concentration in the microdomain ([Ca2+]mic) depends on the distance between the cluster of IP3R channels on the ER and the mitochondria (r), the number of IP3Rs in the cluster (nIP3R), and the open probability (Po) of the IP3R. Using the same experimental results for Ca2+ release through IP3R due to FAD-causing PS mutants, in conjunction with a computational model of mitochondrial bioenergetics, a data-driven Markov chain model for IP3R gating, and a model for the dynamics of the mitochondrial permeability transition pore (PTP), we explore the difference in mitochondrial Ca2+ uptake between cells expressing wild-type (PS1-WT) and FAD-causing mutant (PS1-M146L) PS. We find that the increased [Ca2+]m due to the gain-of-function enhancement of IP3R channels in cells expressing PS1-M146L leads to the opening of the PTP in its high-conductance state (PTPh), where the latency of opening is inversely correlated with r and proportional to nIP3R. Furthermore, we observe diminished inner mitochondrial membrane potential (Ψm), [NADH], [Ca2+]m, and [ATP] when the PTP opens. Additionally, we explore how parameters such as the pH gradient, the inorganic phosphate concentration, and the rate of the Na+/Ca2+ exchanger affect the latency of the PTP to open in PTPh.
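As a minimal illustration of the Markov chain channel models referenced above, the sketch below simulates a two-state (closed/open) channel with Gillespie-style exponential dwell times. The real data-driven IP3R model has more states and Ca2+/IP3-dependent transition rates; the rates here are illustrative only.

```python
# Sketch: stochastic simulation of a two-state (closed/open) ion channel,
# the building block of data-driven Markov chain gating models.
import numpy as np

rng = np.random.default_rng(1)
k_open, k_close = 5.0, 50.0    # 1/s transition rates (illustrative)

t, state, T_END = 0.0, 0, 50.0  # state 0 = closed, 1 = open
times, states = [0.0], [0]
while t < T_END:
    rate = k_open if state == 0 else k_close
    t += rng.exponential(1.0 / rate)   # Gillespie: exponential dwell time
    state = 1 - state
    times.append(t); states.append(state)

open_time = sum((times[i + 1] - times[i]) for i in range(len(times) - 1)
                if states[i] == 1)
print("simulated open probability:", open_time / times[-1],
      "theory:", k_open / (k_open + k_close))
```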
The intracellular accumulation of oligomeric forms of Aβ is now believed to play a key role in the early phase of AD, as their rise correlates well with the early symptoms of the disease. Extensive evidence points to impaired neuronal Ca2+ homeostasis as a direct consequence of intracellular Aβ oligomers. To study the effect of intracellular Aβ on Ca2+ signaling and the resulting mitochondrial dysfunction, we employed data-driven modeling in conjunction with total internal reflection fluorescence microscopy (TIRFM). High-resolution TIRFM, together with detailed computational modeling, provides a powerful approach towards understanding the wide range of Ca2+ signals mediated by the IP3R. Achieving this requires close agreement between Ca2+ signals from computational models and TIRFM experiments. However, we found that elementary Ca2+ release events (puffs) imaged through TIRFM do not show the rapid single-channel opening and closing, during and between puffs, exhibited by data-driven single-channel models. TIRFM also shows a rapid equilibration of 10 ms after a channel opens or closes, which is not achievable in simulation using standard Ca2+ diffusion coefficients and reaction rates between the indicator dye and Ca2+. Using the widely used Ca2+ diffusion coefficients and reaction rates, our simulations show equilibration rates that are eight times slower than TIRFM imaging. We show that to obtain equilibration rates consistent with observed values, the diffusion coefficients and reaction rates have to be significantly higher than the values reported in the literature. Once a close agreement between experiment and model is achieved, we use multiscale modeling in conjunction with patch-clamp electrophysiology of the IP3R and fluorescence imaging of the whole-cell Ca2+ response induced by intracellular Aβ42 oligomers to show that Aβ42 inflicts cytotoxicity by impairing mitochondrial function. Driven by patch-clamp experiments, we first model the kinetics of the IP3R, which is then extended to build a model for whole-cell Ca2+ signals. The whole-cell model is then fitted to fluorescence signals to quantify the overall Ca2+ release from the ER caused by intracellular Aβ42 oligomers through G-protein-mediated stimulation of IP3 production. The estimated IP3 concentration as a function of intracellular Aβ42 content, together with the whole-cell model, allows us to show that Aβ42 oligomers impair mitochondrial function through pathological Ca2+ uptake and the resulting reduced mitochondrial inner membrane potential, leading to overall lower ATP and increased production of reactive oxygen species and [H2O2]. We further show that mitochondrial function can be restored by the addition of the Ca2+ buffer EGTA, in accordance with the observed abrogation of Aβ42 cytotoxicity by EGTA in our live-cell experiments.
Finally, our modeling study is extended to other pathological phenomena, such as epileptic seizures and spreading depolarizations (SD), and their effects on mitochondria by incorporating conservation of particles and charge and accounting for the energy required to restore ionic gradients in the neuron. By examining the dynamics as a function of potassium and oxygen, the model can account for a wide range of neuronal hyperactivity, from seizures to normoxic SD and hypoxic SD (HSD). Together with a detailed model of mitochondria and Ca2+ release through the ER, we determine mitochondrial dysfunction and potential recovery mechanisms from HSD. Our results demonstrate that HSD causes detrimental mitochondrial dysfunction that can only be recovered from by restoration of oxygen. Once oxygen is replenished to the neuron, the inorganic phosphate and pH gradients across the mitochondria determine how rapidly the neuron recovers from HSD.
6. Development of Hybrid Inexact Optimization Models for Water Quality Management under Uncertainty (Zhang, Qianqian, January 2021)
Water quality management (WQM) significantly affects water use and ecosystem health and helps achieve environmental and economic sustainability. However, implementing water quality management is still challenging in practice due to the uncertainty and nonlinearity in water systems, as well as the difficulty of integrating simulation and optimization analyses. Therefore, effective optimization frameworks for handling nonlinearity, various uncertainties, and integrated complex water quality simulation models are highly desired. This dissertation addresses these challenges by proposing new, efficient hybrid inexact optimization models for water quality management under uncertainty through: i) developing an interval quadratic programming (IQP) model for handling both nonlinearity and uncertainty expressed as intervals in water quality management, and solving the developed model with three algorithms to identify the most effective and straightforward solution algorithm for IQP-WQM problems; ii) developing a simulation-based interval chance-constrained quadratic programming model, which can deal with nonlinearity and uncertainties in multiple formats, and implementing a real-world case study of phosphorus control in the central Grand River, Ontario, Canada; iii) proposing a data-driven interval credibility-constrained quadratic programming model for water quality management that uses a data-driven surrogate model (i.e., inexact linear regression) to incorporate a complex water quality simulation model into the optimization framework, overcoming the challenges of integrated simulation-optimization. The performance of the proposed frameworks/models was tested with different case studies and various mathematical techniques (e.g., sensitivity analysis). The results indicate that the proposed models can deal with nonlinearity and various uncertainties while significantly reducing the computational burden of simulation-optimization analysis. Coupling these efforts in developing efficient hybrid inexact optimization models for water quality management under uncertainty provides useful tools for solving large-scale, complex water quality management problems in a robust manner, and further provides reliable and effective decision support for water quality planning and management. / Thesis / Doctor of Philosophy (PhD) / Water quality management plays a key role in facilitating environmental and economic sustainability. However, many challenges still exist in practical water quality management problems, such as various uncertainties and complexities, as well as complicated integrated simulation-optimization analysis. Therefore, the goal of this dissertation is to address these challenges by developing a set of efficient hybrid inexact optimization models for water quality management under uncertainty through: i) developing an interval quadratic programming model for water quality management and investigating effective and straightforward solution algorithms for it; ii) leveraging the power of data-driven modeling and proposing efficient data-driven optimization models based on hybrid inexact programming for water quality management. Robust and effective water quality planning schemes for large-scale water quality management problems can be obtained with the proposed frameworks/models.
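A minimal sketch of the interval-programming idea underlying the IQP model: when a cost coefficient is known only as an interval and the objective is monotone in it, solving the two bounding deterministic QPs brackets the optimal cost. The toy problem and SciPy solver below are illustrative simplifications, not the thesis's solution algorithms.

```python
# Sketch: bracketing the optimal cost of a QP whose quadratic cost
# coefficient is only known as an interval [a_lo, a_hi]. Since the
# objective is nondecreasing in a (it multiplies x0^2 >= 0), the two
# bounding QPs give the interval of optimal costs.
import numpy as np
from scipy.optimize import minimize

a_lo, a_hi = 2.0, 3.0           # interval cost coefficient

def solve_qp(a):
    # min a*x0^2 + x1^2 + 4*x0  s.t.  x0 + x1 >= 1  (toy treatment-cost QP)
    obj = lambda x: a * x[0]**2 + x[1]**2 + 4 * x[0]
    con = {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1}
    res = minimize(obj, x0=[0.5, 0.5], constraints=[con])
    return res.fun

print("optimal cost interval: [%.3f, %.3f]" % (solve_qp(a_lo), solve_qp(a_hi)))
```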
7. Deep Learning of Model Correction and Discontinuity Detection (Zhou, Zixu, 26 August 2022)
No description available.
8. Fusing Modeling and Testing to Enhance Environmental Testing Approaches (Devine, Timothy Andrew, 9 July 2019)
A proper understanding of the dynamics of a mechanical system is crucial to ensure the highest levels of performance. This understanding is frequently developed through modeling and testing of components. Modeling provides a cost-effective method for rapidly developing knowledge of the system; however, a model is incapable of accounting for fluctuations that occur in physical spaces. Testing, when performed properly, provides a near-exact understanding of how a part or assembly functions, but can be expensive both fiscally and temporally.
Often, practitioners of the two disciplines work in parallel, never bothering to intersect with the other group. Further advancement in ways to fuse modeling and testing together can produce a more comprehensive understanding of dynamic systems while remaining inexpensive in terms of computation, financial cost, and time. The goal of the presented work is therefore to develop ways to merge the two branches and include test data in models of operational systems. This is done through a series of analytical and experimental tasks examining the boundary conditions of various systems.
The first avenue explored was an attempt to model unknown boundary conditions in an operational environment by modeling the same system in known configurations in a controlled environment, such as a laboratory test. An analytical beam was studied under applied environmental loading with grounding stiffnesses added to simulate an operational condition, and an attempt was made to match the response with a free-boundary beam using a reduced number of excitation points. Due to the properties of the inverse-problem approach taken, the responses of the two systems matched at control locations; however, at non-control locations the responses showed a large degree of variation. From the mismatch in mechanical impedance, it is apparent that improperly representing boundary conditions can have drastic effects on the accuracy of models and recreated tests.
With the progression now directed towards modeling and testing of boundary conditions, methods were explored to combine the two approaches in harmony. The second portion of this work focuses on modeling an unknown boundary connection by using a collection of similar, testable boundary conditions to parametrically interpolate to the unknown configuration. This was done by using data-driven models of the known systems as the interpolating functions, with the system boundary stiffness as the varied parameter. This approach yielded parametric model responses nearly identical to the original system response for analytical systems and showed early signs of promise for an experimental beam.
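The sketch below illustrates the parametric-interpolation idea on a single-degree-of-freedom stand-in for the beam: FRFs identified at several testable boundary stiffnesses are interpolated, frequency line by frequency line, to an untested stiffness. The oscillator model and the linear interpolation are assumptions for illustration, not the study's actual data-driven models.

```python
# Sketch: interpolating between models identified at known boundary
# stiffnesses to predict the response at an untested stiffness.
import numpy as np

m, c = 1.0, 0.8                               # mass, damping (illustrative)
w = 2 * np.pi * np.linspace(0.1, 5, 400)      # frequency axis (rad/s)

def frf(k):                                   # receptance of m x'' + c x' + k x = f
    return 1.0 / (k - m * w**2 + 1j * c * w)

k_known = np.array([50.0, 100.0, 150.0])      # "testable" boundary stiffnesses
H_known = np.array([frf(k) for k in k_known])

k_target = 120.0                              # untested configuration
# Linear interpolation of the FRFs in the stiffness parameter, done
# independently at each frequency line (real and imaginary parts).
H_interp = np.array([np.interp(k_target, k_known, H_known[:, i].real)
                     + 1j * np.interp(k_target, k_known, H_known[:, i].imag)
                     for i in range(w.size)])

err = np.abs(H_interp - frf(k_target)).max() / np.abs(frf(k_target)).max()
print("max relative FRF error at k = 120:", err)
```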
After the two studies, the potential for extending a parametric data-driven model approach to other systems is discussed. In addition, improvements to the approach are discussed, as well as the benefits it brings. / Master of Science / A proper understanding of the dynamics of a mechanical system in a severe environment is crucial to ensure the highest levels of performance. This understanding is frequently developed through modeling and testing of components. Modeling provides a cost-effective method for rapidly developing knowledge of the system; however, a model is incapable of accounting for fluctuations that occur in physical spaces. Testing, when performed properly, provides a near-exact understanding of how a part or assembly functions, but can be expensive both fiscally and temporally. Often, practitioners of the two disciplines work in parallel, never bothering to intersect with the other group and favoring one approach over the other for various reasons. Further advancement in ways to fuse modeling and testing together can produce a more comprehensive understanding of dynamic systems subject to environmental excitation while remaining inexpensive in terms of computation, financial cost, and time.
Because of this, the presented work aims to develop ways to merge the two branches and include test data in models of operational systems. This is done through a series of analytical and experimental tasks examining the boundary conditions of various systems, first attempting to replicate the system response using inverse approaches. This is then followed by modeling boundary stiffnesses using data-driven and parametric modeling approaches. The validity and potential impact of these methods are also discussed.
9. Cross-Validation of Data-Driven Correction Reduced Order Modeling (Mou, Changhong, 3 October 2018)
In this thesis, we develop a data-driven correction reduced order model (DDC-ROM) for the numerical simulation of fluid flows. The general DDC-ROM involves two stages: (1) we apply ROM filtering (such as ROM projection) to the full-order model (FOM) and construct the filtered ROM (F-ROM); (2) we use data-driven modeling to model the nonlinear interactions between resolved and unresolved modes, which solves the F-ROM's closure problem.
In the DDC-ROM, a linear or quadratic ansatz is used in the data-driven modeling step. In this thesis, we propose a new cubic ansatz. To determine the unknown coefficients in our ansatz, we solve an optimization problem that minimizes the difference between the FOM data and the ansatz. We test the new DDC-ROM in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient. Furthermore, we perform a cross-validation of the DDC-ROM to investigate whether it can succeed in computational settings that differ from the training regime. / M.S. / Practical engineering and scientific problems often require the repeated simulation of unsteady fluid flows. In these applications, the computational cost of high-fidelity full-order models can be prohibitively high. Reduced order models (ROMs) represent efficient alternatives to brute-force computational approaches. In this thesis, we propose a data-driven correction ROM (DDC-ROM) in which available data and an optimization problem are used to model the nonlinear interactions between resolved and unresolved modes. To test the new DDC-ROM's predictive capability, we perform its cross-validation for the one-dimensional viscous Burgers equation and different training regimes.
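To make the ansatz-fitting step concrete, here is a hedged sketch that recovers the coefficients of a scalar cubic closure ansatz by least squares against synthetic stand-in data; the actual DDC-ROM fits coefficient tensors over all resolved modes against FOM snapshots.

```python
# Sketch: fitting the unknown coefficients of a cubic closure ansatz
# tau(a) ~ A a + B a^2 + C a^3 by least squares, as in the optimization
# step described above (scalar mode shown; data is synthetic).
import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(-1, 1, size=200)              # resolved-mode coefficient samples
tau = 0.7 * a - 0.4 * a**2 + 1.2 * a**3       # "true" closure term
tau += 0.01 * rng.normal(size=a.size)         # noise standing in for FOM projection error

# Least squares: minimize || [a, a^2, a^3] theta - tau ||_2
Phi = np.column_stack([a, a**2, a**3])
theta, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
print("recovered (A, B, C):", theta)          # expect ~ (0.7, -0.4, 1.2)
```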
10. Combining Data-driven and Theory-guided Models in Ensemble Data Assimilation (Popov, Andrey Anatoliyevich, 23 August 2022)
There once was a dream that data-driven models would replace their theory-guided counterparts. We have awoken from this dream. We now know that data cannot replace theory. Data-driven models still have their advantages, mainly computational efficiency, but they also provide us with some special sauce that is unreachable by our current theories. This dissertation aims to provide a way in which both the accuracy of theory-guided models and the computational efficiency of data-driven models can be combined. This combination of theory-guided and data-driven modeling allows us to draw on ideas from a much broader set of disciplines and can help pave the way for robust and fast methods. / Doctor of Philosophy / As an illustrative example, take the problem of predicting the weather. Typically, a supercomputer will run a model several times to generate predictions a few days into the future. Sensors, such as those on satellites, will then pick up observations at a few points on the globe that are not representative of the whole atmosphere. These observations are combined, or "assimilated," with the computer model predictions to create a better representation of our current understanding of the state of the earth. This predict-assimilate cycle is repeated every day and is called (sequential) data assimilation. The prediction step was traditionally performed by a computer model based on rigorous mathematics. With the advent of big data, many have wondered whether models based purely on data would take over. This has not happened. This thesis is concerned with running traditional mathematical models alongside data-driven models in the prediction step, and then building a theory in which both can be used in data assimilation at the same time, so that accuracy does not drop while computational cost decreases.
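A minimal sketch of the multi-model idea in a sequential data assimilation setting: half the forecast ensemble is propagated by a "theory-guided" model and half by a cheaper "data-driven" surrogate, and a stochastic ensemble Kalman analysis combines them with an observation. The scalar models and rates below are illustrative, not the dissertation's formulation.

```python
# Sketch: one stochastic EnKF analysis step with a forecast ensemble
# split between a theory-guided model and a data-driven surrogate.
import numpy as np

rng = np.random.default_rng(3)
x_true = 1.5

def theory_model(x):     return 0.9 * x + 0.1    # accurate, "expensive"
def surrogate_model(x):  return 0.9 * x + 0.15   # slightly biased, cheap

x_prev = rng.normal(1.0, 0.3, size=40)
forecast = np.concatenate([theory_model(x_prev[:20]),
                           surrogate_model(x_prev[20:])])

obs_var = 0.05 ** 2
y = x_true + rng.normal(0.0, np.sqrt(obs_var))   # noisy observation of x

# Kalman gain from ensemble statistics (observation operator H = I),
# with perturbed observations for the stochastic EnKF update.
P = np.var(forecast, ddof=1)
K = P / (P + obs_var)
analysis = forecast + K * (y + rng.normal(0, np.sqrt(obs_var), 40) - forecast)
print("forecast mean %.3f -> analysis mean %.3f (truth %.3f)"
      % (forecast.mean(), analysis.mean(), x_true))
```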