11. Frequency-Domain Learning of Dynamical Systems From Time-Domain Data. Ackermann, Michael Stephen, 21 June 2022.
Dynamical systems are useful tools for modeling many complex physical phenomena. In many situations, we do not have access to the governing equations needed to create these models. Instead, we have access to data in the form of input-output measurements. Data-driven approaches use these measurements to construct reduced order models (ROMs), small-scale models that well approximate the true system, directly from input/output data. Frequency-domain data-driven methods, which require access to values (and in some cases derivatives) of the transfer function, have been very successful in constructing high-fidelity ROMs from data. However, this frequency-domain data can at times be difficult to obtain, or one may have access only to time-domain data. Recently, Burohman et al. [2020] introduced a framework to approximate transfer function values using only time-domain data. We first discuss improvements to this method that allow a more efficient and more robust numerical implementation. Then, we develop an algorithm that performs H2-optimal approximation using purely time-domain data, thus significantly extending the applicability of H2-optimal approximation without the need for frequency-domain sampling. We also investigate how well other established frequency-based ROM techniques (such as the Loewner Framework, the Adaptive Anderson-Antoulas Algorithm, and Vector Fitting) perform on this identified data, and compare them to the H2-optimal model. / Master of Science / Dynamical systems are useful tools for modeling many phenomena found in physics, chemistry, biology, and other fields of science. A dynamical system is a system of ordinary differential equations (ODEs) together with a state-to-output mapping. These typically result from a spatial discretization of a partial differential equation (PDE). For every dynamical system, there is a corresponding transfer function in the frequency domain that directly links an input to the system with its corresponding output.
For some phenomena where the underlying system does not have a known governing PDE, we are forced to use observations of system input-output behavior to construct models of the system. Such models are called data-driven models. If, in addition, we seek a model that can well approximate the true system while keeping the number of degrees of freedom low (e.g., for fast simulation of the system or lightweight memory requirements), we refer to the resulting model as a reduced order model (ROM). There are well-established ROM methods that assume access to transfer function input-output data, but such data may be costly or impossible to obtain. This thesis expands upon a method introduced by Burohman et al. [2020] to infer values and derivatives of the transfer function from time-domain input-output data. The first contribution of this thesis is a robust and efficient implementation of the data informativity framework. We then provide an algorithm for constructing a ROM that is optimal in a frequency-domain sense from time-domain data. Finally, we investigate how other established frequency-domain ROM techniques perform on the learned frequency-domain data.
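To make the transfer function mentioned above concrete: for a state-space system x' = Ax + Bu, y = Cx, the transfer function is H(s) = C (sI - A)^{-1} B, and its values can be evaluated directly whenever the state-space matrices are known. (The data-driven setting of this thesis removes exactly that requirement.) A minimal sketch with toy matrices, not taken from the thesis:

```python
import numpy as np

def transfer_function(A, B, C, s):
    """Evaluate H(s) = C (sI - A)^{-1} B for a SISO state-space model."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B)).item()

# Toy two-state system (illustrative values only):
# companion form with H(s) = 1 / (s^2 + 3s + 2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

H = transfer_function(A, B, C, 1j)  # sample H on the imaginary axis
```

Frequency-domain methods such as the Loewner framework consume a set of such samples H(s_k); the inference framework discussed above produces comparable samples from time-domain measurements instead.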
12. Reduced Order Modeling for Efficient Stability Analysis in Structural Optimization. Sanmugadas, Varakini, 15 October 2024.
Design optimization involving complex structures can be a very resource-intensive task. Convex optimization problems can be solved using gradient-based approaches, whereas non-convex problems require heuristic methods. Over the past few decades, many optimization techniques have been presented in the literature to improve the efficiency of both of these approaches. The present work focuses on the non-convex optimization problems involving eigenvalues that arise in structural design optimization. Parametric model order reduction (PMOR) was identified as a potential tool for improving the efficiency of the optimization process, and its suitability was investigated by applying it to different eigenvalue optimization techniques. First, a truss topology optimization study was conducted that reformulated the weight minimization problem, with a non-convex lower-bound constraint on the fundamental frequency, into the standard convex form of semidefinite programming. Applying PMOR to this problem, it was found that the reduced system was able to converge to the correct final designs, provided a reduced basis of suitable size was chosen. At the same time, it was shown that preserving the sparse structure of the mass and stiffness matrices was crucial to achieving reduced solution times. In addition, while the reformulation to convex form is possible with the discretized vibrational governing equations, it is not straightforward for the buckling problem, due to the non-linear dependence of the geometric stiffness matrix on the design variables. Hence, we turned to a metaheuristic approach as an alternative and explored the applicability of PMOR to improving its performance. A two-step optimization procedure was developed. In the first (offline) step, a set of projection vectors that can be used to project the solutions of the governing higher-order partial differential equations onto a lower-dimensional manifold was assembled.
Invariant components of the system matrices that do not depend on the design variables were identified and reduced using the projection vectors. In the second (online) step, the buckling analysis problem was assembled and solved directly in the reduced form. This approach was applied to the design of variable angle tow (VAT) fiber composite structures. Affine matrix decompositions were derived for the linear and geometric stiffness matrices of VAT composites. The resulting optimization framework can rapidly assemble the reduced order matrices related to new designs encountered by the optimizer, perform the physics analysis efficiently in the reduced space, evaluate heuristics related to the objective function, and determine the search direction and convergence based on these evaluations. It was shown that the design space can be traversed efficiently by the developed PMOR-based approach by ensuring a uniform error distribution in objective values throughout the design space. / Doctor of Philosophy / When designing complex structures, designers often have specific performance criteria based on which they improve their preliminary conceptual designs. This could be done by varying some features of the initial designs in a way that these performance criteria are improved. However, it is not always intuitive or efficient to do this manually. Design optimization techniques provide efficient mathematical algorithms that can extract useful information from the governing partial differential equations of the structure and use it to identify the optimal combination of values for a certain set of features, called the design variables, to achieve the optimal performance criteria, referred to as the objective function. As the complexity and size of the structural design problem further increases, typical optimization techniques become slow and resource-intensive. 
In this work, we propose an optimization framework that uses parametric model order reduction (PMOR) to address this bottleneck. In essence, PMOR filters the large order matrices that arise in these structural analysis problems and provides the optimizer with smaller order matrices that retain the most important features of the original system. This was applied to a truss topology optimization and fiber-composite plate optimization study, both conducted with different types of optimization solvers. It was shown that PMOR resulted in significant efficiency improvements in the design optimization process when paired with an appropriate optimization algorithm.
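The offline/online split described in this entry can be sketched in a few lines. Under the (illustrative) assumption that the stiffness matrix admits an affine decomposition K(theta) = sum_i theta_i K_i, the expensive projections V^T K_i V are computed once offline, and each new design encountered by the optimizer only requires a cheap weighted sum of small matrices:

```python
import numpy as np

def offline_reduce(components, V):
    """Offline step: project each design-independent matrix once."""
    return [V.T @ K @ V for K in components]

def online_assemble(reduced_components, weights):
    """Online step: cheap weighted sum entirely in the reduced space."""
    return sum(w * Kr for w, Kr in zip(weights, reduced_components))

# Illustrative affine decomposition K(theta) = sum_i theta_i * K_i.
rng = np.random.default_rng(0)
n, r = 50, 4
V, _ = np.linalg.qr(rng.standard_normal((n, r)))    # projection basis
K_parts = [rng.standard_normal((n, n)) for _ in range(3)]
K_parts = [K + K.T for K in K_parts]                # symmetric components

Kr_parts = offline_reduce(K_parts, V)               # done once, offline
theta = [1.0, 0.5, -0.2]                            # a new design point
K_reduced = online_assemble(Kr_parts, theta)        # r x r, not n x n
```

By linearity, assembling in the reduced space gives exactly V^T K(theta) V, so no full-order matrix is ever formed online; this is the mechanism that lets the metaheuristic search traverse many designs cheaply.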
13. Reduced order constitutive modeling of a directionally-solidified nickel-base superalloy. Neal, Sean Douglas, 01 March 2013.
Hot section components of land-based gas turbines are subject to extremely harsh, high-temperature environments and require the use of advanced materials. Directionally solidified Ni-base superalloys are often chosen for these hot section components due to their excellent creep resistance and fatigue properties at high temperatures. These blades undergo complex thermomechanical loading conditions throughout their service life, and the influences of blade geometry and variable operation can make life prediction difficult. Accurate predictions of material response under thermomechanical loading conditions are essential for life prediction of these components. Complex crystal viscoplasticity models are often used to capture the behavior of Ni-base superalloys. While accurate, these models are computationally expensive and are not suitable for all phases of design. This work involves the calibration of a previously developed reduced-order, macroscale, transversely isotropic viscoplasticity model to a directionally solidified Ni-base superalloy. The unified model is capable of capturing isothermal and thermomechanical responses in addition to secondary creep behavior. An extremely reduced-order, microstructure-sensitive constitutive model is also developed using an artificial neural network to provide a rapid first-order approximation of material response for various temperatures, loading rates, and material orientations relative to the axis of solidification.
14. On the Asymptotic Reduction of Classical Modal Analysis for Nonlinear and Coupled Dynamical Systems. Culver, Dean Rogers, 2016.
Asymptotic Modal Analysis (AMA) is a computationally efficient and accurate method for studying the response of dynamical systems experiencing banded, random harmonic excitation at high frequencies, when the number of responding modes is large. In this work, AMA has been extended to systems of coupled continuous components as well as to nonlinear systems. Several prototypical cases are considered to advance the technique beyond the current state of the art. The nonlinear problem is considered in two steps. First, a method for solving problems involving nonlinear continuous multi-mode components, called Iterative Modal Analysis (IMA), is outlined. Second, the behavior of a plate carrying a nonlinear spring-mass system is studied, showing how nonlinear effects on system natural frequencies may be accounted for in AMA. The final chapters of this work consider the coupling of continuous systems. First, two parallel plates coupled at a point are studied; the principal novel element of the two-plate investigation is the reduction of transfer-function sums of the coupled system to an analytic form in the AMA approximation. Next, a stack of three parallel plates, with adjacent plates coupled at a point, is examined. The three-plate investigation refines the reduction of transfer-function sums, studies spatial intensification in greater detail, and offers insight into the diminishing response amplitudes in networks of continuous components excited at one location. These chapters open the door for future work on networks of vibrating components responding to banded, high-frequency, random harmonic excitation in the linear and nonlinear regimes.
15. Practical Aspects of the Implementation of Reduced-Order Models Based on Proper Orthogonal Decomposition. Brenner, Thomas Andrew, May 2011.
This work presents a number of the practical aspects of developing reduced-order models (ROMs) based on proper orthogonal decomposition (POD). ROMs are derived and implemented for multiphase flow, quasi-2D nozzle flow, and 2D inviscid channel flow. Results are presented verifying the ROMs against existing full-order models (FOMs).
POD is a method for separating snapshots of a flow field that varies in both time and space into spatial basis functions and time coefficients. The partial differential equations that govern fluid flow can then be projected onto these basis functions, generating a system of ordinary differential equations in which the unknowns are the time coefficients. This reduces the number of equations to be solved from hundreds of thousands or more to hundreds or fewer.
A ROM is implemented for three-dimensional and non-isothermal multiphase flows. The derivation of the ROM is presented. Results are compared against the FOM and show that the ROM agrees with the FOM.
While implementing the ROM for multiphase flow, moving discontinuities were found to be a major challenge when they appeared in the void fraction around gas bubbles. A point-mode POD approach is proposed and shown to have promise. A simple test case for moving discontinuities, the first-order wave equation, is used to test an augmentation method for capturing the discontinuity exactly. This approach is shown to remove the unphysical oscillations that appear around the discontinuity in traditional approaches.
A ROM for quasi-2D inviscid nozzle flow is constructed and the results are compared to a FOM. This ROM is used to test two approaches, POD-Analytical and POD-Discretized. The stability of each approach is assessed and the results are used in the implementation of a ROM for the Navier-Stokes equations.
A ROM for a Navier-Stokes solver is derived and implemented using the results of the nozzle flow case. Results are compared to the FOM for channel flow with a bump. The computational speed-up of the ROM is discussed.
Two studies are presented on practical aspects of the implementation of POD-based ROMs. The first shows the effect of snapshot sampling on the accuracy of the POD basis functions. The second shows that, for multiphase flow, the cross-coupling between field variables should not be included when computing the POD basis functions.
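The snapshot decomposition described in this abstract (spatial basis functions times time coefficients) is, in its most common form, a truncated SVD of the snapshot matrix. A minimal sketch on synthetic data (the snapshot field below is illustrative, not from the thesis):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading r POD modes of a snapshot matrix.

    snapshots: (n_space, n_time) array, one flow-field snapshot per column.
    Returns (modes, coeffs): spatial basis (n_space, r) and time
    coefficients (r, n_time) so that modes @ coeffs approximates the data.
    """
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :r]
    coeffs = s[:r, None] * Vt[:r, :]
    return modes, coeffs

# Synthetic snapshots: a traveling wave plus a standing mode (low rank).
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
X = (np.sin(2 * np.pi * (x[:, None] - t[None, :]))
     + 0.3 * np.sin(6 * np.pi * x[:, None]) * np.cos(4 * np.pi * t[None, :]))

modes, coeffs = pod_basis(X, r=4)
reconstruction = modes @ coeffs
```

Projecting the governing equations onto `modes` is what turns the PDE into the small ODE system for the time coefficients; the snapshot-sampling study mentioned above concerns how the choice of columns of `X` affects the quality of `modes`.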
16. A New Approach to Model Order Reduction of the Navier-Stokes Equations. Balajewicz, Maciej, 2012.
A new method of stabilizing low-order, proper orthogonal decomposition based reduced-order models of the Navier-Stokes equations is proposed. Unlike traditional approaches, this method does not rely on empirical turbulence modeling or modification of the Navier-Stokes equations. It provides spatial basis functions different from the usual proper orthogonal decomposition basis functions in that, in addition to optimally representing the solution, the new basis functions also yield stable reduced-order models. The proposed approach is illustrated with two test cases: two-dimensional flow inside a square lid-driven cavity and a two-dimensional mixing layer.
17. Advanced computational techniques for unsteady aerodynamic-dynamic interactions of bluff bodies. Prosser, Daniel T., 21 September 2015.
Interactions between the aerodynamics and dynamics of bluff bodies are important in many engineering applications, including suspension bridges, tall buildings, oil platforms, wind turbine towers, air drops, and construction with cranes. In the rotorcraft field, bluff bodies are commonly suspended underneath the vehicle by tethers. This approach is often the only practical way to deliver a payload in a reasonable amount of time in disaster relief efforts, search-and-rescue operations, and military operations. However, currently a fundamental understanding of the aerodynamics of these bluff bodies is lacking, and accurate dynamic simulation models for predicting the safe flying speed are not available. In order to address these shortcomings, two main advancements are presented in this thesis.
The aerodynamics of several three-dimensional canonical bluff bodies are examined over a range of Reynolds numbers representative of wind-tunnel-scale to full-scale models. Numerical experiments are utilized, with a focus on uncertainty analysis and validation of the computations. Mean and unsteady forces and moments for these bluff bodies have been evaluated, and empirical models of the shear layer characteristics have been extracted to quantify the behaviors and provide predictive capability. In addition, a physics-based reduced-order simulation model has been developed for bluff bodies. The physics-based approach is necessary to ensure that the predicted behavior of new configurations is accurate, and it is made possible by the breakthroughs in three-dimensional bluff body aerodynamics presented in this thesis. The integrated aerodynamic forces and moments and the dynamic behavior predicted by the model are extensively validated against data from wind tunnels, flight tests, and high-fidelity computations. Furthermore, successful stability predictions for tethered loads are demonstrated. The model is applicable to the simulation of any generic bluff body configuration, is readily extensible, and has low computational cost.
18. Measurement of Thermal Diffusivities Using the Distributed Source, Finite Absorption Model. Hall, James B., 27 November 2012.
Thermal diffusivity is an important thermophysical property that quantifies the ratio of the rate at which heat is conducted through a material to the amount of energy stored in the material. The pulsed laser diffusion (PLD) method is a widely used technique for measuring the thermal diffusivities of materials. This technique is based on the fact that the diffusivity of a sample may be inferred from measurement of the time-dependent temperature profile at a point on the surface of a sample that has been exposed to a pulse of radiant energy from a laser or flash lamp. An accepted standard approach for the PLD method is based on a simple model of a PLD measurement system. However, the standard approach rests on idealizations that are difficult to achieve in practice, so models that treat a PLD measurement system with greater fidelity are desired. The objective of this research is to develop and test a higher-fidelity model that more accurately represents the spatial and temporal variations in the input power, referred to as the Distributed Source, Finite Absorption (DSFA) model. The cost of this increased fidelity is an increase in the complexity of inferring values of the thermal diffusivity. A new method of extracting property values from time-dependent temperature measurements, based on a genetic algorithm and on reduced order modeling, was therefore developed. The primary contribution of this thesis is a detailed discussion of the development and numerical verification of this proposed new method for measuring the thermal diffusivity of various materials. Verification of the proposed new method was conducted using numerical experiments. A detailed model of a PLD system was created using advanced engineering software, and detailed simulations, including conjugate heat transfer and solution of the full Navier-Stokes equations, were used to generate multiple numerical data sets.
These numerical data sets were then used to infer the thermal diffusivity and other properties of the sample using the proposed new method; they were also used as inputs to the standard approach. The results of this verification study show that the proposed new method is able to infer the thermal diffusivity of samples to within 4.93%, the absorption coefficient to within 10.57%, and the heat capacity to within 5.37%. Application of the standard approach to the same data sets gave much poorer estimates of the thermal diffusivity, particularly when the absorption coefficient of the material was relatively low.
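The genetic-algorithm inference step can be illustrated with a deliberately simplified stand-in. Everything below is invented for illustration: the forward model is a bare exponential decay rather than the DSFA model, and the GA uses plain truncation selection with Gaussian mutation rather than whatever operators the thesis employs. The point is only the shape of the inverse problem: evolve candidate parameter values to minimize the misfit between modeled and measured temperature histories.

```python
import math
import random

def model(alpha, times):
    # Stand-in forward model (NOT the thesis's DSFA model): simple
    # exponential temperature decay parameterized by a diffusivity alpha.
    return [math.exp(-alpha * t) for t in times]

def misfit(alpha, times, data):
    # Sum-of-squares mismatch between model prediction and measurements.
    return sum((m - d) ** 2 for m, d in zip(model(alpha, times), data))

def genetic_search(times, data, pop_size=40, generations=60, seed=0):
    """Minimal GA: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: misfit(a, times, data))
        parents = pop[: pop_size // 4]          # keep the fittest quarter
        children = []
        while len(children) < pop_size - len(parents):
            p = rng.choice(parents)
            children.append(max(0.0, p + rng.gauss(0.0, 0.1)))  # mutate
        pop = parents + children
    return min(pop, key=lambda a: misfit(a, times, data))

times = [0.1 * i for i in range(20)]
data = model(1.7, times)                        # synthetic "measurement"
alpha_hat = genetic_search(times, data)         # recovered parameter
```

In the thesis's setting, each misfit evaluation would call the reduced-order DSFA model rather than a closed-form decay, which is exactly why the ROM is needed: the GA evaluates the forward model thousands of times.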
19. Efficient Uncertainty Characterization Framework in Neutronics Core Simulation with Application to Thermal-Spectrum Reactor Systems. Dongli Huang, 16 April 2020.
This dissertation is devoted to developing a first-of-a-kind uncertainty characterization framework (UCF) providing comprehensive, efficient, and scientifically defendable methodologies for uncertainty characterization (UC) in best-estimate (BE) reactor physics simulations. The UCF is designed with primary application to CANDU neutronics calculations, but could also be applied to other thermal-spectrum reactor systems. The overarching goal of the UCF is to propagate and prioritize all sources of uncertainties, including those originating from nuclear data uncertainties, modeling assumptions, and other approximations, in order to reliably use the results of BE simulations in the various aspects of reactor design, operation, and safety. The scope of this UCF is to propagate nuclear data uncertainties from the multi-group format, representing the input to lattice physics calculations, to the few-group format, representing the input to nodal diffusion-based core simulators, and to quantify the uncertainties in reactor core attributes.
The main contribution of this dissertation addresses two major challenges in current uncertainty analysis approaches. The first is the feasibility of the UCF, given the complex nature of nuclear reactor simulation and the computational burden of conventional uncertainty quantification (UQ) methods. The second is the need to assess the impact of other sources of uncertainties that are typically ignored in the course of propagating nuclear data uncertainties, such as various modeling assumptions and approximations.
To deal with the first challenge, this thesis proposes an integrated UC process employing a number of approaches and algorithms, including the physics-guided coverage mapping (PCM) method in support of model validation, and reduced order modeling (ROM) techniques as well as sensitivity analysis (SA) on uncertainty sources, to reduce the dimensionality of the uncertainty space at each interface of the neutronics calculations. In addition to these efficient techniques for reducing the computational cost, the UCF aims to accomplish four primary functions in the uncertainty analysis of neutronics simulations. The first function is to identify all sources of uncertainties, including nuclear data uncertainties, modeling assumptions, numerical approximations, and technological parameter uncertainties. Second, the proposed UC process propagates the identified uncertainties to the responses of interest in core simulation and provides UQ analysis for these core attributes. Third, the propagated uncertainties are mapped to a wide range of reactor core operating conditions. Finally, the fourth function is to prioritize the identified uncertainty sources, i.e., to generate a priority identification and ranking table (PIRT) that sorts the major sources of uncertainties according to their impact on the uncertainties of the core attributes. In the proposed implementation, the nuclear data uncertainties are first propagated from the multi-group level through lattice physics calculations to generate few-group parameter uncertainties, described using a vector of mean values and a covariance matrix. Employing an ROM-based compression of the covariance matrix, the few-group uncertainties are then propagated through the downstream core simulation in a computationally efficient manner.
To explore the impact of uncertainty sources other than nuclear data on the UC process, a number of approximations and assumptions are investigated in this thesis: modeling assumptions such as resonance treatment and energy group structure, and assumptions associated with the uncertainty analysis itself, e.g., the linearity assumption and the level of ROM reduction and associated number of degrees of freedom employed. These approximations and assumptions have been employed in the literature on neutronics uncertainty analysis, yet without formal verification. The major argument here is that these assumptions may introduce another source of uncertainty whose magnitude needs to be quantified in tandem with nuclear data uncertainties. In order to assess whether modeling uncertainties have an impact on parameter uncertainties, this dissertation proposes a process to evaluate the influence of various modeling assumptions and approximations and to investigate the interactions between the two major uncertainty sources. To this end, the impact of a number of modeling assumptions on core attribute uncertainties is quantified.
The proposed UC process was first applied to a BWR, in order to test the uncertainty propagation and prioritization process with the ROM implementation over a wide range of core conditions. Finally, a comprehensive uncertainty library for CANDU uncertainty analysis, with NESTLE-C as the core simulator, is generated using compressed uncertainty sources from the proposed UCF. The modeling uncertainties, as well as their impact on the parameter uncertainty propagation process, are investigated for the CANDU application using this uncertainty library.
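A common concrete mechanism for the few-group-to-core propagation step described above is first-order ("sandwich") covariance propagation through a sensitivity matrix: cov_y = J cov_x J^T, where J holds the sensitivities of the core attributes to the few-group parameters. Whether the dissertation uses exactly this form is not stated here, and the matrices below are purely illustrative:

```python
import numpy as np

def propagate_covariance(J, cov_x):
    """First-order ("sandwich") propagation: cov_y = J cov_x J^T."""
    return J @ cov_x @ J.T

# Illustrative numbers only: 3 few-group parameters -> 2 core attributes.
J = np.array([[1.0, -0.5, 0.2],
              [0.3,  0.8, -0.1]])        # sensitivity (Jacobian) matrix
cov_x = np.diag([0.01, 0.04, 0.02])      # input parameter covariance
cov_y = propagate_covariance(J, cov_x)   # 2 x 2 attribute covariance
```

The linearity assumption questioned later in this abstract is exactly the assumption that J can be treated as constant over the uncertainty range; an ROM-based compression replaces cov_x with a low-rank factorization so that the product remains tractable when the parameter space is large.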
20. Computationally Efficient Modeling of Transient Radiation in a Purely Scattering Foam Layer. Larson, Rudolph Scott, 07 June 2007.
An efficient solution method for evaluating radiative transport in a foam layer is a valuable tool for predicting the properties of the layer. Two different solution methods have been investigated. First, a reverse Monte Carlo (RMC) simulation has been developed, in which photon bundles are traced backwards from a detector to the source where they were emitted. The RMC method takes advantage of time reflection symmetry, allowing the photons to be traced backwards in the same manner they are tracked in a standard forward Monte Carlo scheme. Second, a reduced order model (ROM) based on the singular value decomposition has been developed. The ROM uses reflectance-time profiles computed for specific values of the governing parameters to form a solution basis that can then generate the profile for arbitrary values of the parameter set. The governing parameters used in this study were the foam layer thickness, the asymmetry parameter, and the scattering coefficient. Layer thicknesses between 4 cm and 20 cm were considered, values of the asymmetry parameter varied between 0.2 and 0.8, and the scattering coefficient ranged from 2800 m^-1 to 14000 m^-1. Ten blind test cases with parameters chosen randomly from these ranges were run and compared to an established forward Monte Carlo (FMC) solution to determine the accuracy and efficiency of both methods. For both the RMC and ROM methods, the agreement with FMC is good: the average difference in areas under the curves relative to the FMC curve over the ten cases is 7.1% for RMC and 7.6% for ROM. One of the ten cases causes the ROM to extrapolate outside of its data set; if this case is excluded, the average error for the remaining nine cases is 5.3%. While the efficiency of RMC is not much greater than that of FMC for this problem, it is advantageous in that a solution over a specified time range can be found, as opposed to FMC, where the entire profile must be computed.
The ROM is a very efficient solution method. After a library of solutions is developed, a solution for a different set of parameters can be found essentially in real time. Because of this efficiency, the ROM is a very promising technique for property analysis using inverse methods.