121

Development of Reduced-Order Flame Models for Prediction of Combustion Instability

Huang, Xinming 30 November 2001
Lean-premixed combustion has the advantage of low emissions for modern gas turbines, but it is susceptible to thermoacoustic instabilities, which can result in large amplitude pressure oscillations in the combustion chamber. The thermoacoustic limit cycle is generated by the unsteady heat release dynamics coupled to the combustor acoustics. In this dissertation, we focused on reduced-order modeling of the dynamics of a laminar premixed flame. From first principles of combustion dynamics, a physically-based, reduced-order, nonlinear model was developed based on the proper orthogonal decomposition technique and generalized Galerkin method. In addition, the describing function for the flame was measured experimentally and used to identify an empirical nonlinear flame model. Furthermore, a linear acoustic model was developed and identified for the Rijke tube experiment. Closed-loop thermoacoustic modeling using the first principles flame model coupled to the linear acoustics successfully reproduced the linear instability and predicted the thermoacoustic limit cycle amplitude. With the measured experimental flame data and the modeled linear acoustics, the describing function technique was applied for limit cycle analysis. The thermoacoustic limit cycle amplitude was predicted with reasonable accuracy, and the closed-loop model also predicted the performance for a phase shift controller. Some problems found in the predictions for high heat release cases were documented. / Ph. D.
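A reduced-order model of this kind starts from a Galerkin projection onto empirical modes. The sketch below shows the proper orthogonal decomposition step applied to a matrix of flame-field snapshots; the snapshot data, array shapes, and mode count are illustrative assumptions, not the dissertation's actual setup.

```python
import numpy as np

# Proper orthogonal decomposition (POD) of a snapshot matrix via the SVD.
# The left singular vectors are the POD modes, ordered by energy content.
def pod_basis(snapshots, n_modes):
    """snapshots: (n_points, n_snapshots) array of sampled flame fields."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)   # cumulative energy fraction
    return U[:, :n_modes], mean, energy[n_modes - 1]

# Example: 200 spatial points, 50 snapshots of a synthetic field.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 50))
modes, mean, captured = pod_basis(snapshots, n_modes=4)
print(f"4 modes capture {captured:.1%} of the fluctuation energy")
```

A reduced-order flame model then evolves the coefficients of these few modes instead of the full field, which is what makes closed-loop coupling to an acoustic model tractable.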
122

Robust Implementations of the Multistage Wiener Filter

Hiemstra, John David 11 April 2003
The research in this dissertation addresses reduced rank adaptive signal processing, with specific emphasis on the multistage Wiener filter (MWF). The MWF is a generalization of the classical Wiener filter that performs a stage-by-stage decomposition based on orthogonal projections. Truncation of this decomposition produces a reduced rank filter with many benefits, for example, improved performance. This dissertation extends knowledge of the MWF in four areas. The first area is rank and sample support compression. This dissertation examines, under a wide variety of conditions, the size of the adaptive subspace required by the MWF (i.e., the rank) as well as the required number of training samples. Comparisons are made with other algorithms such as the eigenvector-based principal components algorithm. The second area investigated in this dissertation concerns "soft stops", i.e., the insertion of diagonal loading into the MWF. Several methods for inserting loading into the MWF are described, as well as methods for choosing the amount of loading. The next area investigated is MWF rank selection. The MWF will outperform the classical Wiener filter when the rank is properly chosen. This dissertation presents six approaches for selecting MWF rank. The algorithms are compared to one another and an overall design space taxonomy is presented. Finally, as digital modelling capabilities become more sophisticated there is emerging interest in augmenting adaptive processing algorithms to incorporate prior knowledge. This dissertation presents two methods for augmenting the MWF, one based on linear constraints and a second based on non-zero weight vector initialization. Both approaches are evaluated under ideal and perturbed conditions. Together the research described in this dissertation increases the utility and robustness of the multistage Wiener filter. The analysis is presented in the context of adaptive array processing, both spatial array processing and space-time adaptive processing for airborne radar. The results, however, are applicable across the entire spectrum of adaptive signal processing applications. / Ph. D.
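The stage-by-stage decomposition at the heart of the MWF can be sketched compactly. The following sample-based version follows the standard forward/backward recursions (a normalized cross-correlation match filter per stage, then scalar Wiener weights); the shapes, the real-valued demo data, and the rank are illustrative assumptions, not the dissertation's test scenarios.

```python
import numpy as np

# Sample-based multistage Wiener filter (MWF), truncated after `rank`
# stages to give the reduced-rank filter discussed above.
def mwf_error(X, d0, rank):
    """X: (N, K) observed snapshots; d0: (K,) desired signal.
    Returns the stage-0 estimation error sequence."""
    d_hist, d, Xi = [d0], d0, X
    for _ in range(rank):
        r = Xi @ d.conj() / d.size        # cross-correlation estimate
        h = r / np.linalg.norm(r)         # stage match filter
        d = h.conj() @ Xi                 # new "desired" signal
        Xi = Xi - np.outer(h, d)          # block this direction out
        d_hist.append(d)
    e = d_hist[-1]                        # backward recursion
    for i in range(rank, 0, -1):
        w = (d_hist[i - 1] @ e.conj()) / (e @ e.conj())
        e = d_hist[i - 1] - w * e
    return e

# Demo: a known steering direction buried in noise.
rng = np.random.default_rng(1)
N, K = 8, 400
s = rng.standard_normal(K)
a = rng.standard_normal(N); a /= np.linalg.norm(a)
X = np.outer(a, s) + 0.1 * rng.standard_normal((N, K))
print("residual power:", np.mean(np.abs(mwf_error(X, s, rank=3)) ** 2))
```

Diagonal loading ("soft stops") and non-zero weight initialization, both studied in the dissertation, would enter this recursion as modifications of the sample correlation estimates and the starting filter, respectively.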
123

In Pursuit of Local Correlation for Reduced-Scaling Electronic Structure Methods in Molecules and Periodic Solids

Clement, Marjory Carolena 05 August 2021
Over the course of the last century, electronic structure theory (or, alternatively, computational quantum chemistry) has grown from being a fledgling field to being a "full partner with experiment" [Goddard Science 1985, 227 (4689), 917--923]. Numerous instances of theory matching experiment to very high accuracy abound, with one excellent example being the high-accuracy ab initio thermochemical data laid out in the 2004 work of Tajti and co-workers [Tajti et al. J. Chem. Phys. 2004, 121, 11599] and another being the heats of formation and molecular structures computed by Feller and co-workers in 2008 [Feller et al. J. Chem. Phys. 2008, 129, 204105]. But as the authors of both studies point out, this very high accuracy comes at a very high cost. In fact, at this point in time, electronic structure theory does not suffer from an accuracy problem (as it did in its early days) but a cost problem; or, perhaps more precisely, it suffers from an accuracy-to-cost ratio problem. We can compute electronic energies to nearly any precision we like, as long as we are willing to pay the associated cost. And just what are these high computational costs? For the purposes of this work, we are primarily concerned with the way in which the computational cost of a given method scales with the system size; for notational purposes, we will often introduce a parameter, N, that is proportional to the system size. In the case of Hartree-Fock, a one-body wavefunction-based method, the scaling is formally N⁴, and post-Hartree-Fock methods fare even worse. The coupled cluster singles, doubles, and perturbative triples method [CCSD(T)], which is frequently referred to as the "gold standard" of quantum chemistry, has an N⁷ scaling, making it inapplicable to many systems of real-world import. If highly accurate correlated wavefunction methods are to be applied to larger systems of interest, it is crucial that we reduce their computational scaling. One very successful means of doing this relies on the fact that electron correlation is fundamentally a local phenomenon, and the recognition of this fact has led to the development of numerous local implementations of conventional many-body methods. One such method, the DLPNO-CCSD(T) method, was successfully used to calculate the energy of the protein crambin [Riplinger, et al. J. Chem. Phys 2013, 139, 134101]. In the following work, we discuss how the local nature of electron correlation can be exploited, both in terms of the occupied orbitals and the unoccupied (or virtual) orbitals. In the case of the former, we highlight some of the historical developments in orbital localization before applying orbital localization robustly to infinite periodic crystalline systems [Clement, et al. 2021, Submitted to J. Chem. Theory Comput.]. In the case of the latter, we discuss a number of different ways in which the virtual space can be compressed before presenting our pioneering work in the area of iteratively-optimized pair natural orbitals ("iPNOs") [Clement, et al. J. Chem. Theory Comput. 2018, 14 (9), 4581--4589]. Concerning the iPNOs, we were able to recover significant accuracy with respect to traditional PNOs (which are unchanged throughout the course of a correlated calculation) at a comparable truncation level, indicating that our improved PNOs are, in fact, an improved representation of the coupled cluster doubles amplitudes. 
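Before quoting the accuracy numbers, it helps to see the PNO construction concretely. The sketch below builds and truncates pair natural orbitals for a single occupied pair from its doubles amplitudes; the amplitudes are random placeholders and the closed-shell-style pair density is one common convention — in an actual calculation the amplitudes come from an MP2 (or iterated coupled cluster) step.

```python
import numpy as np

# Build and truncate pair natural orbitals (PNOs) for one occupied pair
# (i, j): diagonalize the pair density built from the doubles amplitudes
# and keep only natural orbitals whose occupation exceeds tau_PNO.
def pair_natural_orbitals(T_ij, tau_pno):
    """T_ij: (n_virt, n_virt) doubles amplitudes for the pair (i, j)."""
    T_tilde = 2.0 * T_ij - T_ij.T
    D_ij = T_tilde @ T_ij.T + T_tilde.T @ T_ij   # pair density (one convention)
    D_ij = 0.5 * (D_ij + D_ij.T)                 # symmetrize numerically
    occ, V = np.linalg.eigh(D_ij)                # natural occupations/orbitals
    keep = occ > tau_pno
    return V[:, keep], occ[keep]

# Placeholder amplitudes with a few dominant directions plus weak noise.
rng = np.random.default_rng(2)
u, v = rng.standard_normal((40, 3)), rng.standard_normal((40, 3))
T = u @ np.diag([0.1, 0.02, 0.005]) @ v.T + 1e-6 * rng.standard_normal((40, 40))
pnos, occs = pair_natural_orbitals(T, tau_pno=1e-7)
print(f"kept {pnos.shape[1]} of 40 virtual orbitals")
```

Iteratively optimizing the PNOs, as in the iPNO scheme, amounts to refining this truncated basis against the current amplitudes rather than fixing it once at the start.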
For example, when studying the percent errors in the absolute correlation energies of a representative sample of weakly bound dimers chosen from the S66 test suite [Řezáč, et al. J. Chem. Theory Comput. 2011, 7 (8), 2427--2438], we found that our iPNO-CCSD scheme outperformed the standard PNO-CCSD scheme at every truncation threshold (τ<sub>PNO</sub>) studied. Both PNO-based methods were compared to the canonical CCSD method, with the iPNO-CCSD method being, on average, 1.9 times better than the PNO-CCSD method at τ<sub>PNO</sub> = 10⁻⁷ and more than an order of magnitude better for τ<sub>PNO</sub> < 10⁻¹⁰ [Clement, et al. J. Chem. Theory Comput. 2018, 14 (9), 4581--4589]. When our improved PNOs are combined with the PNO-incompleteness correction proposed by Neese and co-workers [Neese, et al. J. Chem. Phys. 2009, 130, 114108; Neese, et al. J. Chem. Phys. 2009, 131, 064103], the results are truly astounding. For a truncation threshold of τ<sub>PNO</sub> = 10⁻⁶, the mean absolute error in binding energy for all 66 dimers from the S66 test set was 3 times smaller when the incompleteness-corrected iPNO-CCSD method was used relative to the incompleteness-corrected PNO-CCSD method [Clement, et al. J. Chem. Theory Comput. 2018, 14 (9), 4581--4589]. In the latter half of this work, we present our implementation of a limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) based Pipek-Mezey Wannier function (PMWF) solver [Clement, et al. 2021, Submitted to J. Chem. Theory Comput.]. Although orbital localization in the context of the linear combination of atomic orbitals (LCAO) representation of periodic crystalline solids is not new [Marzari, et al. Rev. Mod. Phys. 2012, 84 (4), 1419--1475; Jónsson, et al. J. Chem. Theory Comput. 2017, 13 (2), 460--474], to our knowledge, this is the first implementation to be based on a BFGS solver. In addition, we are pleased to report that our novel BFGS-based solver is extremely robust with respect to the initial guess and the size of the history employed, with the final results and the time to solution, as measured in the number of iterations required, being essentially independent of these initial choices. Furthermore, our BFGS-based solver converges much more quickly and consistently than either a steepest ascent (SA) or a non-linear conjugate gradient (CG) based solver, as demonstrated for a number of 1-, 2-, and 3-dimensional systems. Armed with our real, localized Wannier functions, we are now in a position to pursue the application of local implementations of correlated many-body methods to the arena of periodic crystalline solids; a first step toward this goal will, most likely, be the study of PNOs, both conventional and iteratively-optimized, in this context. / Doctor of Philosophy / Increasingly, the study of chemistry is moving from the traditional wet lab to the realm of computers. The physical laws that govern the behavior of chemical systems, along with the corresponding mathematical expressions, have long been known. Rapid growth in computational technology has made solving these equations, at least in an approximate manner, relatively easy for a large number of molecular and solid systems.
That the equations must be solved approximately is an unfortunate fact of life, stemming from the mathematical structure of the equations themselves, and much effort has been poured into developing better and better approximations, each trying to balance an acceptable level of accuracy loss with a realistic level of computational cost and complexity. But though there has been much progress in developing approximate computational chemistry methods, there is still great work to be done. Many chemical systems of real-world import (particularly biomolecules and potential pharmaceuticals) are simply too large to be treated with any methods that consistently deliver acceptable accuracy. As an example of the difficulties that come with trying to apply accurate computational methods to systems of interest, consider the seminal 2013 work of Riplinger and co-workers [Riplinger, et al. J. Chem. Phys. 2013, 139, 134101]. In this paper, they present the results of a calculation performed on the protein crambin. The method used was DLPNO-CCSD(T), an approximation to the "gold standard" computational method CCSD(T). The acronym DLPNO-CCSD(T) stands for "domain-based local pair natural orbital coupled cluster with singles, doubles, and perturbative triples." In essence, this method exploits the fact that electron-electron interactions ("electron correlation") are a short-range phenomenon in order to represent the system in a mathematically more compact way. This focus on the locality of electron correlation is a crucial piece in the effort to bring down computational cost. When talking about computational cost, we will often talk about how the cost scales with the approximate system size N. In the case of CCSD(T), the cost scales as N⁷. To see what this means, consider two chemical systems A and B. If system B is twice as large as system A, then the same calculation run on both systems will take 2⁷ = 128 times longer on system B than on system A. The DLPNO-CCSD(T) method, on the other hand, scales linearly with the system size, provided the system is sufficiently large (we say that it is "asymptotically linearly scaling"), and so, for our example systems A and B, the calculation run on system B should only take twice as long as the calculation run on system A. But despite the favorable scaling afforded by the DLPNO-CCSD(T) method, the time to solution is still prohibitive. In the case of crambin, a relatively small protein with 644 atoms, the calculation took a little over 30 days. Clearly, such timescales are unworkable for the field of biochemical research, where the focus is often on the interactions between multiple proteins or other large biomolecules and where many more data points are required. In the work that follows, we discuss in more detail the genesis of the high costs that are associated with highly accurate computational methods, as well as some of the approximation techniques that have already been employed, with an emphasis on local correlation techniques. We then build off this foundation to discuss our own work and how we have extended such approximation techniques in an attempt to further increase the possible accuracy to cost ratio. In particular, we discuss how iteratively-optimized pair natural orbitals (the PNOs of the DLPNO-CCSD(T) method) can provide a more accurate but also more compact mathematical representation of the system relative to static PNOs [Clement, et al. J. Chem. Theory Comput. 2018, 14 (9), 4581--4589].
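The cost argument in the preceding paragraph can be captured in a couple of lines; the exponents are the formal scalings quoted above.

```python
# Relative cost of doubling the system size for the formal scalings above.
for name, p in [("Hartree-Fock", 4), ("CCSD(T)", 7), ("DLPNO-CCSD(T), asymptotic", 1)]:
    print(f"{name:>26}: doubling the system costs {2**p:>3}x more")
```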
Additionally, we turn our attention to the problem of periodic infinite crystalline systems, a class of materials less commonly studied in the field of computational chemistry, and discuss how the local correlation techniques that have already been applied with great success to molecular systems can potentially be applied in this domain as well [Clement, et al. 2021, Submitted to J. Chem. Theory Comput.].
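As a toy illustration of the solver comparison made in this abstract, the snippet below contrasts L-BFGS with fixed-step gradient descent on a smooth quadratic stand-in for the localization functional; the objective, dimensions, and tolerances are all illustrative assumptions, not the PMWF functional itself.

```python
import numpy as np
from scipy.optimize import minimize

# L-BFGS vs. fixed-step gradient descent on a synthetic quadratic
# objective (minimizing -f is equivalent to maximizing f).
rng = np.random.default_rng(3)
G = rng.standard_normal((20, 20))
A = G @ G.T + 1.0 * np.eye(20)            # well-posed, mildly ill-conditioned
f = lambda x: 0.5 * x @ A @ x - x.sum()
grad = lambda x: A @ x - 1.0

res = minimize(f, np.zeros(20), jac=grad, method="L-BFGS-B")
print("L-BFGS iterations:", res.nit)

x = np.zeros(20)
step = 1.0 / np.linalg.eigvalsh(A).max()  # conservative fixed step
for it in range(1, 100001):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:
        break
    x -= step * g
print("gradient-descent iterations:", it)
```

The curvature information accumulated in the BFGS history is what buys the faster, guess-insensitive convergence reported for the PMWF solver.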
124

Explicitly Correlated Methods for Large Molecular Systems

Pavosevic, Fabijan 02 February 2018
Wave function based electronic structure methods have become a robust and reliable tool for the prediction and interpretation of the results of chemical experiments. However, they suffer from very steep scaling with respect to the size of the system as well as very slow convergence of the correlation energy with respect to the basis set size. These methods are thus limited to small systems of up to a dozen atoms. The first of these issues can be efficiently resolved by exploiting the local nature of electron correlation effects, while the second problem is alleviated by the use of explicitly correlated R12/F12 methods. Since R12/F12 methods are central to this work, we start by reviewing their modern formulation. Next, we present the explicitly correlated second-order Møller-Plesset (MP2-F12) method in which all nontrivial post-mean-field steps are formulated with linear computational complexity in system size [Pavošević et al., J. Chem. Phys. 144, 144109 (2016)]. The two key ideas are the use of pair natural orbitals for compact representation of wave function amplitudes and the use of domain approximation to impose block sparsity. This development utilizes the concepts for sparse representation of tensors described in the context of the DLPNO-MP2 method by Neese, Valeev and co-workers [Pinski et al., J. Chem. Phys. 143, 034108 (2015)]. Novel developments reported here include the use of domains not only for the projected atomic orbitals, but also for the complementary auxiliary basis set (CABS) used to approximate the three- and four-electron integrals of the F12 theory, and a simplification of the standard B intermediate of the F12 theory that avoids computation of four-index two-electron integrals that involve two CABS indices. For quasi-one-dimensional systems (n-alkanes), the O(N) DLPNO-MP2-F12 method becomes less expensive than the conventional O(N⁵) MP2-F12 for n between 10 and 15, for double- and triple-zeta basis sets; for the largest alkane, C₂₀₀H₄₀₂, in the def2-TZVP basis the observed computational complexity is approximately N^1.6, largely due to the cubic cost of computing the mean-field operators. The method reproduces the canonical MP2-F12 energy with high precision: 99.9% of the canonical correlation energy is recovered with the default truncation parameters. Although its cost is significantly higher than that of the DLPNO-MP2 method, the cost increase is compensated by the great reduction of the basis set error due to explicit correlation. We extend this formalism to develop a linear-scaling coupled-cluster singles and doubles method with perturbative inclusion of triples and explicitly correlated geminals [Pavošević et al., J. Chem. Phys. 146, 174108 (2017)]. Even for conservative truncation levels, the method rapidly reaches near-linear complexity in realistic basis sets; e.g., an effective scaling exponent of 1.49 was obtained for n-alkanes with up to 200 carbon atoms in a def2-TZVP basis set. The robustness of the method is benchmarked against a massively parallel implementation of the conventional explicitly correlated coupled-cluster method for a 20-water cluster; the total dissociation energy of the cluster (∼186 kcal/mol) is affected by the reduced-scaling approximations by only ∼0.4 kcal/mol. The reduced-scaling explicitly correlated CCSD(T) method is used to examine the binding energies of several systems in the L7 benchmark data set of noncovalent interactions.
Additionally, we discuss a massively parallel implementation of the Laplace transform perturbative triples correction (T) to the DF-CCSD energy within the density fitting framework. This work is closely related to the work of Scuseria and co-workers [Constans et al., J. Chem. Phys. 113, 10451 (2000)]. The accuracy of the quadrature with respect to the number of quadrature points has been investigated on an 18-water cluster, the uracil dimer, and the pentacene dimer. In the case of the 18-water cluster, μE<sub>h</sub> accuracy is achieved with only 3 quadrature points. For the uracil dimer and pentacene dimer, 6 or more quadrature points are required to achieve μE<sub>h</sub> accuracy; however, binding energies accurate to <1 kcal/mol are obtained with 4 quadrature points. We observe excellent strong scaling behavior on a distributed-memory commodity cluster for the 18-water cluster. Furthermore, the Laplace transform formulation of (T) performs faster than the canonical (T) for the systems studied. The efficiency of the method has furthermore been tested on a DNA base pair, a system with more than one thousand basis functions. Lastly, we discuss an explicitly correlated formalism for the second-order single-particle Green's function method (GF2-F12) that does not assume the popular diagonal approximation and describes the energy dependence of the explicitly correlated terms [Pavošević et al., J. Chem. Phys. 147, 121101 (2017)]. For small and medium organic molecules, the basis set errors of ionization potentials from GF2-F12 are radically improved relative to GF2: the performance of GF2-F12/aug-cc-pVDZ is better than that of GF2/aug-cc-pVQZ, at a significantly lower cost. / Ph. D.
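The idea behind the Laplace-transform (T) formulation is that the orbital-energy denominator 1/Δ can be replaced by a short exponential quadrature, 1/Δ ≈ Σₖ wₖ exp(−Δtₖ), which decouples the coupled indices in the triples energy expression. The sketch below uses a simple trapezoid-style rule on a logarithmic grid to show how the error falls with the number of points; production codes use optimized (minimax) quadratures, so the point counts here are not comparable to those quoted above.

```python
import numpy as np

# Approximate 1/Delta by a short sum of exponentials via a trapezoid-style
# quadrature on a logarithmic grid (substitution t = exp(u), dt = exp(u) du).
def laplace_quadrature(n_points, u_min=-4.0, u_max=4.0):
    u, du = np.linspace(u_min, u_max, n_points, retstep=True)
    t = np.exp(u)
    return t, t * du

delta = np.linspace(1.0, 10.0, 50)        # denominators (orbital-energy gaps)
for n in (3, 6, 12, 24):
    t, w = laplace_quadrature(n)
    approx = np.exp(-np.outer(delta, t)) @ w
    err = np.max(np.abs(approx * delta - 1.0))
    print(f"{n:2d} points: max relative error {err:.1e}")
```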
125

An Efficient Reduced Order Modeling Method for Analyzing Composite Beams Under Aeroelastic Loading

Names, Benjamin Joseph 29 June 2016
Composite materials hold numerous advantages over conventional aircraft-grade metals. These include high stiffness- and strength-to-weight ratios and beneficial stiffness coupling typically used for aeroelastic tailoring. Due to the complexity of modeling composites, designers often select safe, simple geometry and layup schedules for their wing/blade cross-sections. An example of this might be a box beam made up of 4 laminates, all of which are quasi-isotropic. This neglects more complex designs that might yield a more effective solution but require a greater analysis effort. The present work aims to show that the incorporation of complex cross-sections is feasible in the early design process through the use of cross-sectional analysis in conjunction with Timoshenko beam theory. It is important to note that, in general, these cross-sections can be inhomogeneous: made up of any number of different material systems. In addition, these materials can all be anisotropic in nature, and the geometry of the cross-sections can take on any shape. Through this reduced-order modeling scheme, complex structures can be reduced to one-dimensional beams. With this approach, the elastic behavior of the structure can be captured while also allowing for accurate 3D stress and strain recovery. This efficient structural modeling is ideal for the preliminary design optimization of a wing structure. Furthermore, in conjunction with an efficient unsteady aerodynamic model such as the doublet lattice method, the dynamic aeroelastic stability can also be efficiently captured. This work introduces a comprehensively verified, open-source Python API called AeroComBAT (Aeroelastic Composite Beam Analysis Tool). By leveraging cross-sectional analysis, Timoshenko beam theory, and the unsteady doublet lattice method, this package is capable of efficiently conducting linear static structural analysis, normal mode analysis, and dynamic aeroelastic analysis. AeroComBAT can have a significant impact on the design process of a composite structure and would ideally be implemented as part of a design optimization. / Master of Science
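The beam-reduction idea can be illustrated with a single 2-node Timoshenko element: cross-sectional analysis collapses the section into effective stiffnesses (bending EI, shear kGA), after which the structure is a 1D beam. The element below is the textbook formulation with the standard shear-flexibility parameter; the property values are placeholders, and this is not AeroComBAT's actual API.

```python
import numpy as np

# 2-node Timoshenko beam element stiffness for DOFs [w1, theta1, w2, theta2].
# EI: bending stiffness; kGA: shear-corrected shear stiffness; L: length.
def timoshenko_stiffness(EI, kGA, L):
    phi = 12.0 * EI / (kGA * L**2)      # shear flexibility parameter
    c = EI / ((1.0 + phi) * L**3)
    return c * np.array([
        [ 12.0,            6.0 * L,      -12.0,            6.0 * L],
        [ 6.0 * L, (4.0 + phi) * L**2,  -6.0 * L, (2.0 - phi) * L**2],
        [-12.0,           -6.0 * L,       12.0,           -6.0 * L],
        [ 6.0 * L, (2.0 - phi) * L**2,  -6.0 * L, (4.0 + phi) * L**2],
    ])

K = timoshenko_stiffness(EI=2.0e4, kGA=5.0e5, L=0.5)
print("symmetric stiffness matrix:", np.allclose(K, K.T))
```

For an anisotropic composite section, the scalar EI and kGA generalize to a fully coupled 6x6 cross-sectional stiffness matrix, which is exactly what the cross-sectional analysis step computes.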
126

Daily Self-Monitoring During the Winter Holiday Period: A Strategy for Holiday Weight Maintenance in Reduced-Obese Older Adults?

Cornett, Rachel Ann 22 March 2011
Weight management is problematic among Americans, as the number of overweight adults has risen to two-thirds of the population (1). Without the identification of successful approaches to promote weight stability, it is predicted that 86% of American adults will be overweight or obese by 2030 (2). Body-weight influenced diseases, such as diabetes and cardiovascular disease, are now leading causes of death (3). Annually, adult Americans are thought to increase their body weight by 0.5-0.9 kg (4). Of this gain, 52% is believed to occur during the winter holiday period of mid-late November to early January (5). Unfortunately, obesity research specific to this high-risk period is limited. Older adults and weight-reduced individuals are thought to be highly susceptible to significant holiday body weight gains (1, 6). To date, little research has investigated effective interventions that may be used to assist in successful body weight maintenance during the winter holiday period. Therefore, our purpose was to determine if daily self-monitoring of body weight, physical activity, and step counts is a feasible and effective tool to prevent weight gain in older, weight-reduced adults during the winter holiday period. This intervention represents a holiday weight maintenance approach that may be translatable to larger, more diverse populations. / Master of Science
127

Performance Assessment of Operations in the North Atlantic Organized Track System and Chicago O'Hare International Airport Noise Study

Tsikas, Nikolaos 13 August 2016
This thesis consists of two topics. The first topic is a performance assessment study of the flight operations in the North Atlantic Organized Track System (OTS). This study begins with a demand shortfall analysis of demand sets provided by the Federal Aviation Administration (FAA). These sets were used to simulate OTS traffic for a number of scenarios that consider different separation minima. To this end, algorithms were developed to modify the NAT OTS configuration by applying reduced lateral separation between tracks and to estimate the probability that any given flight traversing the Atlantic will use the OTS. The preliminary results showed that the scenario combining the reduced lateral separation minimum (RLatSM) (25 nm) with the reduced longitudinal separation minimum (RLongSM) (8 nm) performed best among the five scenarios simulated. The application of RLatSM also decreased the mean fuel consumption of flights that shift from traversing the OTS to flying random routes. The second topic is a noise study performed for Chicago O'Hare International Airport. The contributions to this topic were threefold: 1) we analyzed data to understand the current operations at ORD airport; 2) we verified the noise contours produced in 2002 by the FAA, the Chicago Department of Aviation (CDA), and the engineering contractors; and 3) we produced noise contours for today's airport activity. / Master of Science
128

Augmented Neural Network Surrogate Models for Polynomial Chaos Expansions and Reduced Order Modeling

Cooper, Rachel Gray 20 May 2021
Mathematical models describing real world processes are becoming increasingly complex to better match the dynamics of the true system. While this is a positive step towards more complete knowledge of our world, numerical evaluations of these models become increasingly computationally expensive, requiring more resources or time to evaluate. This has led to the need for simplified surrogates for these complex mathematical models. A growing surrogate modeling solution is the use of neural networks. Neural networks (NN) are known to generalize an approximation across a diverse dataset and minimize the solution along complex nonlinear boundaries. Additionally, these surrogate models can be found using only incomplete knowledge of the true dynamics. However, NN surrogates often suffer from a lack of interpretability, where the decisions made in the training process are not fully understood, and the roles of individual neurons are not well defined. We present two solutions towards this lack of interpretability. The first focuses on mimicking polynomial chaos (PC) modeling techniques, modifying the structure of a NN to produce polynomial approximations of the underlying dynamics. This methodology allows for an extractable meaning from the network and results in improved accuracy over traditional PC methods. Secondly, we examine the construction of a reduced order modeling scheme using NN autoencoders, guiding the decisions of the training process to better match the real dynamics. This guiding is performed via a physics-informed (PI) penalty; it speeds up training convergence but still performs poorly compared to traditional schemes. / Master of Science / The world is an elaborate system of relationships between diverse processes. To accurately represent these relationships, increasingly complex models are defined to better match what is physically seen. These complex models can lead to issues when trying to use them to predict a realistic outcome, either requiring immensely powerful computers to run the simulations or long amounts of time to present a solution. To fix this, surrogates or approximations to these complex models are used. These surrogate models aim to reduce the resources needed to calculate a solution while remaining as accurate to the more complex model as possible. One way to make these surrogate models is through neural networks. Neural networks try to simulate a brain, making connections between some input and output given to the network. In the case of surrogate modeling, the input is some current state of the true process, and the output is what is seen later from the same system. But much like the human brain, the reasoning behind why choices are made when connecting the inputs and outputs is often largely unknown. Within this paper, we seek to add meaning to neural network surrogate models in two different ways. In the first, we change what each piece in a neural network represents to build large polynomials (e.g., x⁵ + 4x² + 2) to approximate the larger complex system. We show that the building of these polynomials via neural networks performs much better than traditional ways to construct them. For the second, we guide the choices made by the neural network by enforcing restrictions in what connections it can make. We do this by using additional information from the larger system to ensure the connections made focus on the most important information first before trying to match the less important patterns.
This guiding process leads to more information being captured when the surrogate model is compressed into only a few dimensions compared to traditional methods. Additionally, it allows for a faster learning time compared to similar surrogate models without the information.
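For context on the first approach, the snippet below fits the classical non-intrusive polynomial chaos expansion (probabilists' Hermite basis, least squares) that the polynomial-structured network is designed to mimic; the data are synthetic and the degree is an illustrative choice.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Classical non-intrusive polynomial chaos expansion: project noisy model
# outputs onto probabilists' Hermite polynomials of a standard-normal germ.
rng = np.random.default_rng(4)
xi = rng.standard_normal(500)                       # random input samples
y = np.sin(xi) + 0.05 * rng.standard_normal(500)    # noisy "model output"

degree = 5
Phi = hermevander(xi, degree)                       # He_0(xi) ... He_5(xi)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # PCE coefficients

xi_test = np.linspace(-2, 2, 5)
print("PCE error at test points:",
      np.round(hermevander(xi_test, degree) @ coef - np.sin(xi_test), 3))
```

The thesis's PC-mimicking network replaces this fixed least-squares fit with trained layers whose structure still yields extractable polynomial terms.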
129

Local Correlation Approaches and Coupled Cluster Linear Response Theory

McAlexander, Harley R. 15 June 2015
Quantum mechanical methods are becoming increasingly useful and applicable tools to complement and support experiment. Nonetheless, some barriers to further applications of theoretical models still remain. A coupled cluster singles and doubles (CCSD) calculation, a reliable ab initio method, scales approximately as 𝑂(𝑁⁶), where 𝑁 is a measure of the system size. This unfortunately limits the use of such high-accuracy methods to relatively small systems. Coupled cluster property calculations must be used in conjunction with reduced-scaling methods in order to broaden the range of applications to larger systems. In this work, we introduce some of the underlying theory behind such calculations and test the performance of several local correlation techniques for polarizabilities, optical rotations, and excited state properties. In general, when the computational cost is significantly reduced, the necessary accuracy is lost. Polarizabilities are less sensitive to the truncation schemes than optical rotations, and the excitation data are often in agreement with the canonical result only for the first few excited states. Additionally, we present a novel application of equation-of-motion coupled cluster singles and doubles to simulated circularly polarized luminescence spectra of eight chiral ketones. Both the absorption in the ground state and emission from the excited states were examined. Extensive geometry analyses were performed, revealing that structures optimized at the density functional theory level were adequate for the calculation of accurate coupled cluster excitation data. / Ph. D.
130

Cross-Validation of Data-Driven Correction Reduced Order Modeling

Mou, Changhong 03 October 2018
In this thesis, we develop a data-driven correction reduced order model (DDC-ROM) for numerical simulation of fluid flows. The general DDC-ROM involves two stages: (1) we apply ROM filtering (such as ROM projection) to the full order model (FOM) and construct the filtered ROM (F-ROM). (2) We use data-driven modeling to model the nonlinear interactions between resolved and unresolved modes, which solves the F-ROM's closure problem. In the DDC-ROM, a linear or quadratic ansatz is used in the data-driven modeling step. In this thesis, we propose a new cubic ansatz. To get the unknown coefficients in our ansatz, we solve an optimization problem that minimizes the difference between the FOM data and the ansatz. We test the new DDC-ROM in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient. Furthermore, we perform a cross-validation of the DDC-ROM to investigate whether it can be successful in computational settings that are different from the training regime. / M.S. / Practical engineering and scientific problems often require the repeated simulation of unsteady fluid flows. In these applications, the computational cost of high-fidelity full-order models can be prohibitively high. Reduced order models (ROMs) represent efficient alternatives to brute force computational approaches. In this thesis, we propose a data-driven correction ROM (DDC-ROM) in which available data and an optimization problem are used to model the nonlinear interactions between resolved and unresolved modes. In order to test the new DDC-ROM's predictability, we perform its cross-validation for the one-dimensional viscous Burgers equation and different training regimes.
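A minimal sketch of the data-driven correction step described above: given snapshots of the resolved ROM coefficients and the corresponding FOM closure data, the unknown operators of a linear + quadratic + cubic ansatz are obtained by least squares. The dimensions and synthetic data are illustrative assumptions.

```python
import numpy as np

# Fit the operators of a linear + quadratic + cubic closure ansatz,
# tau(a) ~ A a + B (a x a) + C (a x a x a), by least squares against
# FOM-derived closure data.
r, K = 3, 200
rng = np.random.default_rng(5)
a = rng.standard_normal((K, r))        # resolved ROM coefficient snapshots
tau = rng.standard_normal((K, r))      # closure term extracted from FOM data

def features(a):
    """Linear, quadratic, and cubic monomials of the ROM coefficients."""
    quad = np.einsum('ki,kj->kij', a, a).reshape(len(a), -1)
    cubic = np.einsum('ki,kj,kl->kijl', a, a, a).reshape(len(a), -1)
    return np.hstack([a, quad, cubic])

Phi = features(a)                      # (K, r + r^2 + r^3) design matrix
coeffs, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
print("fitted operator entries:", coeffs.shape)   # (r + r^2 + r^3, r)
```

Cross-validation then amounts to evaluating this fitted closure on trajectories from flow regimes outside the training data.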
