30 March 2022
Numerical models of the atmosphere discretize space and time and cannot resolve processes smaller than the model resolution. The aggregate effects of these sub-grid scale processes must therefore be parameterized when they are manifest at the resolved grid scale. However, the enhancement of sea-surface fluxes by sub-grid scale wind variations is known to be difficult to parameterize appropriately in a deterministic framework. This limitation can be addressed by introducing stochasticity, explicitly accounting for the randomness in how sea surface wind variability enhances sea-surface fluxes. The robustness of stochastically parameterizing sea surface flux enhancement due to wind speed variability is investigated by applying an established statistical model to coarse-grained output from six different global convection-permitting numerical models and four different geographical regions, to determine whether the fit is sensitive to region, time period, or model type. The sensitivity of the deterministic part of the surface flux parameterization studied is quantified via correlation: different ten-day periods show the highest correlations and thus the least sensitivity, followed by differences between numerical models and differences between geographical regions. Results suggest that the choice of cumulus parameterization employed by a numerical model may contribute to statistical model sensitivity and hence to the portability of the regression fit. The robustness of a Gaussian process fit applied to the stochastic part of the sea-surface flux enhancement parameterization reveals spatial non-stationarity, which provides insight into potential further improvements to the parameterization. Overall, the stochastic parameterization studied is broadly robust, supporting the implementation of such sea surface flux parameterizations in operational weather and climate models.
Results are also used to identify specific methods that may be used to improve the stochastic parameterization.
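The deterministic-plus-stochastic structure described above can be sketched as a mean flux enhancement that depends on the resolved wind speed plus a Gaussian random residual. The functional forms, coefficients, and names below are illustrative assumptions, not the fitted parameterization from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_enhancement(u):
    # Illustrative deterministic part: sub-grid wind variability matters
    # most at light resolved winds, so the enhancement decays with u.
    return 1.0 + 2.0 * np.exp(-u / 5.0)

def stochastic_enhancement(u, rng):
    # Stochastic part: a Gaussian residual whose spread also shrinks
    # with wind speed (an invented form, for illustration only).
    sigma = 0.3 * np.exp(-u / 10.0)
    return mean_enhancement(u) + rng.normal(0.0, sigma, size=np.shape(u))

u = np.linspace(0.5, 20.0, 1000)           # resolved 10 m wind speed (m/s)
one_draw = stochastic_enhancement(u, rng)  # one stochastic realization
```

Repeated draws of `one_draw` would differ, which is the point: the scheme samples the unresolved variability rather than fixing it deterministically.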
03 September 2014
A design engineer seeks the design configuration that produces the most desirable result, especially in designs involving aerodynamics. This thesis presents a way to design the optimum airfoil for a non-lifting, strut-like application. This is achieved by combining the governing laws of aerodynamics with appropriate numerical models to simulate a specified steady flow regime. A robust yet simple parameterization method for representing airfoils, combined with a genetic algorithm, allows the optimization to complete in a timely manner. Performing the optimization across a range of flow fields and for struts in different applications also allows some trends to be deduced, providing valuable knowledge to design engineers.
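The genetic-algorithm loop used in such optimizations can be sketched generically. The `drag` objective below is a hypothetical stand-in for the flow simulation (its minimum is placed at a known point so the sketch is checkable), and the population size and mutation rate are invented, not the thesis's settings.

```python
import random

def drag(params):
    # Stand-in objective: the thesis evaluates drag with a flow solver.
    # Here the optimum is known to be params == [0.1, 0.3, 0.0].
    target = [0.1, 0.3, 0.0]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def genetic_minimize(objective, n_params, pop_size=40, generations=60,
                     mutation=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: pop_size // 4]        # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)     # crossover of two parents
            children.append([(x + y) / 2 + rng.gauss(0, mutation)
                             for x, y in zip(a, b)])
        pop = elite + children
    return min(pop, key=objective)

best = genetic_minimize(drag, n_params=3)
```

With a cheap surrogate objective this converges in milliseconds; the practical cost in the thesis setting is dominated by the flow-solver evaluations inside `objective`.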
Multiscale Spectral-Domain Parameterization for History Matching in Structured and Unstructured Grid Geometries
Bhark, Eric Whittet
August 2011
Reservoir model calibration to production data, also known as history matching, is an essential tool for the prediction of fluid displacement patterns and related decisions concerning reservoir management and field development. The history matching of high-resolution geologic models is, however, known to define an ill-posed inverse problem, such that the solution for geologic heterogeneity is always non-unique and potentially unstable. A common approach to mitigating ill-posedness is to parameterize the estimable geologic model components, imposing a type of regularization that exploits geologic continuity by explicitly or implicitly grouping similar properties while retaining at least the minimum heterogeneity resolution required to reproduce the data. This dissertation develops novel methods of model parameterization within the class of techniques based on a linear transformation. Three principal research contributions are made. First is the development of an adaptive multiscale history matching formulation in the frequency domain using the discrete cosine parameterization. Geologic model calibration is performed by sequential refinement to a spatial scale sufficient to match the data. The approach improves solution non-uniqueness and stability, and further balances model and data resolution as determined by a parameter identifiability metric. Second, a model-independent parameterization based on grid connectivity information is developed as a generalization of the cosine parameterization applicable to generic grid geometries. The parameterization relates the spatial reservoir parameters to the modal shapes or harmonics of the grid on which they are defined, reducing to a Fourier analysis in special cases (i.e., for rectangular grid cells of constant dimensions) and enabling a multiscale calibration of the reservoir model in the spectral domain.
Third, a model-dependent parameterization is developed to combine grid connectivity with prior geologic information within a spectral domain representation. The resulting parameterization is capable of reducing geologic models while imposing prior heterogeneity on the calibrated model using the adaptive multiscale workflow. In addition to methodological developments of the parameterization methods, an important consideration in this dissertation is their applicability to field scale reservoir models with varying levels of prior geologic complexity on par with current industry standards.
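The discrete cosine parameterization described above can be sketched with a separable 2-D DCT: a smooth property field is represented by a small block of low-frequency coefficients, and multiscale refinement retains progressively more modes. The grid size and the synthetic field below are invented for illustration.

```python
import numpy as np

N = 32
n = np.arange(N)
# Orthonormal DCT-II basis: row k is the k-th cosine mode on N grid cells.
B = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(np.arange(N), n + 0.5) / N)
B[0] /= np.sqrt(2.0)

# A smooth synthetic "log-permeability" field on an N x N grid.
xx, yy = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
field = np.sin(2 * np.pi * xx) + 0.5 * np.cos(np.pi * yy)

coeffs = B @ field @ B.T            # separable 2-D cosine transform

def reconstruct(coeffs, k):
    # Invert the transform keeping only the leading k x k modes,
    # i.e., a low-dimensional parameterization of the field.
    kept = np.zeros_like(coeffs)
    kept[:k, :k] = coeffs[:k, :k]
    return B.T @ kept @ B

coarse = reconstruct(coeffs, 4)     # coarse-scale calibration parameters
refined = reconstruct(coeffs, 8)    # one multiscale refinement step

err_coarse = np.sqrt(np.mean((field - coarse) ** 2))
err_refined = np.sqrt(np.mean((field - refined) ** 2))
```

In the adaptive workflow, calibration starts from the coarse block of coefficients and refines only until the production data are matched, which is what regularizes the inverse problem.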
Dawson, Nicholas, Broxton, Patrick, Zeng, Xubin
Snow initialization is crucial for weather and seasonal prediction, but the National Centers for Environmental Prediction (NCEP) operational models have been found to produce too little snow water equivalent, partly because they assume a constant and unrealistically low density for the snowpack. One possible solution is to use the snow density formulation from the Noah land model used in NCEP operational forecast models. While this is better than the constant-density assumption, evaluation of both offline Noah model output and the Noah snow density formulation itself shows that the seasonal evolution of snow density in Noah is still unrealistic. A physically based snow density parameterization is therefore developed; based on measurements from the SNOTEL network over the western United States and Alaska, it performs considerably better than the Noah parameterization, and also better than the snow density schemes used in three other models. This parameterization could easily be implemented in NCEP operational snow initialization. With up to 10 snow layers, it can also be applied to multilayer snowpack initialization or used to estimate snow water equivalent from in situ and airborne snow depth measurements.
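The role a snow density parameterization plays here can be sketched with a simple exponential densification law. All constants below are invented for illustration; the parameterization actually developed in this work also depends on temperature and is not reproduced here.

```python
import math

RHO_WATER = 1000.0  # kg m^-3

def bulk_snow_density(age_days, rho_fresh=100.0, rho_max=400.0, tau=20.0):
    # Illustrative densification: fresh snow compacts toward a seasonal
    # maximum density on an e-folding time scale tau (invented values).
    return rho_max + (rho_fresh - rho_max) * math.exp(-age_days / tau)

def swe_from_depth(depth_m, age_days):
    # Snow water equivalent (m of liquid water) from depth and density:
    # SWE = depth * (rho_snow / rho_water).
    return depth_m * bulk_snow_density(age_days) / RHO_WATER
```

The last function illustrates why density matters for initialization: assuming a constant, too-low density under-converts measured snow depth into water equivalent, which is the bias described above.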
Understanding the genetic basis of complex polygenic traits through Bayesian model selection of multiple genetic models and network modeling of family-based genetic data
Bae, Harold Taehyun
12 March 2016
The global aim of this dissertation is to develop advanced statistical modeling to understand the genetic basis of complex polygenic traits. To achieve this goal, the dissertation focuses on the development of (i) a novel methodology to detect genetic variants with different inheritance patterns, formulated as a Bayesian model selection problem; (ii) integration of genetic and non-genetic data to dissect genotype-phenotype associations using Bayesian networks (BNs) with family-based data; and (iii) an efficient technique to model family-based data in the Bayesian framework. In the first part of my dissertation, I present a coherent Bayesian framework for selecting the most likely of the five genetic models (genotypic, additive, dominant, co-dominant, and recessive) used in genetic association studies. The approach uses a polynomial parameterization of genetic data to fit the five models simultaneously and save computation. I provide a closed-form expression of the marginal likelihood for normally distributed data, and evaluate the performance of the proposed and existing methods through simulated and real genome-wide data sets. The second part presents an integrative analytic approach that uses BNs to represent the complex probabilistic dependency structure among the many variables in family-based data. I propose a parameterization that extends mixed effects regression models to BNs by using random effects as additional nodes of the networks to model between-subjects correlations. I also present simulation studies comparing different model selection metrics for mixed models that can be used for learning BNs from correlated data, and an application of the methodology to real data from a large family-based study. In the third part, I describe an efficient way to account for family structure in Bayesian inference Using Gibbs Sampling (BUGS).
In linear mixed models, the random effects vector has a variance-covariance matrix whose dimension is as large as the sample size, and directly handling this multivariate normal distribution is not computationally feasible in BUGS. I therefore propose a decomposition of the multivariate normal distribution into univariate normal distributions using the singular value decomposition, and present its implementation in BUGS.
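The decomposition can be sketched directly: for a symmetric positive-definite covariance matrix the SVD coincides with the eigendecomposition, so a draw from the multivariate normal is obtained by scaling independent univariate standard normals (the kind a Gibbs sampler handles cheaply) and rotating. The small kinship-like covariance matrix below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small kinship-like covariance matrix (symmetric positive definite).
Sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])

# For a symmetric PSD matrix, SVD == eigendecomposition: Sigma = U S U^T.
U, s, _ = np.linalg.svd(Sigma)

def sample_random_effects(n_draws, rng):
    # Independent univariate standard normals ...
    z = rng.standard_normal((n_draws, len(s)))
    # ... mapped through U sqrt(S), so each row is a draw from N(0, Sigma).
    return z * np.sqrt(s) @ U.T

draws = sample_random_effects(200_000, rng)
empirical_cov = np.cov(draws, rowvar=False)
```

Because `u = U sqrt(S) z` has covariance `U S U^T = Sigma`, the multivariate prior is replaced by independent univariate normal nodes plus a fixed linear map, which is exactly what makes the model tractable in BUGS.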
Sensitivity of Physical Parameterization Schemes to Stochastic Initial Conditions in WRF Tornado Outbreak Simulations
Elmore, Michelle Anne
12 August 2016
A better understanding of the precision of physical parameterizations in NWP models is necessary for improving forecasts of tornadic outbreaks. For this study, WRF simulations of tornadic outbreaks were run using configurations of three microphysics, three convective physics, and two PBL physics schemes. Each configuration was subjected to ten iterations of the stochastic kinetic energy backscatter scheme (SKEBS). The means of the ten perturbation members of each parameterization configuration were bootstrapped for SB CAPE, SB CIN, and 0-3 km SRH to find 95% confidence interval widths at each grid point. Maps of these spreads provided a spatial analysis of the uncertainty. Correlation and cluster analyses were performed to determine how the configurations related spatially and in magnitude. These uncertainties were further bootstrapped to compare the mean of each configuration in boxplots. The effect on the uncertainty produced by each configuration varied according to the diagnostic variable being analyzed.
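The per-grid-point bootstrap step can be sketched as follows: resample the ten ensemble members with replacement, collect the resampled means, and take the width of the central 95% interval. The ten SB CAPE values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_ci_width(members, n_boot=5000, alpha=0.05, rng=rng):
    # Resample the ensemble members with replacement and collect the
    # mean of each resample.
    members = np.asarray(members, dtype=float)
    idx = rng.integers(0, len(members), size=(n_boot, len(members)))
    boot_means = members[idx].mean(axis=1)
    # Width of the central (1 - alpha) percentile interval.
    lo, hi = np.percentile(boot_means, [100 * alpha / 2,
                                        100 * (1 - alpha / 2)])
    return hi - lo

# Ten SKEBS perturbation members of SB CAPE (J/kg) at one grid point
# (values invented for illustration).
cape_members = [2510., 2482., 2630., 2390., 2550.,
                2605., 2470., 2528., 2445., 2580.]
width = bootstrap_ci_width(cape_members)
```

Repeating this at every grid point yields the spread maps described above; a larger width marks a location where the configuration is more sensitive to the stochastic perturbations.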
Parameterization, Pores, and Processes: Simulation and Optimization of Materials for Gas Separations and Storage
Collins, Sean
08 July 2019
This thesis explores the use of computational chemistry to aid in the design of metal-organic frameworks (MOFs) and other materials. A focus is placed on finding exceptional materials for removing CO2 from fossil-fuel-burning power plants, with other avenues such as vehicular methane storage and landfill gas separation explored as well. These applications fall under the umbrella of carbon capture and storage (CCS), which aims to reduce carbon emissions through selective sequestration. We utilize high-throughput screenings, as well as machine-learning-assisted discovery, to identify ideal candidate materials using a holistic approach instead of relying on conventional gas adsorption properties alone. The development of ideal materials for CCS requires all aspects of a material to be considered, which can be time-consuming; a large portion of this work therefore concerns high-throughput or machine-learning-assisted discovery of candidates for CCS applications. The chapters of this thesis are connected by the goal of finding ideal materials for CCS and are arranged primarily in increasing order of methodological complexity, from high-throughput screenings with simpler metrics up to multi-scale machine learning optimization of pressure swing adsorption systems. The work is not presented chronologically, but in the order that best conveys the narrative. High-throughput computational screening was first applied to a set of experimentally realized MOFs for vehicular methane storage, post-combustion carbon capture, and landfill gas separation. Whenever possible, physically motivated figures of merit were used to give a better ranking and consideration of the materials. From this work, we were able to determine realistic performance limits for current MOFs. The work was continued by examining carbon-based materials (primarily carbon nanoscrolls) for post-combustion carbon capture and vehicular methane storage.
The carbon-based materials were found to outperform MOFs; however, further studies are needed to verify the results. Next, we looked at ways to improve the high-throughput screening methodology. One problem area was the charge calculation, which could lead to unrealistic gas adsorption results. Using the split-charge equilibration method, we developed a robust way to calculate partial atomic charges that is more accurate than other fast charge-assignment schemes. This led to gas adsorption properties that more closely mimicked the results obtained from time-consuming, quantum-mechanically derived charges. Simple process optimization was then applied to roughly 3,500 experimental structures. To the best of our knowledge, this is the first time process optimization has been applied to more than tens of materials in a single study. The process optimization evaluated desorption at various pressures and chose the value giving the lowest energetic cost. A material synthesized by our collaborators, IISERP-MOF2, was found to be the single best experimentally realized material for post-combustion carbon capture. What makes this an interesting result is that, by conventional metrics, IISERP-MOF2 does not appear outstanding. Next, functionalized versions of MOFs were tested in a high-throughput manner, and some of those structures were found to outperform IISERP-MOF2. Although high-throughput computational screenings can identify high-performance materials, it would be impossible to test all functionalized versions of even one MOF, let alone all MOFs. Functionalized MOFs are noteworthy because MOFs are highly tuneable through functionalization and can be tailored into ideal materials for a given application. We developed a genetic algorithm which, given a base structure and a target parameter, finds the functionalization that optimizes the parameter while testing only a small fraction of all candidate structures.
In some cases, the CO2 adsorption was found to more than quadruple upon functionalization. A better understanding of how materials perform in a pressure swing adsorption (PSA) system was achieved through multi-scale optimization. Experimentally realized MOFs were tested using atomistic simulations to derive gas adsorption properties. After passing through a few sensible filters, they were screened using macro-scale pressure swing adsorption simulators, which model how gas separation might occur at a power plant. Using another genetic algorithm, the operating conditions of the pressure swing adsorption system were optimized for over 200 materials. To the best of our knowledge, this is the largest number of materials for which process conditions have been optimized. IISERP-MOF2 was found to perform best on many relevant metrics, such as the energetic cost and the amount of CO2 captured. It was also found that conventional metrics could not predict a material's pressure swing adsorption performance.
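The "evaluate desorption at various pressures and keep the cheapest" step can be sketched with a toy isotherm and cost model. The Langmuir constants, the logarithmic vacuum-work expression, and the pressure range below are all invented stand-ins, not the thesis's simulated adsorption data.

```python
import numpy as np

def working_capacity(p_des, p_ads=1.0, q_max=5.0, b=2.0):
    # Toy Langmuir isotherm (mmol/g): uptake q(p) = q_max * b*p / (1 + b*p).
    # Working capacity is the swing between adsorption and desorption uptake.
    q = lambda p: q_max * b * p / (1.0 + b * p)
    return q(p_ads) - q(p_des)

def energy_per_capture(p_des):
    # Toy energetic cost: vacuum work grows as the desorption pressure
    # drops, and is paid per unit of CO2 actually recovered.
    vacuum_work = np.log(1.0 / p_des)   # ~ isothermal compression work
    return vacuum_work / working_capacity(p_des)

pressures = np.linspace(0.01, 0.5, 200)  # candidate desorption pressures (bar)
costs = [energy_per_capture(p) for p in pressures]
best_p = pressures[int(np.argmin(costs))]
```

Even this toy model shows the trade-off driving the optimization: pulling a deeper vacuum recovers more gas but costs more energy per unit captured, so the cheapest operating point sits at an intermediate desorption pressure.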
Niedfeldt, John Clyde
01 September 2016
The spatial response function (SRF) of the backscatter measurements for a radar scatterometer is often used in reconstruction. It has been found that in many cases the SRF can be approximated as a binary function that is 1 inside the -6 dB contour of the SRF and 0 outside, which improves the computation speed of reconstruction. Computing the SRF contour can still be lengthy, but it can be simplified by precomputing and tabulating key SRF contours. The tabular parameterization for many spinning scatterometers, e.g., QuikSCAT, is straightforward. For RapidSCAT, the estimation is more involved than for other radars because of the irregular orbit of its host platform, the International Space Station (ISS). This thesis develops a new process for parameterizing the RapidSCAT slice contours to an accuracy acceptable for reconstruction purposes. First, RapidSCAT SRFs are calculated using XfactorRS3, and -6 dB slice contours are found using matplotlib. Then, a suitable filter is found for reducing noise present in the slice contours due to quantization error and interpolation inaccuracies. Afterwards, a polygon comparison algorithm is used to determine a set of approximation points. With the approximation points selected, the third-order linear approximation is calculated using parameters available in the L1B data files for RapidSCAT. Finally, analysis of the parameterization is performed. Overall, the resulting process parameterizes RapidSCAT slice contours with an average root mean square (RMS) error of roughly 1.5 km, which is acceptable for the slice parameterization application and significantly reduces computation compared to fully computing the SRF.
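The fit-and-evaluate step can be sketched generically: sample a closed contour, add noise standing in for quantization and interpolation error, fit a third-order polynomial, and report the RMS misfit. The contour shape, noise level, and the fit in azimuth angle below are invented for illustration; the actual approximation is built from parameters in the RapidSCAT L1B files.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic -6 dB slice contour: radius (km) vs. azimuth angle, with noise
# standing in for quantization/interpolation error (values invented).
theta = np.linspace(0.0, 2.0 * np.pi, 180)
radius = 12.0 + 1.5 * np.cos(theta) + 0.4 * np.sin(2.0 * theta)
noisy = radius + rng.normal(0.0, 0.3, theta.size)

# Third-order polynomial fit in theta, standing in for the thesis's
# third-order linear approximation.
coeffs = np.polyfit(theta, noisy, deg=3)
fitted = np.polyval(coeffs, theta)

# RMS error of the parameterized contour against the true contour.
rms_km = np.sqrt(np.mean((fitted - radius) ** 2))
```

Evaluating the fitted polynomial is a handful of multiply-adds per contour point, which is why a tabulated or parameterized contour is so much cheaper than recomputing the full SRF.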
18 October 2013
Our limited knowledge of convection and its poor representation in climate models is one of the factors that most hamper our ability to understand and predict the climate system. In this thesis, the dynamics of shallow cumulus convection are probed using large-eddy simulations (LES) and simple models.
Gonzalez Castro, Gabriela, Spares, Robert, Ugail, Hassan, Whiteside, Benjamin R., Sweeney, John