541

Blast Performance Quantification Strategies For Reinforced Masonry Shear Walls With Boundary Elements

El-Hashimy, Tarek January 2019 (has links)
Structural systems have been evolving in terms of material properties and construction techniques, and their levels of protection against hazardous events have been the focus of different studies. For instance, the performance of lateral force resisting systems has been investigated extensively to ensure that such systems provide adequate strength and ductility capacities when subjected to seismic loading. However, with the more than threefold global increase in accidental and deliberate explosion incidents between 2004 and 2012, more studies have focused on the performance of such systems under blast loads and on the different methods used to quantify the inflicted damage. Although both blast and seismic design require structures to sustain a level of ductility to withstand the displacement demands, the distributions of such demands from seismic ground excitation and from blast loading throughout the structural system are completely different. Therefore, a ductile seismic force resisting system may not necessarily be sufficient to resist a blast wave. To address this concern, the North American standards ASCE 59-11 and CSA S850-12 provide response limits that define the different damage states that components may exhibit prior to collapse. Over the past ten years, a new configuration of reinforced masonry (RM) shear walls utilizing boundary elements (BEs) at the vertical edges of the wall has been investigated as an innovative configuration that enhances the wall's in-plane performance. As such, these walls are included in the North American masonry design standards, CSA S304-14 and TMS 402-16, as an alternative means to enhance the ductility of seismic force resisting systems. However, investigations of the out-of-plane performance of such walls are scarce in the literature, which has hindered the blast design standards from providing unique response limits that quantify the different damage states for RM walls with BEs. This dissertation highlights relevant knowledge gaps that may lead to unconservative designs, including (a) the out-of-plane behavior and damage sequence of RM walls with BEs; (b) the influence of the BEs on the wall load-displacement response; and (c) the applicability of the current response limits, originally assigned to conventional RM walls, to assessing RM walls with BEs. Addressing these knowledge gaps is the main motivation behind this dissertation. In this respect, the dissertation reports an experimental program that focuses on the out-of-plane performance of seismically-detailed RM shear walls with BEs that were not designed to withstand blast loads. From the analytical perspective, plastic analyses were carried out taking into account the different mechanisms that the wall may undergo until peak resistance is achieved. This approach was adopted to quantify the resistance function of such walls and to determine the contributions of the BEs and the web to the overall wall resistance. In addition, the experimental results of the tested walls were used to validate a numerical finite element model developed to compare the resistance functions of RM walls with and without BEs. Afterwards, the model was further refined to capture the walls' performance under blast loads.
Pressure-impulse diagrams were generated to assess the capability of the current response limits in quantifying the different damage states for walls with different design parameters. Furthermore, new response limits were proposed to account for the out-of-plane ductility capacities of different wall components. Finally, a comparison between conventional rectangular walls and their counterparts with BEs, using the proposed limits, was conducted in the form of pressure-impulse diagrams to highlight the major differences between the two wall configurations. / Thesis / Doctor of Philosophy (PhD)
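The idea of checking a load point against a response limit can be illustrated with a simple elastic single-degree-of-freedom (SDOF) blast response calculation. The sketch below is not the dissertation's model: the stiffness, mass, displacement limit, and triangular pulse shape are assumed values chosen only to show how a pressure-impulse combination is flagged as exceeding a damage threshold.

```python
def sdof_peak_response(f_peak, impulse, k=2.0e6, m=500.0, dt=1.0e-5):
    """Peak displacement of an elastic SDOF wall model under a triangular blast
    pulse. f_peak is the peak reflected force (pressure times tributary area
    lumped together); k [N/m] and m [kg] are assumed, not measured, values."""
    t_d = 2.0 * impulse / f_peak          # pulse duration from impulse = 0.5 * F * t_d
    u, v, t, u_max = 0.0, 0.0, 0.0, 0.0
    while t < 20.0 * t_d:                 # integrate well past the load duration
        f = f_peak * max(1.0 - t / t_d, 0.0)
        a = (f - k * u) / m               # equation of motion: m*u'' + k*u = F(t)
        v += a * dt                       # semi-implicit Euler time stepping
        u += v * dt
        u_max = max(u_max, abs(u))
        t += dt
    return u_max

# Sweep force-impulse combinations and flag those exceeding a displacement limit
u_allow = 0.02                            # assumed response limit, 20 mm
for f in (1.0e4, 5.0e4, 1.0e5):           # peak force [N]
    for i in (50.0, 200.0, 800.0):        # impulse [N*s]
        exceeded = sdof_peak_response(f, i) > u_allow
        print(f"F={f:.0e} N, I={i:.0f} N*s -> limit exceeded: {exceeded}")
```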
542

Probabilistic Flood Forecast Using Bayesian Methods

Han, Shasha January 2019 (has links)
The number of flood events and the estimated costs of floods have increased dramatically over the past few decades. To reduce the negative impacts of flooding, reliable flood forecasting is essential for early warning and decision making. Although various flood forecasting models and techniques have been developed, the assessment and reduction of uncertainties associated with the forecast remain a challenging task. Therefore, this thesis focuses on the investigation of Bayesian methods for producing probabilistic flood forecasts that accurately quantify predictive uncertainty and enhance forecast performance and reliability. In the thesis, hydrologic uncertainty was quantified by a Bayesian post-processor, the Hydrologic Uncertainty Processor (HUP), and the predictive ability of HUP with different hydrologic models under different flow conditions was investigated. This was followed by an extension of HUP into an ensemble prediction framework, constituting the Bayesian Ensemble Uncertainty Processor (BEUP). The BEUP with bias-corrected ensemble weather inputs was then tested to improve predictive performance. In addition, the effects of input and model type on BEUP were investigated through different combinations of BEUP with deterministic/ensemble weather predictions and lumped/semi-distributed hydrologic models. Results indicate that the Bayesian method is robust for probabilistic flood forecasting with uncertainty assessment. HUP is able to improve the deterministic forecast from the hydrologic model and produces more accurate probabilistic forecasts. Under high flow conditions, a better performing hydrologic model yields better probabilistic forecasts after applying HUP. BEUP can significantly improve the accuracy and reliability of short-range flood forecasts, but the improvement becomes less obvious as lead time increases. The best results for short-range forecasts are obtained by applying both bias correction and BEUP. Results also show that bias correcting each ensemble member of the weather inputs generates better flood forecasts than bias correcting only the ensemble mean. The improvement in BEUP brought by the hydrologic model type is more significant than that brought by the input data type. BEUP with a semi-distributed model is recommended for short-range flood forecasts. / Dissertation / Doctor of Philosophy (PhD) / Flooding is one of the top weather-related hazards and causes serious property damage and loss of life every year worldwide. If the timing and magnitude of a flood event could be accurately predicted in advance, there would be time to prepare, thus reducing its negative impacts. This research focuses on improving flood forecasts through advanced Bayesian techniques. The main objectives are: (1) enhancing the reliability and accuracy of the flood forecasting system; and (2) improving the assessment of predictive uncertainty associated with the flood forecasts. The key contributions include: (1) application of Bayesian forecasting methods in a semi-urban watershed to advance predictive uncertainty quantification; and (2) investigation of the Bayesian forecasting methods with different inputs and models, combined with a bias correction technique, to further improve forecast performance. It is expected that the findings from this research will benefit flood impact mitigation, watershed management and water resources planning.
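As a rough illustration of Bayesian post-processing of a deterministic forecast, the sketch below combines a climatological prior for the true flow with a model forecast treated as a noisy observation of it. It is a simplified stand-in, not HUP itself (which operates in normal-quantile-transform space); the prior mean, variances, and forecast value are assumed numbers.

```python
import numpy as np

def gaussian_post_processor(forecast, prior_mean, prior_var, model_error_var):
    """Minimal Gaussian post-processor: precision-weighted combination of a
    climatological prior and a deterministic model forecast. Illustrative only;
    the error statistics here are assumptions, not calibrated values."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / model_error_var)
    post_mean = post_var * (prior_mean / prior_var + forecast / model_error_var)
    return post_mean, post_var

# Example: climatological flow ~ N(120, 40^2) m^3/s, model forecast 180 m^3/s
# with an assumed forecast error standard deviation of 25 m^3/s
mean, var = gaussian_post_processor(180.0, 120.0, 40.0**2, 25.0**2)
sd = np.sqrt(var)
print(f"posterior mean {mean:.1f} m^3/s, 90% interval "
      f"[{mean - 1.645 * sd:.1f}, {mean + 1.645 * sd:.1f}]")
```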
543

Saltwater-freshwater mixing fluctuation in shallow beach aquifers

Han, Q., Chen, D., Guo, Yakun, Hu, W. 03 April 2018 (has links)
Yes / Field measurements and numerical simulations demonstrate the existence of an upper saline plume in tidally dominated beaches. The effect of tides on the saltwater-freshwater mixing occurring at both the upper saline plume and the lower salt wedge is well understood. However, it is poorly understood whether the tidal driving force acts equally on the mixing behaviours of these two regions and what factors control the fluctuation features of the mixing. In this study, variable-density, saturated-unsaturated, transient groundwater flow and solute transport numerical models are proposed and run for saltwater-freshwater mixing subject to tidal forcing on a sloping beach. A range of values of tidal amplitude, fresh groundwater flux, hydraulic conductivity, beach slope and dispersivity anisotropy is simulated. Based on time-sequential salinity data, the gross mixing features are quantified by computing spatial moments in three different respects, namely the centre point, the length and width, and the volume (or area in a two-dimensional case). The simulated salinity distribution varies significantly at the saltwater-freshwater interfaces. Mixing characteristics of the upper saline plume differ greatly from those in the salt wedge for both the transient and the quasi-steady state. The mixing of the upper saline plume largely inherits the fluctuation characteristics of the sea tide in both the transverse and longitudinal directions when the quasi-steady state is reached. On the other hand, the mixing in the salt wedge is relatively steady and shows little fluctuation. The normalized mixing width and length, the mixing volume and the fluctuation amplitude of the mass centre in the upper saline plume are, in general, one order of magnitude larger than those in the salt wedge region. In the longitudinal direction, tidal amplitude, fresh groundwater flux, hydraulic conductivity and beach slope are significant controls on the fluctuation amplitude. In the transverse direction, tidal amplitude and beach slope are the main controlling parameters. Very small dispersivity anisotropy (e.g., α_L/α_T < 5) can greatly suppress mixing fluctuation in the longitudinal direction. This work underlines the close connection between the sea tides and the upper saline plume with respect to mixing, thereby enhancing understanding of the interplay between tidal oscillations and mixing mechanisms in tidally dominated sloping beach systems. / Shenzhen Key Laboratory for Coastal Ocean Dynamics and Environment (No. ZDSY20130402163735964), National High Technology Research & Development Program of China (No. 2012AA09A409).
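A minimal sketch of the kind of spatial-moment bookkeeping described above is given below, assuming a gridded 2-D salinity field and an arbitrary plume threshold. It illustrates the centre point, length/width scales, and area of a mixing zone; the grid, threshold, and toy salinity patch are invented for demonstration and are not the study's model output.

```python
import numpy as np

def mixing_moments(salinity, x, z, threshold=1.0):
    """Spatial moments of a mixing zone from a gridded salinity field: centre of
    mass, longitudinal/transverse spread, and plume area (illustrative only).
    salinity has shape (len(z), len(x)) on coordinates x (longitudinal), z (vertical)."""
    X, Z = np.meshgrid(x, z)                           # grids of shape (len(z), len(x))
    w = np.where(salinity > threshold, salinity, 0.0)  # salinity-weighted plume cells
    m0 = w.sum()                                       # zeroth moment ("mass")
    xc, zc = (w * X).sum() / m0, (w * Z).sum() / m0    # first moments: centre point
    sx = np.sqrt((w * (X - xc) ** 2).sum() / m0)       # second moment: length scale
    sz = np.sqrt((w * (Z - zc) ** 2).sum() / m0)       # second moment: width scale
    area = (salinity > threshold).sum() * (x[1] - x[0]) * (z[1] - z[0])
    return (xc, zc), (sx, sz), area

# Toy example: a Gaussian salinity patch on a 2-D beach cross-section
x, z = np.linspace(0, 50, 101), np.linspace(-10, 0, 41)
X, Z = np.meshgrid(x, z)
sal = 35.0 * np.exp(-((X - 20) ** 2 / 40.0 + (Z + 3) ** 2 / 4.0))
print(mixing_moments(sal, x, z))
```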
544

Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations

Cheng, Haiyan 03 August 2009 (has links)
Modeling and simulations of large-scale systems are used extensively not only to better understand natural phenomena, but also to predict future events. Accurate model results are critical for design optimization and policy making. They can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions and generates one result without considering uncertainties. It is therefore of great interest to include uncertainty information in the simulation. By "Uncertainty Quantification" we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level about the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist the policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations. New algorithms need to be developed in order to quantify and reduce uncertainties in large-scale simulations. This research explores novel uncertainty quantification and reduction techniques that are suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost of the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources. The apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III), predicting pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and the potential impact on environmental protection policy making. "Uncertainty Reduction" describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages.
By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality. / Ph. D.
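The collocation least-squares idea mentioned above can be sketched in one dimension: evaluate the model at randomly drawn collocation points and fit the polynomial chaos coefficients by least squares. The sketch assumes a single standard-normal input and a toy model; it is not the dissertation's STEM-III implementation, which targets large stiff systems.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def pc_coefficients_lsq(model, order=4, n_samples=200, seed=0):
    """Collocation least-squares estimate of 1-D Hermite polynomial chaos
    coefficients for a model with a single standard-normal input (illustrative)."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samples)        # collocation points in the germ space
    A = hermevander(xi, order)                 # probabilists' Hermite basis He_0..He_order
    y = np.array([model(v) for v in xi])       # model evaluations at the collocation points
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Example: y = exp(0.3*xi); PC mean is coeffs[0], variance comes from higher modes
c = pc_coefficients_lsq(lambda v: math.exp(0.3 * v))
variance = sum(c[k] ** 2 * math.factorial(k) for k in range(1, len(c)))
print("PC mean:", c[0], "PC variance:", variance)
```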
545

Exploring the Stochastic Performance of Metallic Microstructures With Multi-Scale Models

Senthilnathan, Arulmurugan 01 June 2023 (has links)
Titanium-7wt%-Aluminum (Ti-7Al) has been of interest to the aerospace industry owing to its good structural and thermal properties. However, extensive research is still needed to study the structural behavior and determine the material properties of Ti-7Al. The homogenized macro-scale material properties are directly related to the crystallographic structure at the micro-scale. Furthermore, microstructural uncertainties arising from experiments and computational methods propagate to the material properties used for designing aircraft components. Therefore, multi-scale modeling is employed to characterize the microstructural features of Ti-7Al and computationally predict the macro-scale material properties such as Young's modulus and yield strength using machine learning techniques. Investigation of microstructural features across large domains through experiments requires rigorous and tedious sample preparation procedures that often lead to material waste. Therefore, computational microstructure reconstruction methods that predict the large-scale evolution of microstructural topology given the small-scale experimental information are developed to minimize experimental cost and time. However, it is important to verify the synthetic microstructures with respect to the experimental data by characterizing microstructural features such as grain size and grain shape. While the relationship between homogenized material properties and grain sizes of microstructures is well-studied through the Hall-Petch effect, the influences of grain shapes, especially in complex additively manufactured microstructure topologies, are yet to be explored. Therefore, this work addresses the gap in the mathematical quantification of microstructural topology by developing measures for the computational characterization of microstructures. Moreover, the synthesized microstructures are modeled through crystal plasticity simulations to determine the material properties. However, such crystal plasticity simulations require significant computing time. In addition, the inherent uncertainty of experimental data is propagated to the material properties through the synthetic microstructure representations. Therefore, the aforementioned problems are addressed in this work by explicitly quantifying the microstructural topology and predicting the material properties and their variations through the development of surrogate models. Next, this work extends the proposed multi-scale models of microstructure-property relationships to magnetic materials to investigate the ferromagnetic-paramagnetic phase transition. Here, the same Ising model-based multi-scale approach used for microstructure reconstruction is implemented for investigating the ferromagnetic-paramagnetic phase transition of magnetic materials. Previous research on the magnetic phase transition problem has neglected the effects of the long-range interactions between magnetic spins and external magnetic fields. Therefore, this study aims to build a multi-scale modeling environment that can quantify the large-scale interactions between magnetic spins and external fields. / Doctor of Philosophy / Titanium-Aluminum (Ti-Al) alloys are lightweight and temperature-resistant materials with a wide range of applications in aerospace systems. However, there is still a lack of thorough understanding of the microstructural behavior and mechanical performance of Titanium-7wt%-Aluminum (Ti-7Al), a candidate material for jet engine components.
This work investigates the multi-scale mechanical behavior of Ti-7Al by computationally characterizing the micro-scale material features, such as crystallographic texture and grain topology. The small-scale experimental data of Ti-7Al is used to predict the large-scale spatial evolution of the microstructures, while the texture and grain topology are modeled using shape moment invariants. Moreover, the effects of the uncertainties, which may arise from measurement errors and algorithmic randomness, on the microstructural features are quantified through statistical parameters developed based on the shape moment invariants. A data-driven surrogate model is built to predict the homogenized mechanical properties and the associated uncertainty as a function of the microstructural texture and topology. Furthermore, the presented multi-scale modeling technique is applied to explore the ferromagnetic-paramagnetic phase transition of magnetic materials, which causes permanent failure of magneto-mechanical components used in aerospace systems. Accordingly, a computational solution is developed based on an Ising model that considers the long-range spin interactions in the presence of external magnetic fields.
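For orientation, a bare-bones Ising simulation is sketched below: a 2-D lattice with nearest-neighbour coupling and a uniform external field, sampled with the Metropolis algorithm. It deliberately omits the long-range interactions the dissertation is concerned with; lattice size, temperatures, and sweep counts are arbitrary demonstration values.

```python
import numpy as np

def ising_metropolis(n=32, temperature=2.0, field=0.0, sweeps=100, seed=0):
    """Minimal 2-D Ising Metropolis simulation with nearest-neighbour coupling and
    a uniform external field; returns the mean magnetisation (illustrative only)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(n, n))
    beta = 1.0 / temperature                     # units with J = k_B = 1
    for _ in range(sweeps):
        for _ in range(n * n):                   # one sweep = n*n attempted flips
            i, j = rng.integers(n), rng.integers(n)
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                  + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            dE = 2.0 * spins[i, j] * (nb + field)    # energy change of flipping spin (i, j)
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return spins.mean()

# Magnetisation is near 1 well below the zero-field critical temperature
# (T_c ~ 2.27 in these units) and collapses towards 0 above it
for T in (1.5, 2.27, 3.5):
    print(T, abs(ising_metropolis(temperature=T)))
```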
546

Contributions to Efficient Statistical Modeling of Complex Data with Temporal Structures

Hu, Zhihao 03 March 2022 (has links)
This dissertation will focus on three research projects: neighborhood vector autoregression in multivariate time series, uncertainty quantification for agent-based models of networked anagram games, and a scalable algorithm for multi-class classification. The first project studies the modeling of multivariate time series, with applications in the environmental sciences and other areas. In this work, a so-called neighborhood vector autoregression (NVAR) model is proposed to efficiently analyze large-dimensional multivariate time series. The time series are assumed to have underlying distances among them based on the inherent setting of the problem. When this distance matrix is available or can be obtained, the proposed NVAR method is demonstrated to provide a computationally efficient and theoretically sound estimation of the model parameters. The performance of the proposed method is compared with other existing approaches in both simulation studies and a real application to a stream nitrogen study. The second project focuses on the study of group anagram games. In a group anagram game, players are provided letters to form as many words as possible. In this work, enhanced agent behavior models for networked group anagram games are built, exercised, and evaluated under an uncertainty quantification framework. Specifically, the game data for players are clustered based on their skill levels (forming words, requesting letters, and replying to requests), multinomial logistic regressions for the transition probabilities are performed, and the uncertainty is quantified within each cluster. The result of this process is a model where players are assigned different numbers of neighbors and different skill levels in the game. Simulations of ego agents with neighbors are conducted to demonstrate the efficacy of the proposed methods. The third project aims to develop efficient and scalable algorithms for multi-class classification that achieve a balance between prediction accuracy and computing efficiency, especially in high dimensional settings. Traditional multinomial logistic regression becomes slow in high dimensional settings where the number of classes (M) and the number of features (p) are large. Our algorithms are computationally efficient and scalable to data with even higher dimensions. The simulation and case study results demonstrate that our algorithms have a substantial advantage over traditional multinomial logistic regression while maintaining comparable prediction performance. / Doctor of Philosophy / In many data-centric applications, data often have complex structures involving temporal dependence and high dimensionality. Modeling of complex data with temporal structures has attracted great attention in many applications such as environmental sciences, network science, data mining, neuroscience, and economics. However, modeling such complex data is quite challenging due to the large uncertainty and high dimensionality of the data. This dissertation focuses on modeling and prediction of complex data with temporal structures. Three different types of complex data are modeled: the nitrogen levels of multiple streams are modeled jointly, human actions in networked group anagram games are modeled with quantified uncertainty, and data with multiple labels are classified. Different models are proposed, and they are demonstrated to be efficient through simulation and case studies.
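One plausible reading of the neighborhood-restricted autoregression idea is sketched below: each series is regressed only on the lagged values of the series within a given radius in the distance matrix. This is an illustrative interpretation, not the thesis code; the toy data, radius, and lag are assumptions.

```python
import numpy as np

def nvar_fit(Y, dist, radius, lag=1):
    """Neighbourhood-restricted VAR sketch: per-series least-squares regression on
    lagged neighbour values. Y: (T, k) time series; dist: (k, k) distance matrix."""
    T, k = Y.shape
    coeffs = []
    for i in range(k):
        nbrs = np.where(dist[i] <= radius)[0]          # neighbourhood of series i (incl. itself)
        X = np.column_stack([np.ones(T - lag), Y[:-lag][:, nbrs]])  # intercept + lagged neighbours
        y = Y[lag:, i]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        coeffs.append((nbrs, beta))
    return coeffs

# Toy example: 5 series on a line, each driven by itself and its left neighbour
rng = np.random.default_rng(0)
k, T = 5, 400
dist = np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = 0.4 * Y[t - 1] + 0.2 * np.roll(Y[t - 1], 1) + rng.standard_normal(k)
print(nvar_fit(Y, dist, radius=1)[2])   # neighbours and fitted coefficients for series 2
```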
547

Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device

Freeman, Jacob Andrew 07 September 2012 (has links)
One key outcome of this research is the design of a 3-D tractor-trailer base-drag reduction device for which a 41% reduction in wind-averaged drag coefficient is predicted at 57 mph (92 km/h), and which is relatively insensitive to uncertain wind speed and direction and to uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study. To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°. The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post-processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, an evolutionary algorithm (EA) and dividing rectangles (DIRECT), as well as twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods are more efficient than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA. Because the S-A model does well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study, which includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post-processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to using a relatively coarse (six-million-cell) grid. That is, the drag coefficient estimate for the best design lies within +15% and -42% of the true value; however, its improvement relative to the no-flaps baseline is accurate within 3-9% uncertainty. / Ph. D.
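For readers unfamiliar with the wind-averaged drag coefficient, the sketch below averages a yaw-dependent drag coefficient over uniformly distributed wind directions at a fixed mean wind speed, weighting by the square of the resultant air speed. This is one common convention rather than the exact procedure used in the dissertation; the yaw-sensitivity curve and wind statistics are hypothetical.

```python
import numpy as np

def wind_averaged_cd(cd_of_yaw, v_vehicle=25.5, v_wind=3.1, n_dir=360):
    """Wind-averaged drag coefficient sketch: average C_d over wind directions,
    weighted by resultant dynamic pressure relative to the vehicle speed.
    v_vehicle ~ 57 mph in m/s; v_wind is an assumed mean wind speed."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_dir, endpoint=False)  # wind direction
    vx = v_vehicle + v_wind * np.cos(theta)       # along-track air speed component
    vy = v_wind * np.sin(theta)                   # cross-track component
    v_res = np.hypot(vx, vy)                      # resultant air speed
    psi = np.degrees(np.arctan2(vy, vx))          # yaw angle seen by the vehicle
    cd = np.array([cd_of_yaw(p) for p in psi])
    return np.sum(cd * v_res**2) / (n_dir * v_vehicle**2)

# Hypothetical yaw sensitivity: drag grows quadratically with yaw angle
print(wind_averaged_cd(lambda psi: 0.60 + 0.0004 * psi**2))
```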
548

Advanced Sampling Methods for Solving Large-Scale Inverse Problems

Attia, Ahmed Mohamed Mohamed 19 September 2016 (has links)
Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms. The new algorithm, named the "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. The proposed family of algorithms is therefore extended with computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions to the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation. In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable.
In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. The Gaussian prior assumption in the original HMC filter is then relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly-extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to enable object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, so as to offer maximum flexibility in configuring data assimilation studies. / Ph. D.
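The HMC building block underlying these filters and smoothers can be sketched in a few lines: resample a Gaussian momentum, integrate Hamiltonian dynamics with a leapfrog scheme, and accept or reject with a Metropolis test. The sketch below samples a simple 2-D Gaussian posterior and assumes a unit mass matrix and fixed step size; it is not the dissertation's filter or the DATeS implementation.

```python
import numpy as np

def hmc_sample(log_post, grad_log_post, x0, n_samples=1000, eps=0.1, n_leap=20, seed=0):
    """Plain Hamiltonian Monte Carlo with a leapfrog integrator and unit mass
    matrix (illustrative building block, not a data assimilation filter)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                 # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_post(x_new)        # initial half step in momentum
        for _ in range(n_leap):
            x_new += eps * p_new                         # full step in position
            p_new += eps * grad_log_post(x_new)          # full step in momentum
        p_new -= 0.5 * eps * grad_log_post(x_new)        # trim the last update to a half step
        # Metropolis accept/reject on the Hamiltonian (negative log posterior + kinetic energy)
        h_old = -log_post(x) + 0.5 * p @ p
        h_new = -log_post(x_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Example: sample a 2-D standard Gaussian posterior
draws = hmc_sample(lambda x: -0.5 * x @ x, lambda x: -x, np.zeros(2))
print(draws.mean(axis=0), draws.std(axis=0))
```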
549

Nanocomposite Dispersion: Quantifying the Structure-Function Relationship

Gibbons, Luke J. 04 November 2011 (has links)
The dispersion quality of nanoinclusions within a matrix material is often overlooked when relating the effect of nanoscale structures on functional performance and processing/property relationships for nanocomposite materials. This is due in part to the difficulty of visualizing the nanoinclusions and to the ambiguity in the description of dispersion. Understanding the relationships between the composition of the nanofiller, the matrix chemistry, the processing procedures and the resulting dispersion is a necessary step toward tailoring the physical properties. A method is presented that incorporates high-contrast imaging, an emerging scanning electron microscopy technique for visualizing conductive nanofillers deep within insulating materials, with various image processing procedures to allow for the quantification and validation of dispersion parameters. This method makes it possible to quantify the dispersion of various single wall carbon nanotube (SWCNT)-polymer composites as a function of processing conditions, SWCNT composition and polymer matrix chemistry. Furthermore, the methodology is utilized to show that SWCNT dispersion exhibits fractal-like behavior, thus allowing for simplified quantitative dispersion analysis. The dispersion analysis methodology is corroborated through comparison with results from small-angle neutron scattering dispersion analysis. Additionally, the material property improvements of SWCNT nanocomposites are linked to the dispersion state of the nanostructure, allowing for correlation between dispersion techniques, the quantified dispersion of SWCNT at the microscopic scale and the material properties measured at the macroscopic scale. / Ph. D.
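One simple way to probe fractal-like behavior in a binary dispersion image is a box-counting estimate of fractal dimension, sketched below. It is offered as a generic proxy for the kind of analysis described above, not as the dissertation's image-processing pipeline; the random test image and box sizes are assumptions.

```python
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Box-counting fractal dimension estimate for a binary image (True/1 marks
    nanotube pixels); illustrative only."""
    n = min(binary_img.shape)
    img = binary_img[:n, :n]                       # crop to a square region
    counts = []
    for s in box_sizes:
        m = n // s
        trimmed = img[:m * s, :m * s]
        # count boxes of side s that contain at least one filled pixel
        boxes = trimmed.reshape(m, s, m, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log N(s) versus log(1/s) approximates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example on random "dispersed" pixels; a real analysis would use SEM image masks
rng = np.random.default_rng(1)
img = rng.random((256, 256)) < 0.02
print(box_counting_dimension(img))
```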
550

Multiscale Modeling and Uncertainty Quantification of Multiphase Flow and Mass Transfer Processes

Donato, Adam Armido 10 January 2015 (has links)
Most engineering systems have some degree of uncertainty in their input and operating parameters. The interaction of these parameters leads to the uncertain nature of the system performance and outputs. In order to quantify this uncertainty in a computational model, it is necessary to include the full range of uncertainty in the model. Currently, there are two major technical barriers to achieving this: (1) in many situations, particularly those involving multiscale phenomena, the stochastic nature of input parameters is not well defined and is usually approximated by limited experimental data or heuristics; (2) incorporating the full range of uncertainty across all uncertain input and operating parameters via conventional techniques often results in an inordinate number of computational scenarios to be performed, thereby limiting uncertainty analysis to simple or approximate computational models. The first barrier is addressed by combining molecular and macroscale modeling, where the molecular modeling is used to quantify the stochastic distribution of parameters that are typically approximated. Specifically, an adsorption separation process is used to demonstrate this computational technique. In this demonstration, stochastic molecular modeling results are validated against a diverse range of experimental data sets. The stochastic molecular-level results are then shown to have a significant effect on the macro-scale performance of adsorption systems. The second portion of this research is focused on reducing the computational burden of performing an uncertainty analysis on practical engineering systems. The state of the art for uncertainty analysis relies on the construction of a meta-model (also known as a surrogate model or reduced order model), which can then be sampled stochastically at a relatively minimal computational burden. Unfortunately, these meta-models can be very computationally expensive to construct, and the complexity of construction can scale exponentially with the number of relevant uncertain input parameters. In an effort to dramatically reduce this effort, a novel methodology, QUICKER (Quantifying Uncertainty In Computational Knowledge Engineering Rapidly), has been developed. Instead of building a meta-model, QUICKER focuses exclusively on the output distributions, which are always one-dimensional. By focusing on one-dimensional distributions instead of the multiple dimensions analyzed via meta-models, QUICKER is able to handle systems with far more uncertain inputs. / Ph. D.
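For contrast with the meta-model and QUICKER approaches discussed above, the sketch below shows conventional sampling-based propagation: draw uncertain inputs, run the model, and summarize the one-dimensional output distribution. It is explicitly not the QUICKER algorithm; the toy adsorption-like model and input distributions are hypothetical.

```python
import numpy as np

def output_distribution(model, input_dists, n_samples=500, seed=0):
    """Baseline sampling-based UQ: propagate uncertain inputs through a model and
    summarise the one-dimensional output distribution (illustrative only)."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([d(rng, n_samples) for d in input_dists])  # sample each input
    y = np.array([model(x) for x in X])                            # propagate through the model
    return {"mean": y.mean(), "std": y.std(ddof=1),
            "p05": np.percentile(y, 5), "p95": np.percentile(y, 95)}

# Hypothetical adsorption-like model with two uncertain parameters
inputs = [lambda r, n: r.normal(1.0, 0.1, n),      # assumed uncertain capacity factor
          lambda r, n: r.uniform(0.2, 0.4, n)]     # assumed uncertain mass-transfer coefficient
print(output_distribution(lambda x: x[0] * (1 - np.exp(-x[1] * 10.0)), inputs))
```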
