21

Irradiation-Induced Composition Patterns and Segregation in Binary Solid Solutions

A theoretical-computational model is developed to study composition patterns and segregation in binary solid solutions under irradiation, motivated by the fact that such composition changes alter a wide range of metallurgical properties of structural alloys used in the nuclear industry. For a binary alloy system, the model is based on a coupled, nonlinear set of reaction-diffusion equations for six defect and atomic species: vacancies, three interstitial dumbbell configurations, and the two alloy elements. Two sets of boundary conditions have been considered: periodic boundary conditions, used to investigate composition patterning in bulk alloys under irradiation, and reaction boundary conditions, used to study radiation-induced segregation at surfaces. Reactions occur either between defects, which is called recombination, or between defects and alloying elements, which results in a change of the interstitial dumbbell type. Long-range diffusion of all species is assumed to occur by vacancy and interstitialcy mechanisms; as such, diffusion of the alloy elements is coupled to the diffusion of vacancies and interstitials. Defect generation is associated with collision cascade events that occur randomly in space and time. Each event changes the local concentration of all species over the mesoscale material volume affected by the cascade. A stiffly stable Gear method has been implemented to solve the reaction-diffusion model numerically; Gear's method is a higher-order implicit linear multistep method, implemented here in predictor-corrector fashion. The resulting model has been tested with a miscible CuAu solid solution. For this alloy, and in the absence of boundaries, steady-state composition patterns with length scales of several nanometers have been observed. The Fourier-space properties of these patterns depend on irradiation-specific control parameters, temperature, and the initial state of the alloy. Linear stability analysis of the set of reaction-diffusion equations confirms the findings of the numerical simulations. In the presence of boundaries, radiation-induced segregation of alloying species has been observed in the boundary layer: enrichment of the faster-diffusing species and depletion of the slower-diffusing species. Radiation-induced segregation has also been found to depend on irradiation-specific control parameters and temperature. The results show that the degree of segregation is spatially non-uniform and hence should be studied in higher dimensions. Proper formulation of the boundary conditions showed that segregation of the alloy elements to the boundary is coupled to the boundary motion. In both the patterning and segregation investigations, the irradiated sample recovers its uniform state with time when irradiation is turned off. The inference drawn from this observation is that in miscible solid solutions, irradiation-induced composition patterning and radiation-induced segregation are not realizable in the absence of irradiation. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2012. / June 22, 2012. / Binary alloys, Composition patterning, Irradiation, Reaction-Diffusion, Segregation, Stiffness / Includes bibliographical references.
/ Anter El Azab, Professor Directing Thesis; Per Arne Rikvold, University Representative; Sachin Shanbhag, Committee Member; Gordon Erlebacher, Committee Member; Tomasz Plewa, Committee Member.
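As a rough illustration of the numerical core described above, the following sketch integrates a toy two-species defect model with periodic boundary conditions using SciPy's "BDF" method, a variable-order implicit multistep (Gear-type) integrator suited to the stiffness that recombination introduces. All species, rate constants, and grid parameters are illustrative placeholders, not the dissertation's actual model.

```python
# Minimal sketch: a stiff two-species reaction-diffusion system solved with a
# Gear-type (BDF) integrator. Species, rates, and grid are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

N, L = 64, 1.0            # grid points, domain length
dx = L / N
Dv, Di = 1e-4, 1e-2       # vacancy/interstitial diffusivities (placeholders)
k_rec = 1e3               # recombination rate constant (placeholder)
K0 = 1.0                  # defect generation rate (placeholder)

def laplacian_periodic(u):
    # Second-order central difference with periodic boundary conditions.
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def rhs(t, y):
    cv, ci = y[:N], y[N:]                    # vacancy and interstitial fields
    recomb = k_rec * cv * ci                 # defect-defect recombination
    dcv = Dv * laplacian_periodic(cv) + K0 - recomb
    dci = Di * laplacian_periodic(ci) + K0 - recomb
    return np.concatenate([dcv, dci])

y0 = np.full(2 * N, 1e-6)                    # small initial defect concentrations
# 'BDF' is SciPy's implicit linear multistep (Gear-type) method; an implicit
# solver is appropriate because the recombination term makes the system stiff.
sol = solve_ivp(rhs, (0.0, 1.0), y0, method="BDF", rtol=1e-6, atol=1e-12)
print(sol.y[:N, -1].mean())                  # mean vacancy concentration at t=1
```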
22

Generalized Procrustes Surface Analysis: A Landmark-Free Approach to Superimposition and Shape Analysis

The tools and techniques used in shape analysis have constantly evolved, but their objective remains fixed: to quantify the differences in shape between two objects in a consistent and meaningful manner. The hand-measurements of calipers and protractors of the past have yielded to laser scanners and landmark-placement software, but the process still involves transforming an object's physical shape into a concise set of numerical data that can be readily analyzed by mathematical means [Rohlf 1993]. In this paper, we present a new method to perform this transformation by taking full advantage of today's high-power computers and high-resolution scanning technology. This method uses surface scans to calculate a shape-difference metric and perform superimposition rather than relying on carefully (and tediously) placed manual landmarks. This is accomplished by building upon and extending the Iterative Closest Point algorithm. We also examine some new ways this data may be used; we can, for example, calculate an averaged surface directly and visualize point-wise shape information over this surface. Finally, we demonstrate the use of this method on a set of primate skulls and compare the results of the new methodology with traditional geometric morphometric analysis. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2013. / October 11, 2013. / GPSA, Heat Maps, ICP, Morphometrics, Procrustes / Includes bibliographical references. / Dennis Slice, Professor Directing Thesis; Peter Beerli, Committee Member; Sachin Shanbhag, Committee Member.
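The superimposition described above builds on the Iterative Closest Point algorithm. Below is a minimal sketch of one generic ICP iteration, nearest-neighbor matching followed by the SVD-based (Kabsch/Procrustes) rigid alignment; the thesis's actual surface-scan pipeline and shape-difference metric are not reproduced here.

```python
# Minimal sketch of one ICP step: match each source point to its nearest
# neighbor on the target, then solve for the optimal rigid transform via SVD.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: returns a rigidly aligned copy of `source` (n x 3)."""
    matches = target[cKDTree(target).query(source)[1]]   # closest target points
    mu_s, mu_t = source.mean(axis=0), matches.mean(axis=0)
    H = (source - mu_s).T @ (matches - mu_t)             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                   # proper rotation, det = +1
    t = mu_t - R @ mu_s
    return source @ R.T + t

# Iterate until alignment stops improving (fixed count here for brevity).
rng = np.random.default_rng(0)
target = rng.normal(size=(500, 3))
source = target @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + 0.1
for _ in range(20):
    source = icp_step(source, target)
```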
23

The Solution of a Burgers' Equation Inverse Problem with Reduced-Order Modeling Proper Orthogonal Decomposition

This thesis presents and evaluates methods for solving the 1D viscous Burgers' partial differential equation with finite difference, finite element, and proper orthogonal decomposition (POD) methods in the context of an optimal control inverse problem. Based on downstream observations, the initial conditions that optimize a lack-of-fit cost functional are reconstructed for a variety of Reynolds numbers. For moderate Reynolds numbers, our POD method proves to be not only fast and accurate but also to have a regularizing effect on the inverse problem. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester, 2009. / May 20, 2009. / Reduced Order Modeling, Proper Orthogonal Decomposition, Inverse Problem, Partial Differential Equations, pde, Optimization, Optimal Control, Fluid Dynamics, Finite Difference, Finite Element / Includes bibliographical references. / Ionel M. Navon, Professor Directing Thesis; Max Gunzburger, Committee Member; Gordon Erlebacher, Committee Member.
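A minimal sketch of the POD construction underlying such a reduced-order model: collect solution snapshots, take an SVD, and keep the modes that capture most of the snapshot energy. The snapshot matrix below is synthetic; in practice its columns would be Burgers' solutions at successive times from a finite difference or finite element run.

```python
# Minimal sketch of building a POD basis from solution snapshots via the SVD.
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Return the leading POD modes capturing `energy` of the snapshot variance.

    snapshots: (n_dof, n_snapshots) array, one solution per column.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]                      # orthonormal basis, n_dof x r

# Synthetic snapshots: a decaying traveling profile sampled at 50 times.
x = np.linspace(0.0, 1.0, 200)
S = np.column_stack([np.exp(-5 * t) / np.cosh(40 * (x - 0.2 - t)) ** 2
                     for t in np.linspace(0.0, 0.5, 50)])
Phi = pod_basis(S)
print(Phi.shape)   # a handful of modes typically capture most of the energy
```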
24

Characterization of Metallocene-Catalyzed Polyethylenes from Rheological Measurements Using a Bayesian Formulation

Long-chain branching strongly affects the rheological properties of polyethylenes. Branching structure - the density of branch points, the branch length, and the locations of the branches - is complicated; therefore, without controlled branching structure it is almost impossible to study the effect of long-chain branching on rheological properties. Single-site catalysts now make it possible to prepare samples in which the molecular weight distribution is relatively narrow and quite reproducible. In addition, a particular type of single-site catalyst, the constrained geometry catalyst, makes it possible to introduce low and well-controlled levels of long-chain branching while keeping the molecular weight distribution narrow. Linear viscoelastic (LVE) properties contain a rich amount of information about the molecular structure of polymers. A computational algorithm that seeks to invert the linear viscoelastic spectrum of single-site metallocene-catalyzed polyethylenes is presented in this work. The algorithm uses a general linear rheological model of branched polymers as its underlying engine and is based on a Bayesian formulation that transforms the inverse problem into a sampling problem. Given experimental rheological data on unknown single-site metallocene-catalyzed polyethylenes, it is able to quantitatively describe the range of values of the weight-averaged molecular weight, MW, and the average branching density, bm, consistent with the data. The algorithm uses a Markov-chain Monte Carlo method to simulate the sampling problem. If and when information about the molecular weight is available through supplementary experiments, such as chromatography or light scattering, it can easily be incorporated into the algorithm, as demonstrated. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Summer Semester, 2011. / June 3, 2011. / Bayesian, Polyethylenes, Metallocene / Includes bibliographical references. / Sachin Shanbhag, Professor Directing Thesis; Anter El-Azab, Committee Member; Peter Beerli, Committee Member.
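A minimal sketch of the Bayesian sampling idea, assuming a stand-in forward model: a Metropolis random walk over (MW, bm) whose likelihood compares forward-model predictions against rheological data. The thesis's actual engine, a general linear rheological model of branched polymers, is not reproduced here; `forward_model` below is a hypothetical placeholder.

```python
# Minimal sketch: Metropolis sampling of (MW, bm) against noisy rheology data.
import numpy as np

rng = np.random.default_rng(1)

def forward_model(mw, bm, omega):
    # Placeholder for the rheological prediction from (MW, branching density).
    return np.log(mw) * omega / (1.0 + bm * omega)

omega = np.logspace(-2, 2, 30)
data = forward_model(100e3, 0.3, omega) + rng.normal(0, 0.05, omega.size)

def log_post(theta):
    mw, bm = theta
    if not (1e3 < mw < 1e7 and 0.0 <= bm < 5.0):   # flat prior on a box
        return -np.inf
    resid = data - forward_model(mw, bm, omega)
    return -0.5 * np.sum(resid**2) / 0.05**2        # Gaussian likelihood

theta = np.array([50e3, 0.1])
chain, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, [5e3, 0.02])       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
print(np.array(chain)[5000:].mean(axis=0))          # posterior mean after burn-in
```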
25

Edge-Weighted Centroidal Voronoi Tessellation Based Algorithms for Image Segmentation

Centroidal Voronoi tessellations (CVTs) are special Voronoi tessellations whose generators are also the centers of mass (centroids) of the Voronoi regions with respect to a given density function. CVT-based algorithms have proved very useful in the context of image processing. However, when dealing with image segmentation problems, classic CVT algorithms are sensitive to noise. In order to overcome this limitation, we develop an edge-weighted centroidal Voronoi tessellation (EWCVT) model by introducing a new energy term related to the boundary length, called the "edge energy." Incorporating the edge energy is equivalent to adding a certain form of compactness constraint in the physical space. With this compactness constraint, we can effectively control the smoothness of the clusters' boundaries. We provide numerical examples to demonstrate the effectiveness, efficiency, flexibility, and robustness of EWCVT. Because of its simplicity and flexibility, we can easily embed other mechanisms within EWCVT to tackle more sophisticated problems. Two models based on EWCVT are developed and discussed. The first is the "local variation and edge-weighted centroidal Voronoi tessellation" (LVEWCVT) model, which encodes information about the local variation of colors. For classic CVTs or their generalizations (like EWCVT), pixels inside a cluster share the same centroid; the set of centroids can therefore be viewed as a piecewise constant function over the computational domain, and the resulting segments have to be roughly homogeneous with respect to their corresponding centroids. Inspired by this observation, we propose to calculate the centroids for each pixel separately and locally. This scheme greatly improves the algorithm's tolerance of within-cluster feature variations. Through extensive numerical examples and quantitative evaluations, we demonstrate the excellent performance of the LVEWCVT method compared with several state-of-the-art algorithms. The LVEWCVT model is especially suitable for the detection of inhomogeneous targets with distinct color distributions and textures. Based on EWCVT, we build another model for "superpixels," which is in fact a "regularization" of highly inhomogeneous images. We call our superpixel algorithm "VCells," an abbreviation of "Voronoi cells." For a wide range of images, VCells is capable of generating roughly uniform sub-regions while nicely preserving local image boundaries. The under-segmentation error is effectively limited in a controllable manner. Moreover, VCells is very efficient: the computational cost is roughly linear in image size with a small constant coefficient. For megapixel-sized images, VCells is able to generate very dense superpixels in a matter of seconds. We demonstrate that VCells outperforms several state-of-the-art algorithms through extensive qualitative and quantitative results on a wide range of complex images. Another important contribution of this work is the "Detecting-Segment-Breaking" (DSB) algorithm, which can be used to guarantee the spatial connectedness of the segments generated by CVT-based algorithms. Since the metric is usually defined on the color space, the segments produced by CVT-based algorithms are not necessarily spatially connected. For some applications this feature is useful and conceptually meaningful, e.g., when foreground objects are not spatially connected; but for other applications, like the superpixel problem, this "good" feature becomes unacceptable. By simple "connected-component extraction" and "relabeling" schemes, DSB successfully overcomes this difficulty. Moreover, the computational cost of DSB is roughly linear in image size with a small constant coefficient. From a theoretical perspective, the idea of EWCVT greatly enriches the methodology of CVTs. (The idea of EWCVT has already been used for variational curve smoothing and reconstruction problems.) On the applications side, this work shows the power of EWCVT for image-segmentation-related problems. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2011. / June 24, 2011. / Image Segmentation, Centroidal Voronoi Tessellation, Clusters, Kmeans, Computer Vision, Superpixels, Inhomogeneity, Edge Detection, Active Contours / Includes bibliographical references. / Xiaoqiang Wang, Professor Directing Dissertation; Xiaoming Wang, University Representative; Max Gunzburger, Committee Member; Janet Peterson, Committee Member; Anter El-Azab, Committee Member.
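A minimal sketch of the EWCVT energy idea, under simplifying assumptions: pixels are assigned by color distance to cluster centroids (the classic CVT term) plus a penalty that grows with local boundary length, approximated here by counting neighbors that would carry a different label. The weights and sweep scheme are stand-ins, not the algorithm analyzed in the dissertation.

```python
# Minimal sketch: CVT clustering in color space with an edge-energy penalty.
import numpy as np

def ewcvt_segment(image, k=3, lam=2.0, sweeps=10, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    labels = rng.integers(0, k, size=(h, w))
    for _ in range(sweeps):
        # Centroid step: mean color of each cluster (as in k-means/CVT);
        # re-seed a centroid at random if its cluster happens to empty.
        cents = np.array([image[labels == j].mean(axis=0) if (labels == j).any()
                          else rng.uniform(size=3) for j in range(k)])
        # Assignment step: color distance plus edge-energy penalty.
        dist = ((image[:, :, None, :] - cents[None, None, :, :]) ** 2).sum(-1)
        for j in range(k):
            disagree = np.zeros((h, w))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                shifted = np.roll(labels, (dy, dx), axis=(0, 1))
                disagree += (shifted != j)       # neighbors that would differ
            dist[:, :, j] += lam * disagree      # longer boundary, higher energy
        labels = dist.argmin(axis=2)
    return labels

# Noisy two-tone test image: the edge term smooths the ragged boundary.
img = np.zeros((64, 64, 3)); img[:, 32:] = 1.0
img += np.random.default_rng(1).normal(0, 0.3, img.shape)
print(np.bincount(ewcvt_segment(img, k=2).ravel()))
```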
26

Parametric Uncertainty Analysis of Uranium Transport Surface Complexation Models

Parametric uncertainty analysis of surface complexation modeling (SCM) has been studied using linear and nonlinear analysis. A computational SCM model was developed by Kohler et al. (1996) to simulate the breakthrough of uranium(VI) in a column of quartz. Calibration of the parameters that describe the reactions involved in the reactive-transport simulation has been found to fit the experimental data well. Further uncertainty analysis has been conducted to determine the predictive capability of these models. It was concluded that nonlinear analysis results in more accurate prediction interval coverage than linear analysis. An assumption made by both linear and nonlinear analysis is that the parameters follow a normal distribution. In a preliminary study using Monte Carlo sampling with a uniform distribution over a known feasible parameter range, the model exhibits no predictive capability. Due to high parameter sensitivity, few realizations reproduce the known data accurately. This results in high confidence in the calibrated parameters but poor understanding of the parametric distributions. This study first calibrates these parameters using a global optimization technique, the multi-start quasi-Newton BFGS method. Second, a Morris method (MOAT) analysis is used to screen parametric sensitivity. It is seen from MOAT that all parameters exhibit nonlinear effects on the simulation. To approximate the simulated behavior of SCM parameters without the assumption of a normal distribution, this study employs a covariance-adaptive Markov chain Monte Carlo (MCMC) algorithm. It is seen from the posterior distributions generated from accepted parameter sets that the parameters do not necessarily follow a normal distribution. Likelihood surfaces confirm the calibration of the models but show that the responses to parameters are complex. This complex surface is due to a nonlinear model and high correlations between parameters. The posterior parameter distributions are then used to find prediction intervals for an experiment not used to calibrate the model. The predictive capability of adaptive MCMC is found to be better than that of linear and nonlinear analysis, showing a better understanding of parametric uncertainty than the previous study. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Spring Semester, 2011. / November 18, 2010. / Groundwater contamination, Hydrology / Includes bibliographical references. / Ming Ye, Professor Directing Thesis; Robert van Engelen, Committee Member; Tomasz Plewa, Committee Member.
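A minimal sketch of Morris (MOAT) screening, assuming a toy stand-in for the surface complexation transport simulator: random one-at-a-time trajectories yield elementary effects whose mean absolute value ranks each parameter's influence.

```python
# Minimal sketch of Morris one-at-a-time (MOAT) sensitivity screening.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy nonlinear response standing in for the reactive-transport simulator.
    return x[0] ** 2 + np.sin(3 * x[1]) + x[0] * x[2]

def morris_screen(model, dim, n_traj=50, delta=0.25):
    mu_star = np.zeros(dim)
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=dim)   # random base point in [0,1)^d
        y = model(x)
        for i in rng.permutation(dim):            # perturb one factor at a time
            x[i] += delta
            y_new = model(x)
            mu_star[i] += abs(y_new - y) / delta  # elementary effect magnitude
            y = y_new
    return mu_star / n_traj                       # mean |EE| per parameter

print(morris_screen(model, dim=3))                # larger value = more influential
```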
27

Reduced Order Modeling of Reactive Transport in a Column Using Proper Orthogonal Decomposition

Estimating parameters for reactive contaminant transport models can be very computationally intensive. Typically this involves solving a forward problem many times, with many degrees of freedom that must be computed each time. We show that reduced order modeling (ROM) by proper orthogonal decomposition (POD) can be used to approximate the solution of the forward model using many fewer degrees of freedom. We provide background on the finite element method and reduced order modeling in one spatial dimension, and apply both methods to a system of linear uncoupled time-dependent equations simulating reactive transport in a column. By comparing the reduced order and finite element approximations, we demonstrate that the reduced model, while having many fewer degrees of freedom to compute, gives a good approximation of the high-dimensional (finite element) model. Our results indicate that one may substitute a reduced model in place of a high-dimensional model to solve the forward problem in parameter estimation with many fewer degrees of freedom. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2011. / November 4, 2011. / column experiment, computational hydrology, parameter estimation, proper orthogonal decomposition, reactive transport, reduced order modeling / Includes bibliographical references. / Janet Peterson, Professor Directing Thesis; Ming Ye, Professor Co-Directing Thesis; Sachin Shanbhag, Committee Member.
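A minimal sketch of the offline/online split behind such a POD reduced-order model: run a full-order linear transport model to collect snapshots, extract a POD basis, project the operator onto it, and time-step the small system. The finite difference operator and sizes below are illustrative, not those of the column-transport model in the thesis.

```python
# Minimal sketch: POD-Galerkin reduction of a linear advection-diffusion model.
import numpy as np

n, dt, nt = 400, 1e-3, 500
x = np.linspace(0, 1, n)
D, v, dx = 1e-2, 1.0, x[1] - x[0]
# Full-order operator: 1D diffusion + advection (central finite differences).
A = (D / dx**2) * (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n)
                   + np.diag(np.ones(n - 1), -1)) \
    - (v / (2 * dx)) * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

# Offline: collect snapshots from the full model, extract a POD basis.
u = np.exp(-200 * (x - 0.2) ** 2)
M = np.linalg.inv(np.eye(n) - dt * A)        # backward Euler step (dense, for clarity)
snaps = []
for _ in range(nt):
    u = M @ u
    snaps.append(u.copy())
Phi = np.linalg.svd(np.array(snaps).T, full_matrices=False)[0][:, :10]

# Online: the same dynamics with 10 degrees of freedom instead of 400.
Ar = Phi.T @ A @ Phi                          # reduced operator
a = Phi.T @ np.exp(-200 * (x - 0.2) ** 2)     # reduced initial condition
Mr = np.linalg.inv(np.eye(10) - dt * Ar)
for _ in range(nt):
    a = Mr @ a
print(np.linalg.norm(Phi @ a - u))            # reduced vs. full final state
```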
28

Assessment of Parametric and Model Uncertainty in Groundwater Modeling

Groundwater systems are open and complex, rendering them prone to multiple conceptual interpretations and mathematical descriptions. When multiple models are acceptable based on available knowledge and data, model uncertainty arises. One way to assess model uncertainty is to postulate several alternative hydrologic models for a site and use model selection criteria to (1) rank these models, (2) eliminate some of them, and/or (3) weight and average the prediction statistics generated by the multiple models based on their model probabilities. This multimodel analysis has led to some debate among hydrogeologists about the merits and demerits of common model selection criteria such as AIC, AICc, BIC, and KIC. This dissertation contributes to the discussion by comparing the abilities of the two common Bayesian criteria (BIC and KIC) theoretically and numerically. The comparison results indicate that, using MCMC results as a reference, KIC yields more accurate approximations of model probability than does BIC. Although KIC reduces asymptotically to BIC, KIC provides consistently more reliable indications of model quality for a range of sample sizes. In the multimodel analysis, the model-averaging predictive uncertainty is a weighted average of the predictive uncertainties of the individual models, so it is important to properly quantify each individual model's predictive uncertainty. Confidence intervals based on regression theory and credible intervals based on Bayesian theory are conceptually different ways to quantify predictive uncertainty, and both are widely used in groundwater modeling. This dissertation explores their differences and similarities theoretically and numerically. The comparison results indicate that, given Gaussian distributed observation errors, for linear or linearized nonlinear models, linear confidence and credible intervals are numerically identical when consistent prior parameter information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence and credible regions based on the approximate likelihood method are used and the intrinsic model nonlinearity is small; but they differ in practice due to numerical difficulties in calculating both confidence and credible intervals. Model error is a more vital issue than the differences between confidence and credible intervals for individual models, suggesting the importance of considering alternative models. Model calibration results are the basis for the model selection criteria to discriminate between models. However, how to incorporate calibration data errors into the calibration process is an unsettled problem. It has been seen that, due to improper use of the error probability structure in the calibration, the model selection criteria can lead to an unrealistic situation in which one model receives an overwhelmingly high averaging weight (even 100%), which cannot be justified by the available data and knowledge. This dissertation finds that the errors reflected in the calibration should include two parts: measurement errors and model errors. To consider the probability structure of the total errors, I propose an iterative calibration method with two stages of parameter estimation. The multimodel analysis based on the estimation results leads to more reasonable averaging weights and better averaging predictive performance, compared to those obtained by considering only measurement errors.
Traditionally, data-worth analyses have relied on a single conceptual-mathematical model with prescribed parameters. Yet this renders model predictions prone to statistical bias and underestimation of uncertainty, and thus affects groundwater management decisions. This dissertation proposes a multimodel approach to optimum data-worth analysis that is based on model averaging within a Bayesian framework. The developed multimodel Bayesian approach to data-worth analysis works well in a real geostatistical problem; in particular, the selection of targets for additional data collection based on the approach is validated against the data actually collected. The last part of the dissertation presents an efficient method of Bayesian uncertainty analysis. While Bayesian analysis is vital for quantifying predictive uncertainty in groundwater modeling, its application has been hindered in multimodel uncertainty analysis by the computational cost of numerous model executions and the difficulty of sampling from the complicated posterior probability density functions of model parameters. This dissertation develops a new method to improve the computational efficiency of Bayesian uncertainty analysis using the sparse-grid method. The developed sparse-grid-based method for Bayesian uncertainty analysis demonstrates superior accuracy and efficiency relative to classic importance sampling and an MCMC sampler when applied to a groundwater flow model. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Spring Semester, 2012. / March 29, 2012. / Bayesian model averaging, Data worth, Model selection criteria, Multimodel analysis, Uncertainty measure / Includes bibliographical references. / Ming Ye, Professor Directing Dissertation; Xufeng Niu, University Representative; Peter Beerli, Committee Member; Gary Curtis, Committee Member; Michael Navon, Committee Member; Tomasz Plewa, Committee Member.
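As a small illustration of how selection criteria feed multimodel averaging, the sketch below converts maximized log-likelihoods, parameter counts, and sample size into BIC values and averaging weights. The numbers are placeholders, and KIC's additional Fisher-information term is not shown.

```python
# Minimal sketch: BIC-based posterior model probabilities for model averaging.
import numpy as np

def bic_weights(log_liks, n_params, n_obs, prior=None):
    log_liks, n_params = np.asarray(log_liks), np.asarray(n_params)
    bic = -2.0 * log_liks + n_params * np.log(n_obs)
    prior = np.ones(len(bic)) / len(bic) if prior is None else np.asarray(prior)
    w = np.exp(-0.5 * (bic - bic.min())) * prior   # shift by min for stability
    return w / w.sum()

# Three alternative conceptual models fit to the same 100 observations.
weights = bic_weights(log_liks=[-120.4, -118.9, -119.5],
                      n_params=[4, 7, 5], n_obs=100)
print(weights)   # averaging weights; predictions are then weight-averaged
```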
29

Integrating Two-Way Interaction Between Fluids and Rigid Bodies in the Real-Time Particle Systems Library

In the last 15 years, video games have become a dominant form of entertainment. The popularity of video games means children are spending more of their free time playing them; usually, the time spent on homework or studying is decreased to allow for the extended time spent on video games. In an effort to address this problem, researchers have begun creating educational video games. Some studies have shown a significant increase in learning from video games and other interactive instruction. Educational games can be used in conjunction with formal educational methods to improve retention among students. To facilitate the creation of games for science education, the RTPS library was created by Ian Johnson to simulate fluid dynamics in real time. This thesis extends the RTPS library to provide more realistic simulations. Rigid body dynamics have been added to the simulation framework, and a two-way coupling between the rigid bodies and fluids has been implemented. Another contribution to the library was the addition of fluid surface rendering, to provide a more realistic-looking simulation. Finally, a Qt interface was added to allow modification of simulation parameters in real time. Performing these simulations in real time requires a significant amount of computational power. Though processing power has seen consistent growth for many years, the demand for higher-performance desktops grew faster than CPUs could satisfy. In 2006, general-purpose graphics processing (GPGPU) was introduced with the CUDA programming language. This new language allowed developers access to an incredible amount of processing power; some researchers reported speed-ups of up to 10 times over a CPU. With this power, one can perform simulations on a desktop computer that were previously only feasible on supercomputers. GPGPU technology is utilized in this thesis to enable real-time simulations. / A Thesis submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Master of Science. / Fall Semester, 2012. / September 4, 2012. / Fluid Dynamics, Fluid Rendering, GPGPU, Physics Simulation, Real-Time, SPH / Includes bibliographical references. / Gordon Erlebacher, Professor Directing Thesis; Tomasz Plewa, Committee Member; Sachin Shanbhag, Committee Member.
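For background, a minimal sketch of the smoothed-particle hydrodynamics (SPH) building blocks underlying such fluid simulations: smoothed density estimation with the standard poly6 kernel and a simple equation of state. This is textbook SPH, not the RTPS library's actual kernels, parameters, or GPU data layout.

```python
# Minimal sketch: SPH density estimation and a simple pressure equation of state.
import numpy as np

h, mass, k_gas, rho0 = 0.1, 0.02, 10.0, 1000.0   # placeholder SPH constants

def poly6(r2):
    # Standard poly6 smoothing kernel (3D normalization), support r < h.
    w = np.maximum(h * h - r2, 0.0)
    return (315.0 / (64.0 * np.pi * h**9)) * w**3

def densities(pos):
    # pos: (n, 3) particle positions; O(n^2) for clarity (real code uses
    # a spatial hash or uniform grid, typically on the GPU).
    r2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    return mass * poly6(r2).sum(axis=1)

pos = np.random.default_rng(0).uniform(0, 0.3, size=(200, 3))
rho = densities(pos)
pressure = k_gas * (rho - rho0)    # simple equation of state
print(rho.mean())
```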
30

Sparse-Grid Methods for Several Types of Stochastic Differential Equations

This work focuses on developing and analyzing novel, efficient sparse-grid algorithms for solving several types of stochastic ordinary/partial differential equations and corresponding inverse problems, such as parameter identification. First, we consider linear parabolic partial differential equations with random diffusion coefficients, forcing term, and initial condition. Error analysis for a stochastic collocation method is carried out in a wider range of situations than in the previous literature, including input data that depend nonlinearly on the random variables and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate the exponential decay of the interpolation error in the probability space for both semi-discrete and fully-discrete solutions. Second, we consider multi-dimensional backward stochastic differential equations (BSDEs) driven by a vector of white noise. A sparse-grid scheme is proposed to discretize the target equation in the multi-dimensional time-space domain. In our scheme, the time discretization is conducted by a multi-step scheme. In the multi-dimensional spatial domain, the conditional mathematical expectations derived from the original equation are approximated using a sparse-grid Gauss-Hermite quadrature rule and adaptive hierarchical sparse-grid interpolation. Error estimates are rigorously proved for the proposed fully-discrete scheme for multi-dimensional BSDEs with certain types of simplified generator functions. Third, we investigate the propagation of input uncertainty through nonlocal diffusion models. Since stochastic local diffusion equations, e.g., heat equations, have already been well studied, we are interested in extending the existing numerical methods to solve nonlocal diffusion problems. In this work, we use the sparse-grid stochastic collocation method to solve nonlocal diffusion equations with colored noise and the Monte Carlo method to solve the ones with white noise. Our numerical experiments show that the existing methods can achieve the desired accuracy in the nonlocal setting. Moreover, in the white noise case, the nonlocal diffusion operator can reduce the variance of the solution because it has a "smoothing" effect on the random field. Finally, the stochastic inverse problem is investigated. We propose a sparse-grid Bayesian algorithm to improve the efficiency of classic Bayesian methods. Using sparse-grid interpolation and integration, we construct a surrogate posterior probability density function (PPDF) and determine an appropriate alternative density that can capture the main features of the true PPDF to improve simulation efficiency in the framework of indirect sampling. By applying this method to a groundwater flow model, we demonstrate its better accuracy when compared to brute-force MCMC simulation results. / A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2012. / June 22, 2012. / Bayesian analysis, inverse problem, nonlocal diffusion, sparse grid, stochastic differential equations, uncertainty quantification / Includes bibliographical references. / Max D. Gunzburger, Professor Directing Dissertation; Xiaoming Wang, University Representative; Janet Peterson, Committee Member; Xiaoqiang Wang, Committee Member; Ming Ye, Committee Member; Clayton Webster, Committee Member; John Burkardt, Committee Member.
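A minimal sketch of the quadrature building block used for the conditional expectations in the BSDE scheme: a one-dimensional Gauss-Hermite rule for E[f(X)] with Gaussian X. The thesis combines such rules with sparse grids in several dimensions; only the 1D ingredient is shown, and the helper name is ours.

```python
# Minimal sketch: Gauss-Hermite quadrature for a Gaussian expectation.
import numpy as np

def gauss_hermite_expectation(f, mu=0.0, sigma=1.0, n=16):
    """E[f(X)] for X ~ N(mu, sigma^2) via n-point Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n)       # nodes/weights for exp(-x^2)
    # Change of variables X = mu + sqrt(2)*sigma*x; the weights carry 1/sqrt(pi).
    return (w * f(mu + np.sqrt(2.0) * sigma * x)).sum() / np.sqrt(np.pi)

# Sanity check: E[X^2] = mu^2 + sigma^2 exactly for polynomial integrands.
print(gauss_hermite_expectation(lambda x: x**2, mu=1.0, sigma=2.0))  # -> 5.0
```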
