311

Batch Sequencing Methods for Computer Experiments

Quan, Aaron 14 November 2014 (has links)
No description available.
312

Optimization of shape rolling processes using finite element analysis and experimental design methodology

Osio, Ignacio G. January 1992 (has links)
No description available.
313

Equivalence of symmetric factorial designs and characterization and ranking of two-level Split-lot designs

Katsaounis, Parthena I. 28 November 2006 (has links)
No description available.
314

Discovering interpretable topics in free-style text: diagnostics, rare topics, and topic supervision

Zheng, Ning 07 January 2008 (has links)
No description available.
315

Bayesian and Semi-Bayesian regression applied to manufacturing wooden products

Tseng, Shih-Hsien 08 January 2008 (has links)
No description available.
316

Experimental Design Optimization and Thermophysical Parameter Estimation of Composite Materials Using Genetic Algorithms

Garcia, Sandrine 30 June 1999 (has links)
Thermophysical characterization of anisotropic composite materials is extremely important in the control of today's fabrication processes and in the prediction of structural failure due to thermal stresses. Accuracy in the estimation of the thermal properties can be improved if the experiments are designed carefully. However, the parametric study typically used for design optimization is tedious and time intensive, while commonly used gradient-based estimation methods show instabilities resulting in nonconvergence when used with models that contain correlated or nearly correlated parameters. The objectives of this research were to develop systematic and reliable methodologies for both Experimental Design Optimization (EDO) used for the determination of thermal properties, and Simultaneous Parameter Estimation (SPE). Because of their advantageous features, Genetic Algorithms (GAs) were investigated as a strategy for both EDO and SPE. The EDO and SPE approaches used involved, respectively, the maximization of an optimality criterion associated with the sensitivity matrix of the unknown parameters and the minimization of the ordinary least squares error. Two versions of a general-purpose genetic-based program were developed: one is designed for the analysis of any EDO/SPE problem for which a mathematical model can be provided, while the other incorporates a control-volume finite difference scheme allowing for the practical analysis of complex problems. The former version was used to illustrate the genetic performance on the optimization of a difficult mathematical test function. Two test cases previously solved in the literature were first analyzed to demonstrate and assess the GA-based EDO/SPE methodology. These problems included the optimization of one- and two-dimensional designs for the estimation at ambient temperature of two and three thermal properties, respectively (effective thermal conductivity parallel and perpendicular to the fiber plane and effective volumetric heat capacity), of anisotropic carbon/epoxy composite materials. The two-dimensional case was further investigated to evaluate the effects of the optimality criterion used for the experimental design on the accuracy of the estimated properties. The general-purpose GA-based program was then successively applied to three advanced studies involving the thermal characterization of carbon/epoxy anisotropic composites. These studies included the SPE of three, seven and nine thermophysical parameters, respectively, with, for the latter case, a two-dimensional EDO over seven experimental key parameters. In two of the three studies, the parameters were defined to represent the dependence of the thermal properties on temperature. Finally, the kinetic characterization of the curing of three thermosetting materials (an epoxy, a polyester and a rubber compound) was accomplished, resulting in the SPE of six kinetic parameters. Overall, the GA method was found to perform extremely well despite the high degree of correlation and low sensitivity of many parameters in all cases studied. This work therefore validates the use of GAs for the thermophysical characterization of anisotropic composite materials. The significance of using such algorithms lies not only in the solution of ill-conditioned problems but also in drastic savings in both experimental cost and time, as they allow for the EDO and SPE of several parameters at once. / Ph. D.
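The core of the approach above is a GA searching over candidate experiment settings for a design that maximizes an optimality criterion built from the sensitivity matrix of the unknown parameters. The sketch below is a hedged, toy illustration of that idea: a small real-coded GA maximizing log det(J'J) of a finite-difference sensitivity matrix for a made-up two-parameter thermal model. The model, parameter names and GA settings are assumptions for illustration, not the dissertation's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, design, times):
    """Toy temperature response: exponential rise controlled by theta."""
    k, c = theta                       # stand-ins for conductivity and heat capacity
    heat_frac, t_end = design          # heating duration (fraction) and total time
    t = times * t_end
    return (1.0 - np.exp(-k * np.minimum(t, heat_frac * t_end))) / c

def d_criterion(design, theta=(1.0, 2.0), n_times=50, eps=1e-4):
    """log det(J'J), with J the finite-difference sensitivity matrix w.r.t. theta."""
    times = np.linspace(0.0, 1.0, n_times)
    base = model(theta, design, times)
    J = np.empty((n_times, len(theta)))
    for j in range(len(theta)):
        th = np.array(theta, dtype=float)
        th[j] += eps
        J[:, j] = (model(th, design, times) - base) / eps
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

def ga_optimize(fitness, bounds, pop_size=30, n_gen=60, mut_scale=0.1):
    """Tiny real-coded GA: binary tournaments, blend crossover, Gaussian mutation."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, len(lo))) * (hi - lo)
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        new = [pop[fit.argmax()].copy()]                  # elitism: keep the best design
        while len(new) < pop_size:
            i, j = rng.choice(pop_size, 2), rng.choice(pop_size, 2)
            p1 = pop[i[fit[i].argmax()]]                  # tournament winners as parents
            p2 = pop[j[fit[j].argmax()]]
            child = 0.5 * (p1 + p2) + rng.normal(0.0, mut_scale * (hi - lo))
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

bounds = np.array([[0.1, 1.0],     # heating duration as a fraction of total time
                   [1.0, 10.0]])   # total experiment time
best_design, best_crit = ga_optimize(d_criterion, bounds)
print("selected design:", best_design, "log det(J'J):", best_crit)
```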
317

Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models

Wu, Sichao 29 August 2017 (has links)
When capturing a real-world, networked system in a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V and V) of such models is an inherent and fundamental challenge. Central to V and V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, as compared with models based on ordinary and partial differential equations (ODEs and PDEs), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging since it requires a broad set of skills ranging from domain expertise to in-depth knowledge of modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of the challenges above, to the best of our knowledge, none of them accomplishes all this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models, with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and are studied theoretically in this dissertation. This includes a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the GDS framework, gives modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust and tested implementations, without requiring detailed expertise in statistics, data management and computing. Even for research teams that have all these skills, GENEUS can significantly increase research productivity. / Ph. D.
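As a rough, illustrative picture of the kind of model-independent UQ/SA loop such a framework automates (not the GENEUS API, whose interfaces are not described here): sample a parameter space with a Latin hypercube design, run the simulator at each point, and summarize output uncertainty together with a crude per-input sensitivity measure. The toy simulator and parameter names below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, bounds):
    """Simple LHS: one stratified, jittered sample per interval in each dimension."""
    d = len(bounds)
    cols = [(rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
            for _ in range(d)]
    X = np.column_stack(cols)
    lo, hi = bounds[:, 0], bounds[:, 1]
    return lo + X * (hi - lo)

def toy_epidemic_simulator(params):
    """Stand-in for an expensive network-based simulator: a noisy attack rate."""
    beta, gamma, seed_frac = params
    r0 = beta / gamma
    attack = max(0.0, 1.0 - np.exp(-r0 * seed_frac * 10))
    return attack + rng.normal(0, 0.01)

bounds = np.array([[0.10, 0.50],    # transmission rate (beta)
                   [0.05, 0.20],    # recovery rate (gamma)
                   [0.001, 0.05]])  # initially infected fraction
X = latin_hypercube(200, bounds)
y = np.array([toy_epidemic_simulator(x) for x in X])

print(f"output mean = {y.mean():.3f}, std = {y.std():.3f}")
# Crude sensitivity ranking: squared correlation of each input with the output.
for name, col in zip(["beta", "gamma", "seed_frac"], X.T):
    print(f"{name:9s} R^2 = {np.corrcoef(col, y)[0, 1] ** 2:.3f}")
```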
318

The generalized inbreeding coefficient and the generalized heterozygosity index in a recurrent selection program

Cain, Rolene LaHayne January 1969 (has links)
Methods of calculating the inbreeding coefficient in a finite population undergoing recurrent selection (self-select-intercross in succeeding generations) were investigated. It was noted that, in a population under selection, the inbreeding coefficient does not provide the experimenter with a measure of the expected degree of variability; instead an index of total heterozygosity is required, and such an index was derived. Formulas necessary to calculate both the inbreeding coefficients and the heterozygosity indexes were derived for the cases: one-locus, two-allele, random selection; k independent loci and random selection; one-locus, two-allele and effective directional selection; and k linked loci with effective directional selection. These formulas involved defining a generalized inbreeding coefficient and a generalized index of homozygosity (or heterozygosity) in terms of vectors whose components reflected the various possible patterns of genes identical by descent at a given stage of the recurrent selection breeding program. Formulas were derived whereby the mean and the variance of the total number of loci homozygous (or heterozygous) by descent or in state may be obtained. The progress of the panmictic index and/or the index of total heterozygosity through at least twenty-five cycles of recurrent selection was observed in computer-simulated populations ranging in size from ten through one hundred, assuming varying recombination probabilities in both the one-locus and the two linked-loci cases and assuming both minimum and maximum inbreeding selection patterns. Tables resulting from these simulated studies could be used to estimate minimum and maximum inbreeding coefficients and/or minimum and maximum heterozygosity indexes in experimental populations for which the initial conditions approximate those assumed in the simulated populations. It was observed that the coefficient of relationship in the source population was extremely important in tracing the progress of the degree of inbreeding and/or total homozygosity, that linkage played a major role in promoting heterozygosity in a recurrent selection system, and that careful intercrossing rather than random mating in alternate generations of the recurrent selection cycle was important in promoting maximum heterozygosity in the selected population. In the simulated populations the effect of small population sizes was observed and, in general, indications were that unless more than five complete recurrent cycles are contemplated, increasing the population size results in only relatively minor increases in panmixia, especially when linked loci are involved in the selected trait and when care is taken to avoid a maximum inbreeding selection pattern. / Ph. D.
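As a small, hedged illustration of the kind of computer-simulated population study described above (not the dissertation's actual program), the sketch below tracks heterozygosity at a single two-allele locus through repeated self-then-intercross cycles in a small population; the population size, cycle count and mating scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_population(n):
    # Genotypes coded as pairs of alleles 0/1; start fully heterozygous.
    return np.stack([np.zeros(n, dtype=int), np.ones(n, dtype=int)], axis=1)

def self_generation(pop):
    # Each individual selfs: each offspring allele is drawn from the parent's pair.
    n = len(pop)
    picks = rng.integers(0, 2, size=(n, 2))
    return np.stack([pop[np.arange(n), picks[:, 0]],
                     pop[np.arange(n), picks[:, 1]]], axis=1)

def intercross_generation(pop):
    # Random intercrossing: two distinct parents contribute one allele each.
    n = len(pop)
    mothers = rng.integers(0, n, size=n)
    fathers = (mothers + rng.integers(1, n, size=n)) % n   # avoid self-fertilization
    return np.stack([pop[mothers, rng.integers(0, 2, size=n)],
                     pop[fathers, rng.integers(0, 2, size=n)]], axis=1)

def heterozygosity(pop):
    return float(np.mean(pop[:, 0] != pop[:, 1]))

pop = make_population(50)
for cycle in range(10):
    pop = intercross_generation(self_generation(pop))
    print(f"cycle {cycle + 1}: heterozygosity = {heterozygosity(pop):.3f}")
```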
319

Computer Experimental Design for Gaussian Process Surrogates

Zhang, Boya 01 September 2020 (has links)
With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, like cosmology, ecology and engineering. However, some computer experiments for complex processes are still computationally demanding. A surrogate model, or emulator, is often employed as a fast substitute for the simulator. Meanwhile, a common challenge in computer experiments and related fields is to efficiently explore the input space using a small number of samples, i.e., the experimental design problem. This dissertation focuses on the design problem under Gaussian process surrogates. The first work demonstrates empirically that space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. A purely random design is shown to be superior to higher-powered alternatives in many cases. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static (one-shot design) and sequential settings. The second contribution is motivated by an agent-based model (ABM) of delta smelt conservation. The ABM is developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. A batch sequential design scheme is proposed, generalizing one-at-a-time variance-based active learning, as a means of keeping multi-core cluster nodes fully engaged with expensive runs. The acquisition strategy is carefully engineered to favor the selection of replicates, which boost statistical and computational efficiencies. Design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream high-fidelity input sensitivity analysis. / Doctor of Philosophy / With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, like cosmology, ecology and engineering. However, some computer experiments for complex processes are still computationally demanding. Thus, a statistical model built upon input-output observations, i.e., a so-called surrogate model or emulator, is needed as a fast substitute for the simulator. Design of experiments, i.e., how to select samples from the input space under budget constraints, is also worth studying. This dissertation focuses on the design problem under Gaussian process (GP) surrogates. The first work demonstrates empirically that commonly used space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static (design points are allocated in one shot) and sequential settings (data are sampled sequentially). The second contribution is motivated by a stochastic computer simulator of delta smelt conservation. This simulator is developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. An innovative batch sequential design method is proposed, generalizing one-at-a-time sequential design to a one-batch-at-a-time scheme with the goal of parallel computing. The criterion for subsequent data acquisition is carefully engineered to favor the selection of replicates, which boost statistical and computational efficiencies. The design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream input sensitivity analysis.
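A minimal sketch of the batch sequential design idea follows, using scikit-learn's GaussianProcessRegressor and a simple maximum-predictive-variance acquisition with a crude batch-diversity heuristic; this is a simplification for illustration, not the replicate-favoring criterion or the implementation developed in the dissertation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def simulator(x):
    """Toy noisy simulator standing in for an expensive stochastic model."""
    return np.sin(8 * x) + rng.normal(0, 0.1 + 0.2 * x, size=np.shape(x))

# Small initial design on [0, 1].
X = np.linspace(0, 1, 8).reshape(-1, 1)
y = simulator(X[:, 0])

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
batch_size = 4

for round_ in range(3):
    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.05),
                                  normalize_y=True).fit(X, y)
    _, sd = gp.predict(candidates, return_std=True)
    # Pick a batch of the most uncertain candidates, spread out greedily.
    chosen = []
    for _ in range(batch_size):
        i = int(np.argmax(sd))
        chosen.append(candidates[i])
        # Crude batch diversity: damp the predictive sd near the chosen point.
        sd = sd * (1 - np.exp(-((candidates[:, 0] - candidates[i, 0]) ** 2) / 0.01))
    X_new = np.array(chosen)
    y_new = simulator(X_new[:, 0])   # in practice these runs could execute in parallel
    X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    print(f"round {round_ + 1}: design size = {len(X)}")
```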
320

Outliers and robust response surface designs

O'Gorman, Mary Ann January 1984 (has links)
A commonly occurring problem in response surface methodology is that of inconsistencies in the response variable. These inconsistencies, or maverick observations, are referred to here as outliers. Many models exist for describing these outliers. Two of these models, the mean shift and the variance inflation outlier models, are employed in this research. Several criteria are developed for determining when the outlying observation is detrimental to the analysis. These criteria all lead to the same condition, which is used to develop statistical tests of the null hypothesis that the outlier is not detrimental to the analysis. These results are extended to the multiple-outlier case for both models. The robustness of response surface designs is also investigated. Robustness to outliers, missing data and errors in control is examined for first-order models. The orthogonal designs with large second moments, such as the 2ᵏ factorial designs, are optimal in all three cases. In the second-order case, robustness to outliers and to missing data is examined. Optimal design parameters are obtained by computer for the central composite, Box-Behnken, hybrid, small composite and equiradial designs. Similar results are seen for robustness to outliers and robustness to missing data. The central composite turns out to be the optimal design type and, of the two economical design types, the small composite is preferred to the hybrid. / Ph. D.
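As a hedged, worked illustration of design robustness to missing data (one of the properties examined above), the snippet below compares det(X'X) for a first-order model on a 2³ factorial design with the determinant after deleting each run in turn; the calculation is illustrative only and is not taken from the dissertation.

```python
import itertools
import numpy as np

# 2^3 factorial in coded units, plus an intercept column (first-order model).
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
X = np.hstack([np.ones((len(runs), 1)), runs])

full_det = np.linalg.det(X.T @ X)
print(f"det(X'X) for the full design: {full_det:.1f}")
for i in range(len(X)):
    Xi = np.delete(X, i, axis=0)                      # design with one run missing
    ratio = np.linalg.det(Xi.T @ Xi) / full_det
    print(f"  run {i + 1} missing: relative det = {ratio:.3f}")
```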
