11

"Almost Like Swimming Upstream": A Mixed Methods Investigation of Body Image and Disordered Eating in Black Military Women

Gaines, April Barnes January 2020 (has links)
No description available.
12

Adaptive Design for Global Fit of Non-stationary Surfaces

Frazier, Marian L. 03 September 2013 (has links)
No description available.
13

Sequential Design of Computer Experiments for Robust Parameter Design

Lehman, Jeffrey S. 11 September 2002 (has links)
No description available.
14

Precision Aggregated Local Models

Edwards, Adam Michael 28 January 2021 (has links)
Large-scale Gaussian process (GP) regression is infeasible for larger data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, e.g., partitioning into sub-problems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than an ordinary GP. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus pathologically good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here I propose putting LAGP-like methods into a local-experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what I call precision aggregated local models (PALM). Using N_C LAGPs, each selecting n from N data pairs, I illustrate a scheme that is at most cubic in n, quadratic in N_C, and linear in N, drastically reducing computational and storage demands. Extensive empirical illustration shows that PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictive surfaces. Finally, I propose a sequential updating scheme which greedily refines a PALM predictor up to a computational budget, and several variations on the basic PALM that may provide predictive improvements. / Doctor of Philosophy / Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called "non-parametric" regression that is agnostic to the function that connects them. Gaussian processes (GPs) are a popular method of non-parametric regression used for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on "divide and conquer" style schemes that spread the data out across multiple smaller GP models. While these models make GP methods much more accessible to large data sets, they do so at the expense of either local predictive accuracy or global surface continuity. Precision Aggregated Local Models (PALM) is a novel divide-and-conquer method for GP models that is scalable for large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
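To make the aggregation idea concrete, here is a minimal Python sketch, not the dissertation's implementation: one small GP is fit on a fixed-size neighborhood around each of several centers, and their predictions at a new input are blended with precision (inverse predictive variance) weights. The lengthscale, nugget, neighborhood size, and toy data are all illustrative assumptions.

```python
import numpy as np

def gp_fit_predict(Xn, yn, xstar, lengthscale=0.1, nugget=1e-4):
    """Exact GP predictive mean and variance at xstar from a small local design."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * lengthscale ** 2))
    K = k(Xn, Xn) + nugget * np.eye(len(Xn))
    Ks = k(Xn, xstar[None, :])
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yn))
    v = np.linalg.solve(L, Ks)
    mean = (Ks.T @ alpha).item()
    var = 1.0 + nugget - (v.T @ v).item()
    return mean, max(var, 1e-12)

def palm_like_predict(X, y, xstar, centers, n_local=100):
    """Blend one local GP per center using precision (1/variance) weights."""
    means, precisions = [], []
    for c in centers:
        idx = np.argsort(((X - c) ** 2).sum(1))[:n_local]   # n nearest points to this center
        m, s2 = gp_fit_predict(X[idx], y[idx], xstar)
        means.append(m)
        precisions.append(1.0 / s2)
    w = np.array(precisions) / np.sum(precisions)            # weights sum to one
    return float(w @ np.array(means))

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 1))
y = np.sin(10 * X[:, 0]) + 0.01 * rng.standard_normal(1000)
centers = np.linspace(0.05, 0.95, 10)[:, None]
print(palm_like_predict(X, y, np.array([0.5]), centers))     # close to sin(5) ~ -0.96
```

Each local fit only factorizes an n x n covariance matrix, so per-prediction cost is cubic in the neighborhood size n rather than in the full data size N, which mirrors the scaling argument in the abstract.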
15

Computer Experimental Design for Gaussian Process Surrogates

Zhang, Boya 01 September 2020 (has links)
With the rapid development of computing power, computer experiments have gained popularity in various scientific fields such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. A surrogate model, or emulator, is often employed as a fast substitute for the simulator. Meanwhile, a common challenge in computer experiments and related fields is to efficiently explore the input space using a small number of samples, i.e., the experimental design problem. This dissertation focuses on the design problem under Gaussian process surrogates. The first work demonstrates empirically that space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. A purely random design is shown to be superior to higher-powered alternatives in many cases. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static (one-shot design) and sequential settings. The second contribution is motivated by an agent-based model (ABM) of delta smelt conservation. The ABM is developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. A batch sequential design scheme is proposed, generalizing one-at-a-time variance-based active learning, as a means of keeping multi-core cluster nodes fully engaged with expensive runs. The acquisition strategy is carefully engineered to favor selection of replicates which boost statistical and computational efficiencies. Design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and downstream high-fidelity input sensitivity analysis. / Doctor of Philosophy / With the rapid development of computing power, computer experiments have gained popularity in various scientific fields such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. Thus, a statistical model built upon input-output observations, i.e., a so-called surrogate model or emulator, is needed as a fast substitute for the simulator. Design of experiments, i.e., how to select samples from the input space under budget constraints, is also worth studying. This dissertation focuses on the design problem under Gaussian process (GP) surrogates. The first work demonstrates empirically that commonly used space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static (design points are allocated at one shot) and sequential settings (data are sampled sequentially). The second contribution is motivated by a stochastic computer simulator of delta smelt conservation. This simulator is developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. An innovative batch sequential design method is proposed, generalizing one-at-a-time sequential design to a one-batch-at-a-time scheme with the goal of parallel computing. The criterion for subsequent data acquisition is carefully engineered to favor selection of replicates which boost statistical and computational efficiencies. The design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and downstream input sensitivity analysis.
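The one-at-a-time, variance-based active learning that the batch scheme generalizes can be sketched as follows. This is a simplified Python illustration with a fixed lengthscale, a toy test function, and a finite candidate set; it is not the dissertation's distance-based designs or its replicate-aware batch criterion.

```python
import numpy as np

def sq_exp(A, B, ls=0.2):
    """Squared-exponential kernel on the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_var(X, Xcand, nugget=1e-6):
    """GP predictive variance at candidates; depends only on the design locations."""
    L = np.linalg.cholesky(sq_exp(X, X) + nugget * np.eye(len(X)))
    v = np.linalg.solve(L, sq_exp(X, Xcand))
    return 1.0 - (v ** 2).sum(axis=0)

def alm_design(f, X0, Xcand, n_total=20):
    """Grow a design one point at a time at the location of maximum variance."""
    X, y = X0.copy(), f(X0)
    while len(X) < n_total:
        s2 = gp_var(X, Xcand)
        xnew = Xcand[np.argmax(s2)][None, :]      # most uncertain candidate
        X = np.vstack([X, xnew])
        y = np.concatenate([y, f(xnew)])
    return X, y

f = lambda X: np.sin(6 * X[:, 0]) * np.cos(4 * X[:, 1])   # toy simulator
rng = np.random.default_rng(1)
X0 = rng.uniform(size=(5, 2))                     # small random starting design
Xcand = rng.uniform(size=(500, 2))                # candidate set to choose from
Xd, yd = alm_design(f, X0, Xcand)
print(Xd.shape)                                   # (20, 2)
```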
16

Review and Extension for the O’Brien Fleming Multiple Testing procedure

Hammouri, Hanan 22 November 2013 (has links)
O'Brien and Fleming (1979) proposed a straightforward and useful multiple testing procedure (a group sequential testing procedure) for comparing two treatments in clinical trials where subject responses are dichotomous (e.g., success and failure). O'Brien and Fleming stated that their group sequential testing procedure has the same Type I error rate and power as a fixed one-stage chi-square test, but gives the opportunity to terminate the trial early when one treatment is clearly performing better than the other. We studied and tested the O'Brien and Fleming procedure, specifically correcting the originally proposed critical values. Furthermore, we updated the O'Brien-Fleming group sequential testing procedure to make it more flexible via three extensions. The first extension combines the procedure with optimal allocation, where the idea is to allocate more patients to the better treatment after each interim analysis. The second extension combines the procedure with Neyman allocation, which aims to minimize the variance of the difference in sample proportions. The last extension allows for different sample weights at different stages, as opposed to equal allocation across stages. Simulation studies showed that the O'Brien-Fleming group sequential testing procedure is relatively robust to the added features.
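As a rough illustration of the monitoring loop in the O'Brien-Fleming spirit, the Python sketch below accumulates dichotomous responses over K stages and compares the (k/K)-scaled cumulative Pearson chi-square statistic to a single boundary. The boundary value and the two-arm success probabilities are placeholders for illustration only, not the corrected critical values developed in this thesis.

```python
import numpy as np
from scipy.stats import chi2_contingency

def group_sequential_trial(p_a, p_b, n_per_stage=50, K=4, boundary=4.1, seed=0):
    """Monitor a two-arm trial with dichotomous outcomes over K interim looks."""
    rng = np.random.default_rng(seed)
    succ = np.zeros(2, dtype=int)
    total = np.zeros(2, dtype=int)
    for k in range(1, K + 1):
        succ += rng.binomial(n_per_stage, [p_a, p_b])    # cumulative successes per arm
        total += n_per_stage
        table = np.column_stack([succ, total - succ])    # cumulative 2x2 table
        stat = chi2_contingency(table, correction=False)[0]
        if (k / K) * stat >= boundary:                   # O'Brien-Fleming-style scaling
            return k, stat, True                         # early rejection at look k
    return K, stat, False                                # no rejection by the final look

# boundary=4.1 is a placeholder, not a calibrated (or corrected) critical value.
print(group_sequential_trial(p_a=0.50, p_b=0.75))
```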
17

Multi-fidelity Gaussian process regression for computer experiments

Le Gratiet, Loic 04 October 2013 (has links) (PDF)
This work concerns Gaussian-process based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging based method has been proposed. In particular, this formulation allows for fast implementation and for closed-form expressions of the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it allows for the practical application of such a method in real cases. Furthermore, fast cross-validation, sequential experimental design and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e., the decay rate of the mean square error) on the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process based metamodels with stationary covariance functions) has been obtained, whereas previous proofs hold only for degenerate kernels (i.e., when the process is in fact finite-dimensional). This result allows for addressing rigorously practical questions such as the optimal allocation of the budget between different levels of codes in the multi-fidelity framework.
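A stripped-down view of the autoregressive multi-fidelity idea behind co-kriging: emulate the cheap code with one GP, then model the expensive code as a scale factor times that emulator plus an independent discrepancy GP. The Python sketch below fixes hyperparameters, estimates the scale crudely by least squares, and returns only predictive means; the thesis derives proper closed-form universal co-kriging predictors with variances.

```python
import numpy as np

def gp_mean(Xn, yn, ls, nugget=1e-6):
    """Return a predictive-mean function for a simple zero-mean GP."""
    def k(A, B):
        return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * ls ** 2))
    Kinv_y = np.linalg.solve(k(Xn, Xn) + nugget * np.eye(len(Xn)), yn)
    return lambda Xs: k(Xs, Xn) @ Kinv_y

# Toy cheap/expensive versions of a code; the expensive design is nested in the cheap one.
cheap = lambda x: np.sin(8 * x[:, 0])
expensive = lambda x: 1.5 * np.sin(8 * x[:, 0]) + 0.3 * x[:, 0]

X_lo = np.linspace(0.0, 1.0, 30)[:, None]
X_hi = X_lo[::5]
m_lo = gp_mean(X_lo, cheap(X_lo), ls=0.15)                # level-1 emulator of the cheap code

rho = np.polyfit(m_lo(X_hi), expensive(X_hi), 1)[0]       # crude estimate of the scale factor
m_delta = gp_mean(X_hi, expensive(X_hi) - rho * m_lo(X_hi), ls=0.3)

predict_hi = lambda Xs: rho * m_lo(Xs) + m_delta(Xs)      # autoregressive co-kriging mean
Xs = np.array([[0.37]])
print(predict_hi(Xs), expensive(Xs))                      # prediction vs. the true expensive code
```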
18

Design and analysis of response selective samples in observational studies

Grünewald, Maria January 2011 (has links)
Outcome-dependent sampling may increase efficiency in observational studies. It is, however, not always obvious how to sample efficiently or how to analyze the resulting data without introducing bias. This thesis describes a general framework for efficiency calculations in multistage sampling, with a focus on what is sometimes referred to as ascertainment sampling. A method for correcting for the sampling scheme in the analysis of ascertainment samples is also presented. Simulation-based methods are used to overcome computational issues in both the efficiency calculations and the analysis of data. / At the time of doctoral defense, the following paper was unpublished and had a status as follows: Paper 1: Submitted.
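For intuition about why outcome-dependent samples need correction, the Python sketch below simulates a logistic outcome, draws a sample whose inclusion probability depends on the response, and contrasts a naive maximum likelihood fit with an inverse-probability-weighted one. Weighting is shown only as a familiar baseline correction; it is not necessarily the method developed in this thesis, and all numbers are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
N = 50_000
x = rng.standard_normal(N)
p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * x)))       # true logistic model: intercept -2, slope 1
y = rng.binomial(1, p)

incl = np.where(y == 1, 0.90, 0.05)                # inclusion probability depends on the outcome
keep = rng.uniform(size=N) < incl
xs, ys, ws = x[keep], y[keep], 1.0 / incl[keep]    # inverse-probability weights

def negloglik(beta, w):
    """(Weighted) negative Bernoulli log-likelihood for a logistic model."""
    eta = beta[0] + beta[1] * xs
    return -(w * (ys * eta - np.log1p(np.exp(eta)))).sum()

naive = minimize(negloglik, np.zeros(2), args=(np.ones(keep.sum()),)).x
weighted = minimize(negloglik, np.zeros(2), args=(ws,)).x
print("naive:   ", naive)      # intercept is badly biased by the sampling scheme
print("weighted:", weighted)   # close to the population values (-2, 1)
```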
19

Information Technology Sourcing Across Cultures: Preparing Leaders for Cross-Cultural Engagements and Implementing Best Practices with Cultural Sensitivity

Moran, Wayne Gordon 30 September 2014 (has links)
No description available.
20

Redesign of Alpha Class Glutathione Transferases to Study Their Catalytic Properties

Nilsson, Lisa O January 2001 (has links)
A number of active site mutants of human Alpha class glutathione transferase A1-1 (hGST A1-1) were made and characterized to determine the structural determinants for alkenal activity. The choice of mutations was based on primary structure alignments of hGST A1-1 and the Alpha class enzyme with the highest alkenal activity, hGST A4-4, from three different species, and on crystal structure comparisons between the human enzymes. The result was an enzyme with a 3000-fold change in substrate specificity for nonenal over 1-chloro-2,4-dinitrobenzene (CDNB).

The C-terminus of the Alpha class enzymes is an α-helix that folds over the active site upon substrate binding. The rate-determining step is product release, which is influenced by the movements of the C-terminus, thereby opening the active site. Phenylalanine 220, near the end of the C-terminus, forms an aromatic cluster with tyrosine 9 and phenylalanine 10, positioning the β-carbon of the cysteinyl moiety of glutathione. The effects of phenylalanine 220 mutations on the mobility of the C-terminus were studied by the viscosity dependence of k_cat and k_cat/K_m with glutathione and CDNB as the varied substrates.

The compatibility of slightly different subunit interfaces within the Alpha class has been studied by heterodimerization between monomers from hGST A1-1 and hGST A4-4. The heterodimer was temperature sensitive, and rehybridized into homodimers at 40 °C. The heterodimers did not show strictly additive activities with alkenals and CDNB. This result, combined with further studies, indicates that there are factors at the subunit interface influencing the catalytic properties of hGST A1-1.
