1

Valid estimation and prediction inference in analysis of a computer model

Nagy, Béla 11 1900
Computer models or simulators are becoming increasingly common in many fields of science and engineering, powered by the phenomenal growth in computer hardware over the past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output. Often running the code involves computationally expensive tasks, such as solving complex systems of partial differential equations numerically. When simulator runs become too long, their usefulness is limited. To overcome time or budget constraints by making the most of limited computational resources, a statistical methodology known as the "Design and Analysis of Computer Experiments" has been proposed. The main idea is to run the expensive simulator only at a relatively small number of carefully chosen design points in the input space and, based on the outputs, construct an emulator (statistical model) that can emulate (predict) the output at new, untried locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response surface of the original computer model. One way to quantify emulator error is to construct pointwise prediction bands designed to envelop the response surface and to assert that the true response (simulator output) is enclosed by these envelopes with a certain probability. Of course, to make such probabilistic statements, one needs to introduce some kind of randomness. A common strategy, used here, is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function, which is ideal when the response function is infinitely differentiable. In this thesis, we propose Fast Bayesian Inference (FBI), which is computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments in terms of matching coverage probabilities of the prediction bands, and the associated reparameterizations can also help parameter uncertainty assessments. / Faculty of Science / Department of Statistics / Graduate
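As a rough illustration of the emulation idea described in this abstract (not of the thesis's FBI method itself), the following sketch fits a Gaussian-process emulator with the Gaussian covariance to a handful of runs of a toy stand-in "simulator" and forms pointwise ~95% prediction bands. The test function, length-scale, and nugget value are illustrative assumptions.

```python
# Minimal GP emulator sketch with Gaussian (squared-exponential) covariance.
import numpy as np

def simulator(x):
    # stand-in for an expensive deterministic computer code
    return np.sin(2.0 * np.pi * x) + 0.5 * x

def gauss_cov(a, b, ell=0.2, sigma2=1.0):
    # Gaussian covariance: suited to infinitely differentiable responses
    d = a[:, None] - b[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

# a small set of design points and their (expensive) simulator outputs
X = np.linspace(0.0, 1.0, 8)
y = simulator(X)

# GP posterior at untried locations (a tiny nugget keeps the solve stable)
Xnew = np.linspace(0.0, 1.0, 200)
K = gauss_cov(X, X) + 1e-8 * np.eye(len(X))
Ks = gauss_cov(Xnew, X)
mean = Ks @ np.linalg.solve(K, y)
var = np.clip(gauss_cov(Xnew, Xnew).diagonal()
              - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T)), 0.0, None)

# pointwise ~95% prediction bands and their empirical coverage on this toy example
lower, upper = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
truth = simulator(Xnew)
coverage = np.mean((truth >= lower) & (truth <= upper))
print(f"empirical pointwise coverage: {coverage:.2f}")
```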
2

Near-optimal designs for Gaussian Process regression models

Nguyen, Huong January 2018
No description available.
3

On A-optimal Designs for Discrete Choice Experiments and Sensitivity Analysis for Computer Experiments

Sun, Fangfang 30 August 2012
No description available.
4

Computer Experimental Design for Gaussian Process Surrogates

Zhang, Boya 01 September 2020
With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. A surrogate model, or emulator, is often employed as a fast substitute for the simulator. Meanwhile, a common challenge in computer experiments and related fields is to efficiently explore the input space using a small number of samples, i.e., the experimental design problem. This dissertation focuses on the design problem under Gaussian process surrogates. The first work demonstrates empirically that space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. A purely random design is shown to be superior to higher-powered alternatives in many cases. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static (one-shot design) and sequential settings. The second contribution is motivated by an agent-based model (ABM) of delta smelt conservation. The ABM is developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. A batch sequential design scheme is proposed, generalizing one-at-a-time variance-based active learning, as a means of keeping multi-core cluster nodes fully engaged with expensive runs. The acquisition strategy is carefully engineered to favor selection of replicates that boost statistical and computational efficiencies. Design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream high-fidelity input sensitivity analysis. / Doctor of Philosophy / With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. Thus, a statistical model built upon input-output observations, i.e., a so-called surrogate model or emulator, is needed as a fast substitute for the simulator. Design of experiments, i.e., how to select samples from the input space under budget constraints, is also worth studying. This dissertation focuses on the design problem under Gaussian process (GP) surrogates. The first work demonstrates empirically that commonly used space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. Thereafter, a new family of distance-based designs is proposed and its superior performance is illustrated in both static settings (design points are allocated in one shot) and sequential settings (data are sampled sequentially). The second contribution is motivated by a stochastic computer simulator of delta smelt conservation. This simulator is developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. An innovative batch sequential design method is proposed, generalizing one-at-a-time sequential design to a one-batch-at-a-time scheme with the goal of parallel computing. The criterion for subsequent data acquisition is carefully engineered to favor selection of replicates that boost statistical and computational efficiencies. The design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream input sensitivity analysis.
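The one-batch-at-a-time idea can be loosely illustrated as follows (this is not the acquisition criterion of the dissertation, and it ignores replication and heteroskedasticity): a batch is assembled greedily by repeatedly adding the candidate with the largest GP predictive variance, which depends only on input locations, so the whole batch can be chosen before any expensive runs are launched in parallel. The kernel, candidate grid, initial design, and batch size are illustrative assumptions.

```python
# Greedy variance-based batch selection under a GP surrogate (toy sketch).
import numpy as np

def sq_exp(a, b, ell=0.15):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def pred_var(X, cand):
    # GP posterior predictive variance (unit prior variance, tiny nugget)
    K = sq_exp(X, X) + 1e-8 * np.eye(len(X))
    Ks = sq_exp(cand, X)
    return 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 5)            # small initial design
cand = np.linspace(0, 1, 201)       # candidate locations
batch_size = 4

batch = []
for _ in range(batch_size):
    # variance needs only the locations already chosen, not their outputs
    v = pred_var(np.append(X, batch), cand)
    batch.append(cand[np.argmax(v)])    # greedy: largest remaining variance
print("next batch to run in parallel:", np.round(batch, 3))
```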
5

Multi-layer designs and composite Gaussian process models with engineering applications

Ba, Shan 21 May 2012
This thesis consists of three chapters, covering topics in both the design and modeling aspects of computer experiments as well as their engineering applications. The first chapter systematically develops a new class of space-filling designs for computer experiments by splitting two-level factorial designs into multiple layers. The new design is easy to generate, and our numerical study shows that it can have better space-filling properties than the optimal Latin hypercube design. The second chapter proposes a novel modeling approach for approximating computationally expensive functions that are not second-order stationary. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. The third chapter is devoted to a two-stage sequential strategy which integrates analytical models with finite element simulations for a micromachining process.
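A loose way to picture the "global trend plus local details" idea is a covariance built as the sum of a long and a short length-scale kernel, as in the sketch below. The actual composite Gaussian process model of the thesis is more elaborate (two coupled processes and a flexible variance model), so this is only a hypothetical analogue with made-up length-scales and test function.

```python
# Sum-of-kernels analogue of a global-plus-local covariance structure.
import numpy as np

def sq_exp(a, b, ell):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def composite_cov(a, b, ell_global=0.5, ell_local=0.05, w_local=0.2):
    # long length-scale captures the smooth global trend,
    # short length-scale picks up local deviations around it
    return sq_exp(a, b, ell_global) + w_local * sq_exp(a, b, ell_local)

X = np.linspace(0, 1, 25)
y = np.sin(2 * np.pi * X) + 0.2 * np.sin(25 * X)   # smooth trend + local wiggles

Xnew = np.linspace(0, 1, 200)
K = composite_cov(X, X) + 1e-8 * np.eye(len(X))
mean = composite_cov(Xnew, X) @ np.linalg.solve(K, y)
print("predictive mean at a few untried inputs:", np.round(mean[::50], 3))
```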
6

Hidden Markov model with application in cell adhesion experiment and Bayesian cubic splines in computer experiments

Wang, Yijie Dylan 20 September 2013
Estimation of the number of hidden states is challenging in hidden Markov models. Motivated by the analysis of a specific type of cell adhesion experiment, a new framework based on a hidden Markov model and double penalized order selection is proposed. The order selection procedure is shown to be consistent in estimating the number of states. A modified Expectation-Maximization algorithm is introduced to efficiently estimate parameters in the model. Simulations show that the proposed framework outperforms existing methods. Applications of the proposed methodology to real data demonstrate the accuracy of estimating receptor-ligand bond lifetimes and waiting times, which are essential in kinetic parameter estimation. The second part of the thesis is concerned with prediction of a deterministic response function y at untried sites, given values of y at a chosen set of design sites. The intended application is to computer experiments in which y is the output from a computer simulation and each design site represents a particular configuration of the input variables. A Bayesian version of the cubic spline method commonly used in numerical analysis is proposed, in which the random function that represents prior uncertainty about y is taken to be a specific stationary Gaussian process. An MCMC procedure is given for updating the prior given the observed y values. Simulation examples and a real data application are given to compare the performance of the Bayesian cubic spline with that of two existing methods.
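The quantity that any order-selection criterion compares across candidate numbers of hidden states is the HMM likelihood. The self-contained sketch below computes the log-likelihood via the scaled forward algorithm and attaches a simple BIC-style penalty for comparison; the two-state parameters, the observation sequence, and the penalty are illustrative assumptions and do not reproduce the double-penalization scheme of the thesis.

```python
# Scaled forward algorithm for a discrete-emission HMM, plus a penalized score.
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """pi: initial state probs (K,), A: transitions (K, K),
    B: emission probs (K, M), obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

pi = np.array([0.6, 0.4])                    # two hidden states (e.g. bound / free)
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])       # two observable symbols
obs = np.array([0, 0, 1, 1, 1, 0, 1, 1])

ll = hmm_loglik(obs, pi, A, B)
k_params = 1 + 2 + 2                          # free parameters for K = 2 states
bic = -2 * ll + k_params * np.log(len(obs))   # BIC-style penalty, for comparison
print(f"log-likelihood = {ll:.3f}, penalized criterion = {bic:.3f}")
```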
7

Estimation of Target Regions in Computer Experiments: A Machine Learning Approach

林家立, Lin, Chia Li Unknown Date
Computer experiments have become an important tool for exploring the relationships between input factors and output responses of complex systems. Their key feature is that each experimental run is usually time-consuming and computationally expensive. In general, researchers are mainly interested in fitting an adequate model for the response surface and in the related output optimization problems (such as finding maxima or minima) over the entire input space. Motivated by a real-life parallel and distributed processing system, this thesis focuses instead on finding a localized "target region" of the system response. The target region has an important characteristic: the response surface is not continuous between the inside and the outside of the region, so the traditional response surface methodology (RSM) cannot be directly applied. A novel and efficient methodology for estimating this type of target region in computer experiments is proposed. The method incorporates the concept of sequential uniform design (UD) and the development of classification techniques based on support vector machines (SVM). Computer simulations show that the proposed method can estimate target regions of different shapes both accurately and efficiently.
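The classification step can be sketched roughly as follows: label a design of simulator runs as inside or outside the target region and fit an SVM that approximates the region boundary. The toy "simulator", the threshold defining the region, the random design standing in for a uniform design, and the omitted sequential refinement are all assumptions of this sketch; scikit-learn is assumed to be available.

```python
# Estimate a discontinuous target region with an RBF-kernel SVM classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulator(x):
    # toy response with a discontinuity: high output inside a small box
    inside = (np.abs(x[:, 0] - 0.7) < 0.15) & (np.abs(x[:, 1] - 0.3) < 0.15)
    return np.where(inside, 10.0, 1.0)

X = rng.uniform(0, 1, size=(200, 2))        # stand-in for a uniform design
labels = (simulator(X) > 5.0).astype(int)   # 1 = inside the target region

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)

# use the fitted classifier to estimate the target region on a fine grid
grid = np.array([[u, v] for u in np.linspace(0, 1, 51)
                        for v in np.linspace(0, 1, 51)])
est_area = clf.predict(grid).mean()
print(f"estimated target-region area: {est_area:.3f} (true area = {0.3 * 0.3:.3f})")
```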
8

Some new ideas on fractional factorial design and computer experiment

Su, Heng 08 June 2015
This thesis consists of two parts. The first part is on fractional factorial design, and the second part is on computer experiments. The first part has two chapters. In the first chapter, we use the concept of conditional main effects (CMEs) and propose CME analysis to solve the problem of effect aliasing in two-level fractional factorial designs. In the second chapter, we study the conversion rates of a system of webpages with the proposed funnel testing method, using a directed graph to represent the system, a fractional factorial design to conduct the experiment, and a method to optimize the total conversion rate with respect to all the webpages in the system. The second part also has two chapters. In the third chapter, we use regression models to quantify the model-form uncertainties of the Perez model in building energy simulations. In the last chapter, we propose a new Gaussian process model that can jointly model both point and integral responses.
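As a small, self-contained illustration of what a conditional main effect looks like in a two-level design (not the full CME analysis of the thesis), the snippet below constructs the CME of a factor A given B at its + level and verifies that its model-matrix column equals the average of the A and AB columns. The 4-run design and factor names are made up for illustration.

```python
# Conditional main effect (CME) column in -1/+1 coding.
import numpy as np

A = np.array([-1, -1,  1,  1])
B = np.array([-1,  1, -1,  1])
AB = A * B

# CME of A conditioned on B = +1: equal to A where B = +1, and 0 elsewhere
cme_A_given_Bplus = A * (1 + B) / 2

print("A|B+ column:", cme_A_given_Bplus)   # equals [0, -1, 0, 1]
print("(A + AB)/2: ", (A + AB) / 2)        # identical column
```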
