21

Adapting Response Surface Methods for the Optimization of Black-Box Systems

Zielinski, Jacob Jonathan 10 September 2010 (has links)
Complex mathematical models are often built to describe a physical process that would otherwise be extremely difficult, too costly, or sometimes impossible to analyze. Generally, these models require solutions to many partial differential equations, so the computer codes may take a considerable amount of time to complete a single evaluation. A time-tested method of analysis for such models is Monte Carlo simulation. These simulations, however, often require many model evaluations, making the approach too computationally expensive. To limit the number of experimental runs, it is common practice to model the departure from a simple mean trend as a Gaussian stochastic process (GaSP) and thereby develop an emulator of the computer model. One advantage of using an emulator is that, once a GaSP is fit to realized outcomes, the computer model is easy to predict in unsampled regions of the input space. This is an attempt to 'characterize' the overall behavior of the computer code. Most of the historical work on design and analysis of computer experiments focuses on characterizing the computer model over a large region of interest. However, many practitioners seek other objectives, such as input screening (Welch et al., 1992), mapping a response surface, or optimization (Jones et al., 1998). Only recently have researchers begun to consider these topics in the design and analysis of computer experiments. In this dissertation, we explore a more traditional response surface approach (Myers, Montgomery and Anderson-Cook, 2009) in conjunction with established computer experiment methods to search for the optimum response of a process. For global optimization, Jones, Schonlau, and Welch's (1998) Efficient Global Optimization (EGO) algorithm remains a benchmark for subsequent research on computer experiments, and we compare the proposed method against this leading benchmark. Our goal is to show that response surface methods can be an effective means of estimating an optimum response in the computer experiment framework. / Ph. D.
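To make the benchmark concrete, here is a minimal sketch of one EGO-style iteration: fit a Gaussian process emulator to a handful of expensive runs, then choose the next run by maximizing expected improvement. It assumes scikit-learn and SciPy; `black_box` is a hypothetical stand-in for an expensive computer model, and the sketch illustrates the Jones, Schonlau, and Welch (1998) criterion rather than the response surface method proposed in the dissertation.

```python
# Sketch of one Efficient Global Optimization (EGO) iteration:
# fit a Gaussian process emulator to a few expensive runs, then pick the
# next input by maximizing expected improvement (EI) over a candidate grid.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def black_box(x):                                # hypothetical expensive computer model
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)      # small initial design
y = black_box(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                              normalize_y=True, n_restarts_optimizer=5)
gp.fit(X, y)

candidates = np.linspace(0.0, 1.0, 1001).reshape(-1, 1)
mu, sd = gp.predict(candidates, return_std=True)
best = y.min()                                   # current best (minimization)
z = (best - mu) / np.maximum(sd, 1e-12)
ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
x_next = candidates[np.argmax(ei)]
print("next run at x =", x_next)
```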
22

Advancements on the Interface of Computer Experiments and Survival Analysis

Wang, Yueyao 20 July 2022 (has links)
Design and analysis of computer experiments is an area focusing on efficient data collection (e.g., space-filling designs), surrogate modeling (e.g., Gaussian process models), and uncertainty quantification. Survival analysis focuses on modeling the period of time until a certain event happens. Data collection, prediction, and uncertainty quantification are also fundamental in survival models. In this dissertation, the proposed methods are motivated by a wide range of real-world applications, including high-performance computing (HPC) variability data, jet engine reliability data, Titan GPU lifetime data, and pine tree survival data. This dissertation explores interfaces between computer experiments and survival analysis through these applications. Chapter 1 provides a general introduction to computer experiments and survival analysis. Chapter 2 focuses on the HPC variability management application. We investigate the applicability of space-filling designs and statistical surrogates in the HPC variability management setting, in terms of design efficiency, prediction accuracy, and scalability, and conduct a comprehensive comparison of design strategies and predictive methods with respect to prediction accuracy. Chapter 3 focuses on the reliability prediction application. With the availability of multi-channel sensor data, a single degradation index is needed for compatibility with most existing models. We propose a flexible framework for multi-sensor data that models the nonlinear relationship between the sensors and the degradation process, with automatic variable selection to exclude sensors that have no effect on the underlying degradation process. Chapter 4 investigates inference approaches for spatial survival analysis under the Bayesian framework. The performance of Markov chain Monte Carlo (MCMC) approaches and variational inference is studied for two survival models, the cumulative exposure model and the proportional hazards (PH) model. The Titan GPU data and pine tree survival data are used to illustrate the capability of variational inference on spatial survival models. Chapter 5 provides some general conclusions. / Doctor of Philosophy / This dissertation focuses on three projects related to computer experiments and survival analysis. Design and analysis of computer experiments is an area focusing on efficient data collection, building predictive models, and uncertainty quantification. Survival analysis focuses on modeling the period of time until a certain event happens. Data collection, prediction, and uncertainty quantification are also fundamental in survival models. Thus, this dissertation aims to explore interfaces between computer experiments and survival analysis through real-world applications. High-performance computing systems aggregate a large number of computers to achieve high computing speed. The first project investigates the applicability of space-filling designs and statistical predictive models in the HPC variability management setting, in terms of design efficiency, prediction accuracy, and scalability, with a comprehensive comparison of design strategies and predictive methods. The second project focuses on building a degradation index that describes a product's underlying degradation process. With the availability of multi-channel sensor data, a single degradation index is needed for compatibility with most existing models. We propose a flexible framework for multi-sensor data that models the nonlinear relationship between the sensors and the degradation process, with automatic variable selection to exclude sensors that have no effect on the underlying degradation process. Spatial survival data arise when survival data are collected over a spatial region. The third project studies inference approaches for spatial survival analysis under the Bayesian framework. The performance of the commonly used Markov chain Monte Carlo (MCMC) approach and of variational inference, an approximate inference approach, is studied for two survival models. The Titan GPU data and pine tree survival data are used to illustrate the capability of variational inference on spatial survival models.
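As a rough illustration of the degradation-index idea (not the framework actually proposed in the dissertation), one could fuse multi-channel sensor readings into a single index with a sparse nonlinear regression, letting a lasso penalty drop sensors that carry no information about degradation. The sketch below uses scikit-learn on simulated data and assumes, for simplicity, that a degradation measure is available to supervise the fit; all variable names are placeholders.

```python
# Toy sketch: build a single degradation index from multi-channel sensor data
# with automatic variable selection (the lasso drops uninformative sensors).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
n, p = 500, 6                                   # 500 cycles, 6 sensor channels
t = np.linspace(0, 1, n)
sensors = rng.normal(size=(n, p))
sensors[:, 0] += 3 * t                          # only sensors 0 and 1 track degradation
sensors[:, 1] += t ** 2
degradation = t ** 1.5 + 0.05 * rng.normal(size=n)   # observed degradation measure

# Nonlinear (quadratic) features of each sensor, then a sparse linear fit.
model = make_pipeline(StandardScaler(),
                      PolynomialFeatures(degree=2, include_bias=False),
                      LassoCV(cv=5))
model.fit(sensors, degradation)
index = model.predict(sensors)                  # fitted degradation index
print("nonzero coefficients:", int(np.sum(model[-1].coef_ != 0)))
```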
23

Batch Sequencing Methods for Computer Experiments

Quan, Aaron 14 November 2014 (has links)
No description available.
24

Adaptive Design for Global Fit of Non-stationary Surfaces

Frazier, Marian L. 03 September 2013 (has links)
No description available.
25

Sequential Design of Computer Experiments for Robust Parameter Design

Lehman, Jeffrey S. 11 September 2002 (has links)
No description available.
26

Statistical Methods for Variability Management in High-Performance Computing

Xu, Li 15 July 2021 (has links)
High-performance computing (HPC) variability management is an important topic in computer science. Research topics include experimental designs for efficient data collection, surrogate models for predicting performance variability, and system configuration optimization. Due to the complex architecture of HPC systems, a comprehensive study of HPC variability needs large-scale datasets, and experimental design techniques are useful for improved data collection. Surrogate models, which can be obtained from mathematical and statistical models, are essential to understand the variability as a function of system parameters. After predicting the variability, optimization tools are needed for future system designs. This dissertation focuses on HPC input/output (I/O) variability through three main chapters. After the general introduction in Chapter 1, Chapter 2 focuses on prediction models for a scalar description of I/O variability. A comprehensive comparison study is conducted in which the major surrogate models for computer experiments are investigated. In addition, a tool is developed for system configuration optimization based on the chosen surrogate model. Chapter 3 conducts a detailed study of the multimodal phenomena in the I/O throughput distribution and proposes an uncertainty estimation method for the optimal number of runs for future experiments. Mixture models are used to identify the number of modes of the throughput distributions at different configurations. This chapter also addresses the uncertainty in parameter estimation and derives a formula for sample size calculation. The developed method is then applied to HPC variability data. Chapter 4 focuses on the prediction of functional outcomes with both qualitative and quantitative factors. Instead of a scalar description of I/O variability, the distribution of I/O throughput provides a comprehensive description of I/O variability. We develop a modified Gaussian process for functional prediction and apply the developed method to the large-scale HPC I/O variability data. Chapter 5 contains some general conclusions and areas for future work. / Doctor of Philosophy / This dissertation focuses on three projects that are all related to statistical methods for performance variability management in high-performance computing (HPC). HPC systems are computer systems that create high performance by aggregating a large number of computing units. The performance of HPC is measured by the throughput of a benchmark called the IOzone Filesystem Benchmark. The performance variability is the variation among throughputs when the system configuration is fixed. Variability management involves studying the relationship between performance variability and the system configuration. In Chapter 2, we use several existing prediction models to predict the standard deviation of throughputs given different system configurations and compare the accuracy of the predictions. We also conduct HPC system optimization using the chosen prediction model as the objective function. In Chapter 3, we use mixture models to determine the number of modes in the distribution of throughput under different system configurations. In addition, we develop a model to determine the number of additional runs needed for future benchmark experiments. In Chapter 4, we develop a statistical model that can predict the throughput distributions given the system configurations. We also compare the prediction of summary statistics of the throughput distributions with existing prediction models.
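To illustrate the mixture-model step described for Chapter 3, the following sketch (not the dissertation's actual procedure) fits Gaussian mixtures with one to five components to simulated throughput measurements and selects the number of components by BIC; the throughput values are made up.

```python
# Sketch: choose the number of modes in a throughput distribution by fitting
# Gaussian mixture models with 1..5 components and comparing BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated I/O throughput at one configuration: two modes (placeholder values).
throughput = np.concatenate([rng.normal(200, 10, 300),
                             rng.normal(260, 15, 200)]).reshape(-1, 1)

bics = []
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(throughput)
    bics.append(gm.bic(throughput))

best_k = int(np.argmin(bics)) + 1
print("BIC-selected number of components:", best_k)
```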
27

Design & Analysis of a Computer Experiment for an Aerospace Conformance Simulation Study

Gryder, Ryan W 01 January 2016 (has links)
Within NASA's Air Traffic Management Technology Demonstration-1 (ATD-1), Interval Management (IM) is a flight deck tool that enables pilots to achieve or maintain precise in-trail spacing behind a target aircraft. Previous research has shown that violations of aircraft spacing requirements can occur between an IM aircraft and its surrounding non-IM aircraft when the IM aircraft is following a target on a separate route. This research focused on the experimental design and analysis of a deterministic computer simulation which models our airspace configuration of interest. Using an original space-filling design and Gaussian process modeling, we found that aircraft delay assignments and wind profiles significantly impact the likelihood of spacing violations and the interruption of IM operations. However, we also found that implementing two theoretical advancements in IM technologies can potentially lead to promising results.
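For readers less familiar with the design side, here is a minimal sketch of generating a space-filling Latin hypercube design for a deterministic simulator with two inputs, using SciPy's quasi-Monte Carlo module; the input names and ranges are hypothetical and do not correspond to the ATD-1 study's actual factors.

```python
# Sketch: a small space-filling (Latin hypercube) design for a deterministic
# simulation with two inputs, scaled to hypothetical engineering ranges.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
unit_design = sampler.random(n=20)                 # 20 runs in [0, 1)^2

# Hypothetical input ranges: delay assignment (s) and mean wind speed (kt).
lower, upper = [0.0, 0.0], [120.0, 60.0]
design = qmc.scale(unit_design, lower, upper)

print("space-filling quality (discrepancy):", qmc.discrepancy(unit_design))
print(design[:5])                                  # first five simulator runs
```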
28

Some contributions to Latin hypercube design, irregular region smoothing and uncertainty quantification

Xie, Huizhi 21 May 2012 (has links)
In the first part of the thesis, we propose a new class of designs called multi-layer sliced Latin hypercube designs (MLSLHD) for running computer experiments. A general recursive strategy for constructing MLSLHD has been developed. Ordinary Latin hypercube designs and sliced Latin hypercube designs (SLHD) are special cases of MLSLHD with zero and one layer, respectively. A special case of MLSLHD with two layers, the doubly sliced Latin hypercube design (DSLHD), is studied in detail. The doubly sliced structure of DSLHD allows more flexible batch sizes than SLHD for collective evaluation of different computer models or batch sequential evaluation of a single computer model. Both finite-sample and asymptotic sampling properties of DSLHD are examined. Numerical experiments are provided to show the advantage of DSLHD over SLHD for both sequential evaluation of a single computer model and collective evaluation of different computer models. Other applications of DSLHD include designs for Gaussian process modeling with quantitative and qualitative factors, cross-validation, etc. Moreover, we also show that the sliced structure, possibly combined with other criteria such as distance-based criteria, can be utilized to sequentially sample from a large spatial data set when we cannot include all the data points for modeling. A data center example is presented to illustrate the idea. The enhanced stochastic evolutionary algorithm is deployed to search for the optimal design. In the second part of the thesis, we propose a new smoothing technique called completely-data-driven smoothing, intended for smoothing over irregular regions. The idea is to replace the penalty term in smoothing splines by its estimate based on a local least squares technique. A closed-form solution for our approach is derived. The implementation is very easy and computationally efficient. With some regularity assumptions on the input region and analytical assumptions on the true function, it can be shown that our estimator achieves the optimal convergence rate for general nonparametric regression. The algorithmic parameter that governs the trade-off between fidelity to the data and the smoothness of the estimated function is chosen by generalized cross-validation (GCV). The asymptotic optimality of GCV for choosing this parameter in our estimator is proved. Numerical experiments show that our method works well for both regular and irregular region smoothing. The third part of the thesis deals with uncertainty quantification in building energy assessment. In current practice, building simulation is routinely performed with best guesses of input parameters whose true values cannot be known exactly. These guesses affect the accuracy and reliability of the outcomes. There is an increasing need to perform uncertainty analysis of those input parameters that are known to have a significant impact on the final outcome. In this part of the thesis, we focus on uncertainty quantification of two microclimate parameters: the local wind speed and the wind pressure coefficient. The idea is to compare the outcome of the standard model with that of a higher-fidelity model. Statistical analysis is then conducted to build a connection between the two. The explicit form of the statistical models can facilitate the improvement of the corresponding modules in the standard model.
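To give a flavor of the sliced structure discussed above, here is a minimal sketch of an ordinary (one-layer) sliced Latin hypercube design in the spirit of Qian (2012). It is not the multi-layer or doubly sliced construction developed in the thesis, and the function name is ours; each slice collapses to a Latin hypercube in m runs, while the union of the slices is a Latin hypercube in m*t runs.

```python
# Sketch: ordinary sliced Latin hypercube design (t slices of m runs in d dims).
import numpy as np

def sliced_lhd(m, t, d, rng=None):
    rng = np.random.default_rng(rng)
    n = m * t
    design = np.empty((t, m, d))
    for k in range(d):
        # Split levels 1..n into m consecutive blocks of size t.
        blocks = np.arange(1, n + 1).reshape(m, t)
        # Randomly assign each block's t levels to the t slices.
        for i in range(m):
            blocks[i] = rng.permutation(blocks[i])
        for j in range(t):
            levels = rng.permutation(blocks[:, j])          # one level per block
            design[j, :, k] = (levels - rng.random(m)) / n  # jitter within cells
    return design

D = sliced_lhd(m=4, t=3, d=2, rng=7)
print(D.reshape(-1, 2))   # the combined 12-run design in [0, 1)^2
```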
29

Contributions à l'analyse de fiabilité structurale : prise en compte de contraintes de monotonie pour les modèles numériques / Contributions to structural reliability analysis : accounting for monotonicity constraints in numerical models

Moutoussamy, Vincent 13 November 2015 (has links)
This thesis takes place in a structural reliability context involving numerical models that implement a physical phenomenon. The reliability of an industrial component is summarised by two indicators of failure, a probability and a quantile. The numerical models studied are considered deterministic and black-box. Nonetheless, knowledge of the underlying physical phenomenon allows some hypotheses to be made about the model. The original contribution of this thesis comes from considering monotonicity properties of the phenomenon when computing these indicators. The main interest of this hypothesis is that it provides guaranteed control of the indicators, in the form of bounds obtained by an appropriate design of numerical experiments. This thesis focuses on two themes associated with this monotonicity hypothesis. The first is the study of these bounds for probability estimation; the influence of the dimension and of the chosen design of experiments on the bounds is studied. The second uses the information provided by these bounds to estimate a probability or a quantile as well as possible. For probability estimation, the aim is to improve existing methods devoted to probability estimation under monotonicity constraints. The main steps built for probability estimation are then adapted to bounding and estimating a quantile. These methods have then been applied to an industrial case.
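To illustrate the kind of guaranteed bound the monotonicity hypothesis yields (a simplified sketch, not the estimators developed in the thesis), assume a black-box g that is nondecreasing in each input and uniform inputs on the unit square. Any point that componentwise dominates an observed failing run must fail, and any point dominated by an observed safe run must be safe, which brackets the failure probability without further model evaluations; g, the threshold, and the design below are all hypothetical.

```python
# Sketch: certain bounds on a failure probability P(g(X) >= s) when the
# black-box g is componentwise nondecreasing, using a handful of model runs.
import numpy as np

rng = np.random.default_rng(3)

def g(x):                                  # hypothetical monotone black box
    return x[..., 0] ** 2 + 0.5 * x[..., 1]

s = 0.8                                    # failure threshold
design = rng.random((30, 2))               # 30 evaluated input points
failed = g(design) >= s                    # classification of the runs
fail_pts = design[failed]                  # runs known to fail
safe_pts = design[~failed]                 # runs known to be safe

# Large uniform sample classified only through monotonicity (no new g calls).
X = rng.random((100_000, 2))
dominates_failure = (X[:, None, :] >= fail_pts[None, :, :]).all(-1).any(-1)
dominated_by_safe = (X[:, None, :] <= safe_pts[None, :, :]).all(-1).any(-1)

lower = dominates_failure.mean()           # fraction certainly failed
upper = 1.0 - dominated_by_safe.mean()     # 1 - fraction certainly safe
print(f"{lower:.3f} <= P(failure) <= {upper:.3f}")
```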
30

ATIVIDADES BASEADAS EM ANIMAÇÃO E SIMULAÇÃO COMPUTACIONAL NO ENSINO-APRENDIZAGEM DE CINEMÁTICA EM NÍVEL MÉDIO / Activities Based on Animation and Computer Simulation in the Teaching and Learning of Kinematics at the High School Level

Lunelli, Gisele Bordignon 11 September 2010 (has links)
This work presents a study on the use of computational experiments based on animations and simulations applied in didactic activities for teaching the concepts of position, displacement, velocity, and acceleration related to particle kinematics. These activities were developed from a meaningful-learning perspective grounded in the theory of conceptual fields. The participants are twenty first-year students at a private high school. The study aims to examine how the use of computational animations and simulations can support meaningful learning of particle kinematics concepts at the high school level. Activity record sheets and an open questionnaire are used as data collection instruments. The conclusions drawn from the students' perspective indicate that the computational experiments with educational animations and simulations: (i) brought theoretical physics closer to the students' everyday experience, favoring learning; (ii) established connections, through interactivity, between the situations presented in the virtual experiments and the physics content; (iii) awakened a pleasure in learning, highlighting their playful character; and (iv) proved to be an innovative approach, making the classes more interesting than traditional ones. It is worth emphasizing that not all students found the lessons with this approach useful, and that conceptual difficulties persisted. Other difficulties, such as procedural ones, may have contributed to hindering conceptual learning.
