1. Some contributions to Latin hypercube design, irregular region smoothing and uncertainty quantification. Xie, Huizhi (21 May 2012)
In the first part of the thesis, we propose a new class of designs for computer experiments called multi-layer sliced Latin hypercube designs (MLSLHDs), together with a general recursive strategy for constructing them. Ordinary Latin hypercube designs and sliced Latin hypercube designs (SLHDs) are special cases of MLSLHDs with zero and one layer, respectively. A special case with two layers, the doubly sliced Latin hypercube design (DSLHD), is studied in detail. The doubly sliced structure allows more flexible batch sizes than an SLHD for collective evaluation of different computer models or batch-sequential evaluation of a single computer model. Both finite-sample and asymptotic sampling properties of DSLHDs are examined. Numerical experiments show the advantage of DSLHDs over SLHDs both for sequentially evaluating a single computer model and for collectively evaluating different computer models. Other applications of DSLHDs include designs for Gaussian process modeling with quantitative and qualitative factors, cross-validation, and more. Moreover, we show that the sliced structure, possibly combined with distance-based or other criteria, can be used to sequentially sample from a large spatial data set when not all data points can be included for modeling; a data center example illustrates the idea. The enhanced stochastic evolutionary algorithm is deployed to search for optimal designs.
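To make the sliced structure concrete, here is a minimal sketch that builds an ordinary (one-layer) sliced Latin hypercube design; the multi-layer version applies the same idea recursively. The construction below is a generic SLHD recipe in the spirit of Qian (2012), not the thesis's own algorithm.

```python
import numpy as np

def sliced_lhd(t, m, k, rng=None):
    """Sketch of a one-layer sliced LHD: t slices of m runs in k dimensions.
    The full n = t*m design is an LHD, and each slice, collapsed onto an
    m-level grid, is itself an LHD (generic construction, an assumption)."""
    rng = np.random.default_rng(rng)
    n = t * m
    design = np.empty((n, k))
    for d in range(k):
        # one coarse permutation of {0,...,m-1} per slice
        coarse = np.array([rng.permutation(m) for _ in range(t)])  # t x m
        col = np.empty((t, m))
        for j in range(m):
            # spread the t fine levels of coarse cell j across the t slices
            fine = j * t + rng.permutation(t)
            for s in range(t):
                col[s, np.where(coarse[s] == j)[0][0]] = fine[s]
        # jitter within each of the n cells to get points in [0, 1)
        design[:, d] = (col.reshape(-1) + rng.random(n)) / n
    return design.reshape(t, m, k)  # slice s is design[s]

X = sliced_lhd(t=3, m=4, k=2, rng=0)  # 3 slices of 4 runs in 2 dimensions
```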
In the second part of the thesis, we propose a new smoothing technique, completely-data-driven smoothing, intended for smoothing over irregular regions. The idea is to replace the penalty term in smoothing splines with an estimate based on a local least squares technique. A closed-form solution for our approach is derived; the method is easy to implement and computationally efficient. Under some regularity assumptions on the input region and analytical assumptions on the true function, our estimator achieves the optimal convergence rate of general nonparametric regression. The algorithmic parameter that governs the trade-off between fidelity to the data and smoothness of the estimated function is chosen by generalized cross validation (GCV), and the asymptotic optimality of GCV for this choice is proved. Numerical experiments show that the method works well for both regular and irregular regions.
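For readers unfamiliar with GCV tuning, the sketch below shows the mechanism with a generic linear smoother (a discrete Whittaker-type second-difference penalty, not the thesis's completely-data-driven estimator): for each candidate smoothing parameter, compute the smoother matrix and pick the parameter minimizing the GCV score.

```python
import numpy as np

def gcv_smooth(x, y, lambdas):
    """Illustrative GCV tuning for a linear smoother: minimize
    ||y - f||^2 + lam * ||D2 f||^2 with D2 the second-difference matrix,
    choosing lam by GCV(lam) = n * RSS / (n - trace(S))^2."""
    n = len(y)
    y_s = y[np.argsort(x)]                         # work on sorted inputs
    D2 = np.diff(np.eye(n), n=2, axis=0)           # (n-2) x n second differences
    P = D2.T @ D2
    best = (np.inf, None, None)
    for lam in lambdas:
        S = np.linalg.solve(np.eye(n) + lam * P, np.eye(n))  # smoother matrix
        f = S @ y_s
        gcv = n * np.sum((y_s - f) ** 2) / (n - np.trace(S)) ** 2
        if gcv < best[0]:
            best = (gcv, lam, f)
    return best  # (GCV score, chosen lam, fitted values at sorted x)

rng = np.random.default_rng(1)
x = np.sort(rng.random(80))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(80)
score, lam, fhat = gcv_smooth(x, y, lambdas=10.0 ** np.arange(-4, 3))
```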
The third part of the thesis deals with uncertainty quantification in building energy assessment. In current practice, building simulation is routinely performed with best guesses for input parameters whose true values cannot be known exactly, and these guesses affect the accuracy and reliability of the outcomes. There is an increasing need for uncertainty analysis of those input parameters known to have a significant impact on the final outcome. Here we focus on uncertainty quantification of two microclimate parameters: the local wind speed and the wind pressure coefficient. The idea is to compare the outcome of the standard model with that of a higher-fidelity model; statistical analysis is then conducted to build a connection between the two. The explicit form of the statistical models can facilitate improvement of the corresponding modules in the standard model.
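A minimal sketch of the comparison idea, with entirely made-up models and data: run both models on the same inputs and fit an explicit statistical link (here a simple linear discrepancy model) so the cheap standard model can be corrected.

```python
import numpy as np

# Hypothetical illustration: link a fast standard model to a higher-fidelity
# one via high ~ a + b * standard. All models and numbers below are made up.
rng = np.random.default_rng(2)
wind = 2 + 8 * rng.random(50)                    # stand-in local wind speeds
standard = 0.6 * wind                            # fast engineering model (assumed)
high_fid = 0.5 * wind + 0.8 + 0.1 * rng.standard_normal(50)  # high-fidelity stand-in
b, a = np.polyfit(standard, high_fid, 1)         # least-squares slope, intercept
corrected = a + b * standard
rmse = np.sqrt(np.mean((corrected - high_fid) ** 2))
```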
2. Maximin Latin hypercubes for experimental design (Hypercubes Latins maximin pour l'échantillonnage de systèmes complexes). Le Guiban, Kaourintin (24 January 2018)
A maximin Latin hypercube design (LHD) is a set of points in a hypercube such that no two points share a coordinate on any dimension and the minimal distance between two points is maximal. Maximin LHDs are widely used in metamodeling thanks to their good sampling properties. As most work on LHDs has focused on heuristic algorithms to produce them, we decided to make a detailed study of this problem, including its complexity, its approximability, and the design of practical heuristic algorithms. We generalized the maximin LHD construction problem by defining the problem of completing a partial LHD while respecting the maximin constraint. The subproblem in which the partial LHD is initially empty corresponds to the classical LHD construction problem. We studied the complexity of the completion problem and proved its NP-completeness in many cases. As we did not determine the complexity of the subproblem, we searched for performance guarantees for algorithms addressing both problems. On the one hand, we proved that the completion problem is inapproximable for any norm in dimension k ≥ 3, and we gave a weaker inapproximability result for the L1 norm in dimension k = 2. On the other hand, we designed an approximation algorithm for the construction problem and computed its approximation ratio using two upper bounds that we established. Besides the theoretical aspect of this study, we worked on heuristic algorithms adapted to these problems, focusing on the simulated annealing metaheuristic. We proposed a new evaluation function for the construction problem and new mutations for both the construction and completion problems, improving on the results reported in the literature.
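A minimal simulated-annealing sketch for the maximin construction problem follows. The swap mutation (exchanging two levels in one column, which preserves the Latin property) and the plain minimum-distance objective are generic textbook choices, not the new evaluation function or mutations proposed in the thesis.

```python
import numpy as np

def maximin_lhd_sa(n, k, iters=2000, T0=1.0, cooling=0.995, rng=None):
    """Generic simulated annealing for a maximin LHD with n runs, k factors."""
    rng = np.random.default_rng(rng)
    X = np.array([rng.permutation(n) for _ in range(k)]).T.astype(float)

    def min_dist(D):
        diff = D[:, None, :] - D[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))
        return d[np.triu_indices(len(D), 1)].min()

    cur, cur_val = X, min_dist(X)
    best, best_val, T = cur.copy(), cur_val, T0
    for _ in range(iters):
        cand = cur.copy()
        col = rng.integers(k)
        i, j = rng.choice(n, size=2, replace=False)
        cand[[i, j], col] = cand[[j, i], col]      # Latin-preserving swap
        val = min_dist(cand)
        # accept improvements; accept worsenings with Boltzmann probability
        if val > cur_val or rng.random() < np.exp((val - cur_val) / T):
            cur, cur_val = cand, val
            if val > best_val:
                best, best_val = cand.copy(), val
        T *= cooling
    return best, best_val

design, score = maximin_lhd_sa(n=10, k=2, rng=0)
```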
3. Response Surface Analysis of Trapped-Vortex Augmented Airfoils. Zope, Anup Devidas (11 December 2015)
This study examines the effect of a passive trapped-vortex cell on the lift-to-drag (L/D) ratio of an FFA-W3-301 airfoil. The upper surface of the airfoil was modified to incorporate a cavity defined by seven parameters, and the L/D ratio was modeled with a radial basis function metamodel, which was then used to find the design parameter values giving the highest L/D. The numerical results indicate that the L/D ratio is most sensitive to the position on the airfoil's upper surface at which the cavity starts, the position of the cavity's end point, and the vertical distance of the end point relative to the airfoil surface. The L/D ratio can be improved by locating the cavity start point at the point of separation for a particular angle of attack. The optimal cavity shape (o19_aXX) is also tested on a NACA0024 airfoil.
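A minimal sketch of a radial basis function metamodel over seven design parameters; the Gaussian basis, shape parameter, nugget, and data are illustrative assumptions, not the study's settings.

```python
import numpy as np

def rbf_fit_predict(X, y, Xnew, eps=1.0):
    """Gaussian RBF interpolation: solve K w = y, predict k(x, X) @ w."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    # small nugget on the diagonal for numerical stability
    w = np.linalg.solve(kernel(X, X) + 1e-10 * np.eye(len(X)), y)
    return kernel(Xnew, X) @ w

rng = np.random.default_rng(3)
X = rng.random((30, 7))            # 30 sampled designs of 7 cavity parameters
y = np.sin(X.sum(1))               # stand-in for CFD-computed L/D values
pred = rbf_fit_predict(X, y, rng.random((5, 7)))
```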
4. Contributions to computer experiments and binary time series. Hung, Ying (19 May 2008)
This thesis consists of two parts. The first part focuses on design and analysis for computer experiments and the second part deals with binary time series and its application to kinetic studies in micropipette experiments.
The first part of the thesis addresses three problems. The first problem is concerned with optimal design of computer experiments. Latin hypercube designs (LHDs) have been used extensively for computer experiments. A multi-objective optimization approach is proposed to find good LHDs by combining correlation and distance performance measures. Several examples are presented to show that the obtained designs are good in terms of both criteria.
The second problem concerns the analysis of computer experiments. Kriging is the most popular method for approximating complex computer models. Here a modified kriging method with an unknown mean model, called blind kriging, is proposed. The unknown mean model is identified from experimental data using a Bayesian variable selection technique. Many examples are presented that show remarkable improvement in prediction using blind kriging over ordinary kriging.
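For contrast with blind kriging, here is a minimal ordinary kriging predictor, the constant-mean baseline that blind kriging improves on by selecting a mean model. The Gaussian correlation and the fixed theta are illustrative assumptions; in practice theta is estimated, e.g. by maximum likelihood.

```python
import numpy as np

def ordinary_kriging(X, y, Xnew, theta=5.0):
    """Ordinary kriging: GLS-estimated constant mean plus correlated residual."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    R = corr(X, X) + 1e-10 * np.eye(len(X))       # nugget for stability
    ones = np.ones(len(X))
    mu = ones @ np.linalg.solve(R, y) / (ones @ np.linalg.solve(R, ones))
    r = corr(Xnew, X)
    return mu + r @ np.linalg.solve(R, y - mu * ones)

rng = np.random.default_rng(5)
X = rng.random((20, 2))
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2            # stand-in computer model
pred = ordinary_kriging(X, y, rng.random((4, 2)))
```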
The third problem concerns computer experiments with nested and branching factors. Design and analysis of experiments with branching and nested factors are challenging and have received little attention in the literature. Motivated by a computer experiment on a machining process, we develop optimal LHDs and kriging methods that accommodate branching and nested factors. Applying the proposed methods yields optimal machining conditions and tool edge geometry, resulting in a remarkable improvement in the machining process.
The second part of the thesis deals with binary time series analysis applied to cell adhesion frequency experiments. Motivated by the analysis of repeated adhesion tests, a binary time series model incorporating random effects is developed. A goodness-of-fit statistic is introduced to assess the adequacy of distributional assumptions on the dependent binary data with random effects. Application of the proposed methodology to real data from a T-cell experiment reveals some interesting information, providing quantitative evidence for the speculation that cells can have "memory" in their adhesion behavior.
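A sketch of the kind of dependent binary series with random effects described here, simulating per-cell adhesion sequences in which a positive lag coefficient encodes "memory". The logistic model form and all parameter values are assumptions for illustration, not the thesis's fitted model.

```python
import numpy as np

def simulate_adhesion(n_cells=5, n_trials=50, beta0=-0.5, beta1=1.2,
                      sigma=0.8, rng=None):
    """Simulate binary adhesion outcomes with a cell-level random effect:
    P(y_t = 1) = logistic(beta0 + beta1 * y_{t-1} + b_i), b_i ~ N(0, sigma^2)."""
    rng = np.random.default_rng(rng)
    out = np.zeros((n_cells, n_trials), dtype=int)
    b = sigma * rng.standard_normal(n_cells)      # random effect per cell
    for i in range(n_cells):
        prev = 0
        for t in range(n_trials):
            p = 1 / (1 + np.exp(-(beta0 + beta1 * prev + b[i])))
            prev = out[i, t] = rng.random() < p
    return out

y = simulate_adhesion(rng=6)                      # rows: cells, columns: trials
```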