11 |
Some contributions to Latin hypercube design, irregular region smoothing and uncertainty quantification. Xie, Huizhi, 21 May 2012 (has links)
In the first part of the thesis, we propose a new class of designs called multi-layer sliced Latin hypercube designs (MLSLHDs) for running computer experiments. A general recursive strategy for constructing MLSLHDs has been developed. Ordinary Latin hypercube designs and sliced Latin hypercube designs (SLHDs) are special cases of MLSLHDs with zero and one layer, respectively. A special case of MLSLHDs with two layers, the doubly sliced Latin hypercube design (DSLHD), is studied in detail. The doubly sliced structure of a DSLHD allows more flexible batch sizes than an SLHD for collective evaluation of different computer models or batch-sequential evaluation of a single computer model. Both finite-sample and asymptotic sampling properties of DSLHDs are examined. Numerical experiments are provided to show the advantage of DSLHDs over SLHDs for both sequential evaluation of a single computer model and collective evaluation of different computer models. Other applications of DSLHDs include designs for Gaussian process modeling with quantitative and qualitative factors, cross-validation, etc. Moreover, we also show that the sliced structure, possibly combined with other criteria such as distance-based criteria, can be used to sequentially sample from a large spatial data set when we cannot include all the data points for modeling. A data center example is presented to illustrate the idea. The enhanced stochastic evolutionary algorithm is deployed to search for optimal designs.
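A minimal sketch of the plain (single-layer) sliced LHD construction that MLSLHDs generalize is given below; the slice count, run counts, and random jitter are illustrative, and the optimization by the enhanced stochastic evolutionary algorithm is not included.

```python
import numpy as np

def sliced_lhd(m, t, d, rng=None):
    """Generate a sliced Latin hypercube design: t slices of m runs each in d
    dimensions.  The full n = m*t design is an LHD, and each slice, after
    collapsing the n fine levels down to m coarse levels, is itself a small LHD.
    (Illustrative sketch of the basic sliced-LHD construction; the thesis'
    multi-layer generalisation recurses on this idea.)"""
    rng = np.random.default_rng(rng)
    n = m * t
    design = np.empty((n, d))
    for k in range(d):                       # build one column at a time
        col = np.empty((t, m), dtype=int)
        # each slice gets a random permutation of the m coarse levels
        for j in range(t):
            col[j] = rng.permutation(m)
        # for every coarse level, spread the t slices over distinct sub-levels
        sub = np.empty((t, m), dtype=int)
        for level in range(m):
            order = rng.permutation(t)       # which sub-level each slice takes
            for j in range(t):
                pos = np.where(col[j] == level)[0][0]
                sub[j, pos] = level * t + order[j]
        # jitter within each of the n cells and store as one column
        u = rng.random((t, m))
        design[:, k] = ((sub + u) / n).ravel()
    return design.reshape(t, m, d)           # slice j is design[j]

slices = sliced_lhd(m=5, t=3, d=2, rng=0)    # 3 slices of 5 points in 2-D
```

Collapsing each slice's coordinates back to m levels recovers a small LHD, while stacking all slices gives a full n-run LHD, which is the structural property exploited for batch-sequential runs.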
In the second part of the thesis, we propose a new smoothing technique called completely data-driven smoothing, intended for smoothing over irregular regions. The idea is to replace the penalty term in smoothing splines by an estimate of it based on a local least-squares technique. A closed-form solution for our approach is derived. The implementation is simple and computationally efficient. With some regularity assumptions on the input region and analytical assumptions on the true function, it can be shown that our estimator achieves the optimal convergence rate of general nonparametric regression. The algorithmic parameter that governs the trade-off between fidelity to the data and smoothness of the estimated function is chosen by generalized cross-validation (GCV). The asymptotic optimality of GCV for choosing this parameter in our estimator is proved. Numerical experiments show that our method works well for both regular and irregular region smoothing.
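The sketch below illustrates the two ingredients referred to above, a penalized least-squares smoother and GCV selection of the trade-off parameter, using a generic discrete roughness penalty; it is not the local least-squares penalty estimate proposed in the thesis.

```python
import numpy as np

def second_difference_penalty(n):
    """Discrete roughness penalty D = d2.T @ d2 (a stand-in for the spline
    penalty; the thesis replaces this term by a local least-squares estimate)."""
    d2 = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second-difference matrix
    return d2.T @ d2

def gcv_smooth(y, lambdas):
    """Penalized least-squares smoother f = (I + lam*D)^{-1} y with the
    smoothing parameter chosen by generalized cross-validation."""
    n = len(y)
    D = second_difference_penalty(n)
    best = (np.inf, None, None)
    for lam in lambdas:
        S = np.linalg.solve(np.eye(n) + lam * D, np.eye(n))   # smoother matrix
        fit = S @ y
        gcv = np.mean((y - fit) ** 2) / (1.0 - np.trace(S) / n) ** 2
        if gcv < best[0]:
            best = (gcv, lam, fit)
    return best                                # (GCV score, lambda, fitted values)

x = np.linspace(0, 1, 200)
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
score, lam, fhat = gcv_smooth(y, lambdas=10.0 ** np.arange(-2, 5))
```

The GCV criterion trades residual error against the effective degrees of freedom tr(S)/n, which is the same trade-off the algorithmic parameter governs in the proposed estimator.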
The third part of the thesis deals with uncertainty quantification in building energy assessment. In current practice, building simulation is routinely performed with best guesses of input parameters whose true values cannot be known exactly. These guesses affect the accuracy and reliability of the outcomes. There is an increasing need to perform uncertainty analysis of those input parameters that are known to have a significant impact on the final outcome. In this part of the thesis, we focus on uncertainty quantification of two microclimate parameters: the local wind speed and the wind pressure coefficient. The idea is to compare the outcome of the standard model with that of a higher-fidelity model. Statistical analysis is then conducted to build a connection between the two. The explicit form of the statistical models can facilitate improvement of the corresponding modules in the standard model.
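As a toy illustration of linking a standard-model output to a higher-fidelity one with an explicit statistical model, the sketch below regresses hypothetical paired wind-pressure-coefficient values; the data, linear form, and noise level are assumptions, not results from the thesis.

```python
import numpy as np

# Hypothetical paired outputs: a cheap standard-model prediction of the wind
# pressure coefficient and a higher-fidelity value at the same conditions.
rng = np.random.default_rng(0)
cp_standard = rng.uniform(-0.8, 0.8, size=50)                        # standard model
cp_hifi = 0.9 * cp_standard - 0.05 + 0.04 * rng.standard_normal(50)  # hi-fi stand-in

# Explicit statistical link: cp_hifi ~ beta0 + beta1 * cp_standard + noise.
X = np.column_stack([np.ones_like(cp_standard), cp_standard])
beta, *_ = np.linalg.lstsq(X, cp_hifi, rcond=None)
resid_sd = np.std(cp_hifi - X @ beta, ddof=2)

# beta quantifies the bias of the standard module; resid_sd quantifies the
# remaining uncertainty after correction.
```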
|
12 |
Alternative Sampling and Analysis Methods for Digital Soil Mapping in Southwestern Utah. Brungard, Colby W., 01 May 2009 (has links)
Digital soil mapping (DSM) relies on quantitative relationships between easily measured environmental covariates and field and laboratory data. We applied innovative sampling and inference techniques to predict the distribution of soil attributes, taxonomic classes, and dominant vegetation across a 30,000-ha complex Great Basin landscape in southwestern Utah. This arid rangeland was characterized by rugged topography, diverse vegetation, and intricate geology. Environmental covariates calculated from digital elevation models (DEMs) and spectral satellite data were used to represent factors controlling soil development and distribution. We investigated optimal sample size and sampled the environmental covariates using conditioned Latin Hypercube Sampling (cLHS). We demonstrated that cLHS, a type of stratified random sampling, closely approximated the full range of variability of the environmental covariates in feature and geographic space with small sample sizes. Site and soil data were collected at 300 locations identified by cLHS. Random forests were used to generate spatial predictions and associated probabilities of site and soil characteristics. Balanced random forests and balanced and weighted random forests were investigated for their use in producing an overall soil map. Overall and class errors (referred to as out-of-bag [OOB] error) were within acceptable levels. Quantitative covariate importance was useful in determining which factors were important for soil distribution. Random forest spatial predictions were evaluated based on the conceptual framework developed during field sampling.
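A hedged sketch of the modeling step is shown below: a random forest classifier fitted to covariates at the sampled sites, with OOB error, covariate importance, and per-class prediction probabilities. The data are placeholders, and scikit-learn's class weighting stands in for the balanced random forest variants mentioned above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical table of environmental covariates (e.g. DEM derivatives and
# spectral indices) at the cLHS field sites, with observed soil taxonomic class.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))                 # 300 cLHS sites, 6 covariates
y = rng.integers(0, 4, size=300)              # 4 soil classes (placeholder labels)

# "Balanced" here means class weights offset unequal class frequencies;
# oob_score reports the out-of-bag (OOB) accuracy.
rf = RandomForestClassifier(
    n_estimators=500, class_weight="balanced", oob_score=True, random_state=0
)
rf.fit(X, y)
print("OOB error:", 1.0 - rf.oob_score_)
print("covariate importance:", rf.feature_importances_)

# Spatial prediction: evaluate the fitted forest on the covariate stack sampled
# at every grid cell (here a random stand-in for the raster grid).
grid_covariates = rng.normal(size=(1000, 6))
soil_map = rf.predict(grid_covariates)
class_prob = rf.predict_proba(grid_covariates)   # per-class probabilities
```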
|
13 |
Avaliação numérica e computacional do efeito de incertezas inerentes a sistemas mecânicos / Numerical and computational evaluation of the effect of uncertainties inherent in mechanical systems. Costa, Tatiane Nunes da, 25 August 2016 (has links)
Modern engineering problems are most often nonlinear and may also be subject to certain types of uncertainty that can directly influence the responses of a given system. In this context, stochastic methods have been studied extensively in order to obtain the best configurations for a given design. Among the stochastic techniques, the Monte Carlo method stands out, and in particular Latin Hypercube Sampling (LHS), a simpler variant of it. For this type of modeling, the Stochastic Finite Element Method (SFEM) is increasingly used, and an important tool for the discretization of stochastic fields is the Karhunen-Loève (KL) expansion. Three case studies are considered in this work: a discrete two-degree-of-freedom system, a continuous beam-type system coupled to both linear and nonlinear springs, and a rotor composed of a shaft, bearings, and disks. The influence of uncertainties on the systems studied is assessed using LHS, SFEM, and the KL expansion. The resulting stochastic study is then employed in the construction of a robust optimal design for the rotor problem.
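The sketch below illustrates, under assumed values for the grid, correlation length, and field mean, how a truncated Karhunen-Loève expansion of a one-dimensional random field can be sampled with Latin Hypercube Sampling; it is a generic illustration, not the SFEM model of the thesis.

```python
import numpy as np
from scipy.stats import norm, qmc

def kl_modes(x, corr_len, n_modes):
    """Discrete Karhunen-Loève modes of a zero-mean Gaussian field with an
    exponential covariance exp(-|x - x'| / corr_len) on the grid x."""
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1][:n_modes]        # keep the largest modes
    return eigval[order], eigvec[:, order]

x = np.linspace(0.0, 1.0, 101)                        # e.g. beam axis (assumed)
lam, phi = kl_modes(x, corr_len=0.3, n_modes=8)

# Latin hypercube sample of the KL coefficients (standard normals), then
# reconstruct field realisations: mean + sum_i sqrt(lam_i) * phi_i * xi_i.
sampler = qmc.LatinHypercube(d=8, seed=0)
xi = norm.ppf(sampler.random(n=200))                  # 200 LHS realisations
fields = 1.0 + xi @ (np.sqrt(lam) * phi).T            # mean value 1.0 (illustrative)
```

Each row of `fields` is one realisation of the uncertain property (e.g. a stiffness field along the beam) that would then be fed to the finite element model.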
|
14 |
Towards multifidelity uncertainty quantification for multiobjective structural design / Vers une approche multi-fidèle de quantification de l'incertain pour l'optimisation multi-objectif. Lebon, Jérémy, 12 December 2013 (has links)
This thesis addresses multi-objective optimization under uncertainty in structural design. We investigate Polynomial Chaos Expansion (PCE) surrogates, which require extensive training sets. We then face two issues: the high computational cost of an individual finite element simulation and its limited precision. From the numerical point of view, and in order to limit the computational expense of the PCE construction, we focus in particular on sparse PCE schemes. We also develop a custom Latin Hypercube Sampling scheme that takes the finite precision of the simulation into account. From the modeling point of view, we propose a multifidelity approach involving a hierarchy of models ranging from full-scale simulations through reduced-order physics-based models up to response surfaces. Finally, we investigate multiobjective optimization of structures under uncertainty. We extend the PCE model of the design objectives to take the deterministic design variables into account. We illustrate our work with examples in sheet metal forming and in the optimal design of truss structures.
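A minimal sketch of a regression-based sparse PCE surrogate trained on a Latin Hypercube sample is given below; the simulator, polynomial degree, and L1-penalized fit are illustrative stand-ins and do not reproduce the custom sampling scheme or the multifidelity hierarchy developed in the thesis.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from scipy.stats import qmc
from sklearn.linear_model import LassoCV

def pce_basis(u, degree):
    """Total-degree Legendre basis for two inputs scaled to [-1, 1]
    (the usual chaos polynomials for uniform random variables)."""
    x = 2.0 * u - 1.0                               # map [0, 1] -> [-1, 1]
    cols = []
    for a in range(degree + 1):
        for b in range(degree + 1 - a):             # multi-indices (a, b), a + b <= degree
            ca = np.zeros(a + 1)
            ca[-1] = 1.0
            cb = np.zeros(b + 1)
            cb[-1] = 1.0
            cols.append(legval(x[:, 0], ca) * legval(x[:, 1], cb))
    return np.column_stack(cols)

# LHS training set for a hypothetical expensive simulator with 2 uncertain inputs.
u_train = qmc.LatinHypercube(d=2, seed=1).random(n=60)
y_train = np.sin(3 * u_train[:, 0]) + u_train[:, 1] ** 2     # simulator stand-in

# Sparse PCE proxy: least-squares regression with an L1 penalty keeps only the
# polynomial terms the data actually support.
Phi = pce_basis(u_train, degree=6)
pce = LassoCV(cv=5, fit_intercept=False, max_iter=50000).fit(Phi, y_train)
y_hat = pce.predict(pce_basis(qmc.LatinHypercube(d=2, seed=2).random(n=1000), 6))
```

Once fitted, the surrogate is evaluated in place of the finite element model inside the uncertainty propagation and optimization loops.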
|
15 |
Hypercubes Latins maximin pour l'échantillonnage de systèmes complexes / Maximin Latin hypercubes for experimental design. Le Guiban, Kaourintin, 24 January 2018 (has links)
A maximin Latin Hypercube Design (LHD) is a set of points in a hypercube such that no two points share a coordinate on any dimension and the minimal distance between two points is maximal. Maximin LHDs are widely used in metamodeling thanks to their good sampling properties. As most work concerning LHDs has focused on heuristic algorithms to produce them, we decided to make a detailed study of this problem, including its complexity and approximability as well as the design of practical heuristic algorithms. We generalized the maximin LHD construction problem by defining the problem of completing a partial LHD while respecting the maximin constraint. The subproblem in which the partial LHD is initially empty corresponds to the classical LHD construction problem. We studied the complexity of the completion problem and proved its NP-completeness in many cases. As we did not determine the complexity of the subproblem, we searched for performance guarantees for algorithms addressing both problems. On the one hand, we proved that the completion problem is inapproximable for all norms in dimensions k ≥ 3, and we gave a weaker inapproximability result for the L1 norm in dimension k = 2. On the other hand, we designed an approximation algorithm for the construction problem and established its approximation ratio using two new upper bounds we introduced. Besides the theoretical aspects of this study, we worked on heuristic algorithms for these problems, focusing on the simulated annealing metaheuristic. We proposed a new evaluation function for the construction problem and new mutations for both the construction and completion problems, improving on the results reported in the literature.
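For illustration, a baseline simulated annealing search for a maximin LHD is sketched below: columns are permutations, a mutation swaps two levels within one column, and the minimal pairwise distance is the objective. The temperature schedule and run sizes are arbitrary, and the evaluation function and mutations proposed in the thesis are not reproduced.

```python
import numpy as np

def min_dist(D):
    """Smallest pairwise Euclidean distance of the design D (to be maximised)."""
    diff = D[:, None, :] - D[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(D), k=1)].min()

def sa_maximin_lhd(n, k, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    """Baseline simulated annealing for a maximin LHD: each column is a
    permutation of 0..n-1, and a mutation swaps two entries inside one column,
    which preserves the Latin property."""
    rng = np.random.default_rng(seed)
    D = np.column_stack([rng.permutation(n) for _ in range(k)]).astype(float)
    cur_val = min_dist(D)
    best, best_val, T = D.copy(), cur_val, t0
    for _ in range(iters):
        cand = D.copy()
        col = rng.integers(k)
        i, j = rng.choice(n, size=2, replace=False)
        cand[[i, j], col] = cand[[j, i], col]            # swap two levels
        val = min_dist(cand)
        # Metropolis acceptance: always take improvements, sometimes accept worse.
        if val > cur_val or rng.random() < np.exp((val - cur_val) / T):
            D, cur_val = cand, val
            if val > best_val:
                best, best_val = cand.copy(), val
        T *= cooling
    return best, best_val

design, score = sa_maximin_lhd(n=20, k=3)
```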
|
16 |
An analysis of degraded communications in the Army's future force. Lindquist, Joseph M., 06 1900 (has links)
Approved for public release; distribution is unlimited / The US Department of Defense is currently pursuing the most comprehensive transformation of its forces since the early years of WWII. This transformation is a holistic approach to updating both the equipment with which the forces will fight and the way in which they will fight. It relies heavily on fully networked air, ground, and space-based platforms. While many experts agree that over the course of the next 10 years communications equipment will emerge to support the networking of these systems, there remains much uncertainty about how operations will be affected if the technology does not mature enough to meet expectations. This research shows that even a 25 percent degradation in communications range could pose significant challenges for the Future Force. Additionally, even small delays (latencies greater than one minute) and constraints on network throughput can increase Future Force casualties and the duration of battle. While all analyses show that the Future Force is the superior element, with the same battle end state of victory, the cost of that victory depends significantly on effective communications. / Captain, United States Army
|
17 |
An exploratory analysis of convoy protection using agent-based simulation. Hakola, Matthew B., 06 1900 (has links)
Approved for public release, distribution is unlimited / Recent insurgent tactics during Operation Iraqi Freedom (OIF) have demonstrated that coalition logistical convoys are vulnerable targets. This thesis examines the tactics, techniques, and procedures (TTPs) used in convoy operations in an attempt to identify the critical factors that lead to mission success. A ground convoy operation scenario is created in the agent-based model (ABM) Map Aware Non-uniform Automata (MANA). The scenario models a generic logistical convoy consisting of security vehicles, logistical vehicles, and an unmanned aerial vehicle (UAV), together with an enemy ambushing force. The convoy travels along a main supply route (MSR), where it is ambushed by a small insurgent force. We use military experience, judgment, and exploratory simulation runs to identify 11 critical factors within the created scenario. The data farming process and the Latin Hypercube (LHC) experimental design technique are used to examine the 11 factors thoroughly. Using the 11 factors, 516 design points are created and farmed over to produce 25,800 observations. Additive multiple linear regression is used to fit a model to the 25,800 observations. From the created scenario it is concluded that: convoy mission success may be determined by only a few factors; the actions of logistical vehicles are more critical than those of security vehicles; UAVs provide a statistically significant advantage; and ABMs coupled with LHC designs and data farming are valuable tools for understanding complex problems. / Captain, United States Marine Corps
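A hedged sketch of the design-and-regression workflow described above is shown below; the factor ranges, replication count, and simulated response are placeholders for the MANA outputs, so only the structure (a space-filling design over 11 factors, replicated runs, and an additive main-effects fit) is meaningful.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-in for the study: a space-filling design over 11 factors,
# replicated simulation runs per design point, and an additive linear fit.
n_factors, n_points, n_reps = 11, 516, 50
lows, highs = np.zeros(n_factors), np.ones(n_factors)       # placeholder ranges

design = qmc.scale(qmc.LatinHypercube(d=n_factors, seed=3).random(n_points),
                   lows, highs)
X = np.repeat(design, n_reps, axis=0)                        # 516 x 50 = 25,800 runs
rng = np.random.default_rng(3)
y = X @ rng.normal(size=n_factors) + rng.normal(scale=0.5, size=len(X))  # fake MOE

# Additive multiple linear regression: intercept plus main effects only.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```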
|
18 |
Using agent-based modeling to examine the logistical chain of the seabase. Milton, Rebecca M., 03 1900 (has links)
Approved for public release, distribution is unlimited / This thesis examines a 2015 Marine Expeditionary Brigade scheme of maneuver as the baseline scenario for a commercial logistics support software program called SEAWAY. Modifications to this scenario are conducted using a designed experiment in order to explore how the plan characteristics relate to eleven specified input factors. Multiple regression analysis is used to fit models to the resulting data for three different measures of performance: Total Aircraft Sorties, Total Aircraft Sortie Time, and Total Aircraft Tons. The results suggest that plan performance is predicted well by a small subset of the factors and their interactions. One implication of this work is a better understanding of which factors are key determinants of the plan characteristics for variations on this specific base scenario. By using these fitted models, the number of SEAWAY runs needed to identify acceptable plans should decrease dramatically. The approach in this thesis provides a blueprint for similar analyses of other scenarios by demonstrating how information gained from models fit during an exploration phase might allow the logistician to quickly determine factor settings that yield an acceptable plan once details of an operation become available. Finally, working with the SEAWAY developers provided them with some new insights. / Lieutenant Commander, United States Navy
|
19 |
Analysis of the performance of an optimization model for time-shiftable electrical load scheduling under uncertainty. Olabode, John A., 12 1900 (has links)
Approved for public release; distribution is unlimited / To ensure sufficient capacity to handle unexpected demands for electric power, decision makers often overestimate expeditionary power requirements. As a result, we often use limited resources inefficiently, purchasing more generators and investing in more renewable energy sources than are needed to run power systems on the battlefield. Improving the efficiency of expeditionary power units requires better management of load requirements on the power grid and, where possible, shifting those loads to more economical times of day. We analyze the performance of a previously developed optimization model for scheduling time-shiftable electrical loads in an expeditionary power grid model in two experiments. One experiment uses model data similar to the original baseline data, in which expected demand and expected renewable production remain constant throughout the day. The second experiment introduces unscheduled demand and realistic fluctuations in the power production and demand distributions that more closely reflect actual data. Our major findings show that the composition of power production on the grid affects which uncertain factors influence fuel consumption, and that uncertainty in the grid system does not always increase fuel consumption by a large amount. We also discover that the generators running the most do not always have the best load factor on the grid, even when optimally scheduled. / Lieutenant Commander, United States Navy
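To make the idea of a time-shiftable load schedule concrete, the toy linear program below places a flexible load where renewable production is highest so that generator energy is minimized; the demand and solar profiles, energy amounts, and objective are assumptions and not the thesis' model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy schedule: choose when to run a flexible load so that diesel-generator
# energy is minimised, given an hourly base demand and a solar profile.
T = 24
hours = np.arange(T)
base = 30 + 10 * np.sin(2 * np.pi * (hours - 18) / 24)            # kW base demand
solar = np.clip(40 * np.sin(np.pi * (hours - 6) / 12), 0, None)   # kW renewables
E_shift, s_max = 60.0, 10.0                       # kWh to place, per-hour cap

# Variables z = [g_0..g_23, s_0..s_23]; minimise total generator energy sum(g),
# subject to g_t >= base_t + s_t - solar_t and sum(s) = E_shift.
c = np.concatenate([np.ones(T), np.zeros(T)])
A_ub = np.hstack([-np.eye(T), np.eye(T)])         # -g_t + s_t <= solar_t - base_t
b_ub = solar - base
A_eq = np.hstack([np.zeros((1, T)), np.ones((1, T))])
b_eq = [E_shift]
bounds = [(0, None)] * T + [(0, s_max)] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
generator, shiftable = res.x[:T], res.x[T:]
```

In this toy setting the optimizer pushes the shiftable load into the midday hours where solar output exceeds the base demand, which is the behaviour a load-shifting model exploits.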
|
20 |
Statistical Yield Analysis and Design for Nanometer VLSI. Jaffari, Javid, January 2010 (has links)
Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly impact the performance and power consumption of the fabricated devices, severely affecting manufacturing yield. Moreover, the large number of transistors on a single chip adds further challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting yield before entering such an expensive fabrication process.
In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. The variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation.
At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and the time-consuming simulation of analog circuits are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits that significantly improve the run-time of the traditional Monte Carlo method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method, namely the ability to handle any complex circuit model, while the use and engineering of advanced variance reduction and sampling methods provide ultra-fast yield estimation for different types of VLSI circuits. Such methods include control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and quasi-Monte Carlo.
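The sketch below contrasts crude Monte Carlo with plain Latin Hypercube Sampling on a toy yield estimate, to illustrate the variance-reduction effect such methods build on; the delay model, specification, and sample sizes are invented, and the correlation-controlled LHS, control variate, importance sampling, and QMC schemes of the thesis are not implemented.

```python
import numpy as np
from scipy.stats import norm, qmc

def delay(p):
    """Toy performance model: a 'path delay' as a smooth function of two
    normalised process parameters (stand-in for a circuit simulation)."""
    return 1.0 + 0.15 * p[:, 0] + 0.10 * p[:, 1] + 0.05 * p[:, 0] * p[:, 1]

spec, n, reps = 1.25, 500, 200
rng = np.random.default_rng(7)

def yield_mc():
    p = rng.standard_normal((n, 2))                   # crude Monte Carlo
    return np.mean(delay(p) <= spec)

def yield_lhs(seed):
    u = qmc.LatinHypercube(d=2, seed=seed).random(n)  # stratified in each margin
    return np.mean(delay(norm.ppf(u)) <= spec)

mc = [yield_mc() for _ in range(reps)]
lhs = [yield_lhs(s) for s in range(reps)]
print("crude MC: mean %.4f  std %.4f" % (np.mean(mc), np.std(mc)))
print("LHS:      mean %.4f  std %.4f" % (np.mean(lhs), np.std(lhs)))
# Both estimators are unbiased for the yield; the LHS estimator typically shows
# a smaller spread across repetitions, which is the variance-reduction effect.
```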
At the device level, a methodology is proposed that introduces a variation-aware perspective for designing MOS devices at aggressively scaled geometries. It defines a device-level yield measure targeting the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device so as to maximize this device-level yield.
Finally, a statistical thermal analysis framework is proposed that accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer is built on the observation that process variations lead to uncertain leakage power sources, so that the thermal profile itself is probabilistic in nature. A combined process-thermal-leakage analysis therefore yields a more reliable full-chip statistical leakage power yield.
|