1

Sequential Design of Experiments to Estimate a Probability of Failure.

Li, Ling 16 May 2012 (has links) (PDF)
This thesis deals with the problem of estimating the probability of failure of a system from computer simulations. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited, which is incompatible with the use of classical Monte Carlo methods. Indeed, estimating a small probability of failure from very few simulations, as required in some complex industrial problems, is a particularly difficult problem. A classical approach consists in replacing the expensive-to-simulate model with a surrogate model that requires few computational resources. Using such a surrogate model, two operations can be carried out. The first is to choose as small a number of simulations as possible in order to learn the regions of the system's parameter space that lead to failure. The second is to construct good estimators of the probability of failure. The contributions of this thesis are twofold. First, we derive SUR (stepwise uncertainty reduction) strategies from a Bayesian formulation of the problem of estimating a probability of failure. Second, we propose a new algorithm, called Bayesian Subset Simulation, that combines the strengths of the Subset Simulation algorithm and of sequential Bayesian methods based on Gaussian process modeling. The new strategies are supported by numerical results on several benchmark examples from reliability analysis, where the proposed methods perform well compared to existing methods from the literature.
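To make concrete why classical Monte Carlo is ruled out under a tight simulation budget, the following sketch estimates a failure probability for a hypothetical limit-state function g. Everything in it (g, the Gaussian input, the threshold) is illustrative and not taken from the thesis:

```python
import numpy as np

# Hypothetical limit-state function standing in for the expensive
# simulator (not from the thesis): the system fails when g(x) < 0.
def g(x):
    return 4.0 - x

def mc_failure_probability(n, seed=0):
    """Crude Monte Carlo estimate of p = P(g(X) < 0) with X ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    return float(np.mean(g(x) < 0.0))

# For a small p around 1e-5, the estimator's relative error is roughly
# sqrt(1 / (n * p)); a 10% relative error would need n on the order of
# 1e7 -- hopeless when each evaluation of g is an expensive simulation.
p_hat = mc_failure_probability(1_000_000)
```

Surrogate-based sequential strategies such as SUR aim to spend the few available runs of g where they most reduce the uncertainty about p, instead of sampling blindly as above.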
2

A metamodeling approach for approximation of multivariate, stochastic and dynamic simulations

Hernandez Moreno, Andres Felipe 04 April 2012 (has links)
This thesis describes the implementation of metamodeling approaches as a solution for approximating multivariate, stochastic and dynamic simulations. In statistics, metamodeling (or "model of a model") refers to the scenario where an empirical model is built from simulated data. In this thesis, this idea is exploited by using pre-recorded dynamic simulations as a source of simulated dynamic data. Based on this data, an empirical model is trained to map the dynamic evolution of the system from the current discrete time step to the next. It is therefore possible to approximate the dynamics of the complex dynamic simulation by iteratively applying the trained empirical model. The rationale for creating such an approximate dynamic representation is that the empirical models / metamodels are much cheaper to compute than the original dynamic simulation, while having an acceptable prediction error. The successful implementation of metamodeling approaches as approximations of complex dynamic simulations requires an understanding of the propagation of error during the iterative process. Prediction errors made by the empirical model at earlier times of the iterative process propagate into its future predictions, so the trained empirical model will deviate from the expensive dynamic simulation because of its own errors. Based on these considerations, a Gaussian process model is chosen as the metamodeling approach for the approximation of expensive dynamic simulations in this thesis. This empirical model was selected not only for its flexibility and error estimation properties, but also because it can illustrate relevant issues to be considered if other metamodeling approaches were used for this purpose.
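The one-step-ahead training and iterative rollout described above can be sketched on a toy problem. The linear system, noise level, and least-squares fit below are all invented stand-ins, not the thesis's simulations or its Gaussian process metamodel:

```python
import numpy as np

# Toy "pre-recorded dynamic simulation": a stable scalar system
# x_{t+1} = a * x_t, standing in for an expensive simulator.
a_true = 0.9
x = np.empty(51)
x[0] = 1.0
for t in range(50):
    x[t + 1] = a_true * x[t]

# Train a one-step empirical model x_{t+1} ~ f(x_t) by least squares
# on noisy recordings of the trajectory.
rng = np.random.default_rng(1)
x_obs = x + 0.01 * rng.standard_normal(x.shape)
X_now, X_next = x_obs[:-1], x_obs[1:]
a_hat = float(X_now @ X_next / (X_now @ X_now))

# Approximate the full trajectory by iterating the one-step model.
# Each step's small prediction error feeds the next prediction --
# exactly the error-propagation issue the thesis analyzes.
x_roll = np.empty(51)
x_roll[0] = x[0]
for t in range(50):
    x_roll[t + 1] = a_hat * x_roll[t]

max_err = float(np.max(np.abs(x_roll - x)))
```

For this stable system the rollout error stays bounded; for less forgiving dynamics, small one-step errors can compound rapidly, which is why the error analysis matters.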
3

Modeling of the fundamental mechanical interactions of unit load components during warehouse racking storage

Molina Montoya, Eduardo 04 February 2021 (has links)
The global supply chain has been built on the material handling capabilities provided by the use of pallets and corrugated boxes. Current pallet design methodologies frequently underestimate the load carrying capacity of pallets by assuming they will only carry uniformly distributed, flexible payloads. By considering the effect of various payload characteristics and their interactions during the pallet design process, the structure of pallets can be optimized. This, in turn, will reduce the material consumption required to support the pallet industry. In order to understand the mechanical interactions between stacked boxes and pallet decks, and how these interactions affect the bending moment of pallets, a finite element model was developed and validated. The model was two-dimensional, nonlinear, and implicit dynamic. It allowed the effects of different payload configurations on the pallet bending response to be evaluated. The model accurately predicted the deflection of the pallet segment and the movement of the packages for each scenario simulated. The second phase of the study characterized the effects, significant factors, and interactions influencing load bridging on unit loads. It provided a clear understanding of the load bridging effect and how it can be successfully included during the unit load design process. It was concluded that pallet yield strength could be increased by over 60% when accounting for the load bridging effect. To provide a more efficient and cost-effective solution, a surrogate model was developed using Gaussian process regression. A detailed analysis of the payloads' effects on pallet deflection was conducted. Four factors were identified as having significant influence: the number of columns in the unit load, the height of the payload, the friction coefficient of the payload's contact with the pallet deck, and the contact friction between the packages.
Additionally, it was identified that complex interactions exist between these significant factors, so they must always be considered. / Doctor of Philosophy / Pallets are a key element of an efficient global supply chain. Most products that are transported are commonly packaged in corrugated boxes and handled by stacking these boxes on pallets. Currently, pallet design methods do not take into consideration the product that is being carried, instead using generic flexible loads for the determination of the pallet's load carrying capacity. In practice, most pallets carry discrete loads, such as corrugated boxes. It has been proven that a pallet, when carrying certain types of packages, can have increased performance compared to the design's estimated load carrying capacity. This is caused by the load redistribution across the pallet deck through an effect known as load bridging. Being able to incorporate the load bridging effect on pallet performance during the design process can allow for the optimization of pallets for specific uses and the reduction in costs and in material consumption. Historically, this effect has been evaluated through physical testing, but that is a slow and cumbersome process that does not allow control of all of the variables for the development of a general model. This research study developed a computer simulation model of a simplified unit load to demonstrate and replicate the load bridging effect. Additionally, a surrogate model was developed in order to conduct a detailed analysis of the main factors and their interactions. These models provide pallet designers an efficient method to use to identify opportunities to modify the unit load's characteristics and improve pallet performance for specific conditions of use.
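As a generic illustration of the Gaussian process regression behind such a surrogate (with made-up one-dimensional data, not the study's pallet-deflection factors or its fitted model):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Made-up design points and a smooth response (illustrative only).
X = np.linspace(0.0, 1.0, 6)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])

# At observed design points the surrogate interpolates the simulation;
# between them it also reports a predictive variance.
mean, var = gp_predict(X, y, X)
```

Once trained on a modest number of finite element runs, such a surrogate can be evaluated thousands of times to study factor effects and interactions at negligible cost.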
4

Statistical Methods for Non-Linear Profile Monitoring

Quevedo Candela, Ana Valeria 02 January 2020 (has links)
Monitoring a process over time whose characteristics are represented mathematically in functional forms, or profiles, has attracted increased interest and extensive research. Most current techniques require all of the data for each profile to determine the state of the process. Thus, quality engineers in industrial processes such as agricultural, aquacultural, and chemical ones cannot make corrections to the current profile at an early stage, while it is still being generated. In addition, most current techniques focus on the statistical significance of the parameters or features of the model rather than on practical significance, which often relates to the actual quality characteristic. The goal of this research is to provide alternatives that address these two main concerns. First, we study the use of a Shewhart-type control chart to monitor within profiles, where the central line is the predictive mean profile and the control limits are formed from the prediction band. Second, we study a statistic based on a non-linear mixed model, recognizing that the model leads to correlations among the estimated parameters. / Doctor of Philosophy / Checking the stability over time of the quality of a process that is best expressed by a relationship between a quality characteristic and other variables involved in the process has received increasing attention. The goal of this research is to provide alternative methods to determine the state of such a process. Both methods presented here are compared to current methodologies. The first method allows a process to be monitored while the data is still being collected. The second is based on the quality characteristic of the process and takes full advantage of the model structure. Both methods appear to be more robust than the current most well-known method.
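A minimal sketch of the first method's idea, under invented assumptions (the logistic-shaped profile, noise level, and 3-sigma band below are stand-ins, not the dissertation's model): the central line is the mean profile estimated from historical in-control data, the limits form a pointwise band, and a partial profile can be checked before data collection ends.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)

# Made-up historical (in-control) profiles: a common nonlinear mean
# plus noise, standing in for phase-I data.
mean_true = 1.0 / (1.0 + np.exp(-10.0 * (t - 0.5)))
history = mean_true + 0.05 * rng.standard_normal((100, t.size))

# Central line and pointwise band estimated from the historical profiles.
center = history.mean(axis=0)
sd = history.std(axis=0, ddof=1)
k = 3.0                                   # Shewhart-style 3-sigma width
lower, upper = center - k * sd, center + k * sd

def out_of_control(profile):
    """Indices of the (possibly partial) profile outside the band, so
    the chart can signal before the whole profile is collected."""
    n = len(profile)
    return np.where((profile < lower[:n]) | (profile > upper[:n]))[0]

in_control = mean_true + 0.05 * rng.standard_normal(t.size)
shifted = mean_true + 0.4                 # a clearly shifted profile
```

Calling `out_of_control(shifted[:5])` on only the first few observations already signals, which is the within-profile monitoring advantage described above.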
5

A computational model of engineering decision making

Heller, Collin M. 13 January 2014 (has links)
The research objective of this thesis is to formulate and demonstrate a computational framework for modeling the design decisions of engineers. This framework is intended to be descriptive in nature as opposed to prescriptive or normative; the output of the model represents a plausible result of a designer's decision making process. The framework decomposes the decision into three elements: the problem statement, the designer's beliefs about the alternatives, and the designer's preferences. Multi-attribute utility theory is used to capture designer preferences for multiple objectives under uncertainty. Machine-learning techniques are used to store the designer's knowledge and to make Bayesian inferences regarding the attributes of alternatives. These models are integrated into the framework of a Markov decision process to simulate multiple sequential decisions. The overall framework enables the designer's decision problem to be transformed into an optimization problem statement; the simulated designer selects the alternative with the maximum expected utility. Although utility theory is typically viewed as a normative decision framework, the perspective in this research is that the approach can be used in a descriptive context for modeling rational and non-time critical decisions by engineering designers. This approach is intended to enable the formalisms of utility theory to be used to design human subjects experiments involving engineers in design organizations based on pairwise lotteries and other methods for preference elicitation. The results of these experiments would substantiate the selection of parameters in the model to enable it to be used to diagnose potential problems in engineering design projects. The purpose of the decision-making framework is to enable the development of a design process simulation of an organization involved in the development of a large-scale complex engineered system such as an aircraft or spacecraft. 
The decision model will allow researchers to determine the broader effects of individual engineering decisions on the aggregate dynamics of the design process and the resulting performance of the designed artifact itself. To illustrate the model's applicability in this context, the framework is demonstrated on three example problems: a one-dimensional decision problem, a multidimensional turbojet design problem, and a variable fidelity analysis problem. Individual utility functions are developed for designers in a requirements-driven design problem and then combined into a multi-attribute utility function. Gaussian process models are used to represent the designer's beliefs about the alternatives, and a custom covariance function is formulated to more accurately represent a designer's uncertainty in beliefs about the design attributes.
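The expected-utility selection at the heart of the framework can be sketched as follows. The attribute names, belief distributions, weights, and utility shapes are hypothetical, chosen only to illustrate multi-attribute utility under uncertainty, not taken from the thesis:

```python
import numpy as np

# Hypothetical two-attribute design decision. Beliefs about each
# alternative's attributes are represented by samples.
rng = np.random.default_rng(42)
alternatives = {
    "design_A": {"thrust": rng.normal(100.0, 10.0, 5000),
                 "mass": rng.normal(50.0, 2.0, 5000)},
    "design_B": {"thrust": rng.normal(130.0, 25.0, 5000),
                 "mass": rng.normal(55.0, 2.0, 5000)},
}

def utility(thrust, mass):
    """Additive multi-attribute utility; exponential single-attribute
    utilities encode risk aversion, and the weights sum to one."""
    u_thrust = 1.0 - np.exp(-thrust / 100.0)   # more thrust is better
    u_mass = np.exp(-mass / 50.0)              # less mass is better
    return 0.6 * u_thrust + 0.4 * u_mass

# The simulated designer picks the alternative with maximum expected
# utility, the expectation taken over its belief samples.
expected_u = {name: float(np.mean(utility(a["thrust"], a["mass"])))
              for name, a in alternatives.items()}
best = max(expected_u, key=expected_u.get)
```

In the full framework these beliefs would come from Gaussian process models and the choice would be embedded in a Markov decision process spanning multiple sequential decisions.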
6

Optimum Corona Ring Design for High Voltage Compact Transmission Lines Using Gaussian Process Model

January 2012 (has links)
abstract: Electric utilities are exploring new technologies to cope with the increase in electricity demand and power transfer capabilities of transmission lines. Compact transmission lines and high phase order systems are among the techniques that enhance the power transfer capability of transmission lines without requiring any additional right-of-way. This research work investigates the impact of compacting high voltage transmission lines and high phase order systems on the surface electric field of composite insulators, a key factor in the service performance of insulators. The electric field analysis was done using COULOMB 9.0, a 3D software package that uses a numerical analysis technique based on the Boundary Element Method (BEM). 3D models of various types of standard transmission towers used at the 230 kV, 345 kV and 500 kV levels were modeled with different insulator configurations and numbers of circuits. Standard tower configuration models were compacted by reducing the clearance from live parts in steps of 10%. It was found that the standard tower configuration can be compacted by up to 30% without violating the minimum safety clearance mandated by NESC standards. The study shows that the surface electric field on insulators for a few of the compact structures exceeded the maximum allowable limit even when corona rings were installed. As part of this study, a Gaussian process model based optimization program was developed to find the optimum corona ring dimensions that limit the electric field to stipulated values. The optimization program provides the dimensions of the corona ring and its placement from the high voltage end for a given insulator dry arc length and system voltage. JMP, a statistical software package, and AMPL, a modeling language widely used for optimization, were used for the optimization program. The results obtained from the optimization program were validated against industrial standards. / Dissertation/Thesis / M.S. Electrical Engineering 2012
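A rough sketch of surrogate-based constrained design search in the same spirit. The field model, its coefficients, the field limit, and the grid are all invented stand-ins, not the validated COULOMB/JMP/AMPL workflow of the thesis:

```python
import numpy as np

# Invented stand-in for a fitted surrogate of peak surface E-field
# (kV/cm) as a function of corona ring diameter d (m) and its distance
# h (m) from the high voltage end. Not a validated physical model.
def predicted_peak_field(d, h):
    return 30.0 - 15.0 * d - 10.0 * h + 6.0 * d * h

E_LIMIT = 20.0  # assumed allowable peak field, for illustration only

# Among ring designs meeting the field limit, prefer the smallest
# diameter (a simple proxy for ring cost and weight).
diameters = np.linspace(0.2, 0.8, 61)
heights = np.linspace(0.1, 0.5, 41)
best = None
for d in diameters:
    for h in heights:
        if predicted_peak_field(d, h) <= E_LIMIT:
            if best is None or d < best[0]:
                best = (float(d), float(h))
d_opt, h_opt = best
```

Because the surrogate is cheap, exhaustive search over the design grid is affordable; the expensive BEM field solver is only needed to build and validate the surrogate itself.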
7

Sequential Design of Experiments to Estimate a Probability of Failure. / Planification d'expériences séquentielle pour l'estimation de probabilités de défaillance

Li, Ling 16 May 2012 (has links)
This thesis deals with the problem of estimating the probability of failure of a system from computer simulations. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited, which is incompatible with the use of classical Monte Carlo methods. Indeed, estimating a small probability of failure from very few simulations, as required in some complex industrial problems, is a particularly difficult problem. A classical approach consists in replacing the expensive-to-simulate model with a surrogate model that requires few computational resources. Using such a surrogate model, two operations can be carried out. The first is to choose as small a number of simulations as possible in order to learn the regions of the system's parameter space that lead to failure. The second is to construct good estimators of the probability of failure. The contributions of this thesis are twofold. First, we derive SUR (stepwise uncertainty reduction) strategies from a Bayesian formulation of the problem of estimating a probability of failure. Second, we propose a new algorithm, called Bayesian Subset Simulation, that combines the strengths of the Subset Simulation algorithm and of sequential Bayesian methods based on Gaussian process modeling. The new strategies are supported by numerical results on several benchmark examples from reliability analysis, where the proposed methods perform well compared to existing methods from the literature.
8

Modèles de substitution spatio-temporels et multifidélité : Application à l'ingénierie thermique / Spatio-temporal and multifidelity surrogate models : Application in thermal engineering

De Lozzo, Matthias 03 December 2013 (has links)
This PhD thesis deals with the construction of surrogate models in transient and steady states for thermal simulation, in the presence of few observations and several outputs. First, we propose a robust construction of a recurrent multilayer perceptron to approximate a spatio-temporal dynamic. This surrogate model is obtained by averaging neural networks resulting from a cross-validation procedure, whose associated data partitioning allows the parameters of each model to be adjusted on a test set without loss of information. Moreover, the construction of such a recurrent perceptron can be distributed across its outputs. This construction is applied to modeling the temporal evolution of the temperature at different points of an aeronautical cabinet. Second, we propose an aggregation of Gaussian process models in a multifidelity framework, where a high-fidelity observation model is complemented by several observation models of lower, non-comparable fidelities. Particular attention is paid to the specification of the trends and adjustment coefficients present in these models. The various kriging and co-kriging models are combined according to a partition or a weighted mixture based on a measure of robustness at the most reliable design points. This approach is used to model the temperature at different points of the cabinet in steady state. Finally, we propose a penalized criterion for the heteroscedastic regression problem. This tool is developed in the framework of projection estimators and applied to the particular case of Haar wavelets. These theoretical results are accompanied by numerical results for a problem accounting for different noise specifications and possible dependencies in the observations.
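The final contribution's setting can be illustrated with a generic Haar-projection estimator under heteroscedastic noise. The hard universal-style threshold below is a common textbook stand-in, not the thesis's penalized criterion, and the piecewise-constant signal and noise profile are invented:

```python
import numpy as np

def haar_transform(y):
    """Orthonormal discrete Haar transform; len(y) must be a power of two."""
    y = y.astype(float).copy()
    n = len(y)
    details = []
    while n > 1:
        pairs = y[:n].reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))
        y[:n // 2] = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
        n //= 2
    return np.concatenate([y[:1]] + details[::-1])

def inverse_haar(c):
    """Inverse of haar_transform."""
    c = c.astype(float).copy()
    n = 1
    while n < len(c):
        approx, detail = c[:n].copy(), c[n:2 * n]
        merged = np.empty(2 * n)
        merged[0::2] = (approx + detail) / np.sqrt(2.0)
        merged[1::2] = (approx - detail) / np.sqrt(2.0)
        c[:2 * n] = merged
        n *= 2
    return c

# Toy heteroscedastic data: a piecewise-constant signal with noise
# whose variance changes along the design (illustrative only).
rng = np.random.default_rng(0)
n = 256
f = np.where(np.arange(n) < n // 2, 1.0, -0.5)
sigma = np.where(np.arange(n) < n // 2, 0.05, 0.3)
y = f + sigma * rng.standard_normal(n)

# Projection estimator: keep only Haar coefficients above a simple
# universal-style threshold.
c = haar_transform(y)
thresh = 3.0 * float(np.median(sigma)) * np.sqrt(2.0 * np.log(n))
c_hat = np.where(np.abs(c) > thresh, c, 0.0)
c_hat[0] = c[0]                      # always keep the coarse approximation
f_hat = inverse_haar(c_hat)

mse_raw = float(np.mean((y - f) ** 2))      # error of the raw data
mse_est = float(np.mean((f_hat - f) ** 2))  # error of the estimator
```

A penalized criterion as in the thesis would, roughly speaking, replace this fixed global threshold by a data-driven model selection that adapts to the varying noise level.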
