21

Coupled flow systems, adjoint techniques and uncertainty quantification

Garg, Vikram Vinod, 1985- 25 October 2012 (has links)
Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport and electrostatic effects among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and hydrodynamics is via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging because the physics are coupled via the boundary rather than the interior of the domain, and the well-posedness of the adjoint problem for such models is non-trivial. One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis.

Complex computational models may exhibit uncertain behavior for various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen a growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs. Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature. However, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method which utilizes local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus, the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for UQ of complex systems.

The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles to its application to certain classes of complex physical systems. Current incremental LHS strategies restrict the user to at least doubling the size of an existing LHS set in order to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm that creates designs which can be used to perform LHS studies in a more flexible, incremental setting, taking a step toward adaptive LHS methods.
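The abstract describes the LSDEMC idea only at a high level. As a rough, hypothetical illustration of how local sensitivity derivatives can enhance a Monte Carlo estimate (not the dissertation's actual estimator), the sketch below pairs a small number of "expensive" forward solves with adjoint-style gradients, builds first-order Taylor surrogates around them, and averages the surrogate over many cheap samples. The toy model `f`, its gradient, and all sample sizes are invented for the example.

```python
import numpy as np

# Toy quantity of interest f(x) and its gradient; in practice the gradient
# would come almost for free from an adjoint solve after each forward solve.
def f(x):
    return np.sin(x[0]) + 0.5 * x[1] ** 2

def grad_f(x):
    return np.array([np.cos(x[0]), x[1]])

rng = np.random.default_rng(0)
dim, n_forward, n_cheap = 2, 50, 20_000

# A small number of "expensive" forward solves, each paired with sensitivities.
X = rng.uniform(-1.0, 1.0, size=(n_forward, dim))
F = np.array([f(x) for x in X])
G = np.array([grad_f(x) for x in X])

# Many cheap samples evaluated on local first-order Taylor surrogates built
# around the nearest expensive sample point.
Y = rng.uniform(-1.0, 1.0, size=(n_cheap, dim))
nearest = np.argmin(((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1), axis=1)
surrogate = F[nearest] + np.einsum("ij,ij->i", G[nearest], Y - X[nearest])

print("plain MC over the 50 forward solves :", F.mean())
print("derivative-enhanced surrogate mean  :", surrogate.mean())
```

The design intent matches the abstract's argument: the gradients cost almost nothing once the adjoint is solved, so the extra accuracy of the surrogate average comes essentially for free.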
22

Statistical Yield Analysis and Design for Nanometer VLSI

Jaffari, Javid January 2010 (has links)
Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly affect the performance and power consumption of the fabricated devices, severely impacting the manufacturing yield. Moreover, the large number of transistors on a single chip adds even more challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting the yield before entering such an expensive fabrication process. In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. Variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation.

At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip is targeted: digital circuits, memory cells, and analog blocks. The accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and the time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo (MC) method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method, namely the capability to handle any complex circuit model, while the use and engineering of advanced variance reduction and sampling methods, including control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi Monte Carlo, provides ultra-fast yield estimation solutions for different types of VLSI circuits.

At the device level, a methodology is proposed which introduces a variation-aware design perspective for MOS devices in aggressively scaled geometries. The method introduces a device-level yield measure that targets the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device for maximum device-level yield.

Finally, a statistical thermal analysis framework is proposed which accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer is built on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself has a probabilistic nature. A coupled process-thermal-leakage analysis therefore yields a more reliable full-chip statistical leakage power yield.
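The abstract lists several variance-reduction techniques without detail. As a generic, hypothetical illustration of one of them, importance sampling for a rare failure event (not the thesis's circuit models or estimators), the sketch below shifts the sampling distribution of two Gaussian "process parameters" toward the failure region and re-weights by the likelihood ratio. The performance metric, threshold, and shift vector are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "circuit": a performance metric of two Gaussian process parameters;
# the cell fails when the metric exceeds a threshold (a rare event).
def metric(x):
    return x[..., 0] + 0.8 * x[..., 1]

threshold, n = 5.5, 200_000

# Plain Monte Carlo: hardly any failures are observed at this sample size.
x_mc = rng.standard_normal((n, 2))
print("plain MC estimate:", np.mean(metric(x_mc) > threshold))

# Importance sampling: shift the sampling distribution toward the failure
# region and re-weight each sample by the likelihood ratio phi(x) / phi(x - mu).
mu = np.array([3.0, 2.4])                        # illustrative shift vector
x_is = rng.standard_normal((n, 2)) + mu
log_w = -x_is @ mu + 0.5 * mu @ mu               # log of the likelihood ratio
fail = metric(x_is) > threshold
print("importance sampling estimate:", np.mean(fail * np.exp(log_w)))
```

With the shifted proposal, a large fraction of samples land in the failure region, so the weighted estimator resolves a probability that plain Monte Carlo would barely observe at the same cost.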
23

Pravděpodobnostní řešení porušení ochranné hráze v důsledku přelití / The probabilistic solution of dike breaching due to overtopping

Alhasan, Zakaraya January 2017 (has links)
This doctoral thesis deals with the reliability analysis of flood protection dikes by estimating the probability of dike failure. Building on theoretical knowledge, experimental and statistical research, mathematical models and field surveys, the study extends present knowledge of the reliability analysis of dikes vulnerable to breaching due to overtopping. It contains the results of a probabilistic solution of the breaching of a left-bank dike of the River Dyje at a location adjacent to the village of Ladná near the town of Břeclav in the Czech Republic. Within this work, a mathematical model describing the overtopping and erosion processes was proposed. The dike overtopping is simulated using simple surface hydraulics equations. The dike erosion, which commences when the erosion resistance of the dike surface is exceeded, is modelled with simple transport equations whose erosion parameters are calibrated against data from past real embankment failures. In the analysis of the model, uncertainty in the input parameters was determined and a sensitivity analysis was subsequently carried out using the screening method. To obtain the probabilistic solution, selected input parameters were treated as random variables with different probability distributions, and the Latin Hypercube Sampling (LHS) method was used to generate the sets of random values for these variables. Four typical phases of dike breaching due to overtopping were distinguished, and the final results of the study take the form of probabilities for these typical dike breach phases.
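To make the sampling step concrete, a minimal Latin Hypercube Sampling sketch is given below. The parameter names and ranges (Manning roughness, critical shear stress, erodibility coefficient) are purely illustrative assumptions, not values from the thesis.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One random point in each of n_samples equal-probability strata per dimension."""
    strata = np.tile(np.arange(n_samples), (n_dims, 1))
    strata = rng.permuted(strata, axis=1).T           # independent column permutations
    return (strata + rng.random((n_samples, n_dims))) / n_samples

rng = np.random.default_rng(42)

# Hypothetical uncertain inputs (names and ranges are illustrative only):
# Manning roughness, critical shear stress [Pa], erodibility coefficient.
lower = np.array([0.020, 40.0, 1.0e-6])
upper = np.array([0.035, 80.0, 5.0e-6])
samples = lower + latin_hypercube(200, 3, rng) * (upper - lower)

# Each row would drive one run of the overtopping/erosion model; the fraction
# of runs reaching a given breach phase then estimates that phase's probability.
print(samples[:3])
```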
24

Response Surface Analysis of Trapped-Vortex Augmented Airfoils

Zope, Anup Devidas 11 December 2015 (has links)
This study examines the effect of a passive trapped-vortex cell on the lift-to-drag (L/D) ratio of an FFA-W3-301 airfoil. The upper surface of the airfoil was modified to incorporate a cavity defined by seven parameters. The L/D ratio of the airfoil is modeled using a radial basis function metamodel, and this model is used to find the optimal design parameter values that give the highest L/D. The numerical results indicate that the L/D ratio is most sensitive to the position on the airfoil's upper surface at which the cavity starts, the position of the cavity end point, and the vertical distance of the cavity end point relative to the airfoil surface. The L/D ratio can be improved by locating the cavity start point at the point of separation for a particular angle of attack. The optimal cavity shape (o19_aXX) is also tested for a NACA0024 airfoil.
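A radial basis function metamodel of the kind described above can be written in a few lines. The sketch below uses a Gaussian kernel and invented training data (seven normalised cavity parameters with a stand-in L/D response); it is an illustrative assumption, not the model or data from the thesis.

```python
import numpy as np

def sq_dists(A, B):
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)

def fit_rbf(X, y, eps):
    """Gaussian RBF interpolant: solve Phi w = y with Phi_ij = exp(-eps * |xi - xj|^2)."""
    return np.linalg.solve(np.exp(-eps * sq_dists(X, X)), y)

def predict_rbf(X_train, w, X_new, eps):
    return np.exp(-eps * sq_dists(X_new, X_train)) @ w

# Hypothetical training data: 60 cavity designs (7 normalised parameters each)
# with a stand-in L/D response; real values would come from CFD runs.
rng = np.random.default_rng(3)
X = rng.random((60, 7))
ld = 80.0 - 30.0 * ((X - 0.5) ** 2).sum(axis=1)

w = fit_rbf(X, ld, eps=2.0)
candidates = rng.random((10_000, 7))
pred = predict_rbf(X, w, candidates, eps=2.0)
print("predicted best L/D:", pred.max(), "at", candidates[np.argmax(pred)])
```

In a real study the candidate search would be replaced by a proper optimiser, but the workflow is the same: fit the metamodel once, then query it cheaply to locate promising cavity geometries.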
25

計算機實驗設計--旋轉因子設計 / Designing computer experiments: rotated factorial designs

侯永盛 Unknown Date (has links)
Computer models can describe complicated physical phenomena. To use these models for scientific investigation, however, their generally long running times and mostly deterministic nature require specially designed experiments. Standard factorial designs are inadequate; in the absence of one or more main effects, their replication cannot be used to estimate error but instead produces redundancy. A number of alternative designs have been proposed, but many can be burdensome computationally. This paper presents a class of designs developed from the rotation of a two-dimensional factorial design in the plane. These rotated factorial designs are very easy to construct and preserve many of the attractive properties of standard factorial designs: they have equally spaced projections onto the univariate dimensions and uncorrelated regression effect estimates (orthogonality). They also rate comparably to maximin Latin hypercube designs by the minimum interpoint distance criterion used in the latter's construction. Keywords: Effect Correlation, Latin Hypercube, Maximin Distance, Minimum Interpoint Distance
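As a small, self-contained illustration of the rotation idea (the angle choice below is an assumption for the example, not necessarily the exact construction in the thesis), the sketch rotates a centred 5 x 5 factorial grid and prints one of its one-dimensional projections, which comes out as 25 distinct, equally spaced values.

```python
import numpy as np

# Centred k x k factorial grid in the plane.
k = 5
grid = np.array([[i, j] for i in range(k) for j in range(k)], dtype=float)
grid -= grid.mean(axis=0)

# Rotate by an angle with tan(theta) = 1/k (an illustrative choice); the two
# one-dimensional projections of the rotated grid become k*k equally spaced
# points, so no two runs share a level in either factor.
theta = np.arctan(1.0 / k)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = grid @ R.T

print(np.round(np.sort(rotated[:, 0]), 3))            # 25 distinct values
print(np.round(np.diff(np.sort(rotated[:, 0])), 3))   # constant spacing
```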
26

Plans prédictifs à taille fixe et séquentiels pour le krigeage / Fixed-size and sequential designs for kriging

Abtini, Mona 30 August 2018 (has links)
In recent years, computer simulation models have been increasingly used to study complex phenomena. Such studies usually rely on very large, sophisticated simulation codes that are very expensive in computing time. The exploitation of these codes becomes a problem, especially when the objective requires a significant number of evaluations of the code. In practice, the code is replaced by a global approximation model, often called a metamodel, most commonly a Gaussian process (kriging) model adjusted to a design of experiments, i.e. to observations of the model output obtained from a small number of simulations. Space-filling designs (SFDs), which spread the design points evenly over the entire feasible input region, are the most widely used designs. This thesis consists of two parts, both focused on the construction of designs of experiments that are adapted to kriging, one of the most popular metamodels.

Part I considers the construction of space-filling designs of fixed size that are adapted to kriging prediction. This part starts by studying the effect of the Latin Hypercube constraint (the design type most used in practice with kriging) on maximin-optimal designs. The study shows that when the design has a small number of points, adding the Latin Hypercube constraint is beneficial because it mitigates the drawback of maximin-optimal configurations (the majority of points lying on the boundary of the input space). Following this study, a uniformity criterion called radial discrepancy is proposed in order to measure the uniformity of the design points according to their distance to the boundary of the input space. We then show that the minimax-optimal design is the design closest to the IMSE design (a design adapted to prediction by kriging) but is also very expensive to compute, and we introduce a proxy for the minimax-optimal design based on the maximin-optimal design. Finally, we present an optimised implementation of the simulated annealing algorithm for finding maximin-optimal designs; the aim here is to minimize the probability of falling into a local optimum.

The second part of the thesis concerns a slightly different problem. If XN is a space-filling design of N points, there is no guarantee that any n points of XN (1 ≤ n ≤ N) constitute a space-filling design. In practice, however, we may have to stop the simulations before the full realization of the design. The aim of this part is therefore to propose a new methodology for constructing sequential (nested) space-filling designs Xn, for any n between 1 and N, that are all adapted to kriging prediction. We introduce a method to generate nested designs based on information criteria, particularly the Mutual Information criterion, which measures the reduction in prediction uncertainty over the whole domain before and after the response is observed at the design points. This method ensures a good quality for all the designs generated, 1 ≤ n ≤ N. A key difficulty of this method is that the time needed to generate an MI-sequential design in the high-dimensional case is very large. To address this issue, a particular implementation has been proposed which calculates the determinant of a given matrix by partitioning it into blocks; this implementation allows a significant reduction of the computational cost of MI-sequential designs.
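For readers unfamiliar with the optimisation step mentioned in Part I, a generic simulated-annealing search for a maximin Latin hypercube design is sketched below. The move (swapping two entries within one column), the cooling schedule, and all tuning constants are common textbook choices assumed for illustration, not the tuned procedure developed in the thesis.

```python
import numpy as np

def min_interpoint_dist(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    return float(np.sqrt(d2.min()))

def maximin_lhd(n, d, iters=20_000, temp=0.05, cooling=0.9995, seed=0):
    """Simulated annealing over Latin hypercube designs using within-column swaps."""
    rng = np.random.default_rng(seed)
    X = (np.array([rng.permutation(n) for _ in range(d)]).T + 0.5) / n
    crit = min_interpoint_dist(X)
    best, best_crit = X.copy(), crit
    for _ in range(iters):
        Y = X.copy()
        col = rng.integers(d)
        i, j = rng.choice(n, size=2, replace=False)
        Y[[i, j], col] = Y[[j, i], col]          # swap keeps the LH structure
        new = min_interpoint_dist(Y)
        # Always accept improvements; accept deteriorations with Metropolis probability.
        if new >= crit or rng.random() < np.exp((new - crit) / temp):
            X, crit = Y, new
            if crit > best_crit:
                best, best_crit = X.copy(), crit
        temp *= cooling
    return best, best_crit

design, score = maximin_lhd(n=20, d=3)
print("minimum interpoint distance:", score)
```

The within-column swap is the standard way to explore maximin designs without leaving the Latin hypercube class; the annealing temperature governs how often worse designs are accepted and thus how easily the search escapes local optima.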
27

A Framework for the Determination of Weak Pareto Frontier Solutions under Probabilistic Constraints

Ran, Hongjun 09 April 2007 (has links)
A framework is proposed that combines separately developed multidisciplinary optimization, multi-objective optimization, and joint probability assessment methods, but in a decoupled way, to solve joint probabilistic constraint, multi-objective, multidisciplinary optimization problems that are representative of realistic conceptual design problems involving design alternative generation and selection. The intent is to find the Weak Pareto Frontier (WPF) solutions, which include additional compromised solutions besides the ones identified by a conventional Pareto frontier. The framework starts with constructing fast and accurate surrogate models of the different disciplinary analyses. A new hybrid method is formed that combines second-order Response Surface Methodology (RSM) with the Support Vector Regression (SVR) method. The three SVR parameters that must be pre-specified are selected automatically using a modified information criterion based on model fitting error, prediction error, and model complexity. The model prediction error is estimated inexpensively with a new method called Random Cross Validation. This modified information criterion is also used to select, for a given problem, the best surrogate model among the RSM, SVR, and hybrid methods. A new neighborhood search method based on Monte Carlo simulation is proposed to find valid designs that satisfy the deterministic constraints and are consistent for the coupling variables featured in a multidisciplinary design problem, while decoupling the three loops required by the multidisciplinary, multi-objective, and probabilistic features. Two schemes have been developed. One finds the WPF by generating a large enough number of valid design solutions that some WPF solutions are included among them; the other finds the WPF by directly finding the WPF of each consistent design zone. The probabilities of satisfying the probabilistic constraints (PCs) are then estimated, and the WPF and the corresponding design solutions are found. Various examples demonstrate the feasibility of this framework.
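The "Random Cross Validation" error estimate is described only at a high level above. The snippet below shows a generic repeated random train/test split applied to a second-order response surface fit, as a stand-in illustration under that interpretation; the thesis's actual procedure and its SVR/hybrid surrogates are not reproduced, and the sample data are invented.

```python
import numpy as np

def quad_features(X):
    """Second-order response surface basis: 1, x_i, and x_i * x_j for i <= j."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def random_cv_error(X, y, n_splits=30, test_frac=0.2, seed=0):
    """Prediction error of a least-squares RSM fit from repeated random splits."""
    rng = np.random.default_rng(seed)
    n, n_test = len(y), max(1, int(test_frac * len(y)))
    errors = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        test, train = idx[:n_test], idx[n_test:]
        coef, *_ = np.linalg.lstsq(quad_features(X[train]), y[train], rcond=None)
        pred = quad_features(X[test]) @ coef
        errors.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errors))

# Hypothetical disciplinary response sampled at 80 design points in 4 variables.
rng = np.random.default_rng(5)
X = rng.random((80, 4))
y = 1.0 + X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(80)
print("estimated prediction MSE:", random_cv_error(X, y))
```

An error estimate of this kind, combined with a fitting-error and complexity term, is the sort of quantity a modified information criterion can use to rank competing surrogates.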
28

Contributions to computer experiments and binary time series

Hung, Ying 19 May 2008 (has links)
This thesis consists of two parts. The first part focuses on design and analysis for computer experiments and the second part deals with binary time series and its application to kinetic studies in micropipette experiments. The first part of the thesis addresses three problems. The first problem is concerned with optimal design of computer experiments. Latin hypercube designs (LHDs) have been used extensively for computer experiments. A multi-objective optimization approach is proposed to find good LHDs by combining correlation and distance performance measures. Several examples are presented to show that the obtained designs are good in terms of both criteria. The second problem is related to the analysis of computer experiments. Kriging is the most popular method for approximating complex computer models. Here a modified kriging method is proposed, which has an unknown mean model. Therefore it is called blind kriging. The unknown mean model is identified from experimental data using a Bayesian variable selection technique. Many examples are presented which show remarkable improvement in prediction using blind kriging over ordinary kriging. The third problem is related to computer experiments with nested and branching factors. Design and analysis of experiments with branching and nested factors are challenging and have not received much attention in the literature. Motivated by a computer experiment in a machining process, we develop optimal LHDs and kriging methods that can accommodate branching and nested factors. Through the application of the proposed methods, optimal machining conditions and tool edge geometry are attained, which resulted in a remarkable improvement in the machining process. The second part of the thesis deals with binary time series analysis with application to cell adhesion frequency experiments. Motivated by the analysis of repeated adhesion tests, a binary time series model incorporating random effects is developed in this chapter. A goodness-of-fit statistic is introduced to assess the adequacy of distribution assumptions on the dependent binary data with random effects. Application of the proposed methodology to real data from a T-cell experiment reveals some interesting information. These results provide some quantitative evidence to the speculation that cells can have "memory" in their adhesion behavior.
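The first problem above ranks Latin hypercube designs by combining a correlation measure with a distance measure. As a rough illustration only (a naive random search with an assumed weighted scalarisation, not the multi-objective algorithm proposed in the thesis), the sketch below scores random LHDs by both measures and keeps the best one.

```python
import numpy as np

def max_abs_corr(X):
    c = np.corrcoef(X, rowvar=False)
    return float(np.abs(c[np.triu_indices_from(c, k=1)]).max())

def min_interpoint_dist(X):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    return float(np.sqrt(d2.min()))

def random_lhd(n, d, rng):
    return (np.array([rng.permutation(n) for _ in range(d)]).T + 0.5) / n

# Score candidate LHDs by a weighted combination of the two measures
# (large minimum distance, small maximum pairwise correlation) and keep the best.
rng = np.random.default_rng(7)
weight, best, best_score = 0.5, None, -np.inf
for _ in range(2_000):
    X = random_lhd(20, 5, rng)
    score = weight * min_interpoint_dist(X) - (1 - weight) * max_abs_corr(X)
    if score > best_score:
        best, best_score = X, score

print("best score:", round(best_score, 4),
      "| min distance:", round(min_interpoint_dist(best), 4),
      "| max |corr|:", round(max_abs_corr(best), 4))
```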
29

Využití softwarové podpory pro ekonomické hodnocení investičního projektu / Use of Software Support for the Economic Evaluation of the Investment Project

Hortová, Michaela January 2016 (has links)
This thesis deals with an economic evaluation case study of an Ekofarm construction project using the Crystal Ball and Pertmaster Risk Project applications. It presents the fundamental characteristics of the investment project and the methods of its evaluation, and introduces the basic features of both applications for probabilistic risk analysis performed with the Latin Hypercube Sampling simulation method. The case study is described in detail, including the breeding system and the method of financing. This is linked to the calculation of the economic fundamentals and the creation of the project cash flow. The result is a probabilistic analysis produced by the tested software tools, together with its evaluation.
30

Nelineární analýza zatížitelnosti železobetonového mostu / Nonlinear analysis of load-bearing capacity of reinforced concrete bridge

Šomodíková, Martina January 2012 (has links)
The subject of this master's thesis is the determination of bridge load-bearing capacity and a fully probabilistic approach to reliability assessment. It includes a nonlinear analysis of the load-bearing capacity of a specific bridge in compliance with the applicable standards, together with its stochastic and sensitivity analysis. In connection with the durability limit states of reinforced concrete structures, the influence of carbonation and reinforcement corrosion on the structure's reliability is also addressed.
