11

Efficient adaptive sampling applied to multivariate, multiple output rational interpolation models, with applications in electromagnetics-based device modelling

Lehmensiek, Robert (2001)
Thesis (PhD) -- Stellenbosch University, 2001.
ENGLISH ABSTRACT: A robust and efficient adaptive sampling algorithm for multivariate, multiple-output rational interpolation models, based on convergents of Thiele-type branched continued fractions, is presented. A variation of the standard branched continued fraction method is proposed that uses approximation to establish a non-rectangular grid of support points. Starting with a low-order interpolant, the technique systematically increases the order by optimally choosing new support points in the areas of highest error, until the desired accuracy is achieved. In this way, accurate surrogate models are established from a small number of support points, without assuming any a priori knowledge of the microwave structure under study. The technique is illustrated and evaluated on several passive microwave structures; however, it is general enough to be applied to many modelling problems.
AFRIKAANS ABSTRACT (translated): A robust and efficient adaptive sampling algorithm for multivariate, multiple-output rational interpolation models, based on convergents of Thiele-type branched continued fractions, is described. A variation on the conventional continued fraction method is proposed that uses a non-rectangular grid of support points in the function approximation. Starting from a low-order interpolant, the algorithm systematically raises the order of the interpolant by optimally choosing improved support points where the largest error occurs, until the desired accuracy is reached. In this way, accurate surrogate models are built from few initial support points, and without prior knowledge of the microwave structure in question. The algorithm is demonstrated and evaluated on several passive microwave structures, but is versatile enough to be applied to more general modelling problems.
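A minimal one-dimensional sketch of the underlying machinery — a Thiele-type continued fraction built from inverse differences, grown adaptively by sampling where successive convergents disagree most — is shown below. The model, bounds, tolerance, and candidate grid are illustrative assumptions; the thesis works with multivariate, branched fractions and includes safeguards this sketch omits.

```python
import numpy as np

def thiele_coeffs(x, f):
    """Inverse differences; the diagonal gives the continued-fraction coefficients."""
    n = len(x)
    phi = np.zeros((n, n))
    phi[:, 0] = f
    for j in range(1, n):
        phi[j:, j] = (x[j:] - x[j - 1]) / (phi[j:, j - 1] - phi[j - 1, j - 1])
    return np.diag(phi).copy()

def thiele_eval(x, c, xq):
    """Evaluate c0 + (xq-x0)/(c1 + (xq-x1)/(c2 + ...)) at the query points xq."""
    val = np.full_like(np.asarray(xq, dtype=float), c[-1])
    for k in range(len(c) - 2, -1, -1):
        val = c[k] + (xq - x[k]) / val
    return val

def adaptive_rational_fit(model, lo, hi, tol=1e-3, max_pts=30):
    """Grow a Thiele interpolant by sampling where successive convergents disagree most."""
    x = np.array([lo, 0.5 * (lo + hi), hi], dtype=float)   # initial support points
    f = np.array([model(xi) for xi in x])                  # expensive model evaluations
    cand = np.linspace(lo, hi, 501)                        # candidate grid for error estimation
    while len(x) < max_pts:
        c_new = thiele_coeffs(x, f)
        c_old = thiele_coeffs(x[:-1], f[:-1])              # previous convergent (one order lower)
        err = np.abs(thiele_eval(x, c_new, cand) - thiele_eval(x[:-1], c_old, cand))
        if err.max() < tol:
            break
        x_star = cand[np.argmax(err)]                      # refine where the estimate is worst
        x = np.append(x, x_star)
        f = np.append(f, model(x_star))
    return x, thiele_coeffs(x, f)
```

Calling `adaptive_rational_fit` with a slow solver in place of `model` would return the chosen support points and coefficients of the rational surrogate; the essential idea is that new support points are only requested where the surrogate itself indicates it is least trustworthy.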
12

Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

Razavi, Seyed Saman (January 2013)
Environmental simulation models have been playing a key role in civil and environmental engineering decision-making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically automated: the simulation model is linked to a search mechanism (e.g., an optimization algorithm) that iteratively generates many parameter sets (e.g., thousands of parameter sets) and evaluates them by running the model, in an attempt to minimize differences between observed data and corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as any automatic calibration attempt then imposes a large computational burden. Such a burden may force model users to accept sub-optimal solutions and forgo the best achievable model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is a strategy called “deterministic model preemption”, which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that running the model further would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied. Another main contribution of this thesis is the concept of “surrogate data”: a reasonably small but representative subset of the full set of calibration data. This concept is inspired by existing surrogate modelling strategies, in which a surrogate model (also called a metamodel) is developed and used as a fast-to-run substitute for the original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on the surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational savings. To this end, mapping relationships are developed to approximate the model performance on the full data based on the model performance on the surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed that presents a clear, computational-budget-dependent definition of the success or failure of surrogate modelling strategies.
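The preemption mechanism can be sketched in a few lines of Python. This is an illustrative sketch assuming a time-stepping model and a sum-of-squared-errors (SSE) objective; `simulate_step` and `sampler` are hypothetical stand-ins, not the thesis code.

```python
import numpy as np

def preemptive_sse(simulate_step, params, observed, best_so_far):
    """Run a time-stepping model, accumulating the sum of squared errors (SSE).

    Because SSE can only grow as more time steps are added, the run can be
    terminated (pre-empted) as soon as the partial SSE exceeds the best SSE
    seen so far: the candidate parameter set can no longer become the new best,
    so the final search outcome is unchanged (hence "deterministic").
    """
    sse = 0.0
    for t, obs in enumerate(observed):
        sim = simulate_step(params, t)        # one (expensive) simulation time step
        sse += (sim - obs) ** 2
        if sse > best_so_far:                 # further simulation cannot help
            return np.inf                     # flag as pre-empted / not competitive
    return sse

def calibrate(simulate_step, observed, sampler, n_iter=1000):
    """A simple random-search calibration loop; any search mechanism could be used."""
    best, best_params = np.inf, None
    for _ in range(n_iter):
        p = sampler()                         # propose a candidate parameter set
        score = preemptive_sse(simulate_step, p, observed, best)
        if score < best:
            best, best_params = score, p
    return best_params, best
```

The saving comes entirely from truncated runs: poor candidates are abandoned after a few time steps instead of being simulated to completion.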
Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling, which develops and utilizes simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they might be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics involved in the original model. Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and elaborate on the discussions. However, the strategies developed are largely simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, while providing guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
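For a concrete flavour of the “response surface surrogate” family, the following sketch fits a radial-basis-function surface to a handful of expensive model runs and then queries it cheaply. The two-parameter toy model and the sample sizes are illustrative assumptions, not any of the five case-study models.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_model(theta):
    """Stand-in for a long-running simulation; returns a scalar objective value."""
    return np.sin(3 * theta[0]) + theta[1] ** 2

X_train = rng.uniform(-1, 1, size=(40, 2))                 # 40 expensive runs
y_train = np.array([expensive_model(x) for x in X_train])

surrogate = RBFInterpolator(X_train, y_train)              # response surface surrogate

X_query = rng.uniform(-1, 1, size=(10000, 2))              # cheap to evaluate in bulk
y_pred = surrogate(X_query)
print("surrogate minimum estimate:", X_query[np.argmin(y_pred)])
```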
14

Development of a process modelling methodology and condition monitoring platform for air-cooled condensers

Haffejee, Rashid Ahmed (5 August 2021)
Air-cooled condensers (ACCs) are a type of dry-cooling technology that has seen increasing implementation globally, particularly in the power generation industry, due to its low water consumption. Unfortunately, ACC performance is susceptible to changing ambient conditions, such as dry bulb temperature, wind direction, and wind speed. This can result in performance reduction under adverse ambient conditions, which leads to increased turbine backpressure and, in turn, a decrease in generated electricity. This creates a demand to monitor and predict ACC performance under changing ambient conditions. This study focuses on modelling a utility-scale ACC system at steady-state conditions using a 1-D thermofluid network modelling approach with component-level discretization. This approach allowed each cell to be modelled individually, accounted for the steam duct supply behaviour, and allowed off-design conditions to be investigated. The developed methodology was based on existing empirical correlations for condenser cells and adapted to model double-row dephlegmators. A utility-scale 64-cell ACC system based in South Africa was selected for this study. The thermofluid network model was validated against site data, with agreement in results within 1%; however, due to a lack of site data, the model was not validated for off-design conditions. The thermofluid network model was also compared to the existing lumped approach, and differences were observed due to the steam ducting distribution. The effect of increasing ambient air temperature from 25 °C to 35 °C was investigated, with a heat rejection rate decrease of 10.9 MW and a backpressure increase of 7.79 kPa across the temperature range. The condensers' heat rejection rate decreased with higher air temperatures, while the dephlegmators' heat rejection rate increased due to the increased outlet vapour pressure and flow rates from the condensers. Off-design conditions were simulated, including hot air recirculation and wind effects. For wind effects, the developed model predicted a decrease in heat rejection rate of 1.7 MW at higher wind speeds, while the lumped approach predicted an increase of 4.9 MW. For practicality, a data-driven surrogate model was developed through machine learning techniques using data generated by the thermofluid network model. The surrogate model predicted system-level ACC performance indicators such as turbine backpressure and total heat rejection rate. Multilayer perceptron neural networks were developed in the form of a regression network and a binary classifier network. For the test sets, the regression network had an average relative error of 0.3%, while the binary classifier had a classification accuracy of 99.85%. The surrogate model was validated against site data over a three-week operating period, with 93.5% of backpressure predictions within 6% of the site backpressures. The surrogate model was deployed through a web-application prototype, which included a forecasting tool to predict ACC performance based on a weather forecast.
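A minimal sketch of the surrogate pattern described above — one multilayer perceptron for regression and one for binary classification — trained on synthetic stand-in data. The feature names, the toy backpressure relation, and the feasibility threshold are assumptions, not the thesis model or its data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for data generated by a thermofluid network model:
# columns = [ambient temperature (C), wind speed (m/s), steam mass flow (kg/s)]
X = rng.uniform([20, 0, 300], [40, 15, 500], size=(5000, 3))
backpressure = 5 + 0.3 * (X[:, 0] - 20) + 0.1 * X[:, 1] + 0.004 * (X[:, 2] - 300)  # kPa, toy relation
admissible = (backpressure < 12).astype(int)            # toy feasibility label

X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(X, backpressure, admissible, random_state=0)

reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_tr, y_tr)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_tr, a_tr)

print("mean relative error:", np.mean(np.abs(reg.predict(X_te) - y_te) / y_te))
print("classification accuracy:", clf.score(X_te, a_te))
```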
15

Improving Reconstructive Surgery through Computational Modeling of Skin Mechanics

Taeksang Lee (30 July 2020)
Excessive deformation and stress of skin following reconstructive surgery play a crucial role in wound healing, often leading to complications. Yet, despite this concern, surgeries are still planned and executed based on each surgeon's training and experience rather than quantitative engineering tools. The limitations of current treatment planning and execution stem in part from the difficulty in predicting the mechanical behavior of skin, challenges in directly measuring stress in the operating room, and the inability to predict the long-term adaptation of skin following reconstructive surgery. Computational modeling of soft tissue mechanics has emerged as an ideal candidate to determine stress contours over sizable skin regions in realistic situations. Virtual surgeries with computational mechanics tools will help surgeons explore different surgeries preoperatively, make predictions of stress contours, and eventually aid the surgeon in planning for optimal wound healing. While there has been significant progress on computational modeling of both reconstructive surgery and skin mechanical and mechanobiological behavior, major gaps remain that prevent computational mechanics from being widely used in the clinical setting. At the preoperative stage, better calibration of skin mechanical properties for individual patients based on minimally invasive mechanical tests is still needed. One of the key challenges in this task is that skin is not stress-free in vivo. In many applications requiring large skin flaps, skin is further grown with the tissue expansion technique. Thus, a better understanding of skin growth and the resulting stress-free state is required. The other major challenge is dealing with the inherent variability of the mechanical properties and biological response of biological systems. Skin properties and adaptation to mechanical cues change with patient demographics, anatomical location, and from one individual to another. Thus, the precise model parameters can never be known exactly, even if some measurements are available. Therefore, rather than expecting to know the exact model describing a patient, a probabilistic approach is needed. To bridge these gaps, this dissertation aims to advance skin biomechanics and computational mechanics tools in order to make virtual surgery for clinical use a reality in the near future. In this spirit, the dissertation comprises three parts: skin growth and its incompatibility, acquisition of patient-specific geometry and skin mechanical properties, and uncertainty analysis of virtual surgery scenarios.
Skin growth induced by tissue expansion has been widely used to gain extra skin before reconstructive surgery. Within continuum mechanics, growth can be described with a multiplicative split of the deformation gradient, akin to plasticity. We propose a probabilistic framework for uncertainty analysis of growth and remodeling of skin in tissue expansion. Our approach relies on surrogate modeling through multi-fidelity Gaussian process regression. This work is being used to calibrate the computational model against animal model data. Details of the animal model and the type of data obtained are also covered in the thesis. One important aspect of the growth and remodeling process is that it leads to residual stress. It is understood that this stress arises due to the nonhomogeneous growth deformation. In this dissertation we characterize the geometry of incompatibility of the growth field, borrowing concepts originally developed in the study of crystal plasticity. We show that growth produces unique incompatibility fields that increase our understanding of the development of residual stress and the stress-free configuration of tissues. We pay particular attention to the case of skin growth in tissue expansion.
Patient-specific geometry and material properties are the focus of the second part of the thesis. Minimally invasive mechanical tests based on suction have been developed that can be used in vivo, but these tests offer only limited characterization of an individual's skin mechanics. Current methods have the following limitations: only isotropic behavior can be measured, the calibration is done with inverse finite element methods or simple analytical calculations that can be inaccurate, the calibration yields a single deterministic set of parameters, and the process ignores any prior information about the mechanical properties that can be expected for a patient. To overcome these limitations, we recast the calibration problem in a Bayesian framework. To sample from the posterior distribution of the parameters for a patient given a suction test, the method relies on an inexpensive Gaussian process surrogate. For the patient-specific geometry, techniques such as magnetic resonance imaging or computed tomography scans can be used. Such approaches, however, require specialized equipment and setup and are not affordable in many scenarios. We propose to use multi-view stereo (MVS) to capture patient-specific geometry.
The last part of the dissertation focuses on uncertainty analysis of the reconstructive procedure itself. To achieve uncertainty analysis in the clinical setting we propose to create surrogate and reduced-order models, in particular principal component analysis and Gaussian process regression. We first characterize stress profiles under uncertainty for the three most common flap designs. For these examples we deal with idealized geometries. The probabilistic surrogates enable not only tasks such as fast prediction and uncertainty quantification, but also optimization. Based on a global sensitivity analysis we show that the direction of anisotropy of the skin with respect to the flap geometry is the most important parameter controlled by the surgeon, and we show how to optimize the flap in this idealized setting. We conclude with the application of the probabilistic surrogates to perform uncertainty analysis in patient-specific geometries. In summary, this dissertation addresses some of the fundamental challenges that need to be solved to make virtual surgery models ready for clinical use. We anticipate that our results will help shape the way computational models are incorporated into reconstructive surgery plans.
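A minimal sketch of the Bayesian calibration idea above, in which a cheap Gaussian process surrogate replaces the expensive forward model inside a random-walk Metropolis sampler. The one-parameter suction-test stand-in, prior bounds, and noise level are illustrative assumptions, not the dissertation's model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def suction_test(stiffness):
    """Stand-in for an expensive FE simulation of a suction test (apex displacement, mm)."""
    return 2.0 / stiffness + 0.1 * stiffness

# Train a GP surrogate on a handful of expensive forward runs
theta_train = np.linspace(0.5, 5.0, 12).reshape(-1, 1)          # candidate stiffness values
y_train = np.array([suction_test(t[0]) for t in theta_train])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(theta_train, y_train)

observed, sigma = 1.8, 0.05                                      # measured displacement and noise std

def log_post(theta):
    if not 0.5 <= theta <= 5.0:                                  # uniform prior bounds
        return -np.inf
    pred = gp.predict(np.array([[theta]]))[0]                    # surrogate replaces the FE model
    return -0.5 * ((observed - pred) / sigma) ** 2

# Random-walk Metropolis sampling of the posterior
samples, theta = [], 2.0
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
print("posterior mean stiffness:", np.mean(samples[1000:]))
```

Because every likelihood evaluation queries the surrogate rather than the finite element model, thousands of posterior samples cost essentially nothing once the small training set of forward runs has been generated.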
16

Méta-modèles adaptatifs pour l'analyse de fiabilité et l'optimisation sous contrainte fiabiliste / Adaptive surrogate models for reliability analysis and reliability-based design optimization

Dubourg, Vincent (5 December 2011)
FRENCH ABSTRACT (translated): This thesis is a contribution to solving the reliability-based design optimization problem. This probabilistic design method aims to take into account the uncertainties inherent in the system to be designed, in order to propose optimal and safe solutions. The safety level is quantified by a probability of failure. The optimization problem then consists in ensuring that this probability remains below a threshold set by the stakeholders. Solving this problem requires a large number of calls to the limit-state function characterizing the underlying reliability problem. The methodology therefore becomes costly to apply as soon as the design relies on a numerical model that is expensive to evaluate (e.g. a finite element model). In this context, this manuscript proposes a strategy based on adaptively substituting a Kriging meta-model for the limit-state function. Particular attention is paid to quantifying, reducing and finally eliminating the error introduced by using this meta-model in place of the original model. The proposed methodology is applied to the design of geometrically imperfect shells prone to buckling.
ENGLISH ABSTRACT: This thesis is a contribution to the resolution of the reliability-based design optimization problem. This probabilistic design approach is aimed at considering the uncertainty attached to the system of interest in order to provide optimal and safe solutions. The safety level is quantified in the form of a probability of failure. Then, the optimization problem consists in ensuring that this failure probability remains less than a threshold specified by the stakeholders. The resolution of this problem requires a high number of calls to the limit-state function underlying the reliability analysis. Hence it becomes cumbersome when the limit-state function involves an expensive-to-evaluate numerical model (e.g. a finite element model). In this context, this manuscript proposes a surrogate-based strategy where the limit-state function is progressively replaced by a Kriging meta-model. Special attention has been given to quantifying, reducing and eventually eliminating the error introduced by the use of this meta-model instead of the original model. The proposed methodology is applied to the design of geometrically imperfect shells prone to buckling.
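A minimal sketch of the strategy described above: fit a Kriging (Gaussian process) surrogate to a small design of experiments on the limit-state function, then estimate the failure probability by Monte Carlo on the surrogate. The limit state, sample sizes, and enrichment indicator are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def limit_state(x):
    """g(x) <= 0 means failure; stand-in for an expensive FE-based limit state."""
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

# Design of experiments: a few expensive limit-state evaluations
X_doe = rng.standard_normal((30, 2))
g_doe = limit_state(X_doe)

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X_doe, g_doe)

# Crude Monte Carlo on the (cheap) surrogate instead of the expensive model
X_mc = rng.standard_normal((200_000, 2))
g_hat, g_std = kriging.predict(X_mc, return_std=True)
pf_hat = np.mean(g_hat <= 0)
print(f"estimated failure probability: {pf_hat:.4e}")

# The surrogate variance indicates where the sign of g is still uncertain;
# adaptive strategies enrich the design of experiments at such points.
ambiguous = np.mean(np.abs(g_hat) <= 1.96 * g_std)
print(f"fraction of MC points with uncertain sign: {ambiguous:.3f}")
```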
17

Efficient Sequential Sampling for Neural Network-based Surrogate Modeling

Pavankumar Channabasa Koratikere (27 April 2023)
Gaussian Process Regression (GPR) is a widely used surrogate model in efficient global optimization (EGO) due to its capability to provide uncertainty estimates in its predictions. The cost of creating a GPR model for large data sets is high. On the other hand, neural network (NN) models scale better than GPR as the number of samples increases. Unfortunately, uncertainty estimates for NN predictions are not readily available. In this work, a scalable algorithm is developed for EGO using NN-based prediction and uncertainty (EGONN). Initially, two different NNs are created using two different data sets. The first NN models the output based on the input values in the first data set, while the second NN models the prediction error of the first NN using the second data set. The next infill point is added to the first data set based on criteria such as expected improvement or prediction uncertainty. EGONN is demonstrated on the optimization of the Forrester function and a constrained Branin function and is compared with EGO. The convergence criterion in both cases is the maximum number of infill points. The algorithm is able to reach the optimum point within the given budget. EGONN is then extended to handle constraints explicitly and is applied to aerodynamic shape optimization of the RAE 2822 airfoil in transonic viscous flow at a free-stream Mach number of 0.734 and a Reynolds number of 6.5 million. The results obtained from EGONN are compared with the results from gradient-based optimization (GBO) using adjoints. The optimum shape obtained from EGONN is comparable to the shape obtained from GBO and eliminates the shock. The drag coefficient is reduced from 200 drag counts to 114, close to the 110 drag counts obtained from GBO. EGONN is also extended to uncertainty quantification (uqEGONN), using prediction uncertainty as the infill criterion. The convergence criterion is based on the relative change of summary statistics, such as the mean and standard deviation of the uncertain quantity. uqEGONN is tested on the Ishigami function with an initial sample size of 100, and the algorithm terminates after 70 infill points. The statistics obtained from uqEGONN (using only 170 function evaluations) are close to the values obtained from directly evaluating the function one million times. uqEGONN is also demonstrated by quantifying the uncertainty in airfoil performance due to geometric variations. The algorithm terminates within 100 computational fluid dynamics (CFD) analyses, and the statistics obtained from the algorithm are close to those obtained from 1000 direct CFD-based evaluations.
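A toy one-dimensional sketch of the EGONN loop described above — one network models the objective, a second models the first network's absolute error, and an expected-improvement criterion built from the two picks the next infill point. The test function, sample sizes, and network sizes are illustrative assumptions, not the thesis code.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def forrester(x):                                     # toy expensive objective on [0, 1]
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

X1 = rng.uniform(0, 1, (15, 1)); y1 = forrester(X1).ravel()   # data set 1: objective values
X2 = rng.uniform(0, 1, (15, 1)); y2 = forrester(X2).ravel()   # data set 2: for the error model

for it in range(20):                                  # infill loop (fixed budget)
    f_net = MLPRegressor((64, 64), max_iter=5000, random_state=0).fit(X1, y1)
    err = np.abs(f_net.predict(X2) - y2)              # first net's error on the second data set
    u_net = MLPRegressor((64, 64), max_iter=5000, random_state=0).fit(X2, err)

    x_cand = np.linspace(0, 1, 1001).reshape(-1, 1)
    mu = f_net.predict(x_cand)
    s = np.maximum(u_net.predict(x_cand), 1e-9)       # NN "uncertainty" stands in for GP std
    best = y1.min()
    z = (best - mu) / s
    ei = (best - mu) * norm.cdf(z) + s * norm.pdf(z)  # expected improvement criterion

    x_new = x_cand[[np.argmax(ei)]]                   # next infill point goes into data set 1
    X1 = np.vstack([X1, x_new]); y1 = np.append(y1, forrester(x_new).ravel())

print("best point found:", X1[np.argmin(y1)], y1.min())
```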
