1

Stochastic Multiperiod Optimization of an Industrial Refinery Model

Boucheikhchoukh, Ariel January 2021 (has links)
The focus of this work is an industrial refinery model developed by TotalEnergies SE. The model is a sparse, large-scale, nonconvex, mixed-integer nonlinear program (MINLP). The nonconvexity of the problem arises from the many bilinear, trilinear, fractional, logarithmic, exponential, and sigmoidal terms. In order to account for various sources of uncertainty in refinery planning, the industrial refinery model is extended into a two-stage stochastic program, where binary scheduling decisions must be made prior to the realization of the uncertainty, and mixed-integer recourse decisions are made afterwards. Two case studies involving uncertainty are formulated and solved in order to demonstrate the economic and logistical benefits of robust solutions over their deterministic counterparts. A full-space solution strategy is proposed wherein the integrality constraints are relaxed and a multi-step initialization strategy is employed in order to gradually approach the feasible region of the multi-scenario problem. The full-space solution strategy was significantly hampered by difficulties in finding a feasible point and by numerical problems. In order to facilitate the identification of a feasible point and to reduce the incidence of numerical difficulties, a hybrid surrogate refinery model was developed using the ALAMO modelling tool. An evaluation procedure was employed to assess the surrogate model, which was shown to be reasonably accurate for most output variables and to be more reliable than the high-fidelity model. Feasible solutions are obtained for the continuous relaxations of both case studies using the full-space solution strategy in conjunction with the surrogate model. In order to solve the original MINLP problems, a decomposition strategy based on the generalized Benders decomposition (GBD) algorithm is proposed. The binary decisions are designated as complicating variables that, when fixed, reduce the full-space problem to a series of independent scenario subproblems. Through the application of the GBD algorithm, feasible mixed-integer solutions are obtained for both case studies; however, optimality could not be guaranteed. Solutions obtained via the stochastic programming framework are shown to be more robust than solutions obtained via a deterministic problem formulation. / Thesis / Master of Applied Science (MASc)
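To make the problem structure concrete, a generic two-stage form of such a program can be sketched as follows (illustrative notation only, not the thesis's own formulation):

\[
\min_{y \in \{0,1\}^{n}} \; c^{\top} y + \sum_{s \in S} p_{s}\, Q_{s}(y),
\qquad
Q_{s}(y) = \min_{x_{s} \in X_{s}} \big\{ f_{s}(x_{s}) \;:\; g_{s}(x_{s}, y) \le 0 \big\},
\]

where y collects the binary scheduling decisions made before the uncertainty is realized, p_s is the probability of scenario s, and Q_s(y) is the mixed-integer nonlinear recourse cost of scenario s. Fixing y decouples the scenarios, which is exactly what designating the binary decisions as complicating variables in the GBD-based decomposition exploits.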
2

Surrogate-assisted optimisation-based verification & validation

Kamath, Atul Krishna January 2014 (has links)
This thesis deals with the application of optimisation-based Validation and Verification (V&V) analysis on aerospace vehicles in order to determine their worst case performance metrics. To this end, three aerospace models relating to satellite and launcher vehicles, provided by the European Space Agency (ESA) on various projects, are utilised. As a means to quicken the process of optimisation-based V&V analysis, surrogate models are developed using the polynomial chaos method. Surrogate models provide a quick way to ascertain the worst case directions, as the computation time required for evaluating them is very small: a single evaluation of a surrogate model takes less than a second. Another contribution of this thesis is the evaluation of the operational safety margin metric with the help of surrogate models. The operational safety margin is a metric defined in the uncertain parameter space and is related to the distance between the nominal parameter value and the first instance of performance criteria violation. This metric can help to gauge the robustness of the controller, but it requires the evaluation of the model in the constraint function and hence could be computationally intensive. As surrogate models are computationally very cheap, they are utilised to rapidly compute the operational safety margin metric. However, this metric focuses only on finding a safe region around the nominal parameter value, and the possibility of other disjoint safe regions is not explored. In order to find other safe or failure regions in the parameter space, the Bernstein expansion method is applied to the surrogate polynomial models to help characterise the uncertain parameter space into safe and failure regions. Furthermore, binomial failure analysis is used to assign failure probabilities to failure regions, which might help the designer to determine whether a re-design of the controller is required. The methodologies of optimisation-based V&V, surrogate modelling, operational safety margin, the Bernstein expansion method and risk assessment have been combined to form the WCAT-II MATLAB toolbox.
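To illustrate the two ingredients described above, the sketch below fits a small polynomial-chaos-style surrogate by least squares and then bisects along one uncertain direction for the first performance-criterion violation. The metric, limit value and parameter ranges are invented stand-ins, not taken from the thesis or the ESA models.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def expensive_metric(theta):
    # Stand-in for a costly simulation of a performance criterion;
    # values above 1.0 are treated as a violation.
    return 0.2 + 0.8 * theta ** 2 + 0.1 * np.sin(5 * theta)

# 1) Fit a polynomial-chaos-style surrogate (probabilists' Hermite basis)
#    to a small design of experiments by least squares.
theta_train = rng.standard_normal(40)
y_train = expensive_metric(theta_train)
degree = 4
coef, *_ = np.linalg.lstsq(hermevander(theta_train, degree), y_train, rcond=None)

def surrogate(theta):
    return hermevander(np.atleast_1d(theta), degree) @ coef

# 2) Operational safety margin: distance from the nominal value (0) to the
#    first violation of the criterion, found by bisection on the cheap surrogate.
def safety_margin(limit=1.0, hi=5.0, tol=1e-4):
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if surrogate(mid)[0] < limit else (lo, mid)
    return hi

print("estimated operational safety margin:", safety_margin())
```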
3

Urychlení evolučních algoritmů pomocí rozhodovacích stromů a jejich zobecnění / Accelerating evolutionary algorithms by decision trees and their generalizations

Klíma, Jan January 2011 (has links)
Evolutionary algorithms are among the most successful methods for solving non-traditional optimization problems. As they employ only function values of the objective function, evolutionary algorithms converge much more slowly than optimization methods designed for smooth functions. This property is particularly disadvantageous when values of the objective function can only be obtained through costly and time-consuming empirical measurements. However, evolutionary algorithms can be substantially sped up by employing a sufficiently accurate regression model of the empirical objective function. This thesis surveys the applicability of regression trees and their ensembles as surrogate models to accelerate the convergence of evolutionary optimization.
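A minimal sketch of this acceleration scheme, with an invented toy objective standing in for the costly empirical one: a regression tree trained on all points evaluated so far pre-screens a large batch of offspring, so the expensive objective is only called on the most promising candidates.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

def expensive_objective(x):
    # Stand-in for a costly, empirically evaluated objective (minimized).
    return np.sum((x - 0.3) ** 2, axis=-1)

dim, pop_size, n_offspring, n_generations = 5, 10, 40, 30
pop = rng.uniform(-1, 1, (pop_size, dim))
fitness = expensive_objective(pop)
archive_X, archive_y = pop.copy(), fitness.copy()   # all truly evaluated points

for _ in range(n_generations):
    # Generate many offspring by Gaussian mutation of random parents.
    parents = pop[rng.integers(0, pop_size, n_offspring)]
    offspring = parents + 0.1 * rng.standard_normal((n_offspring, dim))

    # Surrogate step: rank the offspring cheaply with a regression tree and
    # evaluate only the most promising ones with the expensive objective.
    tree = DecisionTreeRegressor(max_depth=6).fit(archive_X, archive_y)
    promising = offspring[np.argsort(tree.predict(offspring))[:pop_size]]
    true_fit = expensive_objective(promising)
    archive_X = np.vstack([archive_X, promising])
    archive_y = np.concatenate([archive_y, true_fit])

    # (mu + lambda) selection among truly evaluated individuals only.
    merged_X = np.vstack([pop, promising])
    merged_y = np.concatenate([fitness, true_fit])
    keep = np.argsort(merged_y)[:pop_size]
    pop, fitness = merged_X[keep], merged_y[keep]

print("best objective value found:", fitness.min())
```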
4

Toward an Uncertain Modeling of Hypersonic Aerodynamic Forces

January 2017 (has links)
The focus of this investigation is the development of a surrogate model of hypersonic aerodynamic forces on structures, intended to reduce the computational effort involved in the determination of the structural response. The application is more precisely focused on uncertain structures. Then, following an uncertainty management strategy, the surrogate may exhibit an error with respect to Computational Fluid Dynamics (CFD) reference data as long as that error does not significantly affect the uncertainty band of the structural response. Moreover, this error is treated as an epistemic uncertainty introduced in the model, thereby generating an uncertain surrogate. Given this second step, the aerodynamic surrogate is limited to forms exhibiting simple analytic expressions with parameters that can be identified from CFD data. The first phase of the investigation focuses on the selection of an appropriate form of the surrogate for the 1-dimensional flow over a flat clamped-clamped panel. Following piston theory, the model search started with purely local models, linear and nonlinear in the local slope. A second set of models was considered that also involves the local displacement, curvature, and integral of displacement, and an improvement was observed that can be attributed to a global effect of the pressure distribution. Various ways to involve such a global effect were next investigated, eventually leading to a two-level composite model based on the sum of a local component, represented as a cubic polynomial of the downwash, and a global component, represented by an auto-regressive moving average (ARMA) model driven nonlinearly by the local downwash. This composite model is applicable both to steady pressure distributions, with the downwash equal to the slope, and to unsteady cases, with the downwash including the partial derivative of the displacement with respect to time in addition to the slope. The second part of the investigation focused on the introduction of the epistemic uncertainty in the aerodynamic surrogate, and it was recognized that this could be achieved by randomizing the coefficients of the local and/or the auto-regressive components of the model. In fact, the combination of the two effects provided an applicable strategy. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2017
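A schematic of such a composite surrogate, written in illustrative notation rather than the thesis's own, might read

\[
p_k \;\approx\; \underbrace{a_1 w_k + a_2 w_k^{2} + a_3 w_k^{3}}_{\text{local component, cubic in the downwash } w} \;+\; \underbrace{\sum_{i=1}^{n_a} \alpha_i\, G_{k-i} + \sum_{j=0}^{n_b} \beta_j\, \phi\!\left(w_{k-j}\right)}_{\text{ARMA-type global component driven nonlinearly by } w},
\]

where k indexes the time step (or the steady condition), G denotes the global contribution, and \phi is a nonlinear function of the downwash; in steady cases w reduces to the local slope, while in unsteady cases it also includes the partial time derivative of the displacement.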
5

Advancing surrogate modelling for sustainable building design.

Westermann, Paul W. 14 September 2020 (has links)
Building design processes are dynamic and complex. The context of a building project is manifold and depends on the cultural context, climatic conditions and personal design preferences. Many stakeholders may be involved in deciding between a large space of possible designs defined by a set of influential design parameters. Building performance simulation is the state-of-the-art way to provide estimates of the energy and environmental performance of various design alternatives. However, setting up a simulation model can be labour intensive and evaluating it can be computationally costly. As a consequence, building simulations often occur towards the end of the design process instead of being an active component of it. This observation, together with the growing availability of machine learning algorithms as an aid to exploring analytical problems, has led to the development of surrogate models. The idea of surrogate models is to learn from a high-fidelity counterpart, here a building simulation model, by emulating the simulation outputs given the simulation inputs. The key advantage is their computational efficiency. They can produce performance estimates for hundreds of thousands of building designs within seconds. This has great potential to innovate the field. Instead of only being able to assess a few specific designs, entire regions of the design space can be explored, or instantaneous feedback on the sustainability of a building can be given to architects during design sessions. This PhD thesis aims to advance the young field of building energy simulation surrogate models. It contributes by: (a) deriving Bayesian surrogate models that are aware of their uncertainties and can warn of large approximation errors; (b) deriving surrogate models that can process large weather data (≈150’000 inputs) and estimate the associated impact on building performance; (c) calibrating a simulation model via fast iterations of surrogate models; and (d) benchmarking the use of surrogate-based calibration against other approaches. / Graduate
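Contribution (a) can be illustrated with a short sketch: a Gaussian-process emulator of a simulation output reports a predictive standard deviation alongside its mean, so predictions with a potentially large approximation error can be flagged. The simulation stand-in, parameter names and flagging threshold below are invented for the example, not taken from the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def building_simulation(x):
    # Stand-in for a simulated annual energy-use intensity as a function of two
    # design parameters (e.g. insulation level, window-to-wall ratio) in [0, 1].
    return 120 - 40 * x[:, 0] + 25 * x[:, 1] ** 2 + 5 * np.sin(6 * x[:, 0])

X_train = rng.uniform(0, 1, (60, 2))          # sampled design alternatives
y_train = building_simulation(X_train)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# The Bayesian surrogate returns a predictive standard deviation that can be
# used to warn when the emulator should not be trusted.
X_new = rng.uniform(0, 1, (5, 2))
mean, std = gp.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    warn = "  <- low confidence, fall back to the full simulation" if s > 2.0 else ""
    print(f"predicted energy use: {m:6.1f} +/- {s:4.1f}{warn}")
```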
6

Suivi en service de la durée de vie des ombilicaux dynamiques pour l’éolien flottant / Fatigue monitoring of dynamic power cables for floating wind turbines

Spraul, Charles 12 April 2018 (has links)
The present work introduces a methodology to monitor the fatigue damage of the dynamic power cable of a floating wind turbine. The suggested approach consists in using numerical simulations to compute the power cable response for the sea states observed on site. The quantities of interest are then obtained at any location along the cable length through post-treatment of the simulation results. The cable has to be instrumented in order to quantify and reduce the uncertainties on the calculated response of the power cable. Indeed, some parameters of the numerical model should be calibrated on a regular basis in order to monitor the evolution of the cable properties that might change over time. In this context, this manuscript describes and compares various approaches to analyse the sensitivity of the power cable response to variations of the parameters to be monitored. The purpose is to provide guidance in the choice of the instrumentation for the cable. Principal component analysis allows identifying the main modes of power cable response variation when the studied parameters are varied. Various methods are also assessed for the calibration of the monitored cable parameters. Special care is given to the quantification of the remaining uncertainty on the fatigue damage. The considered approaches are expensive to apply, as they require a large number of model evaluations and as each numerical simulation takes a long time to run. Surrogate models are thus employed to replace the numerical model, and again different options are considered. The proposed methodology is applied to a simplified cable configuration under conditions inspired by the FLOATGEN project.
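A compact sketch of the principal component step described above, using an invented two-parameter cable-response stand-in rather than the thesis's model: the dominant spatial modes of response variation indicate which measurements along the cable are most informative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
s = np.linspace(0.0, 1.0, 200)                 # normalized position along the cable

def cable_response(stiffness, drag):
    # Stand-in for a stress-range (or curvature) profile along the cable length.
    return stiffness * np.exp(-4 * s) + drag * np.sin(2 * np.pi * s) ** 2

# Perturb the two monitored parameters around nominal values and collect the
# corresponding response profiles.
params = rng.normal([1.0, 0.5], [0.1, 0.05], size=(300, 2))
responses = np.array([cable_response(k, c) for k, c in params])

pca = PCA(n_components=3).fit(responses)
print("variance explained by the first modes:", pca.explained_variance_ratio_)
# Each row of pca.components_ is a spatial mode of response variation; relating
# the modes to the individual parameters suggests which sensor locations carry
# the most information for calibration.
```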
7

Study of a wideband sinuous feed for reflector antenna applications

Mutonkole, Ngoy December 2013 (has links)
Thesis (MScEng)-- Stellenbosch University, 2013. / ENGLISH ABSTRACT: This thesis presents a thorough study of the printed sinuous antenna and its characterisation as a feed for reflector antenna applications. Two different techniques are used in this study, namely a parametric study and an efficient surrogate-based optimisation strategy. A planar sinuous antenna over a reflecting ground plane, with no absorber lining, is designed following a parameter study from which effective design guidelines are derived. The designed prototype displays a bandwidth ratio of more than 3:1, from 1.9 to 6.2 GHz, at a measured return loss of 10 dB, representing a significant improvement over the octave band previously achieved with a similar antenna. An optimisation-based approach is followed in formally investigating a conical sinuous antenna over a reflecting ground plane. An efficient surrogate-based optimisation strategy, in which the antenna's response is approximated by a Kriging model, is used. The search for optimal design parameters, as well as improvements in the accuracy of the Kriging model, is accomplished by using expected improvement as the infill sampling criterion. The antenna is optimised for return loss, aperture efficiency for a prime-focus paraboloid reflector, and cross-polarisation, and results from the optimisation are used to derive effective design guidelines and performance limitations. The investigations are conducted for the 2 to 6 GHz band, and the obtained results can be easily applied to designs with wider bandwidths. Simulation results reveal improved return loss, aperture efficiency and cross-polarisation performance compared to what has previously been reported for this antenna.
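The expected-improvement infill criterion mentioned above can be sketched in a few lines; the candidate predictions below are invented numbers, not values taken from the thesis's Kriging model.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """EI for a minimization problem, given Kriging mean mu and std sigma."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Invented example: predicted objective (to be minimized) and predictive
# uncertainty of the Kriging model at four candidate antenna designs.
mu = np.array([-9.5, -10.2, -9.8, -11.0])
sigma = np.array([0.3, 1.5, 0.1, 0.05])
y_best = -10.8                                   # best objective simulated so far
ei = expected_improvement(mu, sigma, y_best)
print("run the next full-wave simulation at candidate", int(np.argmax(ei)))
```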
8

Seismic experimental analyses and surrogate models of multi-component systems in special-risk industrial facilities

Nardin, Chiara 22 December 2022 (has links)
Nowadays, earthquakes are among the most catastrophic natural events, with significant human, socio-economic and environmental impacts. Moreover, based on both observations of damage following recent major and moderate seismic events and on numerical and experimental studies, it clearly emerges that critical non-structural components (NSCs), which are ubiquitous in most industrial facilities, are particularly, and even disproportionately, vulnerable to those events. Nonetheless, and despite their great importance, seismic provisions for industrial facilities and their process equipment are still based on the classical load-and-resistance factor design (LRFD) approach; a performance-based earthquake engineering (PBEE) approach should instead be preferred. In this vein, much research has been devoted in recent years to setting up computational fragility frameworks for special-risk industrial components and structures. However, within a PBEE perspective, studies have clearly highlighted: i) a lack of definition of performance objectives for NSCs; ii) the need for fully comprehensive testing-campaign data on coupling effects between main structures and NSCs. In this respect, this doctoral thesis introduces a computational framework for an efficient and accurate seismic state-dependent fragility analysis; it is based on a combination of data acquired from an extensive experimental shake table test campaign on a full-scale prototype industrial steel frame structure and recent surrogate-based forward uncertainty quantification (UQ) advancements. Specifically, the framework is applied to a real-world application consisting of seismic shake table tests of a representative industrial multi-storey frame structure equipped with complex process components, carried out at the EUCENTRE facility in Italy within the European SPIF project: Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities. The results of this experimental research campaign also aspire to improve the understanding of these complex systems and the knowledge of FE modelling techniques. The main goals are to reduce the huge computational burden and to assess when coupling effects between NSCs and the main structure come into play. Insights provided by innovative monitoring systems were then deployed to develop and validate numerical and analytical models. At the same time, the adoption of Der Kiureghian's stochastic site-based ground motion model (GMM) was deemed necessary to severely excite the process equipment and to supplement the scarcity of real records with a specific frequency content capable of enhancing coupling effects. Finally, to assess the seismic risk of NSCs in those special facilities, this thesis introduces state-dependent fragility curves that consider the accumulation of damage due to sequential seismic events. To this end, the computational burden was alleviated by adopting polynomial chaos expansion (PCE) surrogate models. More precisely, the dimensionality of the seismic input random vector was reduced by performing principal component analysis (PCA) on the experimental realizations. Subsequently, by bootstrapping on the experimental design, separate PCE coefficients were determined, yielding a full response sample at each point. Eventually, empirical state-dependent fragility curves were derived.
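As one illustration of the final step, empirical fragility curves are commonly obtained by fitting a lognormal model to exceedance data by maximum likelihood; the sketch below uses synthetic data and stands in for, rather than reproduces, the thesis's PCE/PCA-based procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Synthetic data: an intensity measure for each analysis (e.g. peak floor
# acceleration in g) and whether the component exceeded a given damage state.
im = rng.uniform(0.05, 1.5, 400)
true_theta, true_beta = 0.6, 0.4
exceed = rng.uniform(size=im.size) < norm.cdf(np.log(im / true_theta) / true_beta)

def neg_log_likelihood(params):
    theta, beta = params                         # median capacity and dispersion
    p = np.clip(norm.cdf(np.log(im / theta) / beta), 1e-12, 1 - 1e-12)
    return -np.sum(exceed * np.log(p) + (~exceed) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.5, 0.5],
               bounds=[(0.01, 5.0), (0.05, 2.0)])
print("fitted median capacity and dispersion:", fit.x)
# A state-dependent curve repeats this fit conditional on the damage state
# reached after the preceding shock in the seismic sequence.
```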
9

Efficient adaptive sampling applied to multivariate, multiple output rational interpolation models, with applications in electromagnetics-based device modelling

Lehmensiek, Robert December 2001 (has links)
Thesis (PhD) -- Stellenbosch University, 2001. / ENGLISH ABSTRACT: A robust and efficient adaptive sampling algorithm for multivariate, multiple output rational interpolation models, based on convergents of Thiele-type branched continued fractions, is presented. A variation of the standard branched continued fraction method is proposed that uses approximation to establish a non-rectangular grid of support points. Starting with a low order interpolant, the technique systematically increases the order by optimally choosing new support points in the areas of highest error, until the desired accuracy is achieved. In this way, accurate surrogate models are established from a small number of support points, without assuming any a priori knowledge of the microwave structure under study. The technique is illustrated and evaluated on several passive microwave structures, but is general enough to be applied to many other modelling problems.
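A 1-D toy sketch of the two ingredients described above (the thesis itself treats multivariate, multiple-output, branched continued fractions): a Thiele-type rational interpolant built from inverse differences, refined adaptively by adding a support point where the error is largest. The target function is an invented stand-in for an expensive electromagnetic simulation.

```python
import numpy as np

def thiele_coefficients(x, f):
    """Inverse-difference coefficients of the Thiele continued fraction."""
    x = np.asarray(x, float)
    g = np.asarray(f, float).copy()
    a = np.empty(len(x))
    a[0] = g[0]
    for k in range(1, len(x)):
        g[k:] = (x[k:] - x[k - 1]) / (g[k:] - g[k - 1])   # next inverse differences
        a[k] = g[k]
    return a

def thiele_eval(a, nodes, x):
    """Evaluate a0 + (x - x0)/(a1 + (x - x1)/(a2 + ...)) from the back."""
    t = np.full_like(np.asarray(x, float), a[-1])
    for k in range(len(a) - 2, -1, -1):
        t = a[k] + (x - nodes[k]) / t
    return t

def target(x):
    # Invented stand-in for a costly full-wave simulation response.
    return np.tanh(4 * x) / (1.0 + x ** 2)

nodes = [0.0, 0.5, 1.0]                       # small initial set of support points
grid = np.linspace(0.0, 1.0, 401)
for _ in range(12):
    a = thiele_coefficients(nodes, target(np.asarray(nodes)))
    err = np.abs(thiele_eval(a, np.asarray(nodes), grid) - target(grid))
    if err.max() < 1e-6:
        break
    nodes.append(float(grid[err.argmax()]))   # refine where the error peaks
    nodes.sort()
print(f"{len(nodes)} support points, max error {err.max():.2e}")
# Here the error is measured against the true function for clarity; the thesis
# instead estimates it from the difference between successive convergents, so
# that no extra expensive evaluations are needed.
```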
10

Developing Efficient Strategies for Automatic Calibration of Computationally Intensive Environmental Models

Razavi, Seyed Saman January 2013 (has links)
Environmental simulation models have been playing a key role in civil and environmental engineering decision making processes for decades. The utility of an environmental model depends on how well the model is structured and calibrated. Model calibration is typically in an automated form where the simulation model is linked to a search mechanism (e.g., an optimization algorithm) such that the search mechanism iteratively generates many parameter sets (e.g., thousands of parameter sets) and evaluates them through running the model in an attempt to minimize differences between observed data and corresponding model outputs. The challenge arises when the environmental model is computationally intensive to run (with run-times of minutes to hours, for example), as then any automatic calibration attempt imposes a large computational burden. Such a challenge may force model users to accept sub-optimal solutions and not achieve the best model performance. The objective of this thesis is to develop innovative strategies to circumvent the computational burden associated with automatic calibration of computationally intensive environmental models. The first main contribution of this thesis is the development of a strategy called “deterministic model preemption”, which opportunistically evades unnecessary model evaluations in the course of a calibration experiment and can save a significant portion of the computational budget (as much as 90% in some cases). Model preemption monitors the intermediate simulation results while the model is running and terminates (i.e., pre-empts) the simulation early if it recognizes that further running the model would not guide the search mechanism. This strategy is applicable to a range of automatic calibration algorithms (i.e., search mechanisms) and is deterministic in that it leads to exactly the same calibration results as when preemption is not applied. Another main contribution of this thesis is the development and use of the concept of “surrogate data”, which is a reasonably small but representative proportion of a full set of calibration data. This concept is inspired by existing surrogate modelling strategies, where a surrogate model (also called a metamodel) is developed and utilized as a fast-to-run substitute for an original computationally intensive model. A framework is developed to efficiently calibrate hydrologic models to the full set of calibration data while running the original model only on surrogate data for the majority of candidate parameter sets, a strategy which leads to considerable computational saving. To this end, mapping relationships are developed to approximate the model performance on the full data based on the model performance on surrogate data. This framework is applicable to the calibration of any environmental model for which appropriate surrogate data and mapping relationships can be identified. As another main contribution, this thesis critically reviews and evaluates the large body of literature on surrogate modelling strategies from various disciplines, as they are the most commonly used methods to relieve the computational burden associated with computationally intensive simulation models. To reliably evaluate these strategies, a comparative assessment and benchmarking framework is developed which presents a clear, computational-budget-dependent definition of the success or failure of surrogate modelling strategies.
Two large families of surrogate modelling strategies are critically scrutinized and evaluated: “response surface surrogate” modelling, which involves statistical or data-driven function approximation techniques (e.g., kriging, radial basis functions, and neural networks), and “lower-fidelity physically-based surrogate” modelling strategies, which develop and utilize simplified models of the original system (e.g., a groundwater model with a coarse mesh). This thesis raises fundamental concerns about response surface surrogate modelling and demonstrates that, although they might be less efficient, lower-fidelity physically-based surrogates are generally more reliable, as they preserve, to some extent, the physics involved in the original model. Five different surface water and groundwater models are used across this thesis to test the performance of the developed strategies and to support the discussions. However, the strategies developed are typically simulation-model-independent and can be applied to the calibration of any computationally intensive simulation model that has the required characteristics. This thesis leaves the reader with a suite of strategies for efficient calibration of computationally intensive environmental models, while providing some guidance on how to select, implement, and evaluate the appropriate strategy for a given environmental model calibration problem.
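The deterministic model preemption idea lends itself to a short sketch: when the calibration objective (here a sum of squared errors) can only grow as the simulation marches through time, a candidate parameter set can be abandoned as soon as its partial objective exceeds the preemption threshold, without changing which candidate ultimately wins. The toy model, data and random-sampling "search mechanism" below are invented for illustration, not drawn from the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
observed = np.cumsum(rng.uniform(0, 1, 365))     # stand-in for observed daily flows

def simulate_step(params, t):
    # Stand-in for one time step of a slow hydrologic model.
    return params[0] * t + params[1] * np.sqrt(t + 1.0)

def preemptable_sse(params, threshold):
    sse = 0.0
    for t in range(observed.size):
        sse += (simulate_step(params, t) - observed[t]) ** 2
        if sse > threshold:                       # the objective can only increase,
            return None, t + 1                    # so stop: this candidate cannot win
    return sse, observed.size

best = np.inf
saved, total = 0, 0
for _ in range(200):                              # candidates from any search method
    candidate = rng.uniform([0.0, 0.0], [2.0, 5.0])
    sse, steps = preemptable_sse(candidate, best)
    total += observed.size
    saved += observed.size - steps
    if sse is not None and sse < best:
        best = sse
print(f"best SSE {best:.1f}; {100 * saved / total:.0f}% of model time steps avoided")
```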
