  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Intelligent fault diagnosis of gearboxes and its applications on wind turbines

Hussain, Sajid 01 February 2013 (has links)
The development of condition monitoring and fault diagnosis systems for wind turbines has received considerable attention in recent years. With wind power meeting a growing share of Canada’s renewable electricity demand, installations of new wind turbines are growing significantly in the region. Hence, there is a need for efficient condition monitoring and fault diagnosis systems for wind turbines. The gearbox, one of the highest-risk elements in a wind turbine, is responsible for its smooth operation, and the availability of the whole system depends on the serviceability of the gearbox. This work presents signal processing and soft computing techniques that increase the detection and diagnosis capabilities of wind turbine gearbox monitoring systems based on vibration signal analysis. Although various vibration-based fault detection and diagnosis techniques for gearboxes exist in the literature, the task remains difficult, especially because of heavy background noise and a large solution search space in real-world applications. The objective of this work is to develop a novel, intelligent system for reliable, real-time monitoring of wind turbine gearboxes. The developed system incorporates three major processes: detecting the faults, extracting the features, and making the decisions. The fault detection process uses intelligent filtering techniques to extract fault information buried in heavy background noise. The feature extraction process extracts fault-sensitive, vibration-based transient features that best describe the health of the gearboxes. The decision making module implements probabilistic decision theory based on Bayesian inference; it also devises an intelligent decision theory based on fuzzy logic and a fault semantic network. 
Experimental data from a gearbox test rig and real-world data from wind turbines are used to verify the viability, reliability, and robustness of the methods developed in this thesis. The experimental test rig operates at various speeds and allows the implementation of different gearbox faults such as gear tooth crack, tooth breakage, bearing faults, and shaft misalignment. The application of hybrid conventional and evolutionary optimization techniques to enhance the performance of the existing filtering and fault detection methods in this domain is demonstrated. Efforts have been made to decrease the processing time of the fault detection process and to make it suitable for real-world applications. Compared to the classic evolutionary optimization framework, considerable improvement in speed has been achieved with no degradation in the quality of results. The novel feature extraction methods developed in this thesis recognize the different fault signatures in the vibration signals and estimate their severity under different operating conditions. Finally, this work also demonstrates the application of intelligent decision support methods for fault diagnosis in gearboxes. / UOIT
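The Bayesian decision step described in this abstract can be illustrated with a minimal sketch; the health states, priors, and likelihoods below are hypothetical, for illustration only, and are not taken from the thesis:

```python
# Minimal Bayesian fault-classification sketch: given a vibration-feature
# observation, update the probability of each gearbox health state.
# All states and numbers below are illustrative, not from the thesis.

PRIOR = {"healthy": 0.90, "tooth_crack": 0.06, "bearing_fault": 0.04}

# Likelihood of observing "high sideband energy" under each state.
LIKELIHOOD = {"healthy": 0.02, "tooth_crack": 0.70, "bearing_fault": 0.30}

def posterior(prior, likelihood):
    """Bayes' rule: P(state | evidence) ∝ P(evidence | state) * P(state)."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

post = posterior(PRIOR, LIKELIHOOD)
diagnosis = max(post, key=post.get)
```

Even with a strong prior toward "healthy", a feature that is far more likely under a fault state shifts the posterior decisively, which is the essence of the probabilistic decision module described above.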
2

A framework for evolutionary optimization applications in water distribution systems

Morley, Mark S. January 2008 (has links)
The application of optimization to Water Distribution Systems encompasses the use of computer-based techniques for problems in many different areas of system design, maintenance and operational management. As well as laying out the configuration of new WDS networks, optimization is commonly needed to assist in the rehabilitation or reinforcement of existing network infrastructure, in which alternative scenarios driven by investment constraints and hydraulic performance are used to demonstrate a cost-benefit relationship between different network intervention strategies. Moreover, the ongoing operation of a WDS is also subject to optimization, particularly with respect to the minimization of energy costs associated with pumping and storage and the calibration of hydraulic network models to match observed field data. Increasingly, Evolutionary Optimization techniques, of which Genetic Algorithms are the best-known examples, are applied to aid practitioners in these facets of design, management and operation of water distribution networks as part of Decision Support Systems (DSS). Evolutionary Optimization employs processes akin to those of natural selection and “survival of the fittest” to manipulate a population of individual solutions, which, over time, “evolve” towards optimal solutions. Such algorithms are characterized, however, by large numbers of function evaluations. This, coupled with the computational complexity of hydraulically simulating water networks, incurs significant computational overhead and can limit the applicability and scalability of this technology in this domain. Accordingly, this thesis presents a methodology for applying Genetic Algorithms to Water Distribution Systems. A number of new procedures are presented for improving the performance of such algorithms when applied to complex engineering problems. 
These techniques attack the problem of minimising the impact of the inherent computational complexity of these problems from a number of angles. A novel genetic representation is presented which combines the algorithmic simplicity of the classical binary-string Genetic Algorithm with the performance advantages inherent in an integer-based representation. Further algorithmic improvements are demonstrated with an intelligent mutation operator that “learns” which genes have the greatest impact on the quality of a solution and concentrates the mutation operations on those genes. A technique for caching solutions - recalling the results for solutions that have already been calculated - is demonstrated to reduce runtimes for Genetic Algorithms applied to problems with significant computational complexity in their evaluation functions. A novel reformulation of the Genetic Algorithm for implementing robust stochastic optimizations is presented which employs the caching technology developed to produce a multiple-objective optimization methodology that demonstrates dramatically improved quality of solutions for a given runtime of the algorithm. These extensions to the Genetic Algorithm techniques are coupled with a supporting software library that represents a standardized modelling architecture for the representation of connected networks. This library gives rise to a system for distributing the computational load of hydraulic simulations across a network of computers. This methodology is established to provide a viable, scalable technique for accelerating evolutionary optimization applications.
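The solution-caching technique described above amounts to memoizing the fitness function; a minimal sketch follows, with a toy stand-in for the expensive hydraulic evaluation (not the thesis's software):

```python
# Sketch of evaluation caching for a Genetic Algorithm: genomes already
# evaluated are looked up instead of re-simulated. The fitness function
# below is a toy stand-in for a costly hydraulic network simulation.

import random

EVALS = {"count": 0}          # counts true (expensive) evaluations

def expensive_fitness(genome):
    """Stand-in for a costly simulation; tracks how often it runs."""
    EVALS["count"] += 1
    return sum((g - 3) ** 2 for g in genome)

cache = {}

def cached_fitness(genome):
    key = tuple(genome)       # genomes must be hashable to serve as keys
    if key not in cache:
        cache[key] = expensive_fitness(genome)
    return cache[key]

random.seed(1)
population = [[random.randint(0, 5) for _ in range(3)] for _ in range(50)]
scores = [cached_fitness(g) for g in population]
# Re-evaluating the same population touches only the cache:
scores_again = [cached_fitness(g) for g in population]
```

Duplicate genomes, which arise naturally as a GA population converges, are evaluated once; every later occurrence costs only a dictionary lookup.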
3

Urychlení evolučních algoritmů pomocí rozhodovacích stromů a jejich zobecnění / Accelerating evolutionary algorithms by decision trees and their generalizations

Klíma, Jan January 2011 (has links)
Evolutionary algorithms are among the most successful methods for solving non-traditional optimization problems. As they employ only function values of the objective function, evolutionary algorithms converge much more slowly than optimization methods for smooth functions. This property is particularly disadvantageous when values of the objective function are obtained in a costly and time-consuming empirical way. However, evolutionary algorithms can be substantially sped up by employing a sufficiently accurate regression model of the empirical objective function. This thesis surveys the suitability of regression trees and their ensembles as surrogate models to accelerate the convergence of evolutionary optimization.
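The surrogate-model idea surveyed here can be sketched in miniature. In this illustration a 1-nearest-neighbour predictor stands in for the regression-tree ensembles, and the objective is a toy function; both are assumptions for the sketch, not the thesis's setup:

```python
# Surrogate-assisted evolutionary loop: a cheap model built from past
# evaluations pre-screens offspring, and only the most promising candidate
# is sent to the expensive objective. A 1-nearest-neighbour predictor
# stands in here for a regression-tree surrogate.

import random

def expensive_objective(x):   # toy stand-in for a costly empirical function
    return (x - 2.0) ** 2

archive = []                  # (x, f(x)) pairs evaluated so far

def surrogate(x):
    """Predict f(x) from the nearest previously evaluated point."""
    nearest = min(archive, key=lambda p: abs(p[0] - x))
    return nearest[1]

random.seed(0)
parent = 10.0
for _ in range(30):
    archive.append((parent, expensive_objective(parent)))
    offspring = [parent + random.gauss(0, 1) for _ in range(8)]
    # Pre-screen 8 offspring with the surrogate; truly evaluate only one.
    best = min(offspring, key=surrogate)
    if expensive_objective(best) < expensive_objective(parent):
        parent = best
```

The expensive function is called once per accepted-or-rejected candidate instead of eight times per generation, which is exactly the saving a surrogate model buys.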
4

Dimensionamento econômico de redes de distribuição de água considerando os custos de manutenção e de implantação / Economic design of water distribution networks considering the costs of maintenance and deployment

Marcos Rodrigues Pinto 04 February 2015 (has links)
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico / An approach is presented for the design optimization of water distribution networks (RDA), considering the deployment cost (CI) and the maintenance cost (CM) simultaneously, applying a multi-population, multi-objective algorithm. An RDA can be seen as a graph whose edges are the pipes and whose vertices are the nodes. Choosing the diameters that make the network most economical while meeting technical constraints is a combinatorial problem for which direct methods can become infeasible as the number of pipe sections grows. The optimization problem addressed consists of simultaneously minimizing the CI and CM of an RDA, based on the cost of the pipes according to their diameters and lengths. To carry out the optimization, a hybrid evolutionary algorithm (EA), the Multi-Island Niched-Pareto Genetic Algorithm (MINPGA), was developed by adapting and merging a multi-population algorithm, MIIGA, with a multi-objective algorithm based on Pareto niching, the NPGA. The Environment Protection Agency Network Engine Tool (EPANET) was used as the hydraulic simulator. The Optimization of NEtwork By Evolutionary AlgoRithm (ONEBEAR) scheme was developed and applied to three networks of different sizes and layouts, one of them with 666 pipe sections. A computer program was written to implement ONEBEAR, connecting EPANET to MINPGA and thus enabling the intended optimization. The results showed both the importance of considering maintenance cost over the service life of an RDA and the feasibility of addressing the multi-objective optimization problem with a multi-population EA. The scheme proved robust and flexible, solving the optimization problem for both branched and looped networks, including the network with 666 pipe sections. The Pareto front generated for each problem showed the dominant solutions considered feasible. 
The feasibility of the networks was verified against the technical requirement of minimum pressure per node, calculated by EPANET. The networks with the lowest CI and
5

Advancing computational materials design and model development using data-driven approaches

Sose, Abhishek Tejrao 02 February 2024 (has links)
Molecular dynamics (MD) simulations are used to gain a fundamental understanding of the molecular-level mechanisms of physical processes. This assists in tuning the key features affecting the development of novel hybrid materials: an application demanding a particular function can be served by hybrids that blend the properties of pure materials. To run MD simulations, however, an accurate representation of the interatomic potentials, i.e., force-field (FF) models, remains crucial. This thesis explores the fusion of MD simulations, uncertainty quantification, and data-driven methodologies to accelerate the computational design of innovative materials and models across the following interconnected chapters. Beginning with the development of force fields for atomic-level systems and coarse-grained models for FCC metals, the study progresses to exploring the intricate interfacial interactions between 2D materials, such as graphene and MoS2, and water. Current state-of-the-art model development faces the challenges of high-dimensional model input parameters and unknown robustness of the developed models. The use of advanced optimization techniques such as particle swarm optimization (PSO) integrated with MD enhances the accuracy and precision of FF models, and Bayesian uncertainty quantification (BUQ) helps researchers estimate the robustness of a developed model. Furthermore, the complex structure and dynamics of water confined between and around sheets were unraveled using 3D Convolutional Neural Networks (3D-CNN). Specifically, through classification and regression models, water molecule ordering/disordering and atomic density profiles were accurately predicted, elucidating nuanced interplays between sheet compositions and confined water molecules. 
To further the computational design of hybrid materials, this thesis investigates polymer composites with functionalized MOFs, shedding light on crucial factors governing their compatibility and performance, including the structure and dynamics of functionalized MOFs in the polymer matrix. Additionally, it investigates the biomedical potential of porous MOFs as drug delivery vehicles (DDVs). Often overlooked is the pivotal role of solvents (used in MOF synthesis or found in relevant body fluids) in the drug adsorption and release process. This work underscores the solvent's impact on drug adsorption within MOFs by comparing results in its presence and absence. Building on these findings, the study examines the effects of MOF functionalization on tuning the drug adsorption and release process, and explores how different physical and chemical properties influence drug adsorption within MOFs. Furthermore, the research explores the potential of functionalized MOFs for improved carbon capture, considering their application in energy-related contexts. By harnessing machine learning and deep learning, the thesis introduces innovative pathways for material property prediction and design, emphasizing the fusion of computational methodologies with data-driven approaches to advance molecular-level understanding and propel future material design. / Doctor of Philosophy / Envision a world where scientific exploration reaches the microscopic scale, powered by advanced computational tools. In this frontier of materials science, researchers employ sophisticated computer simulations to study the intricate properties of materials, particularly Metal-Organic Frameworks (MOFs). These MOFs, akin to microscopic molecular sponges, exhibit remarkable abilities to capture gases or hold medicinal drug compounds. 
This thesis meticulously studies MOFs alongside materials like graphene, Boron Nitride and Molybdenum disulfide, investigating their interactions with water with unprecedented precision. Through these detailed explorations and the fusion of cutting-edge technologies, we aim to unlock a future featuring enhanced drug delivery systems, improved energy storage solutions, and innovative energy applications.
6

Towards Evaluation of the Adaptive-Epsilon-R-NSGA-II algorithm (AE-R-NSGA-II) on industrial optimization problems

Kashfi, S. Ruhollah January 2015 (has links)
Simulation-based optimization methodologies are widely applied to real-world optimization problems. In developing these methodologies, algorithms play a critical role alongside the simulation models. One example is an evolutionary multi-objective optimization algorithm known as the Reference point-based Non-dominated Sorting Genetic Algorithm-II (R-NSGA-II), which has shown promising results. Its successor, R-NSGA-II with adaptive diversity control (hereafter the Adaptive Epsilon-R-NSGA-II (AE-R-NSGA-II) algorithm), is one of the latest extensions of R-NSGA-II and is in the early stages of its development. So far, little research exists on its applicability and usefulness, especially for real-world optimization problems. This thesis evaluates the behavior and performance of AE-R-NSGA-II and is, to the best of our knowledge, the first study of its kind. To this end, we investigated the algorithm in two experiments, using two benchmark functions, 10 performance measures, and a behavioral characteristics analysis method. The experiments are designed to (i) assess the behavior and performance of AE-R-NSGA-II, and (ii) facilitate efficient use of the algorithm in real-world optimization problems through algorithm parameter configuration (a parametric study) according to the problem characteristics. The behavior and performance of the algorithm, in terms of the diversity of the solutions obtained and their convergence to the optimal Pareto front, is studied in the first experiment by manipulating a parameter of the algorithm referred to as the adaptive epsilon coefficient value (C), and in the second experiment by manipulating the reference point (R) according to the distance between the reference point and the global Pareto front. 
As one contribution of this study, two new diversity performance measures (called Modified spread and Population diversity) and a behavioral characteristics analysis method called R-NSGA-II adaptive epsilon value have been introduced and applied. They can be adapted for the evaluation of any reference-point-based algorithm such as AE-R-NSGA-II. Additionally, this project contributed to improving the Benchmark software, for instance by identifying new features that can facilitate future research in this area. Some of the findings of the study are as follows: (i) systematic changes of the C and R parameters influence the diversity and convergence of the obtained solutions (to the optimal Pareto front and to the reference point); (ii) there is a tradeoff between diversity and convergence speed, according to the systematic changes in the settings; (iii) the proposed diversity measures and the method are applicable and useful in combination with other performance measures. Moreover, we found that, because of unexpected abnormal behaviors of the algorithm, in some cases the results are conflicting and therefore impossible to interpret. This shows that further research is still required to verify the applicability and usefulness of AE-R-NSGA-II in practice. The knowledge gained in this study helps improve the algorithm.
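For context, a spread-style diversity measure over a non-dominated front can be computed as below. This is a generic illustration of the family of measures involved, not the thesis's Modified spread or Population diversity definitions:

```python
# A generic diversity measure for a 2-objective non-dominated front: the
# standard deviation of the gaps between consecutive points. Zero means a
# perfectly even spacing; large values mean clumped solutions.
# (Illustrative only; NOT the thesis's "Modified spread" definition.)

import math

def gap_spread(front):
    """front: list of (f1, f2) points on a 2-objective Pareto front."""
    pts = sorted(front)
    gaps = [math.dist(pts[i + 1], pts[i]) for i in range(len(pts) - 1)]
    mean = sum(gaps) / len(gaps)
    return math.sqrt(sum((g - mean) ** 2 for g in gaps) / len(gaps))

even = [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]      # evenly spaced front
clumped = [(0, 4), (0.1, 3.9), (0.2, 3.8), (4, 0)]   # clumped front
```

Reference-point-based algorithms such as AE-R-NSGA-II deliberately concentrate solutions near the reference point, which is why diversity measures of this kind are needed to quantify the resulting trade-off.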
7

Introduction of statistics in optimization

Teytaud, Fabien 08 December 2011 (has links) (PDF)
In this thesis we study two optimization fields. In the first part, we study the use of evolutionary algorithms for solving derivative-free optimization problems in continuous space. In the second part we are interested in multistage optimization, where we have to make decisions in a discrete environment with finite horizon and a large number of states; here we use in particular Monte-Carlo Tree Search algorithms. In the first part, we work on evolutionary algorithms in a parallel context, when a large number of processors are available. We start by presenting some state-of-the-art evolutionary algorithms and then show that these algorithms are not well designed for parallel optimization. Because these algorithms are population-based, they should be well suited to parallelization, but experiments show that their results are far from the theoretical bounds. To resolve this discrepancy, we propose some rules (such as a new selection ratio or a faster decrease of the step-size) to improve the evolutionary algorithms. Experiments on several evolutionary algorithms show that they reach the theoretical speedup with the help of these new rules. Concerning the work on multistage optimization, we start by presenting some state-of-the-art algorithms (Min-Max, Alpha-Beta, Monte-Carlo Tree Search, Nested Monte-Carlo). After that, we show the generality of the Monte-Carlo Tree Search algorithm by successfully applying it to the game of Havannah. The application has been a real success: today, the strongest Havannah programs use Monte-Carlo Tree Search algorithms instead of the classical Alpha-Beta. Next, we study more precisely the Monte-Carlo part of the Monte-Carlo Tree Search algorithm. Three generic rules are proposed to improve this Monte-Carlo policy, and experiments demonstrate their efficiency.
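The selection step of Monte-Carlo Tree Search is commonly driven by the UCB1 rule, which balances exploiting high-reward children against exploring rarely visited ones. A minimal sketch, with hypothetical visit and reward statistics:

```python
# UCB1 child selection, the tree policy typically used inside MCTS:
# pick the child maximizing mean reward plus an exploration bonus.
# The visit/reward statistics below are made up for illustration.

import math

def ucb1_choice(children, c=math.sqrt(2)):
    """children: list of (visits, total_reward) pairs; returns an index."""
    total_visits = sum(v for v, _ in children)
    def score(child):
        visits, reward = child
        if visits == 0:
            return float("inf")       # always try unvisited children first
        return reward / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(range(len(children)), key=lambda i: score(children[i]))

# A well-explored strong child vs. a barely-explored one vs. an unvisited one:
stats = [(100, 60.0), (2, 0.5), (0, 0.0)]
```

With an unvisited child present it is selected immediately; once all children have statistics, the `log(total_visits)/visits` bonus keeps rarely tried moves competitive against the current best mean.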
8

Introduction of statistics in optimization / Introduction de statistiques en optimisation

Teytaud, Fabien 08 December 2011 (has links)
In this thesis we study two optimization fields. 
In the first part, we study the use of evolutionary algorithms for solving derivative-free optimization problems in continuous space. In the second part we are interested in multistage optimization, where we have to make decisions in a discrete environment with finite horizon and a large number of states; here we use in particular Monte-Carlo Tree Search algorithms. In the first part, we work on evolutionary algorithms in a parallel context, when a large number of processors are available. We start by presenting some state-of-the-art evolutionary algorithms and then show that these algorithms are not well designed for parallel optimization. Because these algorithms are population-based, they should be well suited to parallelization, but experiments show that their results are far from the theoretical bounds. To resolve this discrepancy, we propose some rules (such as a new selection ratio or a faster decrease of the step-size) to improve the evolutionary algorithms. Experiments on several evolutionary algorithms show that they reach the theoretical speedup with the help of these new rules. Concerning the work on multistage optimization, we start by presenting some state-of-the-art algorithms (Min-Max, Alpha-Beta, Monte-Carlo Tree Search, Nested Monte-Carlo). After that, we show the generality of the Monte-Carlo Tree Search algorithm by successfully applying it to the game of Havannah. The application has been a real success: today, the strongest Havannah programs use Monte-Carlo Tree Search algorithms instead of the classical Alpha-Beta. Next, we study more precisely the Monte-Carlo part of the Monte-Carlo Tree Search algorithm. Three generic rules are proposed to improve this Monte-Carlo policy, and experiments demonstrate their efficiency.
9

Metody evoluční optimalizace založené na modelech / Model-based evolutionary optimization methods

Bajer, Lukáš January 2018 (has links)
Model-based black-box optimization is a topic that has been intensively studied both in academia and in industry. Real-world optimization tasks in particular are often characterized by expensive or time-demanding objective functions, for which statistical models can save resources or speed up the optimization. Each of the three parts of the thesis concerns one such model: first, copulas are used instead of a graphical model in estimation-of-distribution algorithms; second, RBF networks serve as surrogate models in mixed-variable genetic algorithms; and third, Gaussian processes are employed in Bayesian optimization algorithms as a sampling model and in the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as a surrogate model. The last combination, described in the core part of the thesis, resulted in the Doubly Trained Surrogate CMA-ES (DTS-CMA-ES). This algorithm uses the uncertainty prediction of a Gaussian process to select only a part of the CMA-ES population for evaluation with the expensive objective function, while the mean prediction is used for the rest. The DTS-CMA-ES improves upon state-of-the-art surrogate continuous optimizers in several benchmark tests.
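The core DTS-CMA-ES idea, sending only the most uncertain candidates to the true objective, can be sketched as follows. The Gaussian-process predictions here are mocked with fixed numbers, not a real GP, and the objective is a toy function:

```python
# Sketch of uncertainty-driven mixed evaluation: from a population of
# candidates, only the k points whose surrogate prediction is most
# uncertain are evaluated with the expensive objective; the surrogate
# mean is used for the rest. GP predictions are mocked numbers here.

def mixed_evaluation(candidates, predictions, true_f, k):
    """predictions: {x: (mean, std)}; truly evaluate the k most uncertain."""
    by_uncertainty = sorted(candidates, key=lambda x: -predictions[x][1])
    expensive = set(by_uncertainty[:k])
    return {x: (true_f(x) if x in expensive else predictions[x][0])
            for x in candidates}

def true_f(x):                 # toy stand-in for the expensive objective
    return x * x

cands = [1.0, 2.0, 3.0, 4.0]
preds = {1.0: (1.1, 0.05), 2.0: (3.0, 2.0), 3.0: (9.2, 0.1), 4.0: (10.0, 5.0)}
fitness = mixed_evaluation(cands, preds, true_f, k=2)
```

Points the model is already confident about (1.0 and 3.0) keep their predicted values, while the two uncertain points are corrected by the true objective, which is exactly where an expensive evaluation buys the most information.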
10

Sensitivity analysis and evolutionary optimization for building design

Wang, Mengchao January 2014 (has links)
In order to achieve global carbon reduction targets, buildings must be designed to be energy efficient. Building performance simulation methods, together with sensitivity analysis and evolutionary optimization methods, can be used to generate design solutions and performance information useful in identifying energy- and cost-efficient designs. Sensitivity analysis is used to identify the design variables that have the greatest impact on the design objectives and constraints. Multi-objective evolutionary optimization is used to find a Pareto set of design solutions that optimize the conflicting design objectives while satisfying the design constraints; building design is an inherently multi-objective process. For instance, there is commonly a desire to minimise both the building energy demand and capital cost while maintaining thermal comfort. Sensitivity analysis has previously been coupled with model-based optimization in order to reduce the computational effort of running a robust optimization and to provide insight into the solution sensitivities in the neighbourhood of each optimum solution. However, little research has explored the extent to which the solutions found by a building design optimization can be used for a global or local sensitivity analysis, or the extent to which local sensitivities differ from global sensitivities. It has also been common for the sensitivity analysis to be conducted using continuous variables, whereas building optimization problems are more typically formulated using a mixture of discretised-continuous variables (with physical meaning) and categorical variables (without physical meaning). 
This thesis investigates three main questions: the form of global sensitivity analysis most appropriate for problems having mixed discretised-continuous and categorical variables; the extent to which samples taken from an optimization run can be used in a global sensitivity analysis, the optimization process causing these solutions to be biased; and the extent to which global and local sensitivities differ. The experiments conducted in this research are based on the mid-floor of a commercial office building having 5 zones, located in Birmingham, UK. The optimization and sensitivity analysis problems are formulated with 16 design variables, including orientation, heating and cooling setpoints, window-to-wall ratios, start and stop times, and construction types. The design objectives are the minimisation of both energy demand and capital cost, with solution infeasibility being a function of occupant thermal comfort. It is concluded that a robust global sensitivity analysis can be achieved using stepwise regression with bidirectional elimination, rank transformation of the variables, and the BIC (Bayesian information criterion). It is concluded that, when the optimization is based on a genetic algorithm, solutions taken from the start of the optimization process can be reliably used in a global sensitivity analysis, and there is therefore no need to generate a separate set of random samples for the sensitivity analysis. The extent to which the convergence of the variables during the optimization can be used as a proxy for the variable sensitivities has also been investigated. It is concluded that it is not possible to identify the relative importance of variables through the optimization, even though the most important variable exhibited fast and stable convergence. 
Finally, it is concluded that differences exist in the variable rankings produced by the global and local sensitivity methods, although the top-ranked variables from each approach tend to be the same. It is also concluded that the sensitivity of the objectives and constraints to all variables is obtainable through a local sensitivity analysis, whereas a global sensitivity analysis is only likely to identify the most important variables. The repeatability of these conclusions has been investigated and confirmed by applying the methods to the example design problem with the building located in different climates (Birmingham, UK; San Francisco, US; and Chicago, US).
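The BIC used to steer the stepwise regression trades goodness of fit against model size; lower is better, and the log(n) penalty discourages adding weak variables. A small numeric sketch, with illustrative residual sums of squares rather than values from the thesis experiments:

```python
# BIC for a Gaussian linear model, the criterion driving bidirectional
# stepwise variable selection: n*ln(RSS/n) + k*ln(n), lower is better.
# RSS values below are illustrative, not from the thesis.

import math

def bic(rss, n, k):
    """n samples, k fitted parameters, rss = residual sum of squares."""
    return n * math.log(rss / n) + k * math.log(n)

n = 200                               # number of simulated design samples
bic_small = bic(rss=50.0, n=n, k=3)   # intercept + 2 design variables
bic_big = bic(rss=49.5, n=n, k=8)     # 5 more variables, tiny RSS gain
```

Here the larger model fits marginally better, but the `k*ln(n)` penalty outweighs the gain, so stepwise selection keeps the smaller model; this is how the procedure avoids flagging unimportant design variables as sensitive.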
