1

Genetic algorithm based self-adaptive techniques for direct load balancing in nonstationary environments

Vavak, Frantisek January 1997 (has links)
No description available.
2

A simplicial homology algorithm for Lipschitz optimisation

Endres, Stefan January 2017 (has links)
The simplicial homology global optimisation (SHGO) algorithm is a general purpose global optimisation algorithm based on applications of simplicial integral homology and combinatorial topology. SHGO approximates the homology groups of a complex built on a hypersurface homeomorphic to a complex on the objective function. This provides both approximations of locally convex subdomains in the search space through Sperner's lemma (Sperner, 1928) and a useful visual tool for characterising and efficiently solving higher dimensional black and grey box optimisation problems. This complex is built up using sampling points within the feasible search space as vertices. The algorithm is specialised in finding all the local minima of an objective function with expensive function evaluations efficiently, which makes it especially suitable for applications such as energy landscape exploration. SHGO was initially developed as an improvement on the topographical global optimisation (TGO) method first proposed by Törn (1986; 1990; 1992). It is proven that the SHGO algorithm will always outperform TGO on function evaluations if the objective function is Lipschitz smooth. In this dissertation SHGO is applied to non-convex problems with linear and box constraints with bounds placed on the variables. Numerical experiments on linearly constrained test problems show that SHGO gives competitive results compared to TGO and the recently developed Lc-DISIMPL algorithm (Paulavičius and Žilinskas, 2016) as well as the PSwarm and DIRECT-L1 algorithms. Furthermore, SHGO is compared with the TGO, basin-hopping (BH) and differential evolution (DE) global optimisation algorithms over a large selection of black-box problems with bounds placed on the variables from the SciPy (Jones, Oliphant, Peterson, et al., 2001–) benchmarking test suite. A Python implementation of the SHGO and TGO algorithms, published under an MIT license, can be found at https://bitbucket.org/upiamcompthermo/shgo/. / Dissertation (MEng)--University of Pretoria, 2017. / Chemical Engineering / MEng / Unrestricted
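A minimal usage sketch of SHGO, using the version that was later incorporated into SciPy as scipy.optimize.shgo (the Bitbucket implementation above exposes a similar interface); the Rosenbrock objective, bounds and sampling settings below are illustrative choices, not taken from the dissertation:

```python
# Sketch: global optimisation of a box-constrained test function with SHGO.
# Assumes SciPy >= 1.2, where shgo is available as scipy.optimize.shgo.
from scipy.optimize import shgo, rosen

# Box constraints (bounds on the variables), as in the benchmark problems above.
bounds = [(-2.0, 2.0), (-2.0, 2.0)]

# shgo builds a simplicial complex from sampling points in the feasible space,
# then refines every locally convex subdomain it identifies.
result = shgo(rosen, bounds, n=64, sampling_method='sobol')

print("global minimum:", result.x, result.fun)
# result.xl / result.funl list *all* local minima found, which is the feature
# highlighted in the abstract for energy-landscape exploration.
print("all local minima found:", result.xl)
```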
3

Applying advanced methods to power system planning studies

Mr Guang Ya Yang Unknown Date (has links)
No description available.
4

Requirements specification for the optimisation function of an electric utility's energy flow simulator

Hatton, Marc 03 1900 (has links)
Thesis (MEng)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: Efficient and reliable energy generation capability is vital to any country's economic growth. Many strategic, tactical and operational decisions take place along the energy supply chain. Shortcomings in South Africa's electricity production industry have led to the development of an energy flow simulator. The energy flow simulator is claimed to incorporate all significant factors involved in the energy flow process from primary energy to end-use consumption. The energy flow simulator thus provides a decision support system for electric utility planners. The original aim of this study was to develop a global optimisation model and integrate it into the existing energy flow simulator. After gaining an understanding of the architecture of the energy flow simulator and scrutinising a large number of variables, it was concluded that global optimisation was infeasible. The energy flow simulator is made up of four modules and is operated on a module-by-module basis, with inputs and outputs flowing between modules. One of the modules, namely the primary energy module, lends itself well to optimisation. The primary energy module simulates coal stockpile levels through Monte Carlo simulation. Classic inventory management policies were adapted to fit the structure of the primary energy module, which is treated as a black box. The coal stockpile management policies that are introduced provide a prescriptive means to deal with the stochastic nature of the coal stockpiles. As the planning horizon continuously changes and the entire energy flow simulator has to be re-run, an efficient algorithm is required to optimise stockpile management policies. Optimisation is achieved through the rapidly converging cross-entropy method. By integrating the simulation and optimisation model, a prescriptive capability is added to the primary energy module. Furthermore, this study shows that coal stockpile management policies can be improved. An integrated solution is developed by nesting the primary energy module within the optimisation model. Scalability is incorporated into the optimisation model through a coding approach that automatically adjusts to an ever-changing planning horizon as well as the commissioning and decommissioning of power stations. As this study is the first of several research projects to come, it paves the way for future research on the energy flow simulator by proposing future areas of investigation. / AFRIKAANSE OPSOMMING: Effektiewe en betroubare energie-opwekkingsvermoë is van kardinale belang in enige land se ekonomiese groei. Baie strategiese, taktiese en operasionele besluite word deurgaans in die energie-verskaffingsketting geneem. Tekortkominge in Suid-Afrika se elektrisiteitsopwekkingsindustrie het tot die ontwikkeling van 'n energie-vloei-simuleerder gelei. Die energie-vloei-simuleerder vervat na bewering al die belangrike faktore wat op die energie-vloei-proses betrekking het van primêre energieverbruik tot eindgebruik. Die energie-vloei-simuleerder verskaf dus 'n ondersteuningstelsel aan elektrisiteitsdiensbeplanners vir die neem van besluite. Die oorspronklike doel van hierdie studie was om 'n globale optimeringsmodel te ontwikkel en te integreer in die bestaande energie-vloei-simuleerder. Na 'n begrip aangaande die argitektuur van die energie-vloei-simuleerder gevorm is en 'n groot aantal veranderlikes ondersoek is, is die slotsom bereik dat globale optimering nie lewensvatbaar is nie. 
Die energie-vloei-simuleerder bestaan uit vier eenhede en werk op 'n eenheid-tot-eenheid basis met insette en uitsette wat tussen eenhede vloei. Een van die eenhede, naamlik die primêre energiemodel, leen dit goed tot optimering. Die primêre energiemodel boots steenkoolreserwevlakke deur Monte Carlo-simulering na. Tradisionele voorraadbestuursbeleide is aangepas om die primêre energiemodel se struktuur wat as 'n swartboks hanteer word, te pas. Die steenkoolreserwebestuursbeleide wat ingestel is, verskaf 'n voorgeskrewe middel om met die stogastiese aard van die steenkoolreserwes te werk. Aangesien die beplanningshorison deurgaans verander en die hele energie-vloei-simulering weer met die energie-vloei-simuleerder uitgevoer moet word, word 'n effektiewe algoritme benodig om die re-serwebestuursbeleide te optimeer. Optimering word bereik deur die vinnige konvergerende kruis-entropie-metode. 'n Geïntegreerde oplossing is ontwikkel deur die primêre energiemodel en die optimering funksie saam te voeg. Skalering word ingesluit in die optimeringsmodel deur 'n koderingsbenadering wat outomaties aanpas tot 'n altyd-veranderende beplanningshorison asook die ingebruikneem en uitgebruikstel van kragstasies. Aangesien hierdie studie die eerste van verskeie navorsingsprojekte is, baan dit die weg vir toekomstige navorsing oor die energie-vloeisimuleerder deur ondersoekareas vir die toekoms voor te stel.
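The cross-entropy search described in the English abstract can be sketched as follows. This is an illustrative toy under stated assumptions: the stockpile simulator, the two policy parameters (reorder level and order-up-to level) and the cost figures are invented stand-ins for the primary energy module, not the author's actual model; only the cross-entropy update itself is the point.

```python
# Sketch of the cross-entropy method applied to a black-box stockpile simulator.
import numpy as np

rng = np.random.default_rng(0)

def simulate_cost(policy, n_days=365):
    """Toy Monte Carlo stockpile simulator: policy = (reorder_level, order_up_to)."""
    reorder, target = policy
    stock, cost = 40.0, 0.0
    for _ in range(n_days):
        burn = rng.gamma(shape=4.0, scale=2.0)        # random daily coal burn
        stock -= burn
        if stock < reorder:                           # replenish up to the target level
            cost += 50.0 + 1.2 * (target - stock)     # fixed + variable delivery cost
            stock = target
        cost += 0.05 * max(stock, 0.0)                # holding cost
        cost += 500.0 * max(-stock, 0.0)              # heavy penalty for running out
    return cost

# Cross-entropy loop: sample policies from a Gaussian, keep the elite fraction,
# refit the Gaussian to the elites, repeat until the distribution collapses.
mean, std = np.array([30.0, 80.0]), np.array([20.0, 40.0])
for it in range(30):
    samples = rng.normal(mean, std, size=(100, 2)).clip(1.0, 200.0)
    scores = np.array([simulate_cost(p) for p in samples])
    elites = samples[np.argsort(scores)[:10]]
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    if std.max() < 0.5:
        break

print("best policy (reorder level, order-up-to):", mean)
```

Because the simulator is treated purely as a black box, the same loop applies unchanged when the inner evaluation is a full run of the primary energy module rather than this toy.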
5

Model Integration in Data Mining: From Local to Global Decisions

Bella Sanjuán, Antonio 31 July 2012 (has links)
Machine learning is a research area that provides algorithms and techniques capable of learning automatically from past experience. These techniques are essential in the area of knowledge discovery in databases (KDD), whose main phase is typically known as data mining. The KDD process can be viewed as learning a model from past data (model generation) and applying that model to new data (model deployment). The model deployment phase is very important, because users and, especially, organisations make decisions depending on the output of the models. In general, each model is learned independently, trying to obtain the best (local) result. However, when several models are used together, some of them may depend on one another (for example, the outputs of one model may be the inputs of another) and constraints appear. In this scenario, the best local decision for each problem treated individually may not give the best global result, or the result obtained may not be valid if it does not satisfy the problem constraints. The area of customer relationship management (CRM) has given rise to real problems where data mining and (global) optimisation must be used together. For example, product prescription problems try to distinguish or rank the products to be offered to each customer (or, symmetrically, to choose the customers to whom a product should be offered). These areas (KDD, CRM) lack tools that give a more complete view of the problems and a better integration of the models according to their interdependencies and the global and local constraints. The classical application of data mining to product prescription problems has generally / Bella Sanjuán, A. (2012). Model Integration in Data Mining: From Local to Global Decisions [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16964 / Palancia
6

Maritime manoeuvring optimization : path planning in minefield threat environments

Muhandiramge, Ranga January 2008 (has links)
The aim of the research project that is the subject of this thesis is to apply mathematical techniques, especially those in the area of operations research, to the problem of maritime minefield transit. We develop several minefield models applicable to different aspects of the minefield problem. These include optimal mine clearance, shortest time traversal and time constrained traversal. We hope the suite of models and tools developed will help make minefield clearance and traversal both safer and more efficient, and that exposition of the models will bring a clearer understanding of the mine problem from a mathematical perspective. In developing the solutions to minefield models, extensive use is made of network path planning algorithms, particularly for the Weight Constrained Shortest Path Problem (WCSPP), for which the current state-of-the-art algorithm is extended. This is done by closer integration of Lagrangean relaxation and preprocessing to reduce the size of the network. This is then integrated with gap-closing algorithms based on enumeration to provide optimal or near-optimal solutions to the path planning problem. We provide extensive computational evidence on the performance of our algorithm and compare it to other algorithms found in the literature. This tool then became fundamental in solving the various separate minefield models. Our models can be broadly separated into obstacle models, in which mine-affected regions are treated as obstacles to be avoided, and continuous threat models, in which each point of space has an associated risk. In the latter case, we wish to find a path that minimizes the integral of the risk along the path while constraining the length of the path. We call this the Continuous Euclidean Length Constrained Minimum Cost Path Problem (C-LCMCPP), and we present a novel network approach to solving this continuous problem. This approach makes it possible to calculate a global lower bound on a non-convex optimization problem.
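The Lagrangean relaxation idea behind the WCSPP can be sketched in a few lines: penalise the accumulated path weight in the edge cost with a multiplier λ, run an ordinary shortest-path solve, and search over λ for the best lower bound and the cheapest feasible path found along the way. This is a generic illustration under assumed data structures (a dict-of-dicts graph with (cost, weight) edge labels and an arbitrary cap on λ), not the extended algorithm developed in the thesis:

```python
# Sketch: Lagrangean relaxation for the Weight Constrained Shortest Path Problem.
# Graph edges carry (cost, weight); the weight budget W is relaxed into the objective.
import heapq

def dijkstra(graph, src, dst, lam):
    """Shortest path from src to dst under the combined edge length cost + lam * weight."""
    heap = [(0.0, 0.0, 0.0, src, [src])]   # (combined, cost, weight, node, path)
    seen = set()
    while heap:
        comb, cost, wt, u, path = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            return comb, cost, wt, path
        for v, (c, w) in graph.get(u, {}).items():
            if v not in seen:
                heapq.heappush(heap, (comb + c + lam * w, cost + c, wt + w, v, path + [v]))
    return float("inf"), float("inf"), float("inf"), []

def wcspp_lagrangian(graph, src, dst, W, iters=50):
    """Bisection on the multiplier; returns (Lagrangian lower bound, best feasible path)."""
    lo, hi = 0.0, 100.0                # assumed cap on the multiplier for the bisection
    best = (float("inf"), None)
    lower_bound = 0.0                  # valid starting bound since edge costs are non-negative
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        comb, cost, wt, path = dijkstra(graph, src, dst, lam)
        lower_bound = max(lower_bound, comb - lam * W)   # Lagrangian dual value
        if wt > W:
            lo = lam                   # path too heavy: penalise weight harder
        else:
            if cost < best[0]:
                best = (cost, path)
            hi = lam                   # feasible: try a smaller penalty for a cheaper path
    return lower_bound, best

# Tiny example: cheapest s -> t path whose total weight does not exceed 5.
graph = {
    "s": {"a": (1.0, 4.0), "b": (4.0, 1.0)},
    "a": {"t": (1.0, 4.0)},
    "b": {"t": (4.0, 1.0)},
}
print(wcspp_lagrangian(graph, "s", "t", W=5.0))
```

The gap between the returned lower bound and the best feasible path is exactly what the enumeration-based gap-closing step described above is meant to eliminate.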
7

On the medium-term simulation of sediment transport and morphological evolution in complex coastal areas

Williams, Benjamin Graham January 2016 (has links)
A program for selecting the optimal wave conditions for morphodynamically accelerated simulations of coastal evolution (‘OPTIWAVE’) has been constructed using a novel Genetic Algorithm approach. The optimization routine iteratively reduces the complexity of an incident wave climate by removing the events that contribute least to a target sediment transport pattern, and then ‘evolving’ a new set of weights for the remaining wave conditions such that the target sediment transport pattern (and magnitude) is optimally maintained. The efficacy of OPTIWAVE in satisfactorily reducing the incident wave climate is tested against three coastal modeling paradigms of increasing complexity: (a) A simple 1-D beach profile model (no tides); (b) A 2-D micro-tidal beach; (c) A tidal inlet, where waves, tides, and wave-current interaction add significant complexity. The simple test case for a beach profile shows that OPTIWAVE is capable of maintaining a target profile-integrated long-shore sediment transport rate. The calculated skill and RMSE of the reduced wave climate is a good indicator of its ability to reproduce the target sediment transport pattern. The optimal number of wave conditions is identified by an ‘inflection point’ at a critical number of wave conditions, where a less complex wave climate results in substantially reduced skill (increased error). The assumption that the ability of OPTIWAVE to reproduce a target sediment transport field is a valid proxy for the potential skill of a morphologically accelerated simulation is assessed for the case of a 2D micro-tidal beach. The skill of the accelerated models, which use a state-of-the-art ‘event-parallel’ method of simulating bed evolution from multiple wave conditions in parallel, is tested against a ‘brute force’ reference simulation that considers the full wave forcing. A strong positive correlation is found between the skill of the reduced wave climate to reproduce a target sediment transport pattern, and the resultant skill of the accelerated morphodynamic model against the ‘brute force’ reference simulation. Finally, the ability to combine reduced wave and tide climates for simulations that must consider both wave and tidal forcing is assessed against a ‘brute force’ reference simulation of the seasonal evolution of the Ancao inlet, Algarve, Portugal. The reference simulation is validated against a comprehensive field dataset collected in 1999, and is shown to qualitatively reproduce key features of inlet behavior over a seasonal period. The combination of reduced wave and tidal climates in accelerated ‘event-parallel’ models did not successfully reproduce the reference seasonal morphological evolution of Ancao inlet. Assessing the model Brier Skill Score showed that the model was more successful in reproducing the reference morphology in areas dominated by tidal forcing, but did not have any predictive power in regions where morphological evolution is due to some combination of both wave and tidal processes.
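A rough sketch of the weight-evolution step described above, using a simple genetic algorithm: each individual is a vector of weights for the retained wave conditions, and fitness is the mismatch against a target transport pattern. Everything here (the random per-condition "transport pattern" matrix, population size, mutation scale) is an illustrative assumption, not OPTIWAVE itself:

```python
# Sketch: evolving weights for a reduced wave climate so that the weighted sum of
# per-condition sediment transport patterns matches a target pattern (least RMSE).
import numpy as np

rng = np.random.default_rng(1)

n_conditions, n_cells = 8, 200                          # retained wave conditions, grid cells
transport = rng.normal(size=(n_conditions, n_cells))    # hypothetical per-condition patterns
target = transport.T @ rng.dirichlet(np.ones(n_conditions))   # target pattern to reproduce

def fitness(weights):
    """Negative RMSE between the weighted pattern and the target (higher is better)."""
    pattern = transport.T @ weights
    return -np.sqrt(np.mean((pattern - target) ** 2))

pop = rng.random((60, n_conditions))                    # initial population of weight vectors
for generation in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]        # truncation selection of the fittest
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        child = np.where(rng.random(n_conditions) < 0.5, a, b)   # uniform crossover
        child += rng.normal(scale=0.02, size=n_conditions)       # Gaussian mutation
        children.append(np.clip(child, 0.0, None))               # weights stay non-negative
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best weights:", np.round(best, 3), "RMSE:", -fitness(best))
```

In OPTIWAVE the event-removal step would wrap around a loop like this, deleting the condition whose removal degrades the best attainable fit the least and re-evolving the remaining weights.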
8

Bayesian Gaussian processes for sequential prediction, optimisation and quadrature

Osborne, Michael A. January 2010 (has links)
We develop a family of Bayesian algorithms built around Gaussian processes for various problems posed by sensor networks. We firstly introduce an iterative Gaussian process for multi-sensor inference problems, and show how our algorithm is able to cope with data that may be noisy, missing, delayed and/or correlated. Our algorithm can also effectively manage data that features changepoints, such as sensor faults. Extensions to our algorithm allow us to tackle some of the decision problems faced in sensor networks, including observation scheduling. Along these lines, we also propose a general method of global optimisation, Gaussian process global optimisation (GPGO), and demonstrate how it may be used for sensor placement. Our algorithms operate within a complete Bayesian probabilistic framework. As such, we show how the hyperparameters of our system can be marginalised by use of Bayesian quadrature, a principled method of approximate integration. Similar techniques also allow us to produce full posterior distributions for any hyperparameters of interest, such as the location of changepoints. We frame the selection of the positions of the hyperparameter samples required by Bayesian quadrature as a decision problem, with the aim of minimising the uncertainty we possess about the values of the integrals we are approximating. Taking this approach, we have developed sampling for Bayesian quadrature (SBQ), a principled competitor to Monte Carlo methods. We conclude by testing our proposals on real weather sensor networks. We further benchmark GPGO on a wide range of canonical test problems, over which it achieves a significant improvement on its competitors. Finally, the efficacy of SBQ is demonstrated in the context of both prediction and optimisation.
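The GPGO idea sketched in the abstract — fit a Gaussian process to the evaluations seen so far and choose the next sample by an acquisition rule — can be illustrated with a standard expected-improvement loop. This is a generic sketch using scikit-learn's GP regressor (an assumed substitute; the thesis has its own implementation and marginalises hyperparameters via Bayesian quadrature rather than maximum likelihood):

```python
# Sketch: Gaussian-process global optimisation of a 1-D function by expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return np.sin(3.0 * x) + 0.2 * x ** 2          # toy function to minimise on [-3, 3]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(4, 1))                # a few initial evaluations
y = objective(X).ravel()

grid = np.linspace(-3, 3, 600).reshape(-1, 1)      # candidate points for the acquisition
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement (minimisation)
    x_next = grid[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best point found:", X[np.argmin(y)].item(), "value:", y.min())
```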
9

Accelerated sampling of energy landscapes

Mantell, Rosemary Genevieve January 2017 (has links)
In this project, various computational energy landscape methods were accelerated using graphics processing units (GPUs). Basin-hopping global optimisation was treated using a version of the limited-memory BFGS algorithm adapted for CUDA, in combination with GPU-acceleration of the potential calculation. The Lennard-Jones potential was implemented using CUDA, and an interface to the GPU-accelerated AMBER potential was constructed. These results were then extended to form the basis of a GPU-accelerated version of hybrid eigenvector-following. The doubly-nudged elastic band method was also accelerated using an interface to the potential calculation on GPU. Additionally, a local rigid body framework was adapted for GPU hardware. Tests were performed for eight biomolecules represented using the AMBER potential, ranging in size from 81 to 22,811 atoms, and the effects of minimiser history size and local rigidification on the overall efficiency were analysed. Improvements relative to CPU performance of up to two orders of magnitude were obtained for the largest systems. These methods have been successfully applied to both biological systems and atomic clusters. An existing interface between a code for free energy basin-hopping and the SuiteSparse package for sparse Cholesky factorisation was refined, validated and tested. Tests were performed for both Lennard-Jones clusters and selected biomolecules represented using the AMBER potential. Significant acceleration of the vibrational frequency calculations was achieved, with negligible loss of accuracy, relative to the standard diagonalisation procedure. For the larger systems, exploiting sparsity reduces the computational cost by factors of 10 to 30. The acceleration of these computational energy landscape methods opens up the possibility of investigating much larger and more complex systems than previously accessible. A wide array of new applications is now computationally feasible.
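As a point of reference for the basin-hopping / L-BFGS combination described above, here is a CPU-only sketch using SciPy's basinhopping on a small Lennard-Jones cluster in reduced units; the GPU-accelerated CUDA/AMBER machinery of the thesis is not reproduced here, and the cluster size and step parameters are illustrative:

```python
# Sketch: basin-hopping global optimisation of a small Lennard-Jones cluster,
# with L-BFGS-B as the local minimiser (reduced units: epsilon = sigma = 1).
import numpy as np
from scipy.optimize import basinhopping

N_ATOMS = 6

def lj_energy(flat_coords):
    """Total Lennard-Jones energy of the cluster; coordinates are flattened (3N,)."""
    x = flat_coords.reshape(N_ATOMS, 3)
    diff = x[:, None, :] - x[None, :, :]
    r2 = np.sum(diff ** 2, axis=-1)
    iu = np.triu_indices(N_ATOMS, k=1)              # unique atom pairs
    inv6 = 1.0 / r2[iu] ** 3
    return np.sum(4.0 * (inv6 ** 2 - inv6))

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=3 * N_ATOMS)       # random starting geometry

result = basinhopping(
    lj_energy,
    x0,
    niter=200,
    stepsize=0.5,
    minimizer_kwargs={"method": "L-BFGS-B"},
)
# For LJ6 the known global minimum is approximately -12.7121 (octahedral cluster).
print("lowest energy found:", result.fun)
```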
10

Méthodes d’optimisation numérique pour le calcul de stabilité thermodynamique des phases / Numerical optimisation methods for the phase thermodynamic stability computation

Boudjlida, Khaled 27 September 2012 (has links)
La modélisation des équilibres thermodynamiques entre phases est essentielle pour le génie des procédés et le génie pétrolier. L'analyse de la stabilité des phases est un problème de la plus haute importance parmi les calculs d'équilibre des phases. Le calcul de stabilité décide si un système se présente dans un état monophasique ou multiphasique ; si le système se sépare en deux ou plusieurs phases, les résultats du calcul de stabilité fournissent une initialisation de qualité pour les calculs de flash (Michelsen, 1982b), et permettent la validation des résultats des calculs de flash multiphasique. Le problème de la stabilité des phases est résolu par une minimisation sans contraintes de la fonction distance au plan tangent à la surface de l'énergie libre de Gibbs (« tangent plane distance », ou TPD). Une phase est considérée comme étant thermodynamiquement stable si la fonction TPD est non-négative pour tous les points stationnaires, tandis qu'une valeur négative indique une phase thermodynamiquement instable. La surface TPD dans l'espace compositionnel est non-convexe et peut être hautement non linéaire, ce qui fait que les calculs de stabilité peuvent être extrêmement difficiles pour certaines conditions, notamment aux voisinages des singularités. On distingue deux types de singularités : (i) au lieu de la limite du test de stabilité (stability test limit locus, ou STLL), et (ii) à la spinodale (la limite intrinsèque de la stabilité thermodynamique). Du point de vue géométrique, la surface TPD présente un point selle, correspondant à une solution non triviale (à la STLL) ou triviale (à la spinodale). Dans le voisinage de ces singularités, le nombre d'itérations de toute méthode de minimisation augmente dramatiquement et la divergence peut survenir. Cet inconvénient est bien plus sévère pour la STLL que pour la spinodale. Le présent mémoire est structuré sur trois grandes lignes : (i) après la présentation du critère du plan tangent à la surface de l'énergie libre de Gibbs, plusieurs solutions itératives (gradient et méthodes d'accélération de la convergence, méthodes de second ordre de Newton et méthodes quasi-Newton) du problème de la stabilité des phases sont présentées et analysées, surtout du point de vue de leur comportement près des singularités ; (ii) suivant l'analyse des valeurs propres, du conditionnement de la matrice Hessienne et de l'échelle du problème, ainsi que la représentation de la surface de la fonction TPD, la résolution du calcul de la stabilité des phases par la minimisation des fonctions coût modifiées est adoptée. Ces fonctions « coût » sont choisies de telle sorte que tout point stationnaire (y compris les points selle) de la fonction TPD soit converti en minimum global ; la Hessienne à la STLL est dans ce cas positive définie, et non indéfinie, ce qui mène à une amélioration des propriétés de convergence, comme montré par plusieurs exemples pour des mélanges représentatifs, synthétiques et naturels. Finalement, (iii) les calculs de stabilité sont menés par une méthode d'optimisation globale, dite de Tunneling. La méthode de Tunneling consiste à détruire (en plaçant un pôle) les minima déjà trouvés par une méthode de minimisation locale, et à tunneliser pour trouver un point situé dans une autre vallée de la surface de la fonction coût qui contient un minimum à une valeur plus petite de la fonction coût ; le processus continue jusqu'à ce que les critères du minimum global soient remplis. 
Plusieurs exemples soigneusement choisis montrent la robustesse et l'efficacité de la méthode de Tunneling pour la minimisation de la fonction TPD, ainsi que pour la minimisation des fonctions coût modifiées. / The thermodynamic phase equilibrium modelling is an essential issue for petroleum and process engineering. Phase stability analysis is a highly important problem among phase equilibrium calculations. The stability computation establishes whether a given mixture is in one or several phases. If a mixture splits into two or more phases, the stability calculations provide valuable initialisation sets for the flash calculations, and allow the validation of multiphase flash calculations. The phase stability problem is solved as an unconstrained minimisation of the tangent plane distance (TPD) function to the Gibbs free energy surface. A phase is thermodynamically stable if the TPD function is non-negative at all its stationary points, while a negative value indicates an unstable case. The TPD surface is non-convex and may be highly non-linear in the compositional space; for this reason, phase stability calculation may be extremely difficult for certain conditions, mainly within the vicinity of singularities. One can distinguish two types of singularities: (i) the stability test limit locus (STLL), and (ii) the intrinsic limit of stability (spinodal). Geometrically, the TPD surface exhibits a saddle point, corresponding to a non-trivial (at the STLL) or trivial solution (at the spinodal). In the immediate vicinity of these singularities, the number of iterations of all minimisation methods increases dramatically, and divergence could occur. This difficulty is more severe for the STLL than for the spinodal. The work presented herein is structured as follows: (i) after the introduction to the concept of tangent plane distance to the Gibbs free energy surface, several iterative methods (gradient, acceleration methods, second-order Newton and quasi-Newton) are presented, and their behaviour analysed, especially near singularities. (ii) Following the analysis of Hessian matrix eigenvalues and conditioning, of problem scaling, as well as of the TPD surface representation, the solution of the phase stability computation using modified objective functions is adopted. The latter are chosen in such a manner that any stationary point of the TPD function becomes a global minimum of the modified function; at the STLL, the Hessian matrix is no longer indefinite, but positive definite. This leads to better convergence behaviour, as shown in various examples for synthetic and naturally occurring mixtures. Finally, (iii) the so-called Tunneling global optimisation method is used for the stability analysis. This method consists in destroying the minima already found (by placing poles) and tunnelling to another valley of the modified objective function to find a new minimum with a smaller value of the objective function. The process is repeated until the criteria for the global minimum are fulfilled. Several carefully chosen examples demonstrate the robustness and the efficiency of the Tunneling method in minimising the TPD function, as well as the modified objective functions.
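For a concrete feel of the TPD function being minimised, here is a toy sketch for a binary liquid mixture described by a two-suffix Margules activity model. The interaction parameter and feed composition are invented for illustration, and the simple multi-start bounded minimisation stands in for the Tunneling scheme discussed above:

```python
# Sketch: phase stability test via tangent plane distance (TPD) minimisation for a
# binary mixture with a two-suffix Margules activity model:
#   ln gamma_1 = A * x2**2,   ln gamma_2 = A * x1**2.
# A and the feed z below are illustrative values only.
import numpy as np
from scipy.optimize import minimize_scalar

A = 2.5                       # Margules parameter (above the critical value 2, so a split can occur)
z = np.array([0.3, 0.7])      # feed (trial phase) composition

def ln_gamma(x):
    return np.array([A * x[1] ** 2, A * x[0] ** 2])

# d_i = ln z_i + ln gamma_i(z): the tangent-plane reference evaluated at the feed.
d = np.log(z) + ln_gamma(z)

def tpd(x1):
    """Reduced TPD of a trial composition (x1, 1 - x1) relative to the feed z."""
    x = np.array([x1, 1.0 - x1])
    return float(np.sum(x * (np.log(x) + ln_gamma(x) - d)))

# Multi-start bounded minimisation over the composition interval 0 < x1 < 1;
# a negative minimum means the feed is unstable and will split into two phases.
starts = [(1e-4, 0.5), (0.5, 1.0 - 1e-4)]
results = [minimize_scalar(tpd, bounds=b, method="bounded") for b in starts]
best = min(results, key=lambda r: r.fun)
verdict = "unstable" if best.fun < -1e-8 else "stable"
print("min TPD:", best.fun, "at x1 =", best.x, "->", verdict)
```

The toy already shows the geometry discussed in the abstract: the trivial stationary point sits at the feed composition, and the non-trivial minima that the Tunneling method is designed to reach lie in separate valleys on either side of it.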
