1

Integration of ranking and selection methods with the multi-objective optimisation cross-entropy method

Von Lorne von Saint Ange, Chantel (2015)
Thesis (MEng)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: A method for multi-objective optimisation using the cross-entropy method (MOO CEM) was recently developed by Bekker & Aldrich (2010) and Bekker (2012). The method aims to identify the nondominated solutions of multi-objective problems, which are often dynamic and stochastic. The method does not use a statistical ranking and selection technique to account for the stochastic nature of the problems it solves. The research in this thesis aims to investigate possible techniques that can be incorporated into the MOO CEM. The cross-entropy method for single-objective optimisation is studied first. It is applied to an interesting problem in the soil sciences and water management domain. The purpose of this was for the researcher to grasp the fundamentals of the cross-entropy method, which will be needed later in the study. The second part of the study documents an overview of multi-objective ranking and selection methods found in literature. The first method covered is the multi-objective optimal computing budget allocation algorithm. The second method extends upon the first to include the concept of an indifference-zone. Both methods aim to maximise the probability of correctly selecting the non-dominated scenarios, while intelligently allocating simulation replications to minimise required sample sizes. These techniques are applied to two problems that are represented by simulation models, namely the buffer allocation problem and a classic single-commodity inventory problem. Performance is measured using the hyperarea indicator and Mann-Whitney U-tests. It was found that the two techniques have significantly different performances, although this could be due to the different number of solutions in the Pareto set. In the third part of the document, the aforementioned multi-objective ranking and selection techniques are incorporated into the MOO CEM. 
Once again, the buffer allocation problem and the inventory problem were chosen as test problems. The results were compared to experiments where the MOO CEM without ranking and selection was used. Results show that the MOO CEM with ranking and selection has varying effects on different problems. Investigating the possibility of incorporating ranking and selection differently in the MOO CEM is recommended as future research. Additionally, the combined algorithm should be tested on more stochastic problems. / AFRIKAANSE OPSOMMING: 'n Metode vir meerdoelige optimering wat gebruik maak van die kruis-entropie-metode (MOO CEM) is onlangs deur Bekker & Aldrich (2010) en Bekker (2012) ontwikkel. Die metode mik om die nie-gedomineerde oplossings van meerdoelige probleme te identifiseer, wat dikwels dinamies en stogasties is. Die metode maak nie gebruik van 'n statistiese orden-en-kies tegniek om die stogastiese aard van die probleem aan te spreek nie. Die navorsing in hierdie tesis poog om moontlike tegnieke wat in die MOO CEM opgeneem kan word, te ondersoek. Die kruis-entropie-metode vir enkeldoelwit optimering is eerste bestudeer. Dit is toegepas op 'n interessante probleem in die grondwetenskappe en waterbestuur domein. Die doel hiervan was om die navorser die grondbeginsels van die kruis-entropie-metode te help verstaan, wat later in die studie benodig sal word. Die tweede gedeelte van die studie verskaf 'n oorsig van meerdoelige orden-en-kies metodes wat in die literatuur aangetref word. Die eerste metode wat bespreek word, is die optimale toedeling van rekenaarbegroting vir multi-doelwit optimering algoritme. Die tweede metode brei op die eerste uit deur die konsep van 'n neutrale sone in te sluit. Beide metodes streef daarna om die waarskynlikheid dat die nie-gedomineerde oplossings korrek gekies word te maksimeer, terwyl dit ook steekproefgroottes probeer minimeer deur die aantal simulasieherhalings intelligent toe te ken.
Hierdie tegnieke word toegepas op twee probleme wat verteenwoordig word deur simulasiemodelle, naamlik die buffer-toedelingsprobleem en 'n klassieke enkelitem voorraadprobleem. Die prestasie van die algoritmes word deur middel van die hiperarea-aanwyser en Mann-Whitney U-toetse gemeet. Daar is gevind dat die twee tegnieke aansienlik verskillend presteer, alhoewel dit as gevolg van die verskillende aantal oplossings in die Pareto versameling kan wees. In die derde gedeelte van die dokument is die bogenoemde meerdoelige orden-en-kies tegnieke in die MOO CEM geïnkorporeer. Weereens is die buffer-toedelingsprobleem en die voorraadprobleem as toetsprobleme gekies. Die resultate is vergelyk met die eksperimente waar die MOO CEM sonder orden-en-kies gebruik is. Resultate toon dat die MOO CEM met orden-en-kies vir verskillende probleme verskillend optree. 'n Ondersoek na 'n alternatiewe manier om orden-en-kies met die MOO CEM te integreer word as toekomstige navorsing voorgestel. Bykomend moet die gekombineerde algoritme op meer stogastiese probleme getoets word.
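The single-objective cross-entropy method studied in the first part of the thesis can be sketched as follows. This is a generic, textbook-style sketch with Gaussian sampling; the sphere objective and all parameter values are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def cross_entropy_minimise(f, dim, n_samples=100, elite_frac=0.1, iters=50, seed=0):
    """Generic cross-entropy minimisation with a Gaussian sampling density."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n_samples, dim))   # sample candidates
        scores = np.array([f(row) for row in x])           # evaluate objective
        elite = x[np.argsort(scores)[:n_elite]]            # keep the best few
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8  # refit density
    return mu

# Minimise the sphere function; its optimum is at the origin.
best = cross_entropy_minimise(lambda v: float(np.sum(v ** 2)), dim=3)
```

The key idea is that the sampling density concentrates around elite samples from one iteration to the next, so ever fewer objective evaluations are wasted on poor regions.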
2

Applying the cross-entropy method in multi-objective optimisation of dynamic stochastic systems

Bekker, James (2012)
Thesis (PhD)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: A difficult subclass of engineering optimisation problems is the class of optimisation problems which are dynamic and stochastic. These problems are often of a non-closed form and thus studied by means of computer simulation. Simulation production runs of these problems can be time-consuming due to the computational burden implied by statistical inference principles. In multi-objective optimisation of engineering problems, large decision spaces and large objective spaces prevail, since two or more objectives are simultaneously optimised and many problems are also of a combinatorial nature. The computational burden associated with solving such problems is even larger than for most single-objective optimisation problems, and hence an efficient algorithm that searches the vast decision space is required. Many such algorithms are currently available, with researchers constantly improving these or developing more efficient algorithms. In this context, the term "efficient" means to provide near-optimised results with minimal evaluations of objective function values. Thus far research has often focused on solving specific benchmark problems, or on adapting algorithms to solve specific engineering problems. In this research, a multi-objective optimisation algorithm, based on the cross-entropy method for single-objective optimisation, is developed and assessed. The aim with this algorithm is to reduce the number of objective function evaluations, particularly when time-dependent (dynamic), stochastic processes, as found in Industrial Engineering, are studied. A brief overview of scholarly work in the field of multi-objective optimisation is presented, followed by a theoretical discussion of the cross-entropy method. The new algorithm is developed, based on this information, and assessed considering continuous, deterministic problems, as well as discrete, stochastic problems.
The latter include a classical single-commodity inventory problem, the well-known buffer allocation problem, and a newly designed, laboratory-sized reconfigurable manufacturing system. Near multi-objective optimisation of two practical problems was also performed using the proposed algorithm. In the first case, some design parameters of a polymer extrusion unit are estimated using the algorithm. The management of carbon monoxide gas utilisation at an ilmenite smelter is complex with many decision variables, and the application of the algorithm in that environment is presented as a second case. Quality indicator values are estimated for thirty-four test problem instances of multi-objective optimisation problems in order to quantify the quality performance of the algorithm, and it is also compared to a commercial algorithm. The algorithm is intended to interface with dynamic, stochastic simulation models of real-world problems. It is typically implemented in a programming language while the simulation model is developed in a dedicated, commercial software package. The proposed algorithm is simple to implement and proved to be efficient on test problems. / AFRIKAANSE OPSOMMING: 'n Moeilike deelklas van optimeringsprobleme in die ingenieurswese is optimeringsprobleme van 'n dinamiese en stogastiese aard. Sulke probleme is dikwels nie-geslote en word gevolglik met behulp van rekenaarsimulasie bestudeer. Die beginsels van statistiese steekproefneming veroorsaak dat produksielopies van hierdie probleme tydrowend is weens die rekenlas wat genoodsaak word. Groot besluitnemingruimtes en doelwitruimtes bestaan in meerdoelige optimering van ingenieursprobleme, waar twee of meer doelwitte gelyktydig geoptimeer word, terwyl baie probleme ook 'n kombinatoriese aard het.
Die rekenlas wat met die oplos van sulke probleme gepaard gaan, is selfs groter as vir die meeste enkeldoelwit optimeringsprobleme, en 'n doeltreffende algoritme wat die meesal uitgebreide besluitnemingsruimte verken, is gevolglik nodig. Daar bestaan tans verskeie sulke algoritmes, terwyl navorsers steeds poog om hierdie algoritmes te verbeter of meer doeltreffende algoritmes te ontwikkel. In hierdie konteks beteken "doeltreffend" dat naby-optimale oplossings verskaf word deur die minimum evaluering van doelwitfunksiewaardes. Navorsing fokus dikwels op oplossing van standaard toetsprobleme, of aanpassing van algoritmes om 'n spesifieke ingenieursprobleem op te los. In hierdie navorsing word 'n meerdoelige optimeringsalgoritme gebaseer op die kruis-entropie-metode vir enkeldoelwit optimering ontwikkel en geassesseer. Die mikpunt met hierdie algoritme is om die aantal evaluerings van doelwitfunksiewaardes te verminder, spesifiek wanneer tydafhanklike (dinamiese), stogastiese prosesse soos wat dikwels in die Bedryfsingenieurswese teëgekom word, bestudeer word. 'n Bondige oorsig van navorsing in die veld van meerdoelige optimering word gegee, gevolg deur 'n teoretiese bespreking van die kruis-entropie-metode. Die nuwe algoritme se ontwikkeling is hierop gebaseer, en dit word geassesseer deur kontinue, deterministiese probleme sowel as diskrete, stogastiese probleme benaderd daarmee op te los. Laasgenoemde sluit in 'n klassieke enkelitem voorraadprobleem, die bekende buffer-toedelingsprobleem, en 'n nuut-ontwerpte, laboratorium-skaal herkonfigureerbare vervaardigingstelsel. Meerdoelige optimering van twee praktiese probleme is met die algoritme uitgevoer. In die eerste geval word sekere ontwerpparameters van 'n polimeer-uittrekeenheid met behulp van die algoritme beraam.
Die bestuur van koolstofmonoksiedbenutting in 'n ilmeniet-smelter is kompleks met verskeie besluitnemingveranderlikes, en die toepassing van die algoritme in daardie omgewing word as 'n tweede geval aangebied. Verskeie gehalte-aanwyserwaardes word beraam vir vier-en-dertig toetsgevalle van meerdoelige optimeringsprobleme om die gehalte-prestasie van die algoritme te kwantifiseer, en dit word ook vergelyk met 'n kommersiële algoritme. Die algoritme is veronderstel om te skakel met dinamiese, stogastiese simulasiemodelle van regtewêreldprobleme. Die algoritme sal tipies in 'n programmeertaal geïmplementeer word terwyl die simulasiemodel in doelmatige, kommersiële programmatuur ontwikkel sal word. Die voorgestelde algoritme is maklik om te implementeer en dit het doeltreffend gewerk op toetsprobleme.
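The nondominated solutions that the MOO CEM approximates are characterised by a simple Pareto-dominance test. A minimal sketch for two minimised objectives, with made-up sample points:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points not dominated by any other point in the list."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (4, 4) is dominated by (2, 2); the other three points form the front.
pts = [(1, 5), (2, 2), (3, 1), (4, 4)]
front = nondominated(pts)
```

Algorithms such as the MOO CEM maintain exactly such a front of mutually nondominated solutions as their running approximation of the Pareto set.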
3

Simulation ranking and selection procedures and applications in network reliability design

Kiekhaefer, Andrew Paul (May 2011)
This thesis presents three novel contributions to the application as well as development of ranking and selection procedures. Ranking and selection is an important topic in the discrete event simulation literature concerned with the use of statistical approaches to select the best or set of best systems from a set of simulated alternatives. Ranking and selection comprises three different approaches: subset selection, indifference zone selection, and multiple comparisons. The methodology addressed in this thesis focuses primarily on the first two approaches: subset selection and indifference zone selection. Our first contribution regards the application of existing ranking and selection procedures to an important body of literature known as system reliability design. If we are capable of modeling a system via a network of arcs and nodes, then the difficult problem of determining the most reliable network configuration, given a set of design constraints, is an optimization problem that we refer to as the network reliability design problem. In this thesis, we first present a novel solution approach for one type of network reliability design optimization problem where total enumeration of the solution space is feasible and desirable. This approach focuses on improving the efficiency of the evaluation of system reliabilities as well as quantifying the probability of correctly selecting the true best design based on the estimation of the expected system reliabilities through the use of ranking and selection procedures, both of which are novel ideas in the system reliability design literature. Altogether, this method eliminates the guesswork that was previously associated with this design problem and maintains significant runtime improvements over the existing methodology.
Our second contribution regards the development of a new optimization framework for the network reliability design problem that is applicable to any topological and terminal configuration as well as solution sets of any size. This framework focuses on improving the efficiency of the evaluation and comparison of system reliabilities, while providing a more robust performance and user-friendly procedure in terms of the input parameter level selection. This is accomplished through the introduction of two novel statistical sampling procedures based on the concepts of ranking and selection: Sequential Selection of the Best Subset and Duplicate Generation. Altogether, this framework achieves the same convergence and solution quality as the baseline cross-entropy approach, but achieves runtime and sample size improvements on the order of 450% to 1500% over the example networks tested. Our final contribution regards the development and extension of the general ranking and selection literature with novel procedures for the problem concerned with the selection of the k-best systems, where system means and variances are unknown and potentially unequal. We present three new ranking and selection procedures: a subset selection procedure, an indifference zone selection procedure, and a combined two-stage subset selection and indifference zone selection procedure. All procedures are backed by proofs of the theoretical guarantees as well as empirical results on the probability of correct selection. We also investigate the effect of various parameters on each procedure's overall performance.
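The subset-selection idea above can be illustrated with a crude screening rule: keep any system whose sample mean is close enough to the best observed mean. This is only a sketch under invented data; the indifference-zone parameter `delta` and the standard-error allowance are assumptions, and this is not a formal Gupta or Rinott procedure:

```python
import math
import statistics

def subset_selection(samples, delta):
    """Keep every system whose sample mean lies within delta (the
    indifference zone) plus a standard-error allowance of the best
    observed mean, assuming larger is better. Illustrative sketch only."""
    means = {k: statistics.mean(v) for k, v in samples.items()}
    ses = {k: statistics.stdev(v) / math.sqrt(len(v)) for k, v in samples.items()}
    best = max(means.values())
    return sorted(k for k in samples if means[k] >= best - delta - ses[k])

# Three simulated systems with a handful of replications each.
reps = {"A": [10.0, 10.2, 9.8], "B": [9.9, 10.1, 10.0], "C": [5.0, 5.2, 4.8]}
kept = subset_selection(reps, delta=0.5)  # C is screened out
```

Formal procedures choose the allowance so that the probability of retaining the true best system meets a stated guarantee; the sketch only conveys the screening structure.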
4

Stochastic Modelling and Intervention of the Spread of HIV/AIDS

Asrul Sani (date unknown)
Since the first cases of HIV/AIDS disease were recognised in the early 1980s, a large number of mathematical models have been proposed. However, the mobility of people among regions, which has an obvious impact on the spread of the disease, has not been much considered in the modelling studies. One of the main reasons is that the models for the spread of the disease in multiple populations are very complex and, as a consequence, they can easily become intractable. In this thesis we provide various new results pertaining to the spread of the disease in mobile populations, including epidemic intervention in multiple populations. We first develop stochastic models for the spread of the disease in a single heterosexual population, considering both constant and varying population sizes. In particular, we consider a class of continuous-time Markov chains (CTMCs). We establish deterministic and Gaussian diffusion analogues of these stochastic processes by applying the theory of density dependent processes. A range of numerical experiments is provided to show how well the deterministic and Gaussian counterparts approximate the dynamic behaviour of the processes. We derive threshold parameters, known as basic reproduction numbers, for both cases: above the threshold the disease is uniformly persistent, while below it the disease-free equilibrium is locally attractive. We find that the threshold conditions for both constant and varying population sizes have the same form. In order to take into account the mobility of people among regions, we extend the stochastic models to multiple populations. Various stochastic models for multiple populations are formulated as CTMCs. The deterministic and Gaussian diffusion counterparts of the corresponding stochastic processes for the multiple populations are also established.
Threshold parameters for the persistence of the disease in the multiple population models are derived by applying the concept of next generation matrices. The results of this study can serve as a basic framework for formulating and analysing a more realistic stochastic model for the spread of HIV in mobile heterogeneous populations—classifying all individuals by age, risk, and level of infectivity, and at the same time considering different modes of the disease transmission. Assuming an accurate mathematical model for the spread of HIV/AIDS disease, another question that we address in this thesis is how to control the spread of the disease in a mobile population. Most previous studies for the spread of the disease focus on identifying the most significant parameters in a model. In contrast, we study these problems as optimal epidemic intervention problems. The study is mostly motivated by the fact that more and more local governments allocate budgets over a certain period of time to combat the disease in their areas. The question is how to allocate this limited budget to minimise the number of new HIV cases, say on a country level, over a finite time horizon as people move among regions. The mathematical models developed in the first part of this thesis are used as dynamic constraints of the optimal control problems. In this thesis, we also introduce a novel approach to solve quite general optimal control problems using the Cross-Entropy (CE) method. The effectiveness of the CE method is demonstrated through several illustrative examples in optimal control. The main application is the optimal epidemic intervention problems discussed above. These are highly non-linear and multidimensional problems. Many existing numerical techniques for solving such optimal control problems suffer from the curse of dimensionality. However, we find that the CE technique is very efficient in solving such problems.
The numerical results of the optimal epidemic strategies obtained via the CE method suggest that the structure of the optimal trajectories is highly synchronised among patches, but that the trajectories do not depend much on the structure of the models. Instead, the parameters of the models (such as the time horizon, the amount of available budget, and infection rates) strongly affect the form of the solution.
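Continuous-time Markov chain epidemic models like those described above are commonly simulated event by event with Gillespie's algorithm. A minimal sketch for a toy SIS epidemic; the SIS structure and all rates are illustrative assumptions, not the thesis's HIV model:

```python
import random

def gillespie_sis(beta, gamma, s0, i0, t_max, seed=1):
    """Simulate a simple SIS epidemic as a continuous-time Markov chain
    using Gillespie's algorithm; returns the sampled path of (time, I)."""
    random.seed(seed)
    s, i, t = s0, i0, 0.0
    n = s0 + i0
    path = [(t, i)]
    while t < t_max and i > 0:
        infect = beta * s * i / n              # rate of an S -> I event
        recover = gamma * i                    # rate of an I -> S event
        t += random.expovariate(infect + recover)  # exponential waiting time
        if random.random() < infect / (infect + recover):
            s, i = s - 1, i + 1
        else:
            s, i = s + 1, i - 1
        path.append((t, i))
    return path

path = gillespie_sis(beta=0.3, gamma=0.1, s0=99, i0=1, t_max=50.0)
```

With beta/gamma = 3 the chain sits above the basic-reproduction-number threshold, so the infection typically persists over the simulated horizon rather than dying out immediately.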
6

Techniques d'optimisation déterministe et stochastique pour la résolution de problèmes difficiles en cryptologie / Deterministic and stochastic optimization techniques for hard problems in cryptology

Bouallagui, Sarra (July 2010)
Cette thèse s'articule autour des fonctions booléennes liées à la cryptographie et la cryptanalyse de certains schémas d'identification. Les fonctions booléennes possèdent des propriétés algébriques fréquemment utilisées en cryptographie pour constituer des S-Boxes (tables de substitution). Nous nous intéressons, en particulier, à la construction de deux types de fonctions : les fonctions courbes et les fonctions équilibrées de haut degré de non-linéarité. Concernant la cryptanalyse, nous nous focalisons sur les techniques d'identification basées sur les problèmes de perceptron et de perceptron permuté. Nous réalisons une nouvelle attaque sur le schéma afin de décider de sa faisabilité. Nous développons ici de nouvelles méthodes combinant l'approche déterministe DCA (Difference of Convex functions Algorithm) et heuristique (recuit simulé, entropie croisée, algorithmes génétiques...). Cette approche hybride, utilisée dans toute cette thèse, est motivée par les résultats intéressants de la programmation DC. / In cryptography, especially in block cipher design, Boolean functions are the basic elements. A cryptographic function should have high nonlinearity, since functions with low nonlinearity can be attacked by linear methods. There are three goals for the research presented in this thesis: finding a new construction algorithm, based on a deterministic model, for Boolean functions of the highest possible nonlinearity in even dimensions, that is, bent functions; finding highly nonlinear balanced Boolean functions; and cryptanalysing an identification scheme based on the perceptron problem. Heuristic optimisation algorithms (genetic algorithms and simulated annealing) and a deterministic one based on DC programming (DCA) were used together.
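The nonlinearity this abstract optimises is conventionally computed from the Walsh-Hadamard spectrum of a Boolean function's truth table. A small sketch; the example function is a textbook bent function, not one constructed in the thesis:

```python
def walsh_hadamard(tt):
    """Fast Walsh-Hadamard transform of a Boolean truth table (0/1 list)."""
    w = [(-1) ** b for b in tt]
    h, n = 1, len(w)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(tt):
    """Nonlinearity = 2^(n-1) - max|W_f(u)|/2; bent functions maximise it."""
    return len(tt) // 2 - max(abs(v) for v in walsh_hadamard(tt)) // 2

# f(x1,...,x4) = x1*x2 XOR x3*x4 is a classic bent function of 4 variables;
# its nonlinearity is 2^3 - 2^1 = 6, the maximum possible for n = 4.
tt = [(((x >> 3) & 1) * ((x >> 2) & 1)) ^ (((x >> 1) & 1) * (x & 1))
      for x in range(16)]
```

A search heuristic such as simulated annealing or the cross-entropy method would use `nonlinearity` (and, for balanced functions, the truth-table weight) as its fitness function.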
7

Multi-objective optimisation using the cross-entropy method in CO gas management at a South African ilmenite smelter

Stadler, Johan George (2012)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: In a minerals processing environment, stable production processes, cost minimisation and energy efficiency are key to operational excellence, safety and profitability. At an ilmenite smelter, typically found in the heavy minerals industry, it is no different. Management of an ilmenite smelting process is a complex, multi-variable challenge with high costs and safety risks at stake. A by-product of ilmenite smelting is superheated carbon monoxide (CO) gas, or furnace off-gas. This gas is flammable and extremely poisonous to humans. At the same time the gas is a potential energy source for various on-site heating applications. Re-using furnace off-gas can increase the energy efficiency of the energy intensive smelting process and can save on the cost of procuring other gas for heating purposes. In this research project, the management of CO gas from the Tronox KZN Sands ilmenite smelter in South Africa was studied with the aim of optimising the current utilisation of the gas. In the absence of any buffer capacity in the form of a pressure vessel, the stability of the available CO gas is directly dependent on the stability of the furnaces. The CO gas has been identified as a partial replacement for methane gas which is currently purchased for drying and heating of feed material and pre-heating of certain smelter equipment. With no buffer capacity between the furnaces and the gas consuming plants, a dynamic prioritisation approach had to be found if the CO was to replace the methane. The dynamics of this supply-demand problem, which has been termed the “CO gas problem”, needed to be studied. A discrete-event simulation model was developed to match the variable supply of CO gas to the variable demand for gas over time – the demand being a function of the availability of the plants requesting the gas, and the feed rates and types of feed material processed at those plants.
The problem was formulated as a multi-objective optimisation problem with two main, conflicting objectives: 1) the average production time lost per plant per day due to CO-methane switchovers; and 2) the average monthly saving on methane gas costs due to lower consumption thereof. A metaheuristic, namely multi-objective optimisation using the cross-entropy method, or MOO CEM, was applied as optimisation algorithm to solve the CO gas problem. The performance of the MOO CEM algorithm was compared with that of a recognised benchmark algorithm for multi-objective optimisation, the NSGA-II, when both were applied to the CO gas problem. The background of multi-objective optimisation, metaheuristics and the usage of furnace off-gas, particularly CO gas, were investigated in the literature review. The simulation model was then developed and the optimisation algorithm applied. The research aimed to comment on the merit of the MOO CEM algorithm for solving the dynamic, stochastic CO gas problem and on the algorithm’s performance compared to the benchmark algorithm. The results served as a basis for recommendations to Tronox KZN Sands in order to implement a project to optimise usage and management of the CO gas. / AFRIKAANSE OPSOMMING: In mineraalprosessering is stabiele produksieprosesse, kostebeperking en energie-effektiwiteit sleuteldrywers tot bedryfsprestasie, veiligheid en wins. ‘n Ilmenietsmelter, tipies aangetref in swaarmineraleprosessering, is geen uitsondering nie. Die bestuur van ‘n ilmenietsmelter is ‘n komplekse, multi-doelwit uitdaging waar hoë kostes en veiligheidsrisiko’s ter sprake is. ‘n Neweproduk van die ilmenietsmeltproses is superverhitte koolstofmonoksiedgas (CO gas). Hierdie gas is ontvlambaar en uiters giftig vir die mens. Terselfdertyd kan hierdie gas benut word as energiebron vir allerlei verhittingstoepassings.
Die herbenutting van CO gas vanaf die smelter kan die energie-effektiwiteit van die energie-intensiewe smeltproses verhoog en kan verder kostes bespaar op die aankoop van ‘n ander gas vir verhittingsdoeleindes. In hierdie navorsingsprojek is die bestuur van die CO gasstroom wat deur die ilmenietsmelter van Tronox KZN Sands in Suid-Afrika geproduseer word, ondersoek met die doel om die huidige benuttingsvlak daarvan te verbeter. Weens die afwesigheid van enige bufferkapasiteit in die vorm van ‘n drukbestande tenk, is die stabiliteit van CO gas beskikbaar vir hergebruik direk afhanklik van die stabiliteit van die twee hoogoonde wat die gas produseer. Die CO gas kan gedeeltelik metaangas, wat tans aangekoop word vir die droog en verhitting van voermateriaal en vir die voorverhitting van sekere smeltertoerusting, vervang. Met geen bufferkapasiteit tussen die hoogoonde en die aanlegte waar die gas verbruik word nie, was die ondersoek van ‘n dinamiese prioritiseringsbenadering nodig om te kon vasstel of die CO die metaangas kon vervang. Die dinamika van hierdie vraag-aanbod probleem, getiteld die “CO gasprobleem”, moes bestudeer word. ‘n Diskrete-element simulasiemodel is ontwikkel as probleemoplossingshulpmiddel om die vraag-aanbodproses te modelleer en die prioritiseringsbenadering te ondersoek. Die doel van die model was om oor tyd die veranderlike hoeveelhede van geproduseerde CO teenoor die veranderlike gasaanvraag te vergelyk. Die vlak van gasaanvraag is afhanklik van die beskikbaarheidsvlak van die aanlegte waar die gas verbruik word, sowel as die voertempo’s en tipes voermateriaal in laasgenoemde aanlegte. Die probleem is geformuleer as ‘n multi-doelwit optimeringsprobleem met twee hoof, teenstrydige doelwitte: 1) die gemiddelde verlies aan produksietyd per aanleg per dag weens oorgeskakelings tussen CO en metaangas; 2) die gemiddelde maandelikse besparing op metaangaskoste weens laer verbruik van dié gas. 
A metaheuristic, called MOO CEM (multi-objective optimisation using the cross-entropy method), was employed as the optimisation algorithm to solve the CO gas problem. The performance of the MOO CEM algorithm was compared with that of a generally accepted benchmark algorithm, the NSGA II, with both applied to the CO gas problem. The background of multi-objective optimisation, metaheuristics and the use of furnace off-gas, specifically CO gas, was investigated in the literature review. The simulation model was then developed and the optimisation algorithm applied.
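The comparison above turns on each algorithm's ability to approximate the set of non-dominated solutions. As an illustrative aside (not taken from the thesis), a minimal sketch of Pareto-dominance filtering for a problem in which every objective is minimised:

```python
def dominates(a, b):
    """True if solution a dominates b: a is no worse in every objective
    and strictly better in at least one (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two objectives, e.g. (time lost, negated cost saving); smaller is better in both.
front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
# front == [(1, 5), (2, 3), (4, 1)]
```

The quadratic scan is fine for the small candidate sets an optimiser examines per iteration; specialised algorithms exist for large archives.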
8

The application of the cross-entropy method for multi-objective optimisation to combinatorial problems

Hauman, Charlotte 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Society is continually in search of ways to optimise various objectives. When faced with multiple and conflicting objectives, humans are in need of solution techniques to enable optimisation. This research is based on a recent venture in the field of multi-objective optimisation, the use of the cross-entropy method to solve multi-objective problems. The document provides a brief overview of the two fields, multi-objective optimisation and the cross-entropy method, touching on literature, basic concepts and applications or techniques. The application of the method to two problems is then investigated. The first application is to the multi-objective vehicle routing problem with soft time windows, a widely studied problem with many real-world applications. The problem is modelled mathematically with a transition probability matrix that is updated according to cross-entropy principles before converging to an approximate solution set. The highly constrained problem is successfully modelled and the optimisation algorithm is applied to a set of benchmark problems. It was found that the cross-entropy method for multi-objective optimisation is a valid technique for providing feasible and non-dominated solutions. The second application is to a real-world case study in blood management done at the Western Province Blood Transfusion Service. The conceptual model is derived from interviews with relevant stakeholders before discrete-event simulation is used to model the system. The cross-entropy method is used to optimise the inventory policy of the system by simultaneously maximising the combined service level of the system and minimising the total distance travelled. By integrating the optimisation and simulation models, the study shows that the inventory policy of the service can improve significantly, and the use of the cross-entropy algorithm adequately progresses to a front of solutions.
The research proves the remarkable breadth and simplicity of possible applications of the cross-entropy algorithm for multi-objective optimisation, whilst contributing to the literature on the vehicle routing problem and blood management. Results on benchmark problems for the vehicle routing problem with soft time windows are provided and an improved inventory policy is suggested to the Western Province Blood Transfusion Service. / AFRIKAANSE OPSOMMING: Humankind is continually searching for ways to optimise various objectives. When confronted with multiple and conflicting objectives, solution methods are needed to make optimisation possible. This research is based on a new development in the field of multi-objective optimisation, namely the use of the cross-entropy method to solve multi-objective problems. The document provides a broad overview of the two fields, multi-objective optimisation and the cross-entropy method, by looking briefly at the available literature, basic principles, application areas and methods. The application of the method to two independent problems is then investigated. The first application is to the multi-objective vehicle routing problem with soft time windows. The problem is first modelled mathematically with a transition probability matrix. The matrix is then updated according to cross-entropy principles before it converges to an approximate front of solutions. The solution space is subject to many constraints, but the problem was successfully modelled and the optimisation algorithm was subsequently applied to a set of benchmark problems. The research found that the cross-entropy method for multi-objective optimisation is a valid method for approximating a feasible front of solutions. The second application is a case study of blood management in the context of the Western Province Blood Transfusion Service.
Following interviews with the relevant stakeholders, a conceptual model was created before a simulation model of the system was built. The cross-entropy method was used to optimise the inventory policy of the system by maximising the combined service level of the system while simultaneously minimising the total travel distance. By integrating the optimisation and simulation models, the study shows that the inventory policy of the service can improve considerably, and that the cross-entropy algorithm is able to progress to a front of solutions. The research demonstrates the remarkable breadth and simplicity of possible applications of the cross-entropy algorithm for multi-objective optimisation, while contributing to the separate fields of vehicle routing and blood management. Results for the benchmark problems of the vehicle routing problem with soft time windows are provided and an improved inventory policy is proposed to the Western Province Blood Transfusion Service.
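The cross-entropy method underlying both applications iterates between sampling candidates from a parametrised distribution and refitting that distribution to the best-performing samples. A minimal single-objective sketch for continuous minimisation (illustrative only; the thesis applies the multi-objective variant to combinatorial problems via a transition probability matrix, and all names below are this sketch's own):

```python
import numpy as np

def ce_minimise(f, mu, sigma, n_samples=100, n_elite=10, n_iters=50, seed=0):
    """Basic cross-entropy minimisation over R^d.

    Each iteration: sample candidates from N(mu, diag(sigma^2)), rank them
    by objective value, and refit mu and sigma to the elite samples, so the
    sampling distribution progressively concentrates on promising regions.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(n_iters):
        x = rng.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.apply_along_axis(f, 1, x)
        elite = x[np.argsort(scores)[:n_elite]]
        # Maximum-likelihood refit of the Gaussian to the elite set.
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

# Sphere function with minimum at (3, -2): the sampler should concentrate there.
best = ce_minimise(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 2.0) ** 2,
                   mu=np.zeros(2), sigma=np.full(2, 5.0))
```

For combinatorial problems such as vehicle routing, the Gaussian is replaced by a discrete distribution (e.g. a transition probability matrix) and the refit step becomes a frequency count over elite tours.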
9

Estratégias numéricas e de otimização para inferência da dinâmica de redes bioquímicas

Ladeira, Carlos Roberto Lima 28 February 2014 (has links)
Estimating parameters of dynamic models of biological systems from time series is increasingly important, since an immense amount of experimental data is being measured by modern molecular biology. An inverse-problem approach can be used to solve this type of problem. The choice of mathematical model is an important task, since several models can be used, offering different levels of accuracy in their representations. Biochemical Systems Theory (BST) uses ordinary differential equations and power-series expansions to represent biochemical processes. The S-system is one of the models used by BST; it allows the original system of differential equations to be transformed into a decoupled algebraic system, simplifying the solution of the inverse problem. This transformation can compromise the quality of the answer if the values of the derivatives at the time-series points are not obtained accurately. To estimate the derivatives, we intend to explore the complex-step method, which has advantages over the better-known and more widely used finite-difference method. From there, the search for the variables that will define the equations of the system can be carried out.
The Alternating Regression method is one of the fastest for this type of problem, but the initial choice of parameters influences its result, which may not even be found. We intend to evaluate the Cross-Entropy method, which has the advantage of performing global searches; perhaps for this reason the choice of initial parameters does not influence the results as strongly. In addition, a hybrid method will be evaluated that exploits the main advantages of the Alternating Regression and Cross-Entropy methods to solve the problem. Systematic numerical experiments will be carried out both for the derivative-estimation stage and for the optimisation stage that obtains the parameters of the equations of the system. / Estimating parameters of dynamic models of biological systems using time series is becoming very important because a huge amount of experimental data is being measured by modern molecular biology. A resolution approach based on inverse problems can be used in solving this type of problem. The choice of the mathematical model is an important task, since many models can be used, with varying levels of accuracy in their representations. The Biochemical Systems Theory (BST) makes use of ordinary differential equations and power series expansions to represent biochemical processes. The S-system is one of the models used by BST that allows the transformation of the original system of differential equations into a decoupled system of algebraic equations, favouring the solution of the inverse problem. This transformation can compromise the quality of the response if the values of the derivatives at the points of the time series are not obtained accurately. To estimate the derivatives we intend to explore the complex-step method, which has advantages over the finite-difference method, the best known and most widely used. The search for the variables that define the equations of the system can then be performed.
The Alternating Regression method is one of the fastest for this type of problem, but the initial choice of parameters influences its result, which may not even be found. We intend to evaluate the Cross-entropy method, which has the advantage of performing global searches; for this reason the choice of the initial search parameters may not influence the results as much. A hybrid method that makes use of the main advantages of Alternating Regression and the Cross-entropy method to solve the problem will also be assessed. Systematic numerical experiments will be conducted both for the derivative-estimation step and for the optimisation step that estimates the variables of the equations of the system.
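The complex-step method mentioned above estimates a derivative from a single complex evaluation, f'(x) ≈ Im f(x + ih)/h, avoiding the subtractive cancellation that limits finite differences. A minimal sketch (assuming f is implemented with complex-analytic operations; illustrative, not the thesis's implementation):

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-20):
    """Estimate f'(x) as Im(f(x + i*h)) / h.

    Because no subtraction of nearly equal values occurs, h can be taken
    extremely small and the estimate reaches roughly machine precision,
    unlike forward differences, whose rounding error grows as h shrinks.
    """
    return f(complex(x, h)).imag / h

d = complex_step_derivative(cmath.sin, 0.5)  # ≈ cos(0.5), to machine precision
```

The one requirement is that f accept complex arguments and be analytic along the evaluation path, which is why the example uses `cmath.sin` rather than `math.sin`.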
10

Advanced Monte Carlo Methods with Applications in Finance

Joshua Chi Chun Chan Unknown Date (has links)
The main objective of this thesis is to develop novel Monte Carlo techniques with emphasis on various applications in finance and economics, particularly in the fields of risk management and asset returns modeling. New stochastic algorithms are developed for rare-event probability estimation, combinatorial optimization, parameter estimation and model selection. The contributions of this thesis are fourfold. Firstly, we study an NP-hard combinatorial optimization problem, the Winner Determination Problem (WDP) in combinatorial auctions, where buyers can bid on bundles of items rather than bidding on them sequentially. We present two randomized algorithms, namely, the cross-entropy (CE) method and the ADAptive Multilevel splitting (ADAM) algorithm, to solve two versions of the WDP. Although an efficient deterministic algorithm has been developed for one version of the WDP, it is not applicable to the other version considered. In addition, the proposed algorithms are straightforward and easy to program, and do not require specialized software. Secondly, two major applications of conditional Monte Carlo for estimating rare-event probabilities are presented: a complex bridge network reliability model and several generalizations of the widely popular normal copula model used in managing portfolio credit risk. We show how certain efficient conditional Monte Carlo estimators developed for simple settings can be extended to handle complex models involving hundreds or thousands of random variables. In particular, by utilizing an asymptotic description of how the rare event occurs, we derive algorithms that are not only easy to implement, but also compare favorably to existing estimators. Thirdly, we make a contribution on the methodological front by proposing an improvement of the standard CE method for estimation.
The improved method is relevant, as recent research has shown that in some high-dimensional settings the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. In contrast, the performance of the improved variant does not deteriorate as the dimension of the problem increases. Its utility is demonstrated via a high-dimensional estimation problem in risk management, namely, a recently proposed t-copula model for credit risk. We show that even in this high-dimensional model that involves hundreds of random variables, the proposed method performs remarkably well, and compares favorably to existing importance sampling estimators. Furthermore, the improved CE algorithm is then applied to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. We present two empirical examples to demonstrate the proposed approach. The first example involves women's labor market participation, where we compare three different binary response models in order to find the one that best fits the data. The second example utilizes two vector autoregressive (VAR) models to analyze the interdependence and structural stability of four U.S. macroeconomic time series: GDP growth, unemployment rate, interest rate, and inflation. Lastly, we contribute to the growing literature of asset returns modeling by proposing several novel models that explicitly take into account various recent findings in the empirical finance literature. Specifically, two classes of stylized facts are particularly important. The first set is concerned with the marginal distributions of asset returns. One prominent feature of asset returns is that the tails of their distributions are heavier than those of the normal---large returns (in absolute value) occur much more frequently than one might expect from a normally distributed random variable.
Another robust empirical feature of asset returns is skewness, where the tails of the distributions are not symmetric---large losses are observed more frequently than large gains. The second set of stylized facts is concerned with the dependence structure among asset returns. Recent empirical studies have cast doubt on the adequacy of the linear dependence structure implied by the multivariate normal specification. For example, data from various asset markets, including equities, currencies and commodities markets, indicate the presence of extreme co-movement in asset returns, and this observation is again incompatible with the usual assumption that asset returns are jointly normally distributed. In light of the aforementioned empirical findings, we consider various novel models that generalize the usual normal specification. We develop efficient Markov chain Monte Carlo (MCMC) algorithms to estimate the proposed models. Moreover, since the number of plausible models is large, we perform a formal Bayesian model comparison to determine the model that best fits the data. In this way, we can directly compare the two approaches to modeling asset returns: copula models and the joint modeling of returns.
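One way to see the extreme co-movement point: a t-copula assigns far more probability to joint extremes than a normal copula with the same correlation. A minimal sampling sketch via the standard normal scale-mixture construction of the multivariate t (illustrative only, not the thesis's estimation procedure; it assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.stats import t as t_dist

def t_copula_sample(n, rho, nu, seed=0):
    """Draw n pairs of uniforms from a bivariate t-copula.

    Construction: Z ~ N(0, R) with correlation rho, W = nu / chi2(nu);
    T = sqrt(W) * Z is bivariate t with nu degrees of freedom, and applying
    the univariate t CDF to each margin yields copula samples. The shared
    scale factor W across both margins is what produces tail dependence
    (joint extremes), which the normal copula lacks.
    """
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    w = nu / rng.chisquare(nu, size=n)
    return t_dist.cdf(z * np.sqrt(w)[:, None], df=nu)

u = t_copula_sample(10_000, rho=0.5, nu=4)  # uniforms in [0, 1], positively dependent
```

Feeding `u` through heavy-tailed or skewed marginal quantile functions then yields return samples with both sets of stylized facts discussed above.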
