1. A study of simulated annealing techniques for multi-objective optimisation. Smith, Kevin I., January 2006.
Many areas in which computational optimisation may be applied are multi-objective optimisation problems: problems in which multiple objectives must be minimised or maximised simultaneously. Where, as is usually the case, these are competing objectives, the optimisation involves the discovery of a set of solutions whose relative quality cannot be distinguished without further preference information regarding the objectives. A large body of literature documents the study and application of evolutionary algorithms to multi-objective optimisation, with particular focus on evolutionary strategy techniques, which demonstrate rapid convergence to desired solutions on many problems. Simulated annealing is a single-objective optimisation technique that is provably convergent, making it a tempting technique to extend to multi-objective optimisation. Previous proposals for extending simulated annealing to the multi-objective case have mostly taken the form of a traditional single-objective simulated annealer optimising a composite (often summed) function of the objectives. The first part of this thesis introduces an alternative method for multi-objective simulated annealing based on the dominance relation, which operates without assigning preference information to the objectives. Non-generic improvements to this algorithm are presented, providing methods for generating more desirable proposals for new solutions. The new method is shown to converge rapidly to the desired set, depending on the properties of the problem, with empirical results on a range of popular test problems and comparisons with the popular NSGA-II genetic algorithm and a leading multi-objective simulated annealer from the literature. The new algorithm is applied to the commercial optimisation of CDMA mobile telecommunication networks and is shown to perform well on this problem. The second part of the thesis investigates the effects of a range of optimiser properties upon convergence. New algorithms possessing the properties under investigation are proposed. The relationship between evolutionary strategies and the simulated annealing techniques is illustrated, and an explanation is given for the differing performance of the previously proposed algorithms across a standard test suite. The properties of problems on which simulated annealing approaches are desirable are investigated, and new problems are proposed to best support comparisons between different simulated annealing techniques.
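The dominance relation at the heart of such an annealer is easy to state. Below is a minimal Python sketch of Pareto dominance, a non-dominated archive update, and a temperature-dependent acceptance rule of the general kind a dominance-based annealer uses; the function names, the energy measure and the acceptance details are illustrative assumptions, not the thesis's actual algorithm.

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Add candidate to the non-dominated archive, discarding dominated members."""
    if any(dominates(member, candidate) for member in archive):
        return archive, False                     # candidate is dominated: reject
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    return archive, True

def accept(current_f, proposal_f, temperature):
    """Illustrative acceptance rule: always accept non-dominated proposals,
    accept dominated ones with a temperature-dependent probability.
    (Assumes objectives on comparable scales; the thesis derives its energy
    from the dominance relation and the archive instead.)"""
    if not dominates(current_f, proposal_f):
        return True                               # proposal is better or incomparable
    energy_gap = sum(p - c for p, c in zip(proposal_f, current_f))
    return random.random() < math.exp(-energy_gap / temperature)

# Example: build an archive from two mutually non-dominating solutions.
archive, _ = update_archive([], (2.0, 3.0))
archive, kept = update_archive(archive, (1.0, 4.0))   # both remain in the archive
```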
2. Ordering and visualisation of many-objective populations. Walker, David J., January 2012.
In many everyday tasks it is necessary to compare the performance of the individuals in a population described by two or more criteria, for example comparing products in terms of price and quality in order to decide which is the best to purchase. Other examples are the comparison of universities, countries, the infrastructure in a telecommunications network, and the candidate solutions to a multi- or many-objective problem. In all of these cases, visualising the individuals allows a decision maker to better interpret their relative performance. This thesis explores methods for understanding and visualising multi- and many-criterion populations. Since people cannot generally comprehend more than three spatial dimensions, the visualisation of many-criterion populations is a non-trivial task. We address this by generating visualisations based on the dominance relation, which defines a structure in the population, and we introduce two novel visualisation methods. The first method explicitly illustrates the dominance relationships between individuals as a graph in which individuals are sorted into Pareto shells, and is enhanced with many-criterion ranking methods to produce a finer ordering of individuals. We extend the power index, a method for ranking according to a single criterion, to the many-criterion domain by defining individual quality in terms of tournaments. The second visualisation method uses a new dominance-based distance in conjunction with multi-dimensional scaling, and we show that dominance can be used to identify an intuitive low-dimensional mapping of individuals, placing similar individuals close together. We demonstrate that this method can visualise a population comprising a large number of criteria. Heatmaps are another common method for presenting high-dimensional data; however, they are difficult to interpret if dissimilar individuals are placed close to each other. We apply spectral seriation to produce an ordering of individuals and criteria by which the heatmap is arranged, placing similar individuals and criteria close together. A basic version, computing similarity with the Euclidean distance, is demonstrated before rank-based alternatives are investigated. The procedure is extended to seriate both the parameter and objective spaces of a multi-objective population in two stages. Since this process describes a trade-off, favouring the ordering of individuals in one space or the other, we demonstrate methods that enhance the visualisation by using an evolutionary optimiser to tune the orderings. One way of revealing the structure of a population is to highlight which individuals are extreme. To this end, we provide three definitions of the “edge” of a multi-criterion mutually non-dominating population. All three definitions are given in terms of dominance, and we show that one of them can be extended to cope with many-criterion populations. Because such populations are difficult to visualise, it is often hard for a decision maker to comprehend a population described by a large number of criteria. We therefore consider criterion selection methods that reduce the dimensionality while preserving the structure of the population as quantified by its rank order. We investigate the efficacy of greedy, hill-climber and evolutionary algorithms, and cast the dimension reduction as a multi-objective problem.
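As a hedged illustration of the shell structure that the first visualisation method relies on, the sketch below peels a small population of objective vectors into successive Pareto shells; it is a generic non-dominated sorting routine, not the thesis's graph layout or ranking code.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all criteria minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_shells(points):
    """Sort points into shells: shell 0 is the non-dominated set, shell 1 is
    the set that becomes non-dominated once shell 0 is removed, and so on."""
    remaining = list(range(len(points)))
    shells = []
    while remaining:
        shell = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        shells.append([points[i] for i in shell])
        remaining = [i for i in remaining if i not in shell]
    return shells

# Three individuals scored on two criteria (lower is better):
print(pareto_shells([(1, 4), (2, 2), (3, 3)]))   # [[(1, 4), (2, 2)], [(3, 3)]]
```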
3. Integration of ranking and selection methods with the multi-objective optimisation cross-entropy method. Von Lorne von Saint Ange, Chantel, 03 1900.
Thesis (MEng)--Stellenbosch University, 2015. ENGLISH ABSTRACT: A method for multi-objective optimisation using the cross-entropy method (MOO CEM) was recently developed by Bekker & Aldrich (2010) and Bekker (2012). The method aims to identify the non-dominated solutions of multi-objective problems, which are often dynamic and stochastic. The method does not use a statistical ranking and selection technique to account for the stochastic nature of the problems it solves. The research in this thesis investigates possible techniques that can be incorporated into the MOO CEM. The cross-entropy method for single-objective optimisation is studied first. It is applied to an interesting problem in the soil sciences and water management domain; the purpose of this was for the researcher to grasp the fundamentals of the cross-entropy method, which are needed later in the study. The second part of the study documents an overview of multi-objective ranking and selection methods found in the literature. The first method covered is the multi-objective optimal computing budget allocation algorithm. The second method extends the first to include the concept of an indifference zone. Both methods aim to maximise the probability of correctly selecting the non-dominated scenarios, while intelligently allocating simulation replications to minimise the required sample sizes. These techniques are applied to two problems that are represented by simulation models, namely the buffer allocation problem and a classic single-commodity inventory problem. Performance is measured using the hyperarea indicator and Mann-Whitney U-tests. It was found that the two techniques perform significantly differently, although this could be due to the different number of solutions in the Pareto set. In the third part of the document, the aforementioned multi-objective ranking and selection techniques are incorporated into the MOO CEM. Once again, the buffer allocation problem and the inventory problem were chosen as test problems, and the results were compared with experiments in which the MOO CEM without ranking and selection was used. The results show that the MOO CEM with ranking and selection affects different problems in different ways. Investigating alternative ways of incorporating ranking and selection into the MOO CEM is recommended as future research; additionally, the combined algorithm should be tested on more stochastic problems.
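The hyperarea (hypervolume) indicator mentioned above measures the objective-space area dominated by an approximation set relative to a reference point. A minimal sketch for two minimisation objectives follows; the sweep assumes a non-dominated front and an invented reference point, and is not the implementation used in the thesis.

```python
def hyperarea(front, ref):
    """Area dominated by a two-objective (minimisation) non-dominated front,
    measured relative to the reference point `ref`."""
    pts = sorted(front)                  # ascending in the first objective
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f1 >= ref[0] or f2 >= prev_f2:
            continue                     # contributes nothing new
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

# A front closer to the origin dominates more area below the reference point.
print(hyperarea([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0)))   # 6.0
```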
4. Disaggregating employment data to building level: a multi-objective optimisation approach. Ludick, Chantel Judith, 08 1900.
The land use policies and development plans implemented in a city contribute to whether the city will be sustainable in the future. Therefore, when these policies are established, their potential impact on development should be considered. Analytical tools such as land use change models allow decision-makers to see the possible impact that these policies could have on development. Land use change models like UrbanSim use the relationship between households, buildings, and employment opportunities to model the decisions that people make about where to live and work. To do this, the model needs accurate data.
With a more accurate location for the employment opportunities in an area, the decisions made by individuals can be modelled more faithfully, and the projected results are therefore expected to be better. Previous research indicated that the methods traditionally used to disaggregate employment data to a lower level in UrbanSim projects are not applicable in the South African context, because they require a detailed employment dataset that is not available in South Africa.
The aim of this project was to develop a methodology for a metropolitan municipality in South Africa that could be used to disaggregate employment data available at a higher level to the more detailed building level. The methodology consisted of two parts. The first part established a method for preparing the base dataset used for disaggregating the employment data. The second part used a multi-objective optimisation approach to allocate the number of employment opportunities within a municipality to building level. The algorithm was developed using the Distributed Evolutionary Algorithms in Python (DEAP) computational framework. DEAP is an open-source evolutionary algorithm framework developed in Python that enables users to rapidly create prototypes by allowing them to customise the algorithm to suit their needs.
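For readers unfamiliar with DEAP, the sketch below shows roughly how a multi-objective allocation search can be set up with it, using NSGA-II selection. The two objectives, the building capacities and all numeric settings are invented stand-ins; the dissertation's actual objective functions and data are not reproduced here.

```python
import random
from deap import base, creator, tools

# Hypothetical problem size and capacities (stand-ins, not the study's data).
N_BUILDINGS, TOTAL_JOBS = 50, 1000
capacity = [random.randint(10, 60) for _ in range(N_BUILDINGS)]

creator.create("FitnessMulti", base.Fitness, weights=(-1.0, -1.0))   # minimise both
creator.create("Individual", list, fitness=creator.FitnessMulti)

def evaluate(ind):
    """Illustrative objectives: deviation from the municipal employment total,
    and total allocation above building capacity."""
    total_error = abs(sum(ind) - TOTAL_JOBS)
    over_capacity = sum(max(0, jobs - cap) for jobs, cap in zip(ind, capacity))
    return total_error, over_capacity

toolbox = base.Toolbox()
toolbox.register("attr_jobs", random.randint, 0, 60)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.attr_jobs, n=N_BUILDINGS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutUniformInt, low=0, up=60, indpb=0.05)
toolbox.register("select", tools.selNSGA2)

pop = toolbox.population(n=100)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)

for gen in range(50):
    offspring = [toolbox.clone(ind) for ind in pop]
    for c1, c2 in zip(offspring[::2], offspring[1::2]):
        if random.random() < 0.9:
            toolbox.mate(c1, c2)
        toolbox.mutate(c1)
        toolbox.mutate(c2)
        del c1.fitness.values, c2.fitness.values
    for ind in offspring:
        if not ind.fitness.valid:
            ind.fitness.values = toolbox.evaluate(ind)
    pop = toolbox.select(pop + offspring, k=100)   # NSGA-II environmental selection
```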
The evaluation showed that it is possible to use multi-objective optimisation to disaggregate employment data to building level, and the results indicate that the employment allocation algorithm successfully disaggregated employment data from municipal level to building level. All evolutionary algorithms carry some degree of uncertainty, since they search for the best solution they can find and other solutions remain available; the results of the algorithm therefore carry the same level of uncertainty.
By enhancing the data used by land use change models, the performance of the overall model is improved. This, in turn, gives a clearer view of the impact that land use policies could have on development, allowing decision-makers to draw well-founded conclusions and giving them the best possible opportunity to develop policies that contribute to creating sustainable and lasting urban areas. / Dissertation (MSc (Geoinformatics))--University of Pretoria, 2020. / Geography, Geoinformatics and Meteorology / MSc (Geoinformatics) / Unrestricted
5. Manufacturing management and decision support using simulation-based multi-objective optimisation. Pehrsson, Leif, January 2013.
A majority of the established automotive manufacturers are under severe competitive pressure and their long-term economic sustainability is threatened. In particular, the transformation towards more CO2-efficient energy sources is a huge financial burden for an already capital-intensive industry. In addition, existing operations urgently need rapid improvement, and even more critical is the development of highly productive, efficient and sustainable manufacturing solutions for new and updated products. At the same time, a number of severe drawbacks of current improvement methods for industrial production systems have been identified. In summary: variation is not sufficiently considered by current analysis methods; the tools used are insufficient for revealing enough knowledge to support decisions; procedures for finding optimal solutions are not considered; and information about bottlenecks is often required, but no accurate methods for identifying bottlenecks are used in practice, because they do not normally generate any improvement actions. Current methods follow a trial-and-error pattern instead of a proactive approach. Decisions are often made directly on the basis of raw, static, historical data without an awareness of optimal alternatives and their effects. These issues are likely to lead to inadequate production solutions, low effectiveness and high costs, resulting in poor competitiveness. In order to address the shortcomings of existing methods, a methodology and framework for manufacturing management decision support using simulation-based multi-objective optimisation is proposed. The framework incorporates modelling and optimisation of production systems, costs and sustainability. Decision support is created through the extraction of knowledge from optimised data. A novel method and algorithm for the detection of constraints and bottlenecks is proposed as part of the framework, enabling optimal improvement activities, ranked in order of importance, to be sought. The new method can achieve a higher improvement rate than the well-established shifting bottleneck technique when applied to industrial improvement situations. A number of 'laboratory' experiments and real-world industrial applications have been conducted in order to explore, develop and verify the proposed framework. The identified gaps can be addressed with the proposed methodology. By using simulation-based methods, stochastic behaviour and variability are taken into account, and knowledge for the creation of decision support is gathered through post-optimality analysis. Several conflicting objectives can be considered simultaneously through the application of multi-objective optimisation, while objectives related to running cost, investment and other sustainability parameters can be included through the new cost and sustainability models introduced. Experiments and tests have shown that the proposed framework can assist the creation of manufacturing management decision support and that such a methodology can contribute significantly to regaining profitability when applied within the automotive industry. It can be concluded that a proof of concept has been rigorously established for the application of the proposed framework to real-world industrial decision-making in a manufacturing management context.
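The idea of ranking improvement actions by their simulated effect, rather than by raw historical data, can be illustrated with a deliberately simple what-if loop. The toy "simulation" below approximates a serial line's throughput by its slowest station; it is only a stand-in for a discrete-event simulation model and is not the bottleneck-detection algorithm proposed in the thesis.

```python
import random

def simulate_throughput(rates, replications=30):
    """Stand-in stochastic evaluation: noisy, bottleneck-limited throughput."""
    return sum(min(r * random.uniform(0.9, 1.1) for r in rates)
               for _ in range(replications)) / replications

base_rates = [12.0, 9.5, 11.0, 10.2]               # parts/hour per station (assumed)
actions = {f"speed up station {i}": i for i in range(len(base_rates))}

baseline = simulate_throughput(base_rates)
gains = {}
for name, station in actions.items():
    trial = list(base_rates)
    trial[station] *= 1.10                          # what-if: make this station 10% faster
    gains[name] = simulate_throughput(trial) - baseline

# Rank candidate improvement actions by their estimated effect.
for name, gain in sorted(gains.items(), key=lambda kv: -kv[1]):
    print(f"{name}: estimated throughput gain {gain:+.2f} parts/hour")
```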
6. Modelling and determining inventory decisions for improved sustainability in perishable food supply chains. Saengsathien, Arjaree, January 2015.
Since the introduction of sustainable development, industries have faced significant sustainability challenges. The literature shows that the food industry is concerned about its need for efficient and effective management practices in dealing with perishability and with the requirements for conditioned storage and transport of food products, which affect the environment. The environmental dimension of sustainability is therefore of particular significance in this industrial sector. Despite this, there has been little research into environmentally sustainable inventory management of deteriorating items. This thesis presents mathematical-modelling-based research on production inventory systems in perishable food supply chains. In this study, multi-objective mixed-integer linear programming models are developed to determine economically and environmentally optimal production and inventory decisions for a two-echelon supply chain. The supply chain consists of single-sourcing suppliers for raw materials and a producer who operates under a make-to-stock or make-to-order strategy. The demand facing the producer is non-stationary and stochastic in nature and carries requirements in terms of service level and the remaining shelf life of the marketed products. Using data from the literature, numerical examples are given in order to test and analyse these models. The computational experiments show that, with operational adjustments in cases where emission and cost parameters were not strongly correlated and with supply chain collaboration (where suppliers and producer operate under centralised control), emissions are effectively reduced without a significant increase in cost. The findings show that assigning a high disposal cost, limit, or weight of importance to perished goods leads to an appropriate reduction of expected waste in the supply chain with no major cost increase. The research contributes to the literature on sustainable production and inventory management, providing formal models that can be used as an aid to understanding and as a tool for planning and improving sustainable production and inventory control in supply chains involving deteriorating items, in particular perishable food supply chains.
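To make the modelling approach concrete, here is a heavily simplified single-product, deterministic production/inventory sketch in which cost and emission objectives are combined by weighted-sum scalarisation, one common way of generating trade-off points for a multi-objective MILP. All data, the crude shelf-life constraint and the weights are invented; the thesis's two-echelon, stochastic, service-level-constrained formulation is considerably richer.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T = range(6)                        # planning periods
demand = [40, 55, 30, 60, 45, 50]   # assumed deterministic demand
prod_cost, hold_cost = 5.0, 1.0     # per-unit production / holding cost
prod_co2, hold_co2 = 2.0, 0.5       # per-unit production / holding emissions
shelf_life = 2                      # crude proxy: stock may cover at most 2 future periods
w_cost, w_co2 = 0.7, 0.3            # sweeping the weights traces out trade-off solutions

m = LpProblem("perishable_inventory", LpMinimize)
x = LpVariable.dicts("produce", T, lowBound=0)
s = LpVariable.dicts("stock", T, lowBound=0)

cost = lpSum(prod_cost * x[t] + hold_cost * s[t] for t in T)
co2 = lpSum(prod_co2 * x[t] + hold_co2 * s[t] for t in T)
m += w_cost * cost + w_co2 * co2                   # weighted-sum objective

for t in T:
    prev = s[t - 1] if t > 0 else 0
    m += prev + x[t] == demand[t] + s[t]           # inventory balance
    m += s[t] <= lpSum(demand[k] for k in T if t < k <= t + shelf_life)  # shelf-life cap

m.solve()
print([x[t].value() for t in T], [s[t].value() for t in T])
```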
7. Applying the cross-entropy method in multi-objective optimisation of dynamic stochastic systems. Bekker, James, 12 1900.
Thesis (PhD)--Stellenbosch University, 2012. ENGLISH ABSTRACT: A difficult subclass of engineering optimisation problems is the class of optimisation problems which are dynamic and stochastic. These problems are often of a non-closed form and are thus studied by means of computer simulation. Simulation production runs of these problems can be time-consuming due to the computational burden implied by statistical inference principles. In multi-objective optimisation of engineering problems, large decision spaces and large objective spaces prevail, since two or more objectives are simultaneously optimised and many problems are also of a combinatorial nature. The computational burden associated with solving such problems is even larger than for most single-objective optimisation problems, and hence an efficient algorithm that searches the vast decision space is required. Many such algorithms are currently available, with researchers constantly improving these or developing more efficient algorithms. In this context, the term "efficient" means providing near-optimised results with minimal evaluations of objective function values. Thus far, research has often focused on solving specific benchmark problems, or on adapting algorithms to solve specific engineering problems. In this research, a multi-objective optimisation algorithm, based on the cross-entropy method for single-objective optimisation, is developed and assessed. The aim of this algorithm is to reduce the number of objective function evaluations, particularly when time-dependent (dynamic), stochastic processes, as found in Industrial Engineering, are studied. A brief overview of scholarly work in the field of multi-objective optimisation is presented, followed by a theoretical discussion of the cross-entropy method. The new algorithm is developed on this basis and assessed on continuous, deterministic problems as well as discrete, stochastic problems. The latter include a classical single-commodity inventory problem, the well-known buffer allocation problem, and a newly designed, laboratory-sized reconfigurable manufacturing system. Near-optimal multi-objective optimisation of two practical problems was also performed using the proposed algorithm. In the first case, some design parameters of a polymer extrusion unit are estimated using the algorithm. The management of carbon monoxide gas utilisation at an ilmenite smelter is complex, with many decision variables, and the application of the algorithm in that environment is presented as a second case. Quality indicator values are estimated for thirty-four test problem instances of multi-objective optimisation problems in order to quantify the quality performance of the algorithm, which is also compared to a commercial algorithm. The algorithm is intended to interface with dynamic, stochastic simulation models of real-world problems. It is typically implemented in a programming language while the simulation model is developed in a dedicated, commercial software package. The proposed algorithm is simple to implement and proved to be efficient on test problems.
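For orientation, the cross-entropy method on which the thesis builds alternates between sampling candidate solutions from a parameterised distribution, selecting an elite fraction, and updating the distribution parameters towards the elite. The sketch below is a minimal single-objective, continuous, deterministic version with invented settings; the thesis's multi-objective extension, its noise handling and its simulation interface are not reproduced.

```python
import numpy as np

def cross_entropy_minimise(f, dim, iters=50, n_samples=100, elite_frac=0.1, smooth=0.7):
    """Minimal continuous cross-entropy optimiser: sample, select elite, update."""
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = np.random.normal(mu, sigma, size=(n_samples, dim))
        scores = np.apply_along_axis(f, 1, samples)
        elite = samples[np.argsort(scores)[:n_elite]]         # best-scoring candidates
        mu = smooth * elite.mean(axis=0) + (1 - smooth) * mu   # smoothed parameter update
        sigma = smooth * elite.std(axis=0) + (1 - smooth) * sigma
    return mu

# Example: minimise a shifted sphere function; the result approaches (3, 3, 3, 3).
print(cross_entropy_minimise(lambda x: float(((x - 3.0) ** 2).sum()), dim=4))
```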
8. A multi-objective evolutionary approach to simulation-based optimisation of real-world problems. Syberfeldt, Anna, January 2009.
This thesis presents a novel evolutionary optimisation algorithm that can improve the quality of solutions in simulation-based optimisation. Simulation-based optimisation is the process of finding optimal parameter settings without explicitly examining each possible configuration of settings. An optimisation algorithm generates potential configurations and sends these to the simulation, which acts as an evaluation function; the evaluation results are used to refine the optimisation so that it eventually returns a high-quality solution. The algorithm described in this thesis integrates multi-objective optimisation, parallelism, surrogate usage, and noise handling in a unique way to deal with the challenges these characteristics pose in simulation-based optimisation problems. In order to handle multiple, conflicting optimisation objectives, the algorithm uses a Pareto approach in which the set of best trade-off solutions is searched for and presented to the user. The algorithm supports a high degree of parallelism by adopting an asynchronous master-slave parallelisation model in combination with an incremental population refinement strategy. A surrogate evaluation function is adopted in the algorithm to quickly identify promising candidate solutions and filter out poor ones, and a novel technique based on inheritance is used to compensate for the uncertainties associated with the approximate surrogate evaluations. Furthermore, to tackle real-world unpredictability (noise), a novel technique for multi-objective problems is used that effectively reduces noise through a dynamic resampling procedure. The proposed algorithm is evaluated on benchmark problems and on two complex real-world manufacturing optimisation problems. The first real-world problem concerns the optimisation of a production cell at Volvo Aero, while the second concerns the optimisation of a camshaft machining line at Volvo Cars Engine. The results from the optimisations show that, for all the problems considered, the algorithm finds better solutions than existing, similar algorithms. The new techniques used in the algorithm for dealing with surrogate imprecision and noise are identified as key reasons for this good performance.
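The surrogate idea can be sketched in a few lines: candidates are first scored by a cheap approximation fitted to previously simulated points, and only the most promising ones are sent to the expensive simulation. The inverse-distance surrogate, the stand-in simulation and all settings below are assumptions for illustration; the thesis uses its own surrogate together with an inheritance-based compensation for surrogate error, which is not shown.

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly, noisy simulation evaluation."""
    return float((x ** 2).sum() + np.random.normal(0, 0.1))

def surrogate_predict(candidate, archive_x, archive_y, eps=1e-9):
    """Cheap inverse-distance-weighted estimate from already-simulated designs."""
    d = np.linalg.norm(archive_x - candidate, axis=1) + eps
    w = 1.0 / d
    return float(np.dot(w, archive_y) / w.sum())

rng = np.random.default_rng(0)
archive_x = rng.uniform(-5, 5, size=(30, 3))                 # designs already simulated
archive_y = np.array([expensive_simulation(x) for x in archive_x])

candidates = rng.uniform(-5, 5, size=(200, 3))               # new candidate designs
approx = np.array([surrogate_predict(c, archive_x, archive_y) for c in candidates])
promising = candidates[np.argsort(approx)[:10]]              # only these reach the simulator
results = [expensive_simulation(x) for x in promising]
print(min(results))
```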
9. The use of real options and multi-objective optimisation in flood risk management. Woodward, Michelle, January 2012.
The development of suitable long-term flood risk intervention strategies is a challenge. Climate change alone is a significant complication, but further complexities arise in trying to identify the most appropriate set of interventions, the area with the highest economic benefit, and the most opportune time for implementation. All of these elements pose difficulties for decision makers. Recently, there has been a shift in the practice of appraising potential strategies, and consideration is now being given to flexible, adaptive strategies that account for uncertain climatic conditions. Real Options in particular is becoming an acknowledged approach for accounting for the future uncertainties inherent in a flood risk investment decision. Real Options facilitates adaptive strategies because it enables the value of flexibility to be explicitly included within the decision-making process: opportunities are provided for the decision maker to modify and update investments when knowledge of the future state comes to light. In this thesis, the use of Real Options in flood risk management is investigated as a method of accounting for the uncertainties of climate change. Each intervention strategy is purposely designed to capture a level of flexibility and to have the ability to adapt in the future if required. A state-of-the-art flood risk analysis tool is employed to evaluate the risk associated with each strategy at future points in time. In addition to Real Options, this thesis also explores the use of evolutionary optimisation algorithms to aid the decision-making process when identifying the most appropriate long-term strategies. Although the risk analysis tool is capable of quantifying the potential benefits attributed to a strategy, it is not necessarily able to identify the most appropriate one; methods are required which can search for the optimal solutions according to a range of performance metrics. Single- and multi-objective genetic algorithms are investigated in this thesis as a means of searching for the most appropriate long-term intervention strategies. The Real Options concepts are combined with the evolutionary multi-objective optimisation algorithm to create a decision support methodology capable of searching for the most appropriate long-term, economical yet robust intervention strategies that are flexible to future change. The methodology is applied to two individual case studies: a section of the Thames Estuary and an area on the River Dodder. The results show that the inclusion of flexibility is advantageous, while the outputs provide decision makers with supplementary knowledge which has not previously been considered.
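The core Real Options argument, that flexibility itself has a quantifiable value, can be shown with a toy calculation: a flexible strategy that raises defences modestly now and keeps the option to upgrade later is compared with committing to the full raise immediately. All scenario probabilities, costs and damages below are invented, and the calculation ignores discounting; the thesis values flexibility with a full flood-risk model and an evolutionary search over intervention strategies.

```python
# Expected-cost comparison of a fixed versus a flexible intervention strategy.
scenarios = {"low sea-level rise": 0.5, "high sea-level rise": 0.5}
damage = {  # expected residual damage (in arbitrary monetary units)
    "low sea-level rise":  {"small raise": 2.0,  "full raise": 1.0},
    "high sea-level rise": {"small raise": 12.0, "full raise": 2.0},
}
cost_full_now = 8.0        # commit to the full defence raise immediately
cost_small_now = 3.0       # smaller raise now...
cost_upgrade_later = 6.0   # ...with the option to upgrade once the scenario is known

fixed = cost_full_now + sum(p * damage[s]["full raise"] for s, p in scenarios.items())

flexible = cost_small_now
for s, p in scenarios.items():
    stay = damage[s]["small raise"]
    upgrade = cost_upgrade_later + damage[s]["full raise"]
    flexible += p * min(stay, upgrade)        # exercise the option only where it pays off

print(f"fixed strategy expected cost:    {fixed:.1f}")
print(f"flexible strategy expected cost: {flexible:.1f}")
print(f"value of the flexibility:        {fixed - flexible:.1f}")
```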
10. End to end multi-objective optimisation of H.264 and HEVC CODECs. Al Barwani, Maryam Mohsin Salim, January 2018.
All multimedia devices now incorporate video CODECs that comply with international video coding standards such as H.264/MPEG4-AVC and the newer High Efficiency Video Coding standard (HEVC), otherwise known as H.265. Although the standard CODECs have been designed to include algorithms with optimal efficiency, a large number of coding parameters can be used to fine-tune their operation within known constraints such as available computational power, bandwidth and consumer QoS requirements. With so many parameters involved, determining which of them play a significant role in providing optimal quality of service within given constraints is one challenge; how to select the values of those significant parameters so that the CODEC performs optimally under the given constraints is a further important question. This thesis proposes a framework that uses machine learning algorithms to model the performance of a video CODEC based on the significant coding parameters. Means of modelling both encoder and decoder performance are proposed. We define objective functions that model the performance-related properties of a CODEC, i.e., video quality, bit-rate and CPU time, and show that these objective functions can be practically utilised in video encoder/decoder designs, in particular in their performance optimisation within given operational and practical constraints. A multi-objective optimisation framework based on genetic algorithms is thus proposed to optimise the performance of a video codec. The framework is designed to jointly minimise CPU time and bit-rate and to maximise the quality of the compressed video stream. The thesis presents the use of this framework in the performance modelling and multi-objective optimisation of the most widely used video coding standard in practice at present, H.264, and the latest video coding standard, H.265/HEVC. When a communication network is used to transmit video, performance-related parameters of the communication channel will impact the end-to-end performance of the video CODEC: network delays and packet loss will affect the quality of the video received at the decoder, so even an optimally configured CODEC can deliver a sub-optimal experience under poor network conditions. Given the above, the thesis proposes the design, integration and testing of a novel approach to simulating a wired network, using the UDP protocol for the transmission of video data. This network is subsequently used to simulate the impact of packet loss and network delays on optimally coded video, based on the framework previously proposed for the modelling and optimisation of video CODECs. The quality of received video under different levels of packet loss and network delay is simulated, and conclusions are drawn about the impact on transmitted video according to its content and features.
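The kind of trade-off analysis described above can be sketched as follows: each candidate coding configuration is measured on (bit-rate, CPU time, distortion) and only the Pareto-optimal configurations are kept. The run_encoder function is a hypothetical stand-in with made-up formulas; in practice it would invoke an H.264/HEVC encoder with the chosen parameters and measure the resulting stream (distortion is used here so that all three objectives are minimised, whereas the thesis maximises quality).

```python
import itertools
import random

def run_encoder(qp, preset_speed):
    """Hypothetical measurement of (bit-rate kbps, CPU seconds, distortion)."""
    bitrate = 8000 / (qp + 1) * random.uniform(0.95, 1.05)
    cpu = 10.0 / preset_speed
    distortion = qp * 0.8 + preset_speed * 0.3
    return bitrate, cpu, distortion

def dominates(a, b):
    """True if measurement a is at least as good as b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

configs = list(itertools.product(range(20, 45, 5), [1, 2, 4, 8]))   # (QP, speed) grid
measured = {c: run_encoder(*c) for c in configs}
pareto = [c for c in configs
          if not any(dominates(measured[o], measured[c]) for o in configs if o != c)]
print(pareto)   # the Pareto-optimal (QP, speed) configurations
```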