About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Maritime Transportation Optimization Using Evolutionary Algorithms in the Era of Big Data and Internet of Things

Cheraghchi, Fatemeh 19 July 2019
With the maritime industry carrying nearly 90% of the volume of global trade, algorithms and solutions that provide quality of service in maritime transportation are of great importance to both academia and industry. This research investigates an optimization problem using evolutionary algorithms and big data analytics to address an important challenge in maritime disruption management, and illustrates how it can be connected with information technologies and the Internet of Things. Accordingly, in this thesis, we design, develop and evaluate methods to improve decision support systems (DSSs) in maritime supply chain management. We pursue three research goals.

First, the Vessel Schedule Recovery Problem (VSRP) is reformulated and a bi-objective optimization approach is proposed. We employ multi-objective evolutionary algorithms (MOEAs) to solve the resulting optimization problems. The resulting Pareto front provides a valuable trade-off between the two objectives (minimizing delay and minimizing financial loss) for a stakeholder in the freight shipping company. We evaluate the problem from three perspectives, namely scalability analysis, vessel steaming policies, and voyage distance analysis, and statistically validate the significance of the performance differences. According to the experiments, problem complexity varies across scenarios, while NSGA-II performs better than the other MOEAs in all of them.

In the second work, a new data-driven VSRP is proposed, which benefits from the available Automatic Identification System (AIS) data. In the new formulation, the trajectory between port calls is divided and encoded into adjacent geohashed regions, and in each geohash the historical speed profiles are extracted from AIS data. This results in a large-scale optimization problem called G-S-VSRP with three objectives (minimizing loss, minimizing delay, and maximizing compliance), where the compliance objective maximizes the agreement of the optimized speeds with the historical data. Assuming that the historical speed profiles, derived from other ships' experience, are a reliable indication of actual operational speeds, maximizing compliance with these profiles offers some degree of risk avoidance. Three MOEAs tackled the problem and provided the stakeholder with a Pareto front reflecting the trade-off among the three objectives. Geohash granularity and dimensionality-reduction techniques were evaluated and discussed for the model. G-S-VSRP is a large-scale optimization problem and suffers from the curse of dimensionality (problems become difficult to solve because the multi-dimensional solution space grows exponentially); however, due to a special characteristic of the problem instance, MOEAs with a large number of function evaluations can still find a good set of solutions.

Finally, when the compliance objective in G-S-VSRP is changed to minimization, regular MOEAs perform poorly due to the curse of dimensionality. We focus on improving the performance of the large-scale G-S-VSRP through a novel distributed multi-objective cooperative coevolution algorithm (DMOCCA). The proposed DMOCCA improves the quality of the performance metrics compared to the regular MOEAs (NSGA-II, NSGA-III, and GDE3). Additionally, DMOCCA yields a speedup when running on a cluster.
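To make the bi-objective trade-off concrete, the sketch below illustrates the kind of formulation the first study describes: per-leg sailing speeds are the decision variables, and each candidate schedule is scored on total delay and financial loss, with a simple non-dominated filter standing in for a full NSGA-II run. The leg distances, deadlines, cost figures, and cubic fuel-consumption model are illustrative assumptions, not values or models taken from the thesis.

```python
import random

# Hypothetical voyage data: leg distances (nautical miles) and scheduled
# arrival deadlines (hours from departure); none of these come from the thesis.
LEG_DISTANCE = [320.0, 410.0, 280.0]        # one entry per leg between port calls
SCHEDULED_ARRIVAL = [24.0, 56.0, 80.0]      # planned arrival time at each port call
FUEL_PRICE = 600.0                          # illustrative fuel price per tonne
DELAY_PENALTY = 2000.0                      # illustrative cost per hour of delay
SPEED_MIN, SPEED_MAX = 10.0, 22.0           # admissible speeds in knots


def evaluate(speeds):
    """Return (total delay in hours, financial loss) for a vector of per-leg speeds."""
    t, delay, fuel = 0.0, 0.0, 0.0
    for dist, due, v in zip(LEG_DISTANCE, SCHEDULED_ARRIVAL, speeds):
        t += dist / v                       # sailing time for this leg
        delay += max(0.0, t - due)          # lateness at this port call
        # Common modelling assumption: fuel burn grows roughly cubically with speed.
        fuel += (v / SPEED_MAX) ** 3 * dist * 0.05
    loss = fuel * FUEL_PRICE + delay * DELAY_PENALTY
    return delay, loss


def dominates(a, b):
    """Pareto dominance when both objectives are minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(population):
    """Keep only the non-dominated speed vectors (a stand-in for an NSGA-II run)."""
    scored = [(ind, evaluate(ind)) for ind in population]
    return [(ind, f) for ind, f in scored
            if not any(dominates(g, f) for _, g in scored)]


if __name__ == "__main__":
    random.seed(1)
    population = [[random.uniform(SPEED_MIN, SPEED_MAX) for _ in LEG_DISTANCE]
                  for _ in range(200)]
    for speeds, (delay, loss) in sorted(pareto_front(population), key=lambda p: p[1][0]):
        print([round(v, 1) for v in speeds], round(delay, 1), round(loss))
```

Printing the non-dominated set shows the delay/loss trade-off a stakeholder would be offered: slower speeds reduce fuel cost but increase delay, and vice versa.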
2

Coevolution of Neuro-controllers to Train Multi-Agent Teams from Zero Knowledge

Scheepers, Christiaan 25 July 2013
After the historic chess match between Deep Blue and Garry Kasparov, many researchers considered the game of chess solved and moved on to the more complex game of soccer, and artificial intelligence research shifted its focus to creating artificial players capable of playing the game. This thesis presents a new algorithm for training teams of players from zero knowledge, evaluated on a simplified version of the game of soccer. The new algorithm uses the charged particle swarm optimiser as a neural network trainer in a coevolutionary training environment. To counter the lack of domain information, a new relative fitness measure based on the FIFA league-ranking system was developed; it provides a granular relative performance measure for competitive training. The gameplay strategies that resulted from the trained players are evaluated, and it is shown that the algorithm successfully trains teams of agents to play in a cooperative manner. Techniques developed in this study may also be applied widely to other fields of artificial intelligence. / Dissertation (MSc)--University of Pretoria, 2013. / Computer Science / unrestricted
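As a concrete illustration of the league-based relative fitness idea, here is a minimal sketch of how match outcomes between coevolving teams could be turned into a ranking: wins, draws and losses earn FIFA-style points, and goal difference acts as a finer-grained tie-breaker. The point values, the tie-breaking rule, and the example results are assumptions for illustration; the thesis defines its own measure.

```python
# Hypothetical match results from one round-robin among coevolving teams:
# (team_a, team_b, goals_a, goals_b). These values are invented for illustration.
results = [
    ("A", "B", 2, 1),
    ("A", "C", 0, 0),
    ("B", "C", 1, 3),
]


def league_table(results):
    """FIFA-style league scoring: win = 3 points, draw = 1, loss = 0,
    with goal difference used as a finer-grained tie-breaker."""
    points, goal_diff = {}, {}
    for a, b, ga, gb in results:
        for team in (a, b):
            points.setdefault(team, 0)
            goal_diff.setdefault(team, 0)
        goal_diff[a] += ga - gb
        goal_diff[b] += gb - ga
        if ga > gb:
            points[a] += 3
        elif gb > ga:
            points[b] += 3
        else:
            points[a] += 1
            points[b] += 1
    # Rank by points first, breaking ties on goal difference; this ranking is the
    # relative performance signal fed back to the coevolutionary trainer.
    ranking = sorted(points, key=lambda t: (points[t], goal_diff[t]), reverse=True)
    return ranking, points, goal_diff


ranking, points, goal_diff = league_table(results)
for team in ranking:
    print(team, points[team], goal_diff[team])
```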
3

Comprehensibility, Overfitting and Co-Evolution in Genetic Programming for Technical Trading Rules

Seshadri, Mukund 30 April 2003
This thesis presents Genetic Programming (GP) methodologies to find successful and understandable technical trading rules for financial markets. The methods, when applied to the S&P 500, consistently beat the buy-and-hold strategy over a 12-year period, even when transaction costs are taken into account. Some of the methods described discover rules that beat the S&P 500 with 99% significance. The work describes the use of a complexity-penalizing factor to avoid overfitting and to improve the comprehensibility of the rules produced by GP. The effect of this factor on returns in this domain is studied, and the results indicate that it increases the predictive ability of the rules. A restricted set of operators and domain knowledge were used to improve comprehensibility: in particular, arithmetic operators were eliminated, and a number of technical indicators beyond the widely used moving averages, such as trend lines and local maxima and minima, were added. A new evaluation function that tests for consistency of returns, in addition to total returns, is introduced. Different cooperative coevolutionary genetic programming strategies for improving returns are studied and the results analyzed; we find that paired-collaborator coevolution yields the best results.
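The complexity-penalizing idea can be sketched as a fitness function that subtracts a size-dependent penalty from a rule's trading return. In the sketch below, a fixed moving-average crossover rule stands in for an evolved GP tree, the price series is synthetic, and the penalty weight and transaction cost are invented for illustration; none of these are the thesis's actual parameters or data.

```python
import random

random.seed(0)

# Synthetic daily closing prices standing in for the S&P 500 series used in the thesis.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))


def moving_average(series, n, t):
    """Average of the n values ending just before index t."""
    return sum(series[t - n:t]) / n


def rule_signal(series, t, short=10, long=50):
    """A fixed moving-average crossover rule standing in for an evolved GP tree."""
    return moving_average(series, short, t) > moving_average(series, long, t)


def rule_return(series, signal, cost=0.001):
    """Compound return from trading the rule, charging a cost on every position change."""
    wealth, in_market = 1.0, False
    for t in range(50, len(series) - 1):
        want_in = signal(series, t)
        if want_in != in_market:
            wealth *= 1 - cost              # transaction cost on entering or exiting
            in_market = want_in
        if in_market:
            wealth *= series[t + 1] / series[t]
    return wealth - 1.0


def penalised_fitness(series, signal, tree_size, weight=0.002):
    """Raw return minus a complexity penalty proportional to the rule's tree size."""
    return rule_return(series, signal) - weight * tree_size


print(round(penalised_fitness(prices, rule_signal, tree_size=12), 4))
```

Because the penalty grows with tree size, two rules with equal raw returns are ranked in favour of the smaller, more comprehensible one.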
4

Cooperative coevolutionary mixture of experts: a neuro ensemble approach for automatic decomposition of classification problems

Nguyen, Minh Ha, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW January 2006
Artificial neural networks have been widely used for machine learning and optimization. A neuro ensemble is a collection of neural networks that work cooperatively on a problem; it has been shown in the literature that by combining several neural networks, the generalization of the overall system can be enhanced beyond the generalization ability of the individual members. Evolutionary computation can be used to search for a suitable architecture and weights for neural networks, and when it is used to evolve a neuro ensemble, the result is usually known as an evolutionary neuro ensemble.

For most real-world problems, we either know little about them or they are too complex to decompose by hand. It is therefore desirable to have a method that automatically decomposes a complex problem into a set of overlapping or non-overlapping sub-problems and assigns one or more specialists (i.e. experts, learning machines) to each of these sub-problems. An important feature of neuro ensembles is automatic problem decomposition: some neuro ensemble methods are able to generate networks where each individual network specializes in a unique sub-task, such as mapping a subspace of the feature space. In real-world problems this is usually important for a number of reasons: (1) it provides an understanding of the decomposition nature of a problem; (2) if a problem changes, one can replace the network associated with the sub-space where the change occurs without affecting the overall ensemble; (3) if one network fails, the rest of the ensemble can still function in their sub-spaces; and (4) if one learns the structure of one problem, it can potentially be transferred to other similar problems.

In this thesis, I focus on classification problems and present a systematic study of a novel evolutionary neuro ensemble approach which I call the cooperative coevolutionary mixture of experts (CCME). Cooperative coevolution (CC) is a branch of evolutionary computation in which individuals in different populations cooperate to solve a problem and their fitness is calculated based on their mutual interaction. The mixture-of-experts model (ME) is a neuro ensemble approach that can generate networks specialized in different sub-spaces of the feature space. Combining CC and ME yields a powerful framework that is able to automatically form the experts and train each of them. I show that the CCME method produces competitive results in terms of generalization ability without increasing the computational cost when compared to traditional training approaches. I also propose two mechanisms for visualizing the resultant decomposition in high-dimensional feature spaces: the first groups the data according to the specialization of each expert and visualizes a color map of the data records, while the second relies on principal component analysis to project the feature space onto lower dimensions, where the decision boundaries generated by each expert are visualized through convex approximations. I also investigate the regularization effect of learning by forgetting on the proposed CCME, and show that learning by forgetting helps CCME generate neuro ensembles of low structural complexity while maintaining their generalization abilities.

Overall, the thesis presents an evolutionary neuro ensemble method in which (1) the generated ensemble generalizes well; (2) the classification problem is decomposed automatically; and (3) the generated networks have small architectures.
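The cooperative coevolutionary evaluation loop at the heart of an approach like CCME can be sketched as follows: each species evolves one expert, and an individual is scored by assembling it with the current representatives of the other species and measuring the ensemble's accuracy. This sketch averages the experts' outputs instead of using a trained mixture-of-experts gate, and the dataset, expert model, and mutation scheme are simplified assumptions rather than the thesis's actual design.

```python
import math
import random

random.seed(2)

# A toy two-class, two-feature dataset; the thesis evaluates real classification benchmarks.
DATA = [((random.gauss(m, 1.0), random.gauss(m, 1.0)), label)
        for m, label in [(-2.0, 0), (2.0, 1)] for _ in range(50)]


def expert_output(weights, x):
    """One expert: a tiny linear unit with a logistic activation."""
    s = weights[0] + weights[1] * x[0] + weights[2] * x[1]
    return 1.0 / (1.0 + math.exp(-s))


def ensemble_accuracy(experts):
    """Accuracy of the combined experts; a plain average replaces the trained ME gate."""
    correct = 0
    for x, y in DATA:
        p = sum(expert_output(w, x) for w in experts) / len(experts)
        correct += int((p > 0.5) == bool(y))
    return correct / len(DATA)


def random_individual():
    return [random.uniform(-1.0, 1.0) for _ in range(3)]


N_SPECIES, POP_SIZE, GENERATIONS = 2, 20, 30
species = [[random_individual() for _ in range(POP_SIZE)] for _ in range(N_SPECIES)]
representatives = [pop[0] for pop in species]   # current best collaborator per species

for _ in range(GENERATIONS):
    for s in range(N_SPECIES):
        def fitness(individual):
            # Score an individual by teaming it with the other species' representatives.
            team = list(representatives)
            team[s] = individual
            return ensemble_accuracy(team)

        species[s].sort(key=fitness, reverse=True)
        representatives[s] = species[s][0]
        # Replace the worst half with mutated copies of the best half.
        half = POP_SIZE // 2
        species[s][half:] = [[w + random.gauss(0.0, 0.3) for w in parent]
                             for parent in species[s][:half]]

print("ensemble accuracy:", ensemble_accuracy(representatives))
```

The key point the sketch illustrates is that an individual's fitness depends on how well it collaborates with the other species' representatives, which is what drives each expert toward a complementary specialization.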
5

[en] REFINERY SCHEDULING OPTIMIZATION USING GENETIC ALGORITHMS AND COOPERATIVE COEVOLUTION / [pt] OTIMIZAÇÃO DA PROGRAMAÇÃO DA PRODUÇÃO EM REFINARIAS DE PETRÓLEO UTILIZANDO ALGORITMOS GENÉTICOS E CO-EVOLUÇÃO COOPERATIVA

LEONARDO MENDES SIMAO 28 February 2005
[en] This work investigates the use of Genetic Algorithms and Cooperative Coevolution in refinery scheduling optimization. Oil refineries are one of the most important examples of multiproduct continuous plants, that is, continuous processing systems that generate a number of products simultaneously. A refinery processes one or more types of crude oil and produces a wide range of products, including LPG (liquefied petroleum gas), naphtha, kerosene and diesel. Refinery scheduling is a complex optimization problem, mainly because of the number and diversity of the tasks involved and the different objective criteria. In addition, some tasks have precedence constraints that require other tasks to be scheduled first: for example, in order to schedule a task that transfers one of the yields of a crude distillation unit, both the task that feeds crude oil into the unit and the task that sets the unit's current operation mode must already be scheduled. Therefore, applying conventional evolutionary models, such as order-based ones, can create many infeasible solutions that have to be corrected or rejected later, jeopardizing the performance and viability of the algorithm. The main goal of this work was the development of an evolutionary model that optimizes the production schedule according to well-defined objectives while handling the constraints of the problem, thus generating only feasible solutions.

The work consisted of three main steps: a survey of crude oil refining and refinery scheduling; the definition of a model using genetic algorithms and cooperative coevolution to optimize the production schedule; and the development of a software tool for a case study. The study of refining and scheduling covered the processes in a refinery, from the arrival of crude oil, through its distillation and transformation into several finished products, to the delivery of these products to their respective destinations. The levels of decision-making in a refinery were also surveyed, in order to identify the main goals of each level and to show how scheduling fits into the structure as a whole. All the routine activities defined by the scheduler, and their roles in the refinery's production, were then studied in detail. The decision of when and with which resources to execute these activities is the final result of the scheduling process and is therefore the main output of the algorithm.

The development of the evolutionary model began with a study of representations commonly used for scheduling problems. The adopted coevolutionary model decomposes the problem into two parts and therefore employs two populations with different responsibilities: one is responsible for indicating when an activity should be scheduled, while the other is responsible for indicating with which resources that activity should be performed. The representation of the first population is based on a model used for Dial-a-Ride problems (Moon et al., 2002) and uses a graph to tell the evaluation function the order in which the schedule should be constructed; it was devised this way so that precedence constraints (activities that must be scheduled before others) are respected and no infeasible solutions are generated by the algorithm. The representation of the second population, which is responsible for allocating resources to activities, lets the genetic operators act on the order in which the resources capable of performing each activity are considered.

Finally, a software tool was developed to implement this model and to perform a case study. The case study was designed to include all the characteristics needed to test the quality of the representation and to evaluate the results: a simple refinery was modelled, containing all the equipment types, tasks and constraints found in a real-world refinery, namely the precedence constraints handled by the graph used by the first population, plus other operational constraints found in refinery scheduling. It was then possible to observe chromosomes being decoded into feasible solutions that always satisfied all the constraints. Several tests were then carried out.
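The two-species decoding described above can be illustrated with a small sketch: one chromosome supplies task priorities that are only ever applied to tasks whose predecessors are already scheduled (so precedence is satisfied by construction), and the other supplies, for each task, a preference order over the resources that can perform it. The tasks, durations, and resources below are invented, and the priority-based decoder is a generic stand-in for the graph-based, Dial-a-Ride-inspired representation actually used in the dissertation.

```python
# Hypothetical scheduling instance: task -> (duration in hours, predecessor tasks,
# resources able to perform the task). This is not the thesis's refinery model.
TASKS = {
    "feed_cdu":       (4, [], ["CDU1"]),
    "set_campaign":   (1, [], ["CDU1"]),
    "draw_naphtha":   (3, ["feed_cdu", "set_campaign"], ["TK1", "TK2"]),
    "draw_diesel":    (3, ["feed_cdu", "set_campaign"], ["TK2", "TK3"]),
    "blend_gasoline": (2, ["draw_naphtha"], ["BLND1"]),
}


def decode(priority, resource_pref):
    """Decode the two chromosomes into a feasible schedule.

    priority:      task -> numeric priority (first species: *when* to schedule)
    resource_pref: task -> ordered resource preferences (second species: *with what*)
    Only tasks whose predecessors are already scheduled are eligible, so every
    decoded solution satisfies the precedence constraints by construction.
    """
    finish, busy_until, schedule = {}, {}, []
    unscheduled = set(TASKS)
    while unscheduled:
        ready = [t for t in unscheduled
                 if all(p in finish for p in TASKS[t][1])]
        task = max(ready, key=lambda t: priority[t])      # first species decides the order
        duration, predecessors, _ = TASKS[task]
        resource = resource_pref[task][0]                 # second species decides the resource
        start = max([busy_until.get(resource, 0)] + [finish[p] for p in predecessors])
        finish[task] = start + duration
        busy_until[resource] = finish[task]
        schedule.append((task, resource, start, finish[task]))
        unscheduled.remove(task)
    return schedule


# Example chromosomes; in the evolutionary model these are evolved rather than hand-written.
priority = {"feed_cdu": 5, "set_campaign": 4, "draw_naphtha": 3,
            "draw_diesel": 2, "blend_gasoline": 1}
resource_pref = {"feed_cdu": ["CDU1"], "set_campaign": ["CDU1"],
                 "draw_naphtha": ["TK1"], "draw_diesel": ["TK3"],
                 "blend_gasoline": ["BLND1"]}

for task, resource, start, end in decode(priority, resource_pref):
    print(f"{task:15s} on {resource:5s} from t={start} to t={end}")
```

Because the decoder only ever selects among precedence-feasible tasks, any pair of chromosomes maps to a valid schedule, which mirrors the abstract's point about avoiding the repair or rejection of infeasible solutions.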
