131 |
Busca tabu aplicada ao problema de roteamento periódico de veículos / A tabu search algorithm for the periodic vehicle routing problem. Mortati, Camila Frederico, 17 June 2005
Advisor: Vinicius Amaral Armentano / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This work addresses the periodic vehicle routing problem, which consists of assigning a combination of visiting days to each client and defining the vehicle routes on each day of a planning horizon, so as to minimize the total cost or duration of the routes. A tabu search algorithm is proposed for solving the problem. The history of the search, used to guide the search process, is represented by short- and long-term memories. The efficacy of the suggested diversification and intensification strategies, associated with the long-term memory, is verified experimentally. The performance of the tabu search algorithm is tested computationally on instances from the literature. A tabu search procedure from the literature is also implemented and compared against the algorithm proposed here. / Master's in Electrical Engineering (Automation)
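The short-term memory the abstract mentions can be sketched as a tabu list over recent moves. The swap neighbourhood and the toy distance matrix below are illustrative assumptions, not the dissertation's actual PVRP moves or instances:

```python
import itertools

def tour_length(tour, dist):
    """Length of the closed tour (returns to the starting point)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search(dist, start, iters=100, tenure=5):
    """Swap-neighbourhood tabu search with a short-term memory:
    a recently applied swap stays forbidden for `tenure` iterations,
    unless it improves on the best tour found so far (aspiration)."""
    current, best = list(start), list(start)
    tabu = {}  # (i, j) -> last iteration at which this swap is forbidden
    for it in range(iters):
        best_move, best_len = None, float("inf")
        for i, j in itertools.combinations(range(len(current)), 2):
            cand = list(current)
            cand[i], cand[j] = cand[j], cand[i]
            length = tour_length(cand, dist)
            if tabu.get((i, j), -1) >= it and length >= tour_length(best, dist):
                continue  # tabu and not aspirating: skip this move
            if length < best_len:
                best_move, best_len = (i, j), length
        if best_move is None:
            break
        i, j = best_move
        current[i], current[j] = current[j], current[i]
        tabu[(i, j)] = it + tenure  # record the move in short-term memory
        if best_len < tour_length(best, dist):
            best = list(current)
    return best, tour_length(best, dist)
```

A long-term memory, as in the dissertation, would additionally track move frequencies to drive diversification; it is omitted here for brevity.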
|
132 |
Probabilistic Risk Assessment in Clouds: Models and Algorithms. Palhares, André Vitor de Almeida, 08 March 2012
The reliability of the cloud is critical to its success. Although fault-tolerance mechanisms are employed by cloud
providers, there is always the possibility of failure of infrastructure components. We consequently
need to think proactively about how to deal with the occurrence of failures, in an attempt to minimize
their effects. In this work, we draw on the risk concept from probabilistic risk analysis in order to
achieve this.
In probabilistic risk analysis, consequence costs are associated with failure events of the target
system, and failure probabilities are associated with infrastructural components. The risk is the
expected consequence of the whole system. We use the risk concept in order to present
representative mathematical models for which computational optimization problems are formulated
and solved, in a Cloud Computing environment. In these problems, consequence costs are
associated with incoming applications that must be allocated in the Cloud, and the risk is either seen as
an objective function that must be minimized or as a constraint that should be limited.
The proposed problems are solved either by reductions to optimal algorithms or by
approximation algorithms with provable performance guarantees. Finally, the models and problems
are discussed from a more practical point of view, with examples of how to assess risk using these
solutions. Also, the solutions are evaluated and results on their performance are established, showing
that they can be used in the effective planning of the Cloud.
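The risk definition used above (the expected consequence of the system) reduces to a probability-weighted sum over failure events. A minimal sketch, with hypothetical failure probabilities and consequence costs:

```python
def expected_risk(failure_events):
    """Risk = sum over failure events of P(event) * consequence cost."""
    return sum(prob * cost for prob, cost in failure_events)

# Hypothetical events affecting one application placement: (probability, cost)
placement = [
    (0.01, 5000.0),    # top-of-rack switch failure
    (0.001, 50000.0),  # storage array failure
    (0.05, 200.0),     # single host failure
]
```

Under a risk-as-constraint formulation, a placement would be admissible only if `expected_risk(placement)` stays below a given budget; under a risk-as-objective formulation, the allocator would pick the placement minimizing it.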
|
133 |
Metaheuristics for Group Shop Scheduling. Blum, Christian, January 2002
Doctorate in Applied Sciences (not published)
|
134 |
Code optimisation using discrete optimisation techniques. Dopler, Tristan Didier, 29 May 2008
This dissertation concerns the optimisation of computer programs, as they are being compiled, using discrete optimisation techniques. The techniques introduced aim to optimise the runtime performance of programs executing on certain types of processors. An important theme is the movement of complexity from the processor to the compiler, so both computer architecture and compilers are important supporting topics. The output of the compiler is processed, using information about the processor, to produce execution information, which is the goal of this dissertation.

Concepts related to instruction-level parallelism are covered in two parts. The first part discusses implicit parallelism, where parallel instruction scheduling is performed by the processor. The second part discusses explicit parallelism, where the compiler schedules the instructions. Explicit parallelism is attractive because it allows processor design to be simplified, resulting in multiple benefits.

Scheduling the instructions to execute while adhering to resource limitations is the focus of the rest of the dissertation. In order to find optimal schedules, the problem is modelled as a mathematical program. Expressing instructions, instruction dependencies and resource limitations as a mathematical program is discussed in detail, and several algorithms are introduced. Several aspects prevent the mathematical programs from being solved in their initial state, so additional techniques are introduced. A heuristic algorithm is presented for scheduling instructions in a resource-limited environment; its primary use is to reduce the computational complexity of the problem, but it can also generate good schedules on its own. Finally, a practical implementation of a compiler employing the introduced techniques is described, together with experimental results generated from a series of test programs, illustrating the complete process and the computational complexity of the algorithms employed. / Smith, T.H.C., Prof.
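The dissertation's own heuristic is not reproduced here; as a generic illustration of greedy list scheduling under an issue-width limit, where the dependency graph, latencies, and machine width are assumptions:

```python
def list_schedule(deps, latency, slots_per_cycle=1):
    """Greedy list scheduling: each cycle, issue ready instructions
    (all predecessors have finished) up to the machine's issue width.
    Assumes `deps` is an acyclic dependency graph: instr -> predecessors."""
    remaining = set(deps)
    finish = {}    # instruction -> cycle at which its result is available
    schedule = {}  # instruction -> cycle at which it is issued
    cycle = 0
    while remaining:
        ready = [i for i in remaining
                 if all(p in finish and finish[p] <= cycle for p in deps[i])]
        for instr in sorted(ready)[:slots_per_cycle]:
            schedule[instr] = cycle
            finish[instr] = cycle + latency[instr]
            remaining.discard(instr)
        cycle += 1
    return schedule
```

On a hypothetical two-issue machine, independent instructions 'a' and 'b' issue together, while a dependent chain must wait out its predecessors' latencies, which is exactly the resource-versus-dependency tension the mathematical programs above formalise.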
|
135 |
A Tour Construction Framework for the Travelling Salesman Problem. Ahrens, Barry, 01 January 2012
The Tour Construction Framework (TCF) integrates both global and local heuristics in a complementary framework in order to efficiently solve the Travelling Salesman Problem (TSP). Most tour construction heuristics are strictly local in nature. However, the experimental method presented in this research includes a global heuristic to efficiently solve the TSP. The Global Path (GP) component and Super Node (SN) component comprise the TCF. Each component heuristic is tuned with one or more parameters. Genetic Algorithms (GA) are used to train the collection of parameters for the TCF components on subsets of benchmark TSPs. The GA results are used to run the TCF on the full TSP instances. The performance of the TCF is evaluated for speed, accuracy, and computational complexity, and it is compared against six mainstream TSP solvers: Lin-Kernighan-Helsgaun (LKH-2), 2-Opt, Greedy, Boruvka, Quick-Boruvka, and Nearest Neighbor. The empirical study demonstrates the effectiveness of the TCF in achieving near-optimal solutions for the TSP with reasonable costs.
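Of the baselines listed, Nearest Neighbor is the simplest tour constructor, and it illustrates what "strictly local" means here: each step looks only at the current city. A minimal sketch (the distance matrix in the test is an assumption):

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy tour construction: always visit the closest unvisited city.
    Purely local: each decision ignores the global tour structure."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # sorted() makes tie-breaking deterministic (lowest index wins)
        nxt = min(sorted(unvisited), key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.discard(nxt)
    return tour

def tour_cost(tour, dist):
    """Cost of the closed tour (includes the edge back to the start)."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
```

A global component like the TCF's Global Path would instead bias these choices using structure computed over the whole instance.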
|
136 |
Algoritmos para problemas de empacotamento / Algorithms for packing problems. Xavier, Eduardo Candido (1979-), 12 May 2006
Advisor: Flavio Keidi Miyazawa / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação
Abstract: In this work we study several packing problems that are NP-hard. Assuming P ≠ NP, there are no efficient (polynomial-time) exact algorithms for solving these problems. One way to deal with such problems is to use approximation algorithms: efficient algorithms that produce solutions with a quality guarantee. We present several approximation algorithms for packing problems that have practical applications. Another way to deal with NP-hard problems is to develop heuristics; we also present column generation based heuristics for two-dimensional cutting and packing problems, along with computational tests of the proposed algorithms. The computational results show that the heuristics are efficient and produce solutions of very good quality. / PhD in Computer Science
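The thesis's own algorithms are not reproduced here; as a generic example of an approximation algorithm for a packing problem, first-fit decreasing for one-dimensional bin packing:

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing: consider items from largest to smallest and
    put each into the first open bin with enough room, opening a new bin
    when none fits. A classical bin-packing approximation algorithm."""
    bins = []  # each bin is a list of the item sizes placed in it
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins
```

The quality guarantee mentioned in the abstract means a provable bound relating the number of bins used to the optimum, rather than merely good average behaviour.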
|
137 |
Comparação entre uma solução combinatória e um método de planos-de-corte para o problema do emparelhamento de peso máximo / Comparison between a combinatorial solution and a cutting-plane method for the maximum weight matching problem. Ander Conselvan de Oliveira, 10 December 2010
A matching in a graph G is a set of pairwise non-adjacent edges of G. Given a graph G with weights on its edges, the maximum weight matching problem is that of finding a matching whose sum of edge weights is maximum. In this thesis we study different solutions to this problem. We study combinatorial algorithms that solve it in the case where G is bipartite and in the general case. Edmonds' algorithm [Edm65a] is a polynomial-time algorithm with complexity O(n^4), where n is the number of vertices of G; we discuss our implementation of this algorithm. In a paper from 1985, Grötschel & Holland [GH85] proposed the use of linear programming tools for solving the same problem. This so-called cutting-plane method relies on a result by Padberg & Rao [PR82] showing that the separation problem associated with the matching polyhedron is solvable in polynomial time. We implemented both methods and used them to solve several kinds of instances of the problem. Our conclusion is that the polyhedral method, although it uses generic tools, is quite efficient in practice.
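Edmonds' algorithm is too long to sketch here; as a tiny-instance reference for checking either implementation, a brute-force maximum weight matching over hypothetical (u, v, weight) edge triples:

```python
from itertools import combinations

def is_matching(edge_subset):
    """A set of edges is a matching if no vertex appears twice."""
    seen = set()
    for u, v, _ in edge_subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def max_weight_matching(edges):
    """Exhaustive search over all edge subsets; exponential time,
    usable only to validate faster algorithms on very small graphs."""
    best, best_weight = (), 0
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            if is_matching(subset):
                weight = sum(w for _, _, w in subset)
                if weight > best_weight:
                    best, best_weight = subset, weight
    return set(best), best_weight
```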
|
138 |
Kombinatorisk Optimering med Pointer Networks och Reinforcement Learning / Combinatorial Optimization with Pointer Networks and Reinforcement Learning. Holmberg, Axel; Hansson, Wilhelm, January 2021
Given the complexity and range of combinatorial optimization problems, solving them can be computationally easy or hard. There are many ways to solve them, but all available methods share a problem: they take a long time to run and have to be rerun when new cases are introduced. Machine learning could be a viable approach to solving combinatorial optimization problems, because models can learn and generalize, eliminating the need to run a complex algorithm every time a new instance is presented. Uniter is a management consulting firm that provides services within product modularization. Product modularization makes it possible to create many different product variations based on customer needs; finding the best combination for a specific customer's need requires solving a combinatorial optimization problem. Based on Uniter's need, this thesis sought to develop and evaluate a machine learning model consisting of a Pointer Network architecture trained using Reinforcement Learning. The task was to find the combination of parts yielding the lowest cost, given a use case. Each use case had different attributes that specified the need for the final product, and for each use case the model was tasked with selecting the most appropriate combination from a set of 4000 distinct combinations. Three experiments were conducted: examining whether the model could suggest an optimal solution after being trained on one use case, whether it could suggest an optimal solution for a previously seen use case, and whether it could suggest an optimal solution for an unseen use case. A single data set was used for all experiments. The suggested model was compared to three baselines: a weighted random selection, a naive model implementing a feed-forward network, and an exhaustive search. The results showed that the proposed model could not suggest an optimal solution in any of the experiments.
In most tests conducted, the proposed model was significantly slower at suggesting a solution than any baseline. The proposed model had high accuracy in all experiments, meaning it suggested almost only feasible solutions in the allowed solution space. However, when the model converged, it suggested only one combination for every use case, with the feed-forward baseline showing the same behavior. This behavior could suggest that the model misinterpreted the task and identified a solution that would work in most cases instead of suggesting the optimal solution for each use case. The discussion concludes that an exhaustive search is preferable for the studied data set and that an alternate approach using supervised learning may be a better solution.
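The exhaustive-search baseline that the thesis favours for a fixed catalogue of combinations is conceptually simple; a minimal sketch, where the catalogue and cost function are assumptions:

```python
def exhaustive_best(catalogue, cost):
    """Score every candidate combination and return the cheapest one.
    For a catalogue of a few thousand combinations this is fast and
    guaranteed optimal, which is what makes it a strong baseline."""
    return min(catalogue, key=cost)
```

A learned model only pays off over this when the catalogue is too large to enumerate, or when scoring a candidate is itself expensive.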
|
139 |
Data-Driven Combinatorial Optimization and Efficient Machine Learning Frameworks. Sakr, Nourhan, January 2019
Contemporary research in building optimization models and designing algorithms has become more data-centric and application-specific. While addressing three problems in the fields of combinatorial optimization and machine learning (ML), this work highlights the value of making data an important driver for building frameworks and designing solutions.
In Chapter 2, we look into building frameworks for data-intensive applications, such as ML algorithms. We introduce a family of matrices, Structured Spinners, and use these to perform input data projections. This operation is commonly needed for a plethora of ML algorithms such as dimension reduction, cross-polytope LSH techniques or kernel approximation, yet comprises a major bottleneck in terms of time and space complexity. We design a generic framework that speeds up ML algorithms which perform such projections, with no or minor loss in accuracy. Our method applies for projections in both randomized and learned settings. We confirm our results via theoretical guarantees and numerical experiments. Moreover, we are the first to provide theoretical results for that type of work under adaptive settings.
In Chapter 3, we rely on empirical analysis to design algorithms for an online packet scheduling problem with weighted packets and agreeable deadlines. We adopt the model presented in Jez et al. and use their Modified Greedy MG algorithm, which was the best-performing one in the literature, as our benchmark. The technique used for proving the competitive ratio for this benchmark is worst case analysis. Via extensive simulations, we shed light on practical bottlenecks and observe that these are not captured by the competitive analysis. We design three new algorithms, two of which make deterministic online decisions, while the third learns an adaptive one. When contrasted against the MG benchmark, our algorithms have significantly worse competitive ratios, yet noticeably better empirical performance. Our methodology is particularly useful for online algorithms and underlines the significance of leveraging data or simulations to guide algorithm design and testing.
In Chapter 4, we study two different applications in cybersecurity: an adaptive ML problem and a game-theoretic model called PLADD. The common objective between both problems is to protect cybersystems against attacks by intelligent, adaptable and well-resourced adversaries while maintaining a cost budget. We introduce a novel combinatorial scheduling formulation to design useful defense strategies that meet this goal. Our work separates the formulation from the data-driven analysis and solution. As such, the scheduling formulation, which does not resemble any previously studied formulations from the scheduling literature, may be used as a new model by other researchers for different motivations. We keep the model generic enough for others to use, but design the algorithms that work best for our context. The formulation is inspired by stochastic programming and cast as a mixed integer program (MIP). We provide theoretical analysis, e.g. explore integrality gaps, exploit the combinatorial structure, prove NP-hardness, develop dynamic programming solutions for two-machine cases, then work towards data-driven heuristics and approximation algorithms using distribution assumptions and real data.
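The online packet scheduling setting of Chapter 3 can be illustrated with the natural greedy rule (transmit the heaviest pending, non-expired packet at each step). This illustrates the model only; it is not MG or any of the thesis's algorithms:

```python
def greedy_schedule(packets):
    """packets: list of (arrival, deadline, weight) triples; one packet
    can be transmitted per time step. At each step, send the pending
    packet of maximum weight whose deadline has not passed.
    Returns the total weight transmitted."""
    if not packets:
        return 0
    horizon = max(d for _, d, _ in packets)
    pending, total = [], 0
    for t in range(horizon + 1):
        pending += [p for p in packets if p[0] == t]   # new arrivals
        pending = [p for p in pending if p[1] >= t]    # drop expired packets
        if pending:
            best = max(pending, key=lambda p: p[2])
            pending.remove(best)
            total += best[2]
    return total
```

Competitive analysis bounds such a rule against the offline optimum in the worst case; the chapter's point is that empirical behaviour on realistic traces can diverge sharply from that worst-case picture.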
|
140 |
New solution approaches for the quadratic assignment problem. Fomeni, Franklin Djeumou, 18 January 2012
MSc., Faculty of Science, University of the Witwatersrand, 2011 / A vast array of important practical problems, in many different fields, can be modelled and solved as quadratic assignment problems (QAP). This includes problems such as university campus layout, forest management, assignment of runners in a relay team, parallel and distributed computing, etc. The QAP is a difficult combinatorial optimization problem, and solving QAP instances of size greater than 22 within a reasonable amount of time is still challenging. In this dissertation, we propose two new solution approaches to the QAP, namely, a Branch-and-Bound method and a discrete dynamic convexized method. These two methods use the standard quadratic integer programming formulation of the QAP. We also present a lower bounding technique for the QAP based on an equivalent separable convex quadratic formulation of the QAP. We finally develop two different new techniques for finding initial strictly feasible points for the interior point method used in the Branch-and-Bound method. Numerical results are presented showing the robustness of both methods.
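The QAP objective (flows between facilities weighted by the distances between their assigned locations) and a brute-force reference solver can be sketched as follows; the 3x3 flow and distance matrices in the test are illustrative, not from the dissertation:

```python
from itertools import permutations

def qap_cost(perm, flow, dist):
    """Cost of assigning facility i to location perm[i]:
    sum over all pairs (i, j) of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def qap_brute_force(flow, dist):
    """Enumerate all n! assignments; feasible only for very small n,
    which is why instances beyond size 22 remain hard for exact methods."""
    n = len(flow)
    return min(permutations(range(n)),
               key=lambda p: qap_cost(p, flow, dist))
```

Branch-and-Bound methods like the one proposed prune this factorial search tree using lower bounds such as the separable convex quadratic bound described above.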
|