141

Bi-objective multi-assignment capacitated location-allocation problem

Maach, Fouad 01 June 2007 (has links)
Location-assignment optimization problems correspond to a wide range of real situations, such as factory network design, yet most previous work aims only at minimizing a cost function. Traffic incidents routinely impact the performance and safety of the supply; they cannot be avoided entirely and must be taken into account. One way to account for them is to design a network over which multiple assignments are performed. Specifically, the problem we focus on deals with power supply, which has become an increasingly complex and crucial question. Many international companies have customers located all around the world, usually one customer per country, while power extraction or production takes place at several sites spread across continents and seas. A strong desire to become less energy-dependent has led many governments to increase the diversity of supply locations; for each kind of energy, many countries ideally expect to deal with 2 or 3 supply sites. Because a drop in power supply can have serious consequences for the economic performance of a whole country, companies prefer to balance the production rate equally among all sites, the reliability of the sites being considered very similar. Sharing the demand equally between the 2 or 3 sites assigned to a given area is the most common arrangement. Although the cost of the network matters, it is also crucial to balance the load between sites so that no site becomes more important than the others for a given area: if an accident occurs at a site, or technical problems prevent it from satisfying its assigned demand, the corresponding supply can still be ensured by the one or two remaining sites. It is common to assign a cost per open power plant and a second cost that depends on the distance between the production or extraction point and the customer. Companies concerned with the quality of their power supply must therefore find a good trade-off between this reliability factor and their overall operating cost; the same situation arises for companies that supply power at the national scale. The number of areas, as well as the number of potential sites, can reach 100, although the targeted problem size to be solved here is 50. This thesis focuses on devising an efficient methodology to provide all the solutions of this bi-objective problem. We investigate closely related problems to identify the most relevant approaches to this atypical problem, which allows us to present one exact method and an evolutionary algorithm that can provide a good answer to the problem. / Master of Science
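For illustration only, a minimal bi-objective formulation consistent with this description might read as follows, where $y_i$ opens site $i$, $x_{ij}$ assigns area $j$ to site $i$, each area is served by exactly $r \in \{2,3\}$ sites sharing its demand $d_j$ equally, and $L_i$ is the load of site $i$. The notation is a sketch, not the thesis's exact model:

```latex
\begin{aligned}
\min\ f_1 &= \textstyle\sum_i c_i y_i + \sum_i \sum_j t_{ij} \tfrac{d_j}{r} x_{ij}
  && \text{(opening and transport costs)} \\
\min\ f_2 &= \max_i L_i - \min_i L_i
  && \text{(load imbalance between sites)} \\
\text{s.t.}\quad
  & \textstyle\sum_i x_{ij} = r \ \ \forall j, \qquad x_{ij} \le y_i \ \ \forall i,j, \\
  & L_i = \textstyle\sum_j \tfrac{d_j}{r} x_{ij} \le Q_i y_i \ \ \forall i, \qquad
    x_{ij},\, y_i \in \{0,1\}.
\end{aligned}
```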
142

Blending Methods for Composite Laminate Optimization

Adams, David Bruce 30 August 2002 (has links)
Composite panel structure optimization is commonly decomposed into panel optimization subproblems, with specified local loads, resulting in manufacturing incompatibilities between adjacent panel designs. Using genetic algorithms to optimize local panel stacking sequences allows panel populations of stacking sequences to evolve in parallel and send migrants to adjacent panels, so as to blend the local panel designs globally. The blending process is accomplished using the edit distance between individuals of a population and the set of migrants from adjacent panels; the objective function evaluating the fitness of designs is modified according to the severity of mismatches detected between neighboring populations. This lays the groundwork for natural evolution to a blended global solution without leaving the paradigm of genetic algorithms. An additional method proposed here for constructing globally blended panel designs uses a parallel decomposition antithetical to that of earlier work: rather than performing concurrent panel genetic optimizations, a single genetic optimization is conducted for the entire structure, with the parallelism solely within the fitness evaluations. A guide-based genetic algorithm approach is introduced to generate and evaluate only valid globally blended designs, using a simple master-slave parallel implementation, implicitly reducing the size of the problem design space and increasing the quality of discovered local optima. / Master of Science
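A minimal sketch of the blending mechanism described above: the Python below computes the edit (Levenshtein) distance between two stacking sequences and worsens a design's fitness by its distance to migrants from adjacent panels. The function names and penalty weight are illustrative assumptions, not taken from the thesis.

```python
# A sketch of edit-distance blending, assuming integer ply-angle codes for
# stacking sequences; "blended_fitness" and its penalty weight are
# illustrative, not the thesis's exact scheme.
def edit_distance(a, b):
    """Levenshtein distance between two stacking sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                                # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def blended_fitness(raw_fitness, design, migrants, penalty=0.1):
    """Penalize a design by its mismatch with migrant designs received
    from adjacent panels (lower fitness is better here)."""
    mismatch = sum(edit_distance(design, mig) for mig in migrants)
    return raw_fitness + penalty * mismatch

# Example: a [45, -45, 0, 90] panel compared against two migrants.
print(blended_fitness(1.0, [45, -45, 0, 90],
                      [[45, -45, 0, 0], [45, -45, 90, 90]]))
```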
143

Parallelizing a nondeterministic optimization algorithm

D'Souza, Sammy Raymond 01 January 2007 (has links)
This research explores the idea that, for certain optimization problems, there is a way to parallelize the algorithm such that the parallel efficiency can exceed one hundred percent. Specifically, a parallel compiler, PC, is used to apply shortcutting techniques to the Ant Colony Optimization (ACO) metaheuristic in order to solve the well-known Traveling Salesman Problem (TSP) on a cluster running the Message Passing Interface (MPI). The results of serial and parallel execution are compared using test datasets from TSPLIB.
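A hedged sketch of the parallel pattern this suggests: each MPI rank runs an independent randomized tour builder on the same TSP instance, and the ranks periodically share the best tour length so that any rank can shortcut (abandon) a partial tour that already exceeds the global bound. Pheromone bookkeeping is omitted for brevity, so this shows only the bound-sharing shortcut idea, not the thesis's PC-generated code.

```python
# Bound-sharing shortcut sketch across MPI ranks (requires mpi4py).
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

random.seed(42)                      # same random instance on every rank
n = 30
cities = [(random.random(), random.random()) for _ in range(n)]
dist = [[((cities[i][0] - cities[j][0]) ** 2 +
          (cities[i][1] - cities[j][1]) ** 2) ** 0.5
         for j in range(n)] for i in range(n)]
random.seed(rank + 1)                # decorrelate the search per rank

best = float("inf")
for it in range(100):
    # One randomized nearest-neighbour construction with shortcutting.
    start = random.randrange(n)
    unvisited = set(range(n)) - {start}
    tour_len, cur = 0.0, start
    while unvisited:
        cand = random.sample(sorted(unvisited), min(3, len(unvisited)))
        nxt = min(cand, key=lambda j: dist[cur][j])
        tour_len += dist[cur][nxt]
        if tour_len >= best:         # shortcut: bound shared by all ranks
            tour_len = float("inf")
            break
        cur = nxt
        unvisited.discard(nxt)
    else:
        tour_len += dist[cur][start] # close the tour if not abandoned
    if it % 10 == 9:                 # periodically share the global bound
        best = comm.allreduce(min(best, tour_len), op=MPI.MIN)
    else:
        best = min(best, tour_len)

if rank == 0:
    print("best tour length found:", best)
```

Because the bound tightens faster with more ranks, each rank abandons bad tours earlier, which is the mechanism by which parallel efficiency can exceed one hundred percent.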
144

The development and application of metaheuristics for problems in graph theory : a computational study

Consoli, Sergio January 2008 (has links)
It is known that graph theoretic models have extensive application to real-life discrete optimization problems. Many of these models are NP-hard and, as a result, exact methods may be impractical for large scale problem instances. Consequently, there is great interest in developing efficient approximate methods that yield near-optimal solutions in acceptable computational times. A class of such methods, known as metaheuristics, has been proposed with success. This thesis considers some recently proposed NP-hard combinatorial optimization problems formulated on graphs. In particular, the minimum labelling spanning tree problem, the minimum labelling Steiner tree problem, and the minimum quartet tree cost problem are investigated. Several metaheuristics are proposed for each problem, from classical approximation algorithms to novel approaches. A comprehensive computational investigation in which the proposed methods are compared with other algorithms recommended in the literature is reported. The results show that the proposed metaheuristics outperform the algorithms recommended in the literature, obtaining optimal or near-optimal solutions in short computational running times. In addition, a thorough analysis of the implementation of these methods provides insights for the implementation of metaheuristic strategies for other graph theoretic problems.
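As background, a common greedy heuristic for the minimum labelling spanning tree problem (in the spirit of the MVCA approximation algorithm discussed in this literature) repeatedly adds the label whose edges most reduce the number of connected components. The sketch below is illustrative and is not one of the thesis's proposed metaheuristics.

```python
# MVCA-style greedy sketch for the minimum labelling spanning tree problem.
def mlst_greedy(n, edges):
    """edges: list of (u, v, label). Returns a small set of labels whose
    edges connect all n vertices, assuming the full graph is connected."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def components(labels):
        """Number of connected components using only edges in `labels`."""
        nonlocal parent
        parent = list(range(n))
        comps = n
        for u, v, lab in edges:
            if lab in labels:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    comps -= 1
        return comps

    chosen = set()
    all_labels = {lab for _, _, lab in edges}
    while components(chosen) > 1:
        # Greedily pick the label yielding the fewest components
        # (ties broken arbitrarily).
        chosen.add(min(all_labels - chosen,
                       key=lambda l: components(chosen | {l})))
    return chosen

print(mlst_greedy(4, [(0, 1, "a"), (1, 2, "a"), (2, 3, "b"), (0, 3, "c")]))
# -> {'a', 'b'} or {'a', 'c'} depending on tie-breaking
```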
145

Two new combinatorial problems involving dominating sets for lottery schemes

Grundlingh, Werner R. 12 1900 (has links)
Thesis (PhD (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2004. / Suppose a lottery scheme consists of randomly selecting an unordered winning n-subset from a universal set of m numbers, while a player participates in the scheme by purchasing a playing set of any number of unordered n-subsets from the same universal set prior to a winning draw, and is awarded a prize if ...
146

Protein and Drug Design Algorithms Using Improved Biophysical Modeling

Hallen, Mark Andrew January 2016 (has links)
This thesis focuses on the development of algorithms that will allow protein design calculations to incorporate more realistic modeling assumptions. Protein design algorithms search large sequence spaces for protein sequences that are biologically and medically useful. Better modeling could improve the chance of success in designs and expand the range of problems to which these algorithms are applied. I have developed algorithms to improve modeling of backbone flexibility (DEEPer) and of more extensive continuous flexibility in general (EPIC and LUTE). I’ve also developed algorithms to perform multistate designs, which account for effects like specificity, with provable guarantees of accuracy (COMETS), and to accommodate a wider range of energy functions in design (EPIC and LUTE). / Dissertation
147

Metaheuristická metóda mravčej kolónie pri riešení kombinatorických optimalizačných úloh / Solving the combinatorial optimization problems with the Ant Colony Optimization metaheuristic method

Chu, Andrej January 2005 (has links)
Ant Colony Optimization belongs to the category of metaheuristic methods, and its development is relatively recent. It has already shown its capability to outperform other metaheuristic methods in the quality of the solutions obtained. This work analyses possible applications of the method to classical combinatorial optimization problems: the traveling salesman problem, the vehicle routing problem, the knapsack problem, the generalized assignment problem, and the maximal clique problem. It also covers practical experiments applying the method to several optimization problems, with analysis of the time and memory complexity of such algorithms. The last part of the work is dedicated to the possibility of parallelizing the algorithm that resulted from applying the ACO method to the traveling salesman problem. It analyses the crucial operations and data synchronization issues, and gives a practical example and demonstration of the parallelized version of the algorithm.
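For reference, the standard ACO transition probability and pheromone update for the traveling salesman problem, on which such applications build, can be written as:

```latex
p_{ij}^{k} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}
                  {\sum_{l \in N_i^{k}} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}},
\qquad
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
```

where $\tau_{ij}$ is the pheromone on edge $(i,j)$, $\eta_{ij} = 1/d_{ij}$ is the heuristic visibility, $N_i^{k}$ is ant $k$'s feasible neighbourhood at city $i$, and $\rho$ is the evaporation rate.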
148

A new hybrid meta-heuristic algorithm for solving single machine scheduling problems

Zlobinsky, Natasha January 2017 (has links)
A dissertation submitted in partial fulfilment of the degree of Master of Science in Engineering (Electrical) (50/50) in the Faculty of Engineering and the Built Environment, Department of Electrical and Information Engineering, May 2017 / Numerous applications in a wide variety of fields have resulted in a rich history of research into optimisation for scheduling. Although it is a fundamental form of the problem, the single machine scheduling problem with two or more objectives is known to be NP-hard. For this reason we consider the single machine problem a good test bed for solution algorithms. While there is a plethora of research into various aspects of scheduling problems, little has been done to evaluate the performance of the Simulated Annealing algorithm for the fundamental problem, or to use it in combination with other techniques. Specifically, this has not been done for minimising total weighted earliness and tardiness, which is the optimisation objective of this work. If we consider a mere ten jobs for scheduling, this results in over 3.6 million possible solution schedules. It is thus of definite practical necessity to reduce the search space in order to find an optimal or acceptable suboptimal solution in a shorter time, especially when scaling up the problem size. This is of particular importance in the application area of packet scheduling in wireless communications networks, where the tolerance for computational delays is very low. The main contribution of this work is to investigate the hypothesis that inserting a step of pre-sampling by Markov Chain Monte Carlo methods before running the Simulated Annealing algorithm on the pruned search space can result in overall reduced running times. The search space is divided into a number of sections and Metropolis-Hastings Markov Chain Monte Carlo is performed over the sections in order to reduce the search space for Simulated Annealing by a factor of 20 to 100. Trade-offs are found between the run time and number of sections of the pre-sampling algorithm, and the run time of Simulated Annealing, for minimising the percentage deviation of the final result from the optimal solution cost. Algorithm performance is determined both by computational complexity and by the quality of the solution (i.e. the percentage deviation from the optimal). We find that the running time can be reduced by a factor of 4.5 while ensuring a 2% deviation from the optimal, as compared to the basic Simulated Annealing algorithm on the full search space. More importantly, we are able to reduce the complexity of finding the optimal from O(n·n!) for a complete search, to O(nNS) for Simulated Annealing, to O(n(NMr + NS) + m), for the input variables n jobs, NS SA iterations, NM Metropolis-Hastings iterations, r inner samples and m sections. / MT 2017
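A minimal Python sketch of the two-stage idea, under illustrative assumptions (sections defined by the choice of first job, a swap neighbourhood, and toy job data); this is not the thesis's implementation:

```python
# MCMC pre-sampling + Simulated Annealing for single machine scheduling,
# minimising total weighted earliness plus tardiness. All data and
# parameters below are illustrative.
import math, random

jobs = [(4, 10, 2, 3), (2, 6, 1, 4), (3, 14, 2, 2), (5, 9, 3, 1)]
# each job: (processing time, due date, earliness weight, tardiness weight)

def cost(seq):
    t, total = 0, 0.0
    for j in seq:
        p, d, we, wt = jobs[j]
        t += p
        total += we * max(d - t, 0) + wt * max(t - d, 0)
    return total

def neighbour(seq):
    s = list(seq)
    i, k = random.sample(range(len(s)), 2)
    s[i], s[k] = s[k], s[i]                    # swap move
    return s

def mh_score(first_job, samples=50, temp=5.0):
    """Score the section of schedules starting with first_job by the mean
    cost along a short Metropolis-Hastings walk inside that section."""
    rest = [j for j in range(len(jobs)) if j != first_job]
    seq, total = [first_job] + rest, 0.0
    for _ in range(samples):
        cand = [first_job] + neighbour(seq[1:])
        if random.random() < math.exp((cost(seq) - cost(cand)) / temp):
            seq = cand                          # MH acceptance
        total += cost(seq)
    return total / samples

# Stage 1: pre-sample each section and keep the most promising one.
best_first = min(range(len(jobs)), key=mh_score)

# Stage 2: plain Simulated Annealing restricted to the chosen section.
seq = [best_first] + [j for j in range(len(jobs)) if j != best_first]
best, temp = seq, 20.0
for _ in range(2000):
    cand = [best_first] + neighbour(seq[1:])
    if random.random() < math.exp(min(0.0, (cost(seq) - cost(cand)) / temp)):
        seq = cand
    if cost(seq) < cost(best):
        best = seq
    temp *= 0.995                               # geometric cooling
print("best order:", best, "cost:", cost(best))
```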
149

Massive parallelism for combinatorial problems by hardware acceleration with an application to the label switching problem

Steere, Edward January 2016 (has links)
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering. / This dissertation proposes an approach to solving hard combinatorial problems in massively parallel architectures using parallel metaheuristics. Combinatorial problems are common in many scientific fields. Scientific progress is constrained by the fact that, even using state of the art algorithms, solving hard combinatorial problems can take days or weeks. This is the case with the Label Switching Problem (LSP) in the field of Bioinformatics. In this field, prior work to solve the LSP has resulted in the program CLUMPP (CLUster Matching and Permutation Program). CLUMPP focuses solely on the use of a sequential, classical heuristic, and has had success on smaller, low complexity problems. By contrast, this dissertation proposes the Parallel Solvers model for the acceleration of hard combinatorial problems. This model draws on the commonalities evident in the algorithms and strategies of metaheuristics. After investigating the effectiveness of the mechanisms of the Parallel Solvers model with regard to the LSP, the author developed DePermute, an algorithm which can be used to solve the LSP significantly faster. Results were generated from time based testing of simulated data, as well as data freely available on the Internet as part of various projects. An investigation into the effectiveness of DePermute was carried out on a CPU (Central Processing Unit) based computer. The time based testing was carried out on a CPU based computer and on a Graphics Processing Unit (GPU) attached to a CPU host computer. The dissertation also proposes the design of a Field Programmable Gate Array (FPGA) based implementation of DePermute. Using Parallel Solvers, in the DePermute algorithm, the time taken for population group sizes K ranging from K = 5 to 20 was improved by up to two orders of magnitude using the GPU implementation, compared with aggressive settings for CLUMPP. The CPU implementation, while slower than the GPU implementation, still outperforms CLUMPP marginally when CLUMPP uses aggressive settings, usually with better quality, and by at least an order of magnitude when CLUMPP is set to use higher quality settings. Combinatorial problems can be very difficult. Parallel Solvers has been effective in the field of Bioinformatics in solving the LSP, and this dissertation proposes that it might assist in the reasoning and design of algorithms in other fields. / MT2017
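For context, the label switching problem itself can be illustrated in a few lines: cluster labels from independent runs are arbitrary permutations of one another, and a pairwise fix is to permute one run's membership columns to best match a reference, for example via the Hungarian method. This hypothetical sketch shows the problem that CLUMPP and DePermute address, not the DePermute algorithm.

```python
# Aligning cluster labels across runs with the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_labels(Q_ref, Q_run):
    """Permute the columns of Q_run (individuals x K membership matrix)
    to best match Q_ref, maximizing column-wise agreement."""
    similarity = Q_ref.T @ Q_run                  # K x K agreement matrix
    _, perm = linear_sum_assignment(-similarity)  # maximize => negate
    return Q_run[:, perm]

Q_ref = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
Q_run = Q_ref[:, ::-1]                 # same clustering, labels swapped
print(align_labels(Q_ref, Q_run))      # recovers Q_ref
```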
150

Métodos heurísticos para um problema de planejamento da produção em uma indústria química / Heuristic methods for a problem of production planning in a chemical industry

Cunha, Artur Lovato da 09 August 2013 (has links)
In this dissertation, a lot-sizing problem in a Brazilian chemical industry was studied, with the goal of determining product lot sizes that satisfy demand while minimizing production costs. Items can be produced on distinct parallel machines through different processes, and must then be stored either in captive tanks, dedicated to a single product, or in multipurpose tanks, shared among products but not simultaneously. Two mixed-integer programming models were proposed to represent the problem: the first with an objective function comprising the price of raw materials consumed in the reactions, product storage expenses, and the cost of discarding products when the storage tanks lack sufficient capacity; the second extending this model to consider machine setup costs. Computational experiments with the proposed models, using instances generated from data provided by the company, showed that the optimization software employed was able to solve only a few instances after one hour of processing. Constructive heuristics of the LP-and-fix and relax-and-fix type were therefore proposed, along with improvement heuristics of the fix-and-optimize type. Tests with these heuristics showed that some of them yielded feasible solutions of good quality compared with those obtained by the software, while also solving a larger number of instances.
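As an illustration of the relax-and-fix idea mentioned above, the sketch below solves a generic single-item lot-sizing model with PuLP, keeping binary setup variables only inside a sliding window of periods and fixing them as the window advances. The model and data are illustrative and omit the tanks and parallel machines of the thesis's problem.

```python
# Relax-and-fix sketch for a toy lot-sizing model (requires PuLP).
import pulp

T = 6                                   # planning periods
demand = [20, 30, 0, 40, 10, 25]
cap, setup_cost, hold_cost = 60, 100.0, 1.0

def build_model(relaxed_periods):
    m = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{t}", lowBound=0) for t in range(T)]  # production
    s = [pulp.LpVariable(f"s{t}", lowBound=0) for t in range(T)]  # inventory
    y = [pulp.LpVariable(f"y{t}", lowBound=0, upBound=1,
                         cat="Continuous" if t in relaxed_periods else "Binary")
         for t in range(T)]                                       # setup
    m += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
    for t in range(T):
        prev = s[t - 1] if t > 0 else 0
        m += prev + x[t] == demand[t] + s[t]     # flow balance
        m += x[t] <= cap * y[t]                  # setup forcing
    return m, y

# Relax-and-fix: binaries only inside a sliding window; fix each window's
# setup decisions before moving to the next.
window, fixed = 2, {}
for start in range(0, T, window):
    relaxed = set(range(start + window, T))
    m, y = build_model(relaxed)
    for t, v in fixed.items():
        m += y[t] == v                           # keep earlier decisions
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    for t in range(start, min(start + window, T)):
        fixed[t] = round(y[t].value())
print("setup pattern:", [fixed[t] for t in range(T)])
```

Each subproblem contains far fewer binary variables than the full model, which is why such heuristics can find good feasible solutions on instances the monolithic model cannot solve within the time limit.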
