41

The traveling salesman problem and its applications

Hui, Ming-Ki. January 2002 (has links)
Thesis (M.Phil.)--University of Hong Kong, 2003. / Includes bibliographical references (leaves 49-50). Also available in print.
42

A knapsack-type cryptographic system using algebraic number rings

Moyer, Nathan Thomas. January 2010 (has links) (PDF)
Thesis (Ph. D.)--Washington State University, May 2010. / Title from PDF title page (viewed on May 21, 2010). "Department of Mathematics." Includes bibliographical references (p. 83-86).
43

Combinatorial optimization in VLSI physical design

Walsh, Peter Anthony 05 July 2018 (has links)
Simulated annealing is a general-purpose combinatorial optimization technique which has been applied to many problems in VLSI design. In essence, simulated annealing is Monte Carlo iterative improvement with the ability to conditionally accept uphill moves. The notion of a cooling schedule is common to all simulated annealing implementations. A cooling schedule can be thought of as simulated annealing's control mechanism. Experimental work has been done on estimating the cost of an optimal solution to some combinatorial optimization problem instances. Such an estimate can be used to determine termination criteria for general-purpose optimization techniques such as iterative improvement or simulated annealing. We have extended this idea and designed a complete simulated annealing general cooling schedule based on the cost of an optimal solution to a problem instance. We call the resultant schedule an extended goal-directed general cooling schedule. One of the major problems with simulated annealing is its long computation times. This problem can be addressed by first using a fast heuristic to find a good initial configuration and then applying simulated annealing; this approach is called simulated sintering. To exploit the potential of simulated sintering, one needs an appropriate general cooling schedule. The extended goal-directed cooling schedule is equally applicable to simulated annealing and simulated sintering. To date, no one cooling schedule has proven suitable for all optimization problem instances. In our view, no such cooling schedule exists. Consequently, we have attempted to identify the type of problem best suited to optimization by simulated annealing and simulated sintering using the extended goal-directed schedule. We have applied the extended goal-directed schedule to standard-cell placement and floorplanning problems using both simulated annealing and simulated sintering. Within this context, we have compared the performance of the extended goal-directed schedule to other published schedules. Our results indicate that, in terms of layout quality, the extended goal-directed schedule performs as well as or better than the other schedules. In this dissertation, we have developed a new general cooling schedule. Our evaluation of the extended goal-directed schedule suggests that it is a useful research contribution in the area of simulated annealing algorithms. / Graduate
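
As background for the abstract above, a minimal sketch of plain simulated annealing with a simple geometric cooling schedule follows. It illustrates only the generic mechanism the dissertation builds on (Monte Carlo iterative improvement with conditionally accepted uphill moves); the extended goal-directed schedule itself is not reproduced here, and all names and parameter values are hypothetical.

import math
import random

def simulated_annealing(initial, neighbor, cost, t0=100.0, alpha=0.95, steps=10000):
    # Generic simulated annealing with a geometric cooling schedule.
    # neighbor(x) returns a random neighbouring configuration of x;
    # cost(x) is the objective to be minimised.
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Downhill moves are always accepted; uphill moves are accepted
        # with probability exp(-delta / t), which shrinks as t cools.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha  # geometric cooling step
    return best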
44

Contribution à la résolution des problèmes combinatoires : optimisation séquentielle et parallèle / Contribution for solving combinatorial problems : sequential and parallel optimization

Saleh, Sagvan Ali 15 June 2015 (has links)
Combinatorial optimization problems are of high interest both for the scientific world and for the industrial world. The research community has reduced many practical situations to combinatorial optimization problems, among them problems belonging to the knapsack family. This thesis considers a particular member of that family, the disjunctively constrained knapsack problem. Because of the difficulty of this problem, we search for approximate solution techniques with fast solution times for its large-scale instances. A promising way to solve the disjunctively constrained knapsack problem is to use techniques based upon the principle of neighborhood search. Although such techniques yield approximate solution methods, they allow us to present fast algorithms that produce interesting solutions within a short average running time. In order to tackle large-scale instances of the disjunctively constrained knapsack problem, we present sequential and parallel algorithms based upon neighborhood search techniques.
The first algorithm can be viewed as a random neighborhood search method. It uses a combination of neighborhood search techniques to randomly explore a series of sub-solution spaces, where each subspace is characterized by the neighborhood of a local optimum. The second algorithm is an adaptive neighborhood search that guides the search process in the feasible solution space towards high-quality solutions; it uses an ant colony optimization system to simulate the guided search and a descent procedure to exploit the solutions produced during the search. The third and last algorithm is a parallel random neighborhood search method which exploits parallelism to explore different sub-solution spaces simultaneously on several processors, each processor adopting its own random strategy to build its neighborhoods from the internal information it has gathered.
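
Purely as an illustration of the kind of neighborhood search the abstract refers to, the sketch below builds random feasible solutions for a disjunctively constrained knapsack instance and then tries simple swap moves. It is an assumption-laden toy, not any of the three algorithms developed in the thesis, and all names are hypothetical.

import random

def random_neighborhood_search(weights, profits, conflicts, capacity, restarts=50):
    # Toy random neighborhood search for the disjunctively constrained knapsack:
    # items i and j may not both be packed whenever (i, j) appears in conflicts.
    n = len(weights)
    conflict = {i: set() for i in range(n)}
    for i, j in conflicts:
        conflict[i].add(j)
        conflict[j].add(i)

    def fits(sol, i):
        # item i is compatible with the partial solution sol
        return (sum(weights[k] for k in sol) + weights[i] <= capacity
                and not (conflict[i] & sol))

    best, best_profit = set(), 0
    for _ in range(restarts):
        # random construction of a feasible solution
        sol = set()
        order = list(range(n))
        random.shuffle(order)
        for i in order:
            if fits(sol, i):
                sol.add(i)
        # neighborhood exploration: swap one packed item for a more profitable one
        improved = True
        while improved:
            improved = False
            for i in list(sol):
                for j in range(n):
                    if j not in sol and profits[j] > profits[i] and fits(sol - {i}, j):
                        sol = (sol - {i}) | {j}
                        improved = True
                        break
                if improved:
                    break
        profit = sum(profits[k] for k in sol)
        if profit > best_profit:
            best, best_profit = set(sol), profit
    return best, best_profit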
45

Using genetic algorithms to solve combinatorial optimization problems

Cui, Xinwei 19 April 1991 (has links)
Genetic algorithms are stochastic search techniques based on the mechanics of natural selection and natural genetics. Genetic algorithms differ from traditional analytical methods by using genetic operators and historic cumulative information to prune the search space and generate plausible solutions. Recent research has shown that genetic algorithms have a large and growing range of applications. The research presented in this thesis uses genetic algorithms to solve some typical combinatorial optimization problems, namely the Clique, Vertex Cover and Max Cut problems, all of which are NP-complete. The empirical results show that genetic algorithms can provide efficient search heuristics for solving these combinatorial optimization problems. Genetic algorithms are inherently parallel, and the Connection Machine system makes parallel implementation of these algorithms possible. Both sequential and parallel genetic algorithms for the Clique, Vertex Cover and Max Cut problems have been developed and implemented on the SUN4 and the Connection Machine systems, respectively.
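
To make the genetic-operator vocabulary concrete, here is a minimal genetic algorithm for one of the problems mentioned, Max Cut. It is only a sketch under obvious simplifications (single-point crossover, bit-flip mutation, truncation selection) and is not the thesis implementation; all names are hypothetical.

import random

def ga_max_cut(edges, n_nodes, pop_size=60, generations=200, p_mut=0.02):
    # A chromosome is a bit string assigning each node to one side of the cut;
    # fitness counts the edges crossing the cut.
    def fitness(chrom):
        return sum(1 for u, v in edges if chrom[u] != chrom[v])

    def crossover(a, b):
        point = random.randrange(1, n_nodes)  # single-point crossover
        return a[:point] + b[point:]

    def mutate(chrom):
        return [1 - g if random.random() < p_mut else g for g in chrom]

    population = [[random.randint(0, 1) for _ in range(n_nodes)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)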
46

Contributions to the solution of the crew scheduling problem

Paek, Gwan-Ho January 1992 (has links)
No description available.
47

The Problem of Tuning Metaheuristics as seen from a Machine Learning Perspective

Birattari, Mauro 20 December 2004 (has links)
A metaheuristic is a generic algorithmic template that, once properly instantiated, can be used for finding high-quality solutions to combinatorial optimization problems. For obtaining a fully functioning algorithm, a metaheuristic needs to be configured: typically some modules need to be instantiated and some parameters need to be tuned. For the sake of precision, we use the expression parametric tuning to refer to the tuning of numerical parameters, either continuous or discrete but in any case ordinal. On the other hand, we use the expression structural tuning to refer to the problem of defining which modules should be included and, in general, to the problem of tuning parameters that are either Boolean or categorical. Finally, with tuning we refer to the composite structural and parametric tuning.
Tuning metaheuristics is a very sensitive issue both in practical applications and in academic studies. Nevertheless, a precise definition of the tuning problem is missing in the literature. In this thesis, we argue that the problem of tuning a metaheuristic can be profitably described and solved as a machine learning problem.
Indeed, looking at the problem of tuning metaheuristics from a machine learning perspective, we are in a position to give a formal statement of the tuning problem and to propose an algorithm, called F-Race, for tackling the problem itself. Moreover, from this standpoint we are able to highlight and discuss some catches and faults in the current research methodology in the metaheuristics field, and to propose some guidelines.
The thesis contains experimental results on the use of F-Race and some examples of practical applications. Among others, we present a feasibility study carried out by the German-based software company SAP, which concerned the possible use of F-Race for tuning a commercial computer program for vehicle routing and scheduling problems. Moreover, we discuss the successful use of F-Race for tuning the best performing algorithm submitted to the International Timetabling Competition organized in 2003 by the Metaheuristics Network and sponsored by PATAT, the international series of conferences on the Practice and Theory of Automated Timetabling.
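
As a rough illustration of the racing idea behind F-Race (and only that: the real algorithm uses the Friedman test and pairwise post-tests rather than the crude mean-plus-margin rule below), a simplified race over candidate configurations might look as follows. All names and the margin parameter are hypothetical.

import statistics

def simple_race(candidates, instances, evaluate, min_instances=5, margin=0.05):
    # candidates: hashable configurations (e.g. tuples of parameter values)
    # evaluate(c, inst): cost of running configuration c on instance inst (lower is better)
    alive = list(candidates)
    results = {c: [] for c in alive}
    for k, inst in enumerate(instances, start=1):
        for c in alive:
            results[c].append(evaluate(c, inst))
        # after enough instances, drop configurations that are clearly worse
        if k >= min_instances and len(alive) > 1:
            means = {c: statistics.fmean(results[c]) for c in alive}
            best = min(means.values())
            alive = [c for c in alive if means[c] <= best * (1 + margin)]
    return min(alive, key=lambda c: statistics.fmean(results[c]))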
48

O problema de minimização de pilhas abertas - novas contribuições / The minimization of open stacks problem - new contributions

Fink, Claudia 19 October 2012 (has links)
The minimization of open stacks problem (MOSP) is a well-known NP-hard combinatorial optimization problem that has been extensively discussed in the specialized literature. This study presents new contributions in terms of models and solution methods for the problem. The first part of this thesis deals with mathematical models: the existing mixed-integer models are analyzed, and variants of a well-known model are proposed with the goal of reducing the time needed by commercial packages to obtain proved-optimal solutions. The results of computational tests on a widely used set of instances indicate that the proposed modifications reduce the time needed to obtain optimal solutions for some classes of instances. Nevertheless, mixed-integer programming models still have difficulty proving optimality because of the low quality of their linear relaxation bounds.
Another contribution of this thesis is a single model able to deal with both the MOSP and the minimization of order spread problem (MORP). This unified framework allows the two problems to be solved jointly, using a weighted objective function that combines both original objectives through user-defined weights. The second part of the thesis deals with the development of heuristic strategies, of which two are proposed. The first method is a heuristic conversion between MOSP and traveling salesman problem (TSP) instances: distances in the TSP instance are assigned according to the degrees of the vertices of the associated MOSP graph. Computational tests show that the proposed methodology is efficient both in terms of solution quality (optimal solutions were obtained for 80.42% of the tested instances) and computational effort, being competitive with some of the best heuristics in the literature. The second method uses a decomposition idea: a cut in the graph associated with the original MOSP instance yields two smaller problems, which are solved, and the global solution is obtained by joining the subproblem solutions; in some cases the combined solution can be proven optimal. Computational tests show the validity of the proposal and point to new research opportunities.
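
For readers unfamiliar with the MOSP objective, a small sketch of how a candidate pattern sequence can be evaluated is given below. It is an illustration only, not code from the thesis, and the instance encoding and names are assumptions.

def max_open_stacks(pattern_piece, order):
    # pattern_piece[p][i] is 1 when pattern p requires piece i.
    # order is a permutation of pattern indices giving the processing sequence.
    # A stack for piece i stays open from the first pattern containing i
    # until the last one; the MOSP objective is the peak number of open stacks.
    n_pieces = len(pattern_piece[0])
    first = [None] * n_pieces
    last = [None] * n_pieces
    for pos, p in enumerate(order):
        for i in range(n_pieces):
            if pattern_piece[p][i]:
                if first[i] is None:
                    first[i] = pos
                last[i] = pos
    peak = 0
    for pos in range(len(order)):
        open_now = sum(1 for i in range(n_pieces)
                       if first[i] is not None and first[i] <= pos <= last[i])
        peak = max(peak, open_now)
    return peak

# Example: max_open_stacks([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [0, 1, 2]) == 3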
49

Triangulation by Continuous Embedding

Meila, Marina, Jordan, Michael I. 01 March 1997 (has links)
When triangulating a belief network we aim to obtain a junction tree of minimum state space. Searching for the optimal triangulation can be cast as a search over all permutations of the network's variables. Our approach is to embed the discrete set of permutations in a convex continuous domain D. By suitably extending the cost function over D and solving the continuous nonlinear optimization task, we hope to obtain a good triangulation with respect to the aforementioned cost. In this paper we introduce an upper bound on the total junction tree weight as the cost function. The appropriateness of this choice is discussed and explored by simulations. Then we present two ways of embedding the new objective function into continuous domains and show that they perform well compared to the best known heuristic.
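
To make the cost being optimized concrete, the following sketch computes a standard clique-weight measure induced by an elimination ordering of the variables. It is related to, but not necessarily identical with, the upper bound used in the paper, and the names are hypothetical.

from itertools import combinations

def triangulation_weight(adjacency, cardinality, order):
    # adjacency: dict mapping each variable to the set of its neighbours
    # cardinality: dict mapping each variable to its number of states
    # order: elimination ordering (a permutation of the variables)
    adj = {v: set(nbrs) for v, nbrs in adjacency.items()}
    total = 0
    for v in order:
        clique = adj[v] | {v}  # eliminating v forms a clique on v and its neighbours
        weight = 1
        for u in clique:
            weight *= cardinality[u]
        total += weight
        for a, b in combinations(adj[v], 2):  # add fill-in edges among the neighbours
            adj[a].add(b)
            adj[b].add(a)
        for u in adj[v]:
            adj[u].discard(v)  # remove the eliminated variable from the graph
        del adj[v]
    return total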
50

Geometric Ramifications of the Lovász Theta Function and Their Interplay with Duality

de Carli Silva, Marcel Kenji January 2013 (has links)
The Lovász theta function and the associated convex sets known as theta bodies are fundamental objects in combinatorial and semidefinite optimization. They are accompanied by a rich duality theory and deep connections to the geometric concept of orthonormal representations of graphs. In this thesis, we investigate several ramifications of the theory underlying these objects, including those arising from the illuminating viewpoint of duality. We study some optimization problems over unit-distance representations of graphs, which are intimately related to the Lovász theta function and orthonormal representations. We also strengthen some known results about dual descriptions of theta bodies and their variants. Our main goal throughout the thesis is to lay some of the foundations for using semidefinite optimization and convex analysis in a way analogous to how polyhedral combinatorics has been using linear optimization to prove min-max theorems. A unit-distance representation of a graph $G$ maps its nodes to some Euclidean space so that adjacent nodes are sent to pairs of points at distance one. The hypersphere number of $G$, denoted by $t(G)$, is the (square of the) minimum radius of a hypersphere that contains a unit-distance representation of $G$. Lovász proved a min-max relation describing $t(G)$ as a function of $\vartheta(\overline{G})$, the theta number of the complement of $G$. This relation provides a dictionary between unit-distance representations in hyperspheres and orthonormal representations, which we exploit in a number of ways: we develop a weighted generalization of $t(G)$, parallel to the weighted version of $\vartheta$; we prove that $t(G)$ is equal to the (square of the) minimum radius of a Euclidean ball that contains a unit-distance representation of $G$; we abstract some properties of $\vartheta$ that yield the famous Sandwich Theorem and use them to define another weighted generalization of $t(G)$, called the ellipsoidal number of $G$, where the unit-distance representation of $G$ is required to be in an ellipsoid of a given shape with minimum volume. We determine an analytic formula for the ellipsoidal number of the complete graph on $n$ nodes whenever there exists a Hadamard matrix of order $n$. We then study several duality aspects of the description of the theta body $\operatorname{TH}(G)$. For a graph $G$, the convex corner $\operatorname{TH}(G)$ is known to be the projection of a certain convex set, denoted by $\widehat{\operatorname{TH}}(G)$, which lies in a much higher-dimensional matrix space. We prove that the vertices of $\widehat{\operatorname{TH}}(G)$ are precisely the symmetric tensors of incidence vectors of stable sets in $G$, thus broadly generalizing previous results about vertices of the elliptope due to Laurent and Poljak from 1995. Along the way, we also identify all the vertices of several variants of $\widehat{\operatorname{TH}}(G)$ and of the elliptope. Next we introduce an axiomatic framework for studying generalized theta bodies, based on the concept of diagonally scaling invariant cones, which allows us to prove in a unified way several characterizations of $\vartheta$ and the variants $\vartheta'$ and $\vartheta^+$, introduced independently by Schrijver, and by McEliece, Rodemich, and Rumsey in the late 1970s, and by Szegedy in 1994. The beautiful duality equation which states that the antiblocker of $\operatorname{TH}(G)$ is $\operatorname{TH}(\overline{G})$ is extended to this setting.
The framework allows us to treat the stable set polytope and its classical polyhedral relaxations as generalized theta bodies, using the completely positive cone and its dual, and it allows us to derive a (weighted generalization of a) copositive formulation for the fractional chromatic number due to Dukanovic and Rendl in 2010 from a completely positive formulation for the stability number due to de Klerk and Pasechnik in 2002. Finally, we study a non-convex constraint for semidefinite programs (SDPs) that may be regarded as analogous to the usual integrality constraint for linear programs. When applied to certain classical SDPs, it specializes to the standard rank-one constraint. More importantly, the non-convex constraint also applies to the dual SDP, and for a certain SDP formulation of $\vartheta$, the modified dual yields precisely the clique covering number. This opens the way to study some exactness properties of SDP relaxations for combinatorial optimization problems akin to the corresponding classical notions from polyhedral combinatorics, as well as approximation algorithms based on SDP relaxations.
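
For reference, one standard semidefinite programming formulation of the Lovász theta number, quoted from the general literature rather than from the thesis itself, is
\[ \vartheta(G) \;=\; \max\bigl\{\, \langle J, X \rangle \;:\; X_{ij} = 0 \ \text{for every edge } ij \in E(G),\ \operatorname{tr}(X) = 1,\ X \succeq 0 \,\bigr\}, \]
where $J$ denotes the all-ones matrix and the maximum is taken over symmetric positive semidefinite matrices $X$.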
