1.
A Multi-Parent Crossover for Combinatorial Optimization Problems. Su, Chien-hao, 31 August 2006.
Optimization problems are divided into numerical optimization problems and combinatorial optimization problems. Genetic algorithms (GAs) are widely applied to solve optimization problems. GAs with multi-parent crossover are often used to solve numerical optimization problems; however, no effective multi-parent crossover exists for combinatorial optimization problems. Partially mapped crossover (PMX) is the most popular crossover for combinatorial optimization problems. In this thesis, we propose multi-parent partially mapped crossover (MPPMX). Extensive experimental results show that the improvement ratio of MPPMX over PMX reaches 38.63%. The p-values of t-tests on the difference between MPPMX and PMX range from 10^-6 to 10^-14, which indicates the significant improvement of MPPMX over PMX.
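For readers unfamiliar with PMX, the following is a minimal sketch of the classic two-parent operator on permutation encodings. The abstract gives no detail on how MPPMX generalizes it to multiple parents, so only the two-parent baseline is shown; the cut points and the example permutations are illustrative.

import random

def pmx(parent1, parent2, cut1=None, cut2=None):
    # Classic two-parent PMX; the thesis's MPPMX generalizes this to
    # multiple parents (details are not given in the abstract).
    n = len(parent1)
    if cut1 is None:
        cut1, cut2 = sorted(random.sample(range(n + 1), 2))
    child = [None] * n
    child[cut1:cut2] = parent1[cut1:cut2]          # inherit parent1's segment
    # Place parent2's segment genes that are missing from the child, following
    # the position mapping defined by the two segments.
    for i in range(cut1, cut2):
        gene = parent2[i]
        if gene in child[cut1:cut2]:
            continue
        pos = i
        while cut1 <= pos < cut2:                  # chase the mapping out of the segment
            pos = parent2.index(parent1[pos])
        child[pos] = gene
    # Fill the remaining positions directly from parent2.
    for i in range(n):
        if child[i] is None:
            child[i] = parent2[i]
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [9, 3, 7, 8, 2, 6, 5, 1, 4]
print(pmx(p1, p2, 3, 7))                           # -> [9, 3, 2, 4, 5, 6, 7, 1, 8]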
2.
Solving Hard Combinatorial Optimization Problems using Cooperative Parallel Metaheuristics. Munera Ramirez, Danny, 27 September 2016.
Combinatorial Optimization Problems (COP) are widely used to model and solve real-life problems in many different application domains.
These problems represent a real challenge for the research community due to their inherent difficulty, as many of them are NP-hard. COPs are difficult to solve with exact methods because the search space grows exponentially with the size of the problem. Metaheuristics are often the most efficient methods to make the hardest problems tractable. However, some hard and large real-life problems are still out of the reach of even the best metaheuristic algorithms. Parallelism is a straightforward way to improve metaheuristic performance: the basic idea is to perform concurrent explorations of the search space in order to speed up the search process. The most advanced techniques implement a communication mechanism to exchange information between metaheuristic instances, which increases the probability of finding a solution. However, designing an efficient cooperative parallel method is a complex task, and many issues about communication must be resolved. Furthermore, it is known that no single cooperative configuration can efficiently tackle all problems. This is why current cooperative solutions are either efficient but dedicated to a specific problem, or more general but with limited performance in practice. In this thesis we propose a general framework for Cooperative Parallel Metaheuristics (CPMH). This framework exposes several parameters to control the cooperation. CPMH organizes the explorers into teams; each team aims at intensifying the search in a particular region of the search space and uses intra-team communication, while inter-team communication ensures search diversification. CPMH allows the user to tune the trade-off between intensification and diversification. Moreover, the framework supports different metaheuristics as well as metaheuristic hybridization. We also provide X10CPMH, an implementation of our CPMH framework developed in the X10 parallel language. To assess the soundness of our approach we tackle two hard real-life COPs: hard variants of the Stable Matching Problem (SMP) and the Quadratic Assignment Problem (QAP). For both problems we propose new sequential and parallel metaheuristics, including a new Extremal Optimization-based method and a new hybrid cooperative parallel algorithm for QAP, all implemented with X10CPMH. A complete experimental evaluation shows that the cooperative parallel versions of our methods scale very well and provide high-quality solutions within a limited timeout. On hard and large variants of SMP, our cooperative parallel method reaches super-linear speedups. On QAP, the cooperative parallel hybrid algorithm performs very well on the hardest instances and improves the best known solutions of several instances.
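As a rough illustration of the team structure described above, here is a schematic, sequential Python sketch of teams of explorers with intra-team intensification and periodic inter-team restarts for diversification. The names (Team, local_move), the toy sphere objective, and the restart policy are illustrative assumptions; the actual X10CPMH framework is parallel and distributed, and its API is not detailed in the abstract.

import random

def random_solution(dim):
    return [random.uniform(-5, 5) for _ in range(dim)]

def cost(sol):                        # toy objective (sphere function)
    return sum(x * x for x in sol)

def local_move(sol, step=0.1):        # one explorer step of a generic metaheuristic
    neighbor = sol[:]
    i = random.randrange(len(neighbor))
    neighbor[i] += random.uniform(-step, step)
    return neighbor

class Team:
    # A team of explorers intensifying around a shared elite solution.
    def __init__(self, n_explorers, dim):
        self.explorers = [random_solution(dim) for _ in range(n_explorers)]
        self.best = min(self.explorers, key=cost)

    def step(self):
        for k, sol in enumerate(self.explorers):
            candidate = local_move(sol)
            if cost(candidate) < cost(sol):
                self.explorers[k] = candidate
        # Intra-team communication: share the elite within the team.
        self.best = min(self.explorers + [self.best], key=cost)
        self.explorers[0] = self.best[:]

def search(n_teams=4, n_explorers=3, dim=5, rounds=500, period=50):
    teams = [Team(n_explorers, dim) for _ in range(n_teams)]
    for r in range(1, rounds + 1):
        for team in teams:
            team.step()
        # Inter-team communication: periodically restart the worst team
        # to keep the overall search diversified.
        if r % period == 0:
            worst = max(range(n_teams), key=lambda i: cost(teams[i].best))
            teams[worst] = Team(n_explorers, dim)
    return min((team.best for team in teams), key=cost)

print(cost(search()))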
3.
The Quantum Approximate Optimization Algorithm and its Applications. Bashore, Erik, January 2023.
This project aims to demonstrate the possibilities and applications of the quantum approximate optimization algorithm (QAOA). The paper first discusses the theoretical background and fundamentals of the algorithm by examining the relevant nomenclature. It then considers a set of candidate application problems and discusses why this specific algorithm is of interest for each of them. In the fourth section these problems are tested concretely via simulations of the QAOA, and lastly the outcomes are analyzed.
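To make the algorithm concrete, here is a minimal self-contained statevector simulation of depth-1 QAOA for MaxCut, with a coarse grid search over the two circuit angles. The 4-node ring graph and the grid-search strategy are illustrative assumptions, not taken from the thesis.

import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # 4-node ring (max cut = 4)
n = 4
dim = 2 ** n

# Cut value of every bitstring x: number of edges whose endpoints differ.
cut = np.zeros(dim)
for x in range(dim):
    bits = [(x >> q) & 1 for q in range(n)]
    cut[x] = sum(bits[i] != bits[j] for i, j in edges)

def apply_mixer(psi, beta):
    # Apply e^{-i beta X} to every qubit of the statevector.
    for q in range(n):
        psi = psi.reshape(-1, 2, 2 ** q)          # isolate qubit q's axis
        a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
        psi[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
        psi[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
    return psi.reshape(dim)

def qaoa_expectation(gamma, beta):
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+>^n
    psi = psi * np.exp(-1j * gamma * cut)                 # cost layer
    psi = apply_mixer(psi, beta)                          # mixer layer
    return float(np.real(np.sum(np.abs(psi) ** 2 * cut)))

# Coarse grid search over the two angles of the p = 1 circuit.
best = max(product(np.linspace(0, np.pi, 40), repeat=2),
           key=lambda gb: qaoa_expectation(*gb))
# About 3.0 here: p = 1 QAOA beats random guessing (2) but not the optimum (4).
print("best <C> =", qaoa_expectation(*best))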
4.
Partial preference models in discrete multi-objective optimization. Kaddani, Sami, 10 March 2017.
Multi-objective optimization problems often lead to large nondominated sets as the size of the problem or the number of objectives increases. Generating the whole nondominated set requires significant computation time, and most of the corresponding solutions are irrelevant to the decision maker. Another approach consists in eliciting preference information, which reduces the computation time and produces one or a very limited number of solutions. However, this usually requires the elicitation of precise preference parameters, which is often difficult for a decision maker, partly arbitrary, and may discard solutions of interest. An intermediate approach consists in reasoning with preference relations built from partial preference information. In this thesis, we present several partial preference models and focus in particular on generating the set of nondominated points with respect to these relations. Experiments show that this approach performs competitively in both computation time and the quality of the generated preferred sets.
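As a small illustration of one such model, the sketch below filters points under a weighted-sum preference relation whose weights are only known to lie in an interval, checked at the extreme weight vectors. This particular model and the data are assumptions for illustration; the thesis studies several partial preference models, not necessarily this one.

import numpy as np

def pref_dominates(x, y, extreme_weights):
    # x dominates y if it scores at least as well for every admissible weight
    # vector and strictly better for at least one (minimization). By linearity
    # it suffices to check the extreme points of the weight polytope.
    sx, sy = extreme_weights @ x, extreme_weights @ y
    return np.all(sx <= sy) and np.any(sx < sy)

def preferred_points(points, extreme_weights):
    # Keep the points not dominated under the partial preference relation.
    keep = []
    for i, x in enumerate(points):
        if not any(pref_dominates(y, x, extreme_weights)
                   for j, y in enumerate(points) if j != i):
            keep.append(x)
    return keep

pts = np.array([[1.0, 9.0], [4.0, 5.0], [5.0, 4.0], [9.0, 1.0], [0.0, 20.0]])
# Weights restricted to w1 in [0.4, 0.6] (w2 = 1 - w1): two extreme vectors.
W = np.array([[0.4, 0.6], [0.6, 0.4]])
# [0, 20] is Pareto-optimal but pruned by the weight restriction.
print(preferred_points(pts, W))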
5.
Ant Colony Optimization Technique to Solve Min-Max MultiDepot Vehicle Routing Problem. Venkata Narasimha, Koushik Srinath, January 2011.
No description available.
6.
Solving single and multi-objective combinatorial optimization problems by ordered enumeration. Belhoul, Lyes, 9 December 2014.
Our aim in this thesis is to propose efficient algorithms for solving difficult combinatorial optimization problems. Our algorithms are based on a generic method of ordered enumeration, which generates, in a suitable order, the solutions of a relaxed problem associated with the hard main problem until an optimality proof for a feasible solution is obtained. We first construct a generic procedure in the general context of combinatorial optimization problems. We then discuss applications of our algorithm to difficult problems that admit the assignment problem as a relaxation. The first special case we study is the search for a good compromise solution to the multiobjective assignment problem. The second application is the asymmetric travelling salesman problem, whose difficulty lies in the subtour-elimination constraints it adds to the constraints of the assignment problem.
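The principle is easy to see on a tiny asymmetric TSP: enumerate the assignment-relaxation solutions (permutations) by nondecreasing cost; the first one that forms a single tour, with no subtours, is an optimal ATSP tour. The brute-force sketch below sorts all permutations, which is only viable at toy size; a real implementation, as the abstract implies, generates relaxation solutions lazily in cost order (for example via k-best assignments). The random instance is an illustrative assumption.

import itertools
import numpy as np

def is_single_tour(perm):
    # True if the successor function perm visits all cities in one cycle.
    seen, city = set(), 0
    while city not in seen:
        seen.add(city)
        city = perm[city]
    return len(seen) == len(perm)

def atsp_by_ordered_enumeration(cost):
    n = len(cost)
    # Assignment cost of a permutation seen as a successor function.
    def key(p):
        return sum(cost[i][p[i]] for i in range(n))
    for perm in sorted(itertools.permutations(range(n)), key=key):
        if is_single_tour(perm):       # feasibility here is the optimality proof
            return perm, key(perm)

rng = np.random.default_rng(0)
cost = rng.integers(1, 20, size=(6, 6))
np.fill_diagonal(cost, 10**6)          # forbid self-loops
tour, length = atsp_by_ordered_enumeration(cost)
print(tour, length)                    # tour as a successor function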
7.
Scalable Parallel Machine Learning on High Performance Computing Systems – Clustering and Reinforcement Learning. Weijian Zheng, 8 December 2022.
High-performance computing (HPC) and machine learning (ML) have been widely adopted by both academia and industry to address enormous data problems at extreme scales. While research has reported on the interactions of HPC and ML, achieving high performance and scalability for parallel and distributed ML algorithms remains a challenging task. This dissertation first summarizes the major challenges in applying HPC to ML applications: 1) poor performance and scalability, 2) loss of the convergence rate, 3) lower quality of the trained model, and 4) a lack of performance-optimization techniques designed for specific applications. Researchers must address these four challenges when bringing HPC to new ML applications. This dissertation shows how to solve them for two specific applications: 1) a clustering algorithm and 2) graph optimization algorithms that use reinforcement learning (RL).
For the clustering algorithm, we first propose the simulated-annealing enhanced clustering algorithm. By combining a blocked data layout with asynchronous local optimization within each thread, it achieves a convergence rate comparable to the K-means algorithm but with much higher performance. Experiments with synthetic and real-world datasets show that it is significantly faster than the MPI K-means library on up to 1,024 cores. However, its optimization costs (Sum of Squared Errors, SSE) are higher than the original costs. To tackle this problem, we devise the full-step feel-the-way clustering algorithm, which performs L local steps within each block of data points and uses the first local step's results to compute accurate global optimization costs. Our results show that the full-step algorithm significantly reduces the number of global iterations needed to converge while obtaining low SSE costs. However, the time spent on the local steps outweighs the benefit of the saved iterations. To improve this, we optimize the local-step time with a sampling-based method called reassignment-history-aware sampling. Extensive experiments with various synthetic and real-world datasets (e.g., MNIST, CIFAR-10, ENRON, and PLACES-2) show that our parallel algorithms can outperform the fastest open-source MPI K-means implementation by up to 110% on 4,096 CPU cores with comparable SSE costs.
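The following is a schematic, single-threaded sketch of the blocked data layout with a few local steps per block, loosely following the description above. The centroid-blending rule, block size, and synthetic dataset are assumptions for illustration; the real algorithm's parallelism, simulated annealing, SSE bookkeeping, and reassignment-history-aware sampling are omitted.

import numpy as np

def blocked_kmeans(X, k, block_size=256, local_steps=2, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(epochs):
        for start in range(0, len(X), block_size):
            block = X[start:start + block_size]
            for _ in range(local_steps):          # local optimization on one block
                d = ((block[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)
                for j in range(k):                # nudge centers toward this
                    pts = block[labels == j]      # block's assigned points
                    if len(pts):
                        centers[j] = 0.5 * centers[j] + 0.5 * pts.mean(0)
    # Final global assignment and SSE.
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    sse = d[np.arange(len(X)), labels].sum()
    return centers, labels, sse

X = np.random.default_rng(1).normal(size=(2000, 2))
X[:1000] += 4                                     # two well-separated clusters
centers, labels, sse = blocked_kmeans(X, k=2)
print(sse)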
Our evaluations of the sampling-based feel-the-way algorithm establish the effectiveness of the local optimization strategy, the blocked data layout, and the sampling methods for addressing the challenges of applying HPC to ML applications. To explore more parallel strategies and optimization techniques, we turn to a more complex application: graph optimization problems solved with reinforcement learning (RL). RL has proved successful at automatically learning good heuristics for graph optimization problems. However, existing RL systems either do not support graph RL environments or do not support multiple or many GPUs in a distributed setting, which has limited RL's ability to solve large-scale graph optimization problems. To address the challenges of parallelization and scalability, we develop OpenGraphGym-MG, a high-performance distributed-GPU RL framework for solving graph optimization problems. OpenGraphGym-MG focuses on a class of computationally demanding RL problems in which both the RL environment and the policy model are highly computation-intensive. We distribute large-scale graphs across distributed GPUs and use spatial parallelism and data parallelism to achieve scalable performance; we compare and analyze the performance of spatial and data parallelism and highlight their differences. To support graph neural network (GNN) layers that take data samples partitioned across distributed GPUs as input, we design new parallel mathematical kernels that operate on distributed 3D sparse and 3D dense tensors. To handle costly RL environments, we design new parallel graph environments that scale up all RL-environment-related operations. By combining the scalable GNN layers with the scalable RL environment, we develop high-performance OpenGraphGym-MG training and inference algorithms in parallel.
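For intuition about what a graph-optimization RL environment looks like, here is a minimal single-process sketch for minimum vertex cover with a random rollout policy. The class and method names are illustrative assumptions, not the OpenGraphGym-MG API, which the abstract does not specify; a learned GNN policy and the distributed-GPU machinery would replace the random choice.

import random

class VertexCoverEnv:
    # Episode: repeatedly pick a vertex; reward -1 per pick; the episode ends
    # when every edge has a chosen endpoint, so the return equals -|cover|.
    def __init__(self, edges, n):
        self.edges, self.n = edges, n
        self.reset()

    def reset(self):
        self.chosen = set()
        return self._state()

    def _state(self):
        # One flag per vertex: 1 if already in the cover.
        return [1 if v in self.chosen else 0 for v in range(self.n)]

    def uncovered(self):
        return [(u, v) for u, v in self.edges
                if u not in self.chosen and v not in self.chosen]

    def step(self, action):
        self.chosen.add(action)
        done = not self.uncovered()
        return self._state(), -1.0, done

env = VertexCoverEnv([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], n=4)
state, done, ret = env.reset(), False, 0.0
while not done:
    # Trivial policy: pick a random endpoint of some uncovered edge.
    action = random.choice(env.uncovered()[0])
    state, reward, done = env.step(action)
    ret += reward
print("cover size:", int(-ret))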
To summarize, after identifying the major challenges in applying HPC to ML applications, this thesis explores several parallel strategies and performance-optimization techniques through two ML applications. Specifically, we propose a local optimization strategy, a blocked data layout, and sampling methods to accelerate the clustering algorithm, and we create a spatial parallelism strategy; parallel graph environments, agents, and policy models; an optimized replay buffer; and a multi-node selection strategy for solving large optimization problems over graphs. Our evaluations prove the effectiveness of these strategies and demonstrate that our accelerations can significantly outperform state-of-the-art ML libraries and frameworks without loss of quality in the trained models.