411

Decision processes based on incremental preference elicitation for multicriteria optimization, collective decision making and decision making under uncertainty

Benabbou, Nawal, 05 May 2017
This thesis falls within the area of algorithmic decision theory, a research field at the junction of decision theory, operations research and artificial intelligence. Our aim is to produce algorithms allowing the fast resolution of decision problems in complex environments (multiple criteria, multiple agents, uncertainty). The work focuses on preference elicitation based on decision-theoretic models on the one hand, and on exploiting these preferences to determine the best solutions over sets of alternatives defined explicitly or implicitly (combinatorial optimization) on the other. For combinatorial optimization problems, we propose and study a new approach consisting in interleaving incremental preference elicitation and preference-based search. The underlying idea is to use the exploration of potential solutions to identify informative preference queries, while exploiting the answers to better focus the search on the preferred solutions. This approach leads us to propose incremental-elicitation procedures for multi-objective state-space search problems, multicriteria shortest-path and minimum-spanning-tree problems, multi-agent knapsack problems, and sequential decision problems under uncertainty. We provide theoretical guarantees on the correctness of the proposed algorithms and present numerical tests showing their practical efficiency.
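As a toy illustration of the interleaving described above, consider explicitly listed alternatives evaluated on two criteria and aggregated by a linear scalarization value(x, w) = w*x1 + (1 - w)*x2 with an unknown weight w. The sketch below (hypothetical data and a simple pairwise-comparison query strategy, not the thesis's actual procedures) asks a comparison query only when the answer can shrink the uncertainty interval over w, then recommends the minimax-regret alternative:

def value(x, w):
    return w * x[0] + (1 - w) * x[1]

def max_regret(x, alts, lo, hi):
    # value is linear in w, so pairwise regret peaks at an endpoint of [lo, hi]
    return max(value(y, w) - value(x, w) for y in alts for w in (lo, hi))

def crossing(x, y):
    # weight at which the decision maker is indifferent between x and y
    d = (x[0] - y[0]) - (x[1] - y[1])
    return None if d == 0 else ((y[1] - x[1]) / d, d)

def elicit(alts, true_w, lo=0.0, hi=1.0):
    queries = 0
    while True:
        pair = next(((x, y) for i, x in enumerate(alts) for y in alts[i + 1:]
                     if crossing(x, y) and lo < crossing(x, y)[0] < hi), None)
        if pair is None:                      # preference order is fixed on [lo, hi]
            best = min(alts, key=lambda x: max_regret(x, alts, lo, hi))
            return best, queries
        x, y = pair
        w_star, d = crossing(x, y)
        queries += 1                          # ask the decision maker: "x or y?"
        prefers_x = value(x, true_w) >= value(y, true_w)
        # x beats y exactly for w above w_star iff d > 0; update the interval
        if prefers_x == (d > 0):
            lo = w_star
        else:
            hi = w_star

alts = [(9, 1), (7, 4), (4, 7), (1, 9)]
print(elicit(alts, true_w=0.35))              # -> ((1, 9), 5)

Each answer cuts the feasible interval for w at the indifference point of the compared pair; the thesis's contribution is precisely to avoid the explicit enumeration of alternatives used here, by embedding such queries inside implicit combinatorial search.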
412

Mapping of large task networks on manycore architectures

Berger, Karl-Eduard, 08 December 2015
This PhD thesis is devoted to the study of a task-mapping problem arising in the compilation of applications for massively parallel embedded architectures. The problem stems from industrial needs such as energy savings and performance demands for synchronous dataflow applications. It must be solved with respect to three criteria: the heuristics should be able to deal with applications of various sizes, they must meet the capacity constraints of the processors, and they have to take the target architecture topologies into account. In this thesis, tasks are organized in communication networks, modeled as graphs. To evaluate the efficiency of the developed heuristics, the mappings they produce are compared to a random mapping; this comparison is used as an evaluation metric throughout the thesis, motivated by the fact that no comparable heuristics could be found in the literature at the time of writing. To address the problem, two heuristics are proposed. They solve a dataflow process-network mapping problem in which a network of communicating tasks is placed onto a set of processors with limited resource capacities while minimizing the overall communication bandwidth between processors, and they apply to task graphs whose task and edge weights are unitary. The first heuristic, Task-wise Placement, places tasks one after another using a notion of affinity between tasks. The second, Subgraph-wise Placement, gathers tasks into small groups and then places the groups onto processors using a notion of affinity between groups and already-assigned tasks. These algorithms are tested on task graphs with grid or logic-gate-network topologies, and the resulting mappings are compared, using the random-mapping metric, to an algorithm from the literature that maps task graphs of moderate size. To address cases closer to industrial practice, the application cases are then widened to task graphs whose task and edge weights vary, with values similar to those found in industry. For this setting, a progressive construction heuristic based on game theory, named Regret Based Approach, is proposed. It maps tasks one after another: the cost of mapping each task, given the tasks already placed, is computed, and the selection of the next task relies on the game-theoretic notion of regret; the task one would most regret not placing is identified and placed first, as sketched below. To check the robustness of the algorithm, many types of task graphs (grids, logic-gate networks, series-parallel, random, sparse matrices) of various sizes are generated, with task and edge weights drawn from a bimodal law parameterized to yield values similar to those of industrial applications. The results are compared with the Task-wise Placement algorithm, specially adapted to non-unitary weights, and are also evaluated using the random-mapping metric.
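Read as pseudocode, the Regret Based Approach admits a very small greedy sketch: for each unplaced task, compute its cheapest and second-cheapest feasible placement given the tasks already placed, define the regret as the gap between the two, and commit the task with the largest regret to its best processor. The cost model, capacities and the tiny instance below are illustrative assumptions, not the thesis's implementation:

import math

tasks = {"a": 2, "b": 1, "c": 3, "d": 2}               # task -> weight
edges = {("a", "b"): 4, ("b", "c"): 2, ("a", "d"): 1}  # edge -> communication volume
capacity = {0: 4, 1: 4}                                # processor -> capacity

placement = {}
load = {p: 0 for p in capacity}

def comm_cost(t, p):
    # Cost of putting t on p: traffic to neighbors already placed elsewhere.
    return sum(w for (u, v), w in edges.items()
               if (u == t and placement.get(v, p) != p)
               or (v == t and placement.get(u, p) != p))

while len(placement) < len(tasks):
    chosen, chosen_regret = None, -1.0
    for t, wt in tasks.items():
        if t in placement:
            continue
        options = sorted((comm_cost(t, p), p) for p in capacity
                         if load[p] + wt <= capacity[p])
        if not options:
            raise RuntimeError("instance infeasible for this greedy order")
        # Regret: extra cost paid if t misses its best remaining slot.
        regret = options[1][0] - options[0][0] if len(options) > 1 else math.inf
        if regret > chosen_regret:
            chosen, chosen_regret = (t, options[0][1]), regret
    t, p = chosen
    placement[t] = p
    load[p] += tasks[t]

print(placement)   # -> {'a': 0, 'c': 1, 'd': 0, 'b': 1} on this toy instance

A task with only one feasible processor left gets infinite regret and is placed immediately, which is exactly the intuition behind regret-based selection.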
413

Data Allocation Optimisation for High Performance Computing Applications on Heterogeneous Architectures

Brunie, Hugo, 28 January 2019
High Performance Computing, which brings together all the players responsible for improving the computing performance of scientific applications on supercomputers, aims to achieve exaflopic performance. This race for performance is today characterized by the manufacture of heterogeneous machines in which each component is specialized. Among these components, system memories specialize too, and the trend is towards architectures composed of several memories with complementary characteristics. The question then arises of how to use these new machines, whose practical performance depends on the placement of application data across the different memories; trading code modifications off against performance is challenging. In this thesis, we developed a formulation of the data-allocation problem on heterogeneous memory architectures. In this formulation we highlighted the benefit of a temporal analysis of the problem, whereas many previous studies relied solely on a spatial approach. From this formulation, we developed an offline profiling tool that approximates the coefficients of the objective function in order to solve the allocation problem and optimize the placement of data on an architecture composed of two main memories with complementary characteristics.
To reduce the amount of code changes needed to apply the allocation strategy recommended by our toolbox, we developed a tool that automatically redirects data allocations from a minimum of source-code instrumentation. The performance gains obtained on mini-applications representative of the scientific applications coded by the community show that intelligent data allocation is necessary to fully benefit from heterogeneous memory resources. On some problem sizes, the gain between a naive placement and an informed allocation can reach a ×3.75 speedup.
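Restricted to its spatial component (the thesis argues the temporal dimension matters as well), the allocation question reduces to a 0/1 knapsack: each data buffer has a profiled benefit from residing in the fast memory and a size, and the fast memory has a fixed capacity. A hedged sketch with fabricated profiling numbers:

def allocate(buffers, capacity):
    # buffers: list of (name, size_in_MB, benefit_in_seconds); classic 0/1 knapsack DP.
    n = len(buffers)
    best = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i, (_, size, gain) in enumerate(buffers, 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if size <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - size] + gain)
    # Backtrack to recover which buffers go to the fast memory.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        name, size, _ = buffers[i - 1]
        if best[i][c] != best[i - 1][c]:
            chosen.append(name)
            c -= size
    return chosen, best[n][capacity]

buffers = [("grid", 6, 3.1), ("halo", 2, 2.4), ("lut", 3, 0.7), ("tmp", 5, 2.9)]
print(allocate(buffers, capacity=8))   # -> (['halo', 'grid'], 5.5)

The offline profiler's role in the thesis is precisely to estimate such benefit coefficients; here they are simply hard-coded.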
414

Structured Prediction: Statistical and Computational Guarantees in Learning and Inference

Kevin Segundo Bello Medina, 28 July 2021
Structured prediction consists of receiving a structured input and producing a combinatorial structure such as a tree, clustering, network, sequence or permutation. From the computational viewpoint, structured prediction is in general considered intractable because the size of the output space is exponential in the input size. For instance, in image segmentation tasks, the number of admissible segmentations is exponential in the number of pixels. A second challenge is the combination of high input dimensionality with limited data: in structured prediction it is common for the input to live in a high-dimensional space, which requires jointly reasoning about thousands or millions of variables while contending with a limited amount of data. Learning and inference methods with strong computational and statistical guarantees are therefore desired. The focus of our research is to propose principled methods for structured prediction that are both polynomial time, i.e., computationally efficient, and require a polynomial number of data samples, i.e., statistically efficient.

The main contributions of this thesis are as follows:

(i) We develop an efficient and principled learning method for latent-variable models in structured prediction under Gaussian perturbations. We derive a Rademacher-based generalization bound and argue that the use of non-convex formulations in learning latent-variable models leads to tighter bounds on the Gibbs decoder distortion.

(ii) We study the fundamental limits of structured prediction, i.e., we characterize the sample complexity necessary for learning factor-graph models in the context of structured prediction. In particular, we show that finiteness of our novel MaxPair-dimension is necessary for learning. Lastly, we show a connection between the MaxPair-dimension and the VC-dimension, which allows existing results on the VC-dimension to be used to calculate the MaxPair-dimension.

(iii) We analyze a generative model based on connected graphs and find the structural conditions of the graph that allow exact recovery of the node labels. In particular, we show that exact recovery is realizable in polynomial time for a large class of graphs. Our analysis is based on convex relaxations: we thoroughly analyze a semidefinite program and a degree-4 sum-of-squares program. Finally, we extend this model to incorporate linear constraints (e.g., fairness) and formally explain the effect of the added constraints on the probability of exact recovery.
415

Algorithmic Optimization of Sensor Placement on Civil Structures for Fault Detection and Isolation

Mohan, Rathish, January 2012
No description available.
416

Path and Route Planning for Indoor Monitoring with UAV: An Evaluation of Algorithms for Time-constrained Path and Route Planning in an Indoor Environment with Several Waypoints and Limited Battery Time

Johansson, Ola, January 2022
Unmanned aerial vehicles (UAVs) are tools that can be used in a variety of scenarios. The most common areas of application are outdoors, where there are not many obstacles to take into consideration when planning a route. In indoor scenarios the requirements on the UAV's path-planning system become stricter, as these environments tend to contain more obstacles than flight at higher altitude outdoors. For a drone with multiple objectives (waypoints to visit), two problems arise: path planning and combinatorial optimization. To complete the objectives in the most effective way, the optimal path between the waypoints must be calculated, as well as the order in which to visit them. A further challenge is that the UAV runs on limited battery capacity and thus might not be able to finish all objectives before running out of battery; the combinatorial optimization therefore needs to include visits to a recharging station. The objective of this thesis is to combine and modify methods for path planning and combinatorial optimization so that a route can be calculated within a limited time budget, allowing the computation to be executed "on the fly". The method used for path planning is ANYA, and the two methods used for the combinatorial optimization are ant colony optimization (ACO) and the Lin-Kernighan-Helsgaun (LKH) method, with the nearest-neighbor (NN) method as a baseline for comparison. We propose an algorithm to include the battery constraint in the optimization. To evaluate the algorithms, we measure the computational time, to know whether the method works in real time, and the estimated time for the UAV to finish the route, which determines the energy efficiency of the route. We find that the ACO's solutions improve over time but require long computation, which makes ACO unsuitable for small time budgets. LKH produces better routes than the NN method, and does so within the chosen time budget as long as the number of waypoints is limited. The algorithm optimizing the trips to the recharging station works better than the previous use of LKH for this specific problem.
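To make the battery handling concrete, the nearest-neighbor baseline can be extended so the drone only flies a leg if it can still reach the recharging station afterwards, detouring to recharge otherwise. The sketch below uses straight-line distances in place of ANYA's collision-free paths, and the coordinates, charger position and battery budget are invented:

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan(start, waypoints, charger, battery_range):
    route, pos, charge = [start], start, battery_range
    todo = list(waypoints)
    while todo:
        nxt = min(todo, key=lambda w: dist(pos, w))
        if dist(pos, nxt) + dist(nxt, charger) <= charge:
            # Fly the leg only if the charger stays reachable afterwards.
            charge -= dist(pos, nxt)
            pos = nxt
            route.append(nxt)
            todo.remove(nxt)
        elif pos != charger and dist(pos, charger) <= charge:
            charge = battery_range          # detour and recharge
            pos = charger
            route.append(charger)
        else:
            raise RuntimeError("waypoint out of range even with recharging")
    return route

print(plan(start=(0, 0), waypoints=[(8, 1), (2, 7), (9, 9), (1, 2)],
           charger=(5, 5), battery_range=12))

This is only the NN baseline; the thesis instead builds the recharging visits into LKH-based route optimization.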
417

Genetic algorithms for route planning of bank employees: master's thesis

Sadoon, A. M., January 2020
Evolutionary algorithms are machine-learning techniques that can be applied to optimization problems in many fields. Route planning for bank employees is a combinatorial optimization problem. This thesis proposes a genetic algorithm for planning such routes; computational experiments have been carried out and show the effectiveness of the proposed method.
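The abstract gives no implementation detail, but a conventional genetic algorithm for such routing encodes each candidate route as a permutation of the sites to visit. A minimal sketch with a random distance matrix, order crossover and swap mutation (standard operator choices, not necessarily those of the thesis):

import random

random.seed(0)
N = 8                                       # number of client sites
D = [[0 if i == j else random.randint(1, 20) for j in range(N)] for i in range(N)]

def length(route):
    # Closed tour for simplicity; an open route would drop the wrap-around leg.
    return sum(D[route[i]][route[(i + 1) % N]] for i in range(N))

def order_crossover(p1, p2):
    # OX: copy a slice from p1, fill the remaining genes in p2's order.
    a, b = sorted(random.sample(range(N), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def mutate(route, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        route[i], route[j] = route[j], route[i]
    return route

pop = [random.sample(range(N), N) for _ in range(40)]
for _ in range(200):
    pop.sort(key=length)
    elite = pop[:10]                        # truncation selection of parents
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
print(min(map(length, pop)), min(pop, key=length))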
418

Metaheuristic approaches to realistic portfolio optimisation

Busetti, Franco Raoul, 06 1900
In this thesis we investigate the application of two heuristic methods, genetic algorithms and tabu/scatter search, to the optimisation of realistic portfolios. The model is based on the classical mean-variance approach, but enhanced with floor and ceiling constraints, cardinality constraints and nonlinear transaction costs which include a substantial illiquidity premium, and is then applied to a large 100-stock portfolio. It is shown that genetic algorithms can optimise such portfolios effectively and within reasonable times, without extensive tailoring or fine-tuning of the algorithm. This approach is also flexible in not relying on any assumed or restrictive properties of the model, and can easily cope with extensive modifications such as the addition of complex new constraints, discontinuous variables and changes in the objective function. The results indicate that both floor and ceiling constraints have a substantial negative impact on portfolio performance, and their necessity should be examined critically relative to their associated administration and monitoring costs. Another insight is that nonlinear transaction costs which are comparable in magnitude to forecast returns will tend to diversify portfolios; the effect of these costs on portfolio risk is, however, ambiguous, depending on the degree of diversification required for cost reduction. Generally, the number of assets in a portfolio invariably increases as a result of constraints, costs and their combination. The implementation of cardinality constraints is essential for finding the best-performing portfolio, and the ability of the heuristic method to deal with cardinality constraints is one of its most powerful features. / Decision Sciences / M. Sc. (Operations Research)
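As one way to picture how a cardinality constraint can be handled on the tabu/scatter-search side, the sketch below searches over exactly-K-asset subsets with single-asset swap moves and a short tabu list. The returns and covariances are fabricated, weights are fixed at 1/K (which respects floor and ceiling whenever floor <= 1/K <= ceiling), and the thesis's actual model additionally prices nonlinear transaction costs:

import random

random.seed(1)
N, K, LAM, TABU_LEN = 12, 4, 2.0, 5
mu = [random.uniform(0.02, 0.12) for _ in range(N)]              # expected returns
sigma = [[0.05 if i == j else 0.01 for j in range(N)] for i in range(N)]

def score(subset):
    # Mean-variance utility with equal weights 1/K over the chosen assets.
    w = 1.0 / K
    ret = sum(mu[i] * w for i in subset)
    var = sum(sigma[i][j] * w * w for i in subset for j in subset)
    return ret - LAM * var

current = random.sample(range(N), K)
best, tabu = list(current), []
for _ in range(100):
    moves = [(i, j) for i in current for j in range(N)
             if j not in current and j not in tabu]
    i, j = max(moves, key=lambda m: score([x for x in current if x != m[0]] + [m[1]]))
    current = [x for x in current if x != i] + [j]
    tabu = (tabu + [i])[-TABU_LEN:]          # forbid re-adding i for a few moves
    if score(current) > score(best):
        best = list(current)
print(sorted(best), round(score(best), 4))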
419

Hardness of Constraint Satisfaction and Hypergraph Coloring: Constructions of Probabilistically Checkable Proofs with Perfect Completeness

Huang, Sangxia, January 2015
A Probabilistically Checkable Proof (PCP) of a mathematical statement is a proof written in a special manner that allows for efficient probabilistic verification. The celebrated PCP Theorem states that for every family of statements in NP, there is a probabilistic verification procedure that checks the validity of a PCP proof by reading only 3 bits from it. This landmark theorem, and the works leading up to it, laid the foundation for many subsequent works in computational complexity theory, the most prominent among them being the study of inapproximability of combinatorial optimization problems.

This thesis focuses on a broad class of combinatorial optimization problems called Constraint Satisfaction Problems (CSPs). In an instance of a CSP of arity k, we are given a set of variables taking values from some finite domain, and a set of constraints each involving a subset of at most k variables. The goal is to find an assignment that simultaneously satisfies as many constraints as possible. An alternative formulation of the goal that is commonly used is Gap-CSP, where the goal is to decide whether a CSP instance is satisfiable or far from satisfiable, the exact meaning of "far from satisfiable" varying between problems.

We first study Boolean CSPs, where the domain of the variables is {0,1}. The main question we study is the hardness of distinguishing satisfiable Boolean CSP instances from those for which no assignment satisfies more than some fraction epsilon of the constraints. Intuitively, as the arity increases, the CSP gets more complex, so the hardness parameter epsilon should decrease. We show that for Boolean CSPs of arity k, it is NP-hard to distinguish satisfiable instances from those that are at most 2^{~O(k^{1/3})}/2^k-satisfiable.

We also study coloring of graphs and hypergraphs. Given a graph or a hypergraph, a coloring is an assignment of colors to vertices such that all edges or hyperedges are non-monochromatic. The gap problem is to distinguish instances that are colorable with a small number of colors from those that require a large number of colors. For graphs, we prove that there exists a constant K_0 > 0 such that for any K >= K_0, it is NP-hard to distinguish K-colorable graphs from those that require 2^{Omega(K^{1/3})} colors. For hypergraphs, we prove that it is quasi-NP-hard to distinguish 2-colorable 8-uniform hypergraphs of size N from those that require 2^{(log N)^{1/4-o(1)}} colors.

In terms of techniques, all these results are based on constructions of PCPs with perfect completeness, that is, PCPs where the probabilistic proof-verification procedure always accepts a correct proof. Not only is this a very natural property for proofs, it can also be an essential requirement in many applications. It has always been particularly challenging to construct PCPs with perfect completeness for NP statements due to limitations in techniques. Our improved hardness results build on and extend many of the current approaches. The Boolean CSP result and the graph coloring result were proved by adapting the Direct Sum of PCPs idea of Siu On Chan to the perfect-completeness setting. Our proof of hypergraph coloring hardness improves and simplifies recent work by Khot and Saket, in which they proposed the notion of superposition complexity of CSPs.
420

Label Cover Reductions for Unconditional Approximation Hardness of Constraint Satisfaction

Wenner, Cenny, January 2014
Combinatorial optimization includes such tasks as finding the quickest route to work, scheduling jobs to specialists, and placing bus stops so as to minimize commuter times. We consider problems where one is given a collection of constraints with the objective of finding an assignment satisfying as many constraints as possible, known as Constraint Satisfaction Problems (CSPs). Most CSPs are NP-hard to solve optimally, so we turn to approximations: a solution is a factor-c approximation if it satisfies at least c times the optimal number of constraints. This thesis presents new results on the approximation limits of CSPs in various settings.

In ordering CSPs, one is given constraints specifying the relative order of items, and the objective is to order the items so as to satisfy as many constraints as possible. We give improved approximation-hardness results for two classical problems: it is NP-hard to approximate Maximum Acyclic Subgraph with a factor better than 14/15 and Maximum Betweenness with a factor better than 1/2. We present ordering problems that are NP-hard to approximate better than random assignments, and show that there are ordering problems that are arbitrarily hard to approximate.

Next, Gaussian elimination can efficiently find exact solutions to satisfiable collections of so-called parity constraints. We show that whenever a constraint accepts at least one assignment in addition to a parity, the problem becomes NP-hard to approximate better than random assignments.

Finally, we study the uselessness property, which essentially states that even if one is given a collection where almost all constraints are simultaneously satisfiable, and one is permitted to relax the constraints to accept or reject additional assignments, it is still NP-hard to find solutions noticeably better than random assignments. We consider the setting where all variables appear unnegated and provide the first examples of non-trivially useless predicates assuming only P != NP.

NADA is a department shared between SU and KTH; the latter now calls it CSC. / ApproxNP
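The easy direction invoked above (satisfiable parity constraints are exactly solvable) is ordinary Gaussian elimination over GF(2). A small self-contained sketch; the constraint encoding is illustrative:

def solve_parity(n, constraints):
    # Each constraint is (variables, parity): x_{i1} + ... + x_{ik} = parity mod 2.
    pivots = {}                              # pivot bit -> (row mask, parity)
    for vars_, parity in constraints:
        mask = 0
        for v in vars_:
            mask ^= 1 << v                   # bitmask of the equation's variables
        # Forward-eliminate against existing pivots, highest pivot first;
        # each pivot row's top bit is its pivot, so bits only move downward.
        for v in sorted(pivots, reverse=True):
            if mask >> v & 1:
                pm, pp = pivots[v]
                mask ^= pm
                parity ^= pp
        if mask == 0:
            if parity:
                return None                  # inconsistent system: 0 = 1
            continue                         # redundant constraint
        pivots[mask.bit_length() - 1] = (mask, parity)
    # Back-substitute, lowest pivot first; free variables default to 0.
    x = [0] * n
    for v in sorted(pivots):
        mask, parity = pivots[v]
        x[v] = parity ^ (sum(x[u] for u in range(v) if mask >> u & 1) & 1)
    return x

# x0+x1 = 1, x1+x2 = 0, x0+x2 = 1
print(solve_parity(3, [((0, 1), 1), ((1, 2), 0), ((0, 2), 1)]))   # -> [0, 1, 1]

For the three constraints in the example, the solver returns the assignment [0, 1, 1]; the thesis's hardness results concern what happens as soon as a predicate accepts anything beyond such a parity.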
