11 |
Network partitioning techniques based on network natural properties for power system application. Alkhelaiwi, Ali Mani Turki, January 2002
In this thesis, the problem of partitioning a network into interconnected sub-networks is addressed. The goal is to achieve a partitioning which satisfies a set of specific engineering constraints, imposed in this case by the requirements of decomposed state-estimation (DSE) in electrical power systems. The network-partitioning problem is classified as an NP-hard problem. Although many heuristic algorithms have been proposed for its solution, these often lack directness and computational simplicity. In this thesis, three new partitioning techniques are described which (i) satisfy the DSE constraints, and (ii) simplify the NP-hard problem by using the natural graph properties of a network. The first technique is based on partitioning a spanning tree optimally, using the natural property of the spanning-tree branches. As with existing heuristic techniques, information on the partitioning is obtained only at the end of the partitioning process. Study of the DSE constraints leads to the definition of conditions for an ideal balanced partitioning. This enables data on the balanced partitioning to be obtained, including the numbers of boundary nodes and cut-edges. The second partitioning technique is designed to obtain these data for a given network by finding the minimum covering set of nodes with maximum nodal degree. Further simplification is then possible if additional graph-theoretical properties are used. A new natural property, entitled the 'edge state phenomenon', is defined. The edge state phenomenon may be exploited to generate new network properties. In the third partitioning technique, two of these, the 'network external closed path' and the 'open internal paths', are used to identify the balanced partitioning, and hence to partition the network. Examples of the application of all three methods to network partitioning are provided.
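The core idea of the first technique, cutting a spanning tree so that the resulting sub-networks are balanced, can be sketched as follows. This is an illustrative toy version (a single cut, hypothetical function names), not the thesis's algorithm:

```python
from collections import defaultdict

def spanning_tree(adj, root):
    """BFS spanning tree of the network: returns a child -> parent map."""
    parent = {root: None}
    queue = [root]
    while queue:
        u = queue.pop(0)
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def best_balanced_cut(adj, root):
    """Return the node v such that removing the tree edge (parent[v], v)
    splits the node set most evenly into two sub-networks."""
    parent = spanning_tree(adj, root)
    children = defaultdict(list)
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    size = {}
    def fill(u):  # subtree sizes via post-order traversal
        size[u] = 1 + sum(fill(c) for c in children[u])
        return size[u]
    fill(root)
    n = size[root]
    # minimise the imbalance |n - 2*size[v]| over all tree edges
    return min((v for v in parent if parent[v] is not None),
               key=lambda v: abs(n - 2 * size[v]))
```

On a four-node path 0-1-2-3 rooted at 0, the balanced cut removes the edge above node 2, giving two sub-networks of two nodes each.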
|
12 |
Optimization of Joint Cell, Channel and Power Allocation in Wireless Communication Networks. Fallgren, Mikael, January 2011
In this thesis we formulate joint cell, channel and power allocation problems within wireless communication networks. The objectives are to maximize the minimum user data throughput (Shannon capacity) or to maximize the total system throughput, referred to as the max-min and max-sum problems respectively. The complexity is studied together with proposed optimization- and heuristic-based approaches. In the first paper an overall joint cell, channel and power allocation max-min problem is formulated. We show that the decision problem is NP-hard and that the optimization problem is not approximable unless P is equal to NP, for instances with a sufficiently large number of channels. Further, it follows that for a feasible binary cell and channel allocation, the remaining continuous power allocation optimization problem is still not approximable unless P is equal to NP. In addition, it is shown that first-order optimality conditions give the global optimum of the single-channel power allocation optimization problem, although the problem is in general not convex. In the following two papers heuristics for solving the overall problem are proposed. In the second paper we consider the single-channel problem with convex combinations of the max-min and the max-sum objective functions. This variable utility provides the ability to tune the trade-off between fairness and total throughput. The third paper investigates the multiple-channel setting. On a system with three cells, eight mobile users and three channels, we perform an exhaustive search over feasible cell and channel allocations. The exhaustive search is then compared to the less computationally expensive heuristic approaches, indicating the potential gains to strive for. A conclusion is that several of the proposed heuristics perform very well. The final paper incorporates fixed relay stations into the overall joint cell, channel and power allocation max-min problem.
The complexity is inherited from the formulation without relay stations. Further, we propose a heuristic channel allocation approach that shows good performance, compared to an optimization-based approach, in numerical simulations on the relay setting. / Financial support by the Swedish Foundation for Strategic Research (SSF)
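The max-min and max-sum objectives can be illustrated with a minimal single-channel sketch. The gain-matrix and noise conventions below are simplifying assumptions for illustration, not the papers' exact system model:

```python
import math

def throughput(p, gain, noise):
    """Shannon capacity per user on one shared channel.
    p[i] is cell i's transmit power; gain[i][j] is the channel gain
    from cell j to user i, so the other cells' powers interfere."""
    rates = []
    for i, pi in enumerate(p):
        interference = sum(p[j] * gain[i][j] for j in range(len(p)) if j != i)
        sinr = pi * gain[i][i] / (noise + interference)
        rates.append(math.log2(1 + sinr))
    return rates

def max_min_objective(p, gain, noise):
    """Fairness objective: the worst user's rate."""
    return min(throughput(p, gain, noise))

def max_sum_objective(p, gain, noise):
    """Total-throughput objective: the sum of all rates."""
    return sum(throughput(p, gain, noise))
```

With two interference-free cells (identity gain matrix), powers [1, 3] and unit noise give rates [1, 2] bits/s/Hz, so the max-min value is 1 and the max-sum value is 3.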
|
13 |
The trade-off between diversity and quality for multi-objective workforce scheduling. Cowling, Peter I., Colledge, N.J., Dahal, Keshav P., Remde, Stephen M., January 2006
In this paper we investigate and compare multi-objective and weighted single-objective approaches to a real-world workforce scheduling problem. For this difficult problem we consider the trade-off between solution quality and population diversity, for different sets of fixed objective weights. Our real-world workforce scheduling problem consists of assigning resources with the appropriate skills to geographically dispersed task locations while satisfying time-window constraints. The problem is NP-hard and contains the Resource Constrained Project Scheduling Problem (RCPSP) as a subproblem. We investigate a genetic algorithm and serial schedule generation scheme together with various multi-objective approaches. We show that multi-objective genetic algorithms can create solutions whose fitness is within 2% of genetic algorithms using weighted-sum objectives, even though the multi-objective approaches know nothing of the weights. The result is highly significant for complex real-world problems, where objective weights are seldom known in advance, since it suggests that a multi-objective approach can generate a solution close to the user-preferred one without having knowledge of user preferences.
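Unlike a weighted-sum objective, a multi-objective GA ranks solutions by Pareto dominance. A minimal sketch of that building block (hypothetical helper names; all objectives are taken as maximized):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: at least as good in every
    objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Non-dominated solutions of a population of objective vectors:
    the ranking basis of a multi-objective GA, with no weights involved."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]
```

For the population [(1, 2), (2, 1), (0, 0), (2, 2)], only (2, 2) is non-dominated, so the Pareto front is [(2, 2)].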
|
14 |
Algorithmic Problems in Access Control. Mousavi, Nima, 29 July 2014
Access control is used to provide regulated access to resources by principals. It is an important and foundational aspect of information security. Role-Based Access Control (RBAC) is a popular and widely used access control model that, as prior work argues, is ideally suited for enterprise settings. In this dissertation, we address two problems in the context of RBAC. One is the User Authorization Query (UAQ) problem, which relates to sessions that a user creates to exercise permissions. UAQ's objective is the identification of a set of roles that a user needs to activate such that the session is authorized for all permissions that the user wants to exercise in that session. The roles that are activated must respect a set of Separation of Duty constraints, which restrict the roles that can be activated together in a session. UAQ is known to be intractable (NP-hard). In this dissertation, we give a precise formulation of UAQ as a joint-optimization problem and analyze it. We examine the manner in which each input parameter contributes to its intractability. We then propose an approach to mitigate its intractability, based on our observation that a corresponding decision version of the problem is in NP: we efficiently reduce UAQ to Boolean satisfiability in conjunctive normal form (CNF-SAT), a well-known NP-complete problem for which solvers exist that are efficient for large classes of instances. We also present results for UAQ posed as an approximation problem; our results suggest that efficient approximation is not promising for UAQ. We discuss an open-source implementation of our approach and a corresponding empirical assessment that we have conducted.
The other problem we consider in this dissertation concerns an efficient data structure for distributed access enforcement. Access enforcement is the process of validating an access request to a resource. Distributed access enforcement has become important with the proliferation of data, which requires access control systems to scale to tens of thousands of resources and permissions. Prior work has shown the effectiveness of a data structure called the Cascade Bloom Filter (CBF) for this problem. In this dissertation, we study the construction of instances of the CBF. We formulate the problem of finding an optimal instance of a CBF, where optimality refers to the number of false positives incurred and the number of hash functions used. We prove that this problem is NP-hard, and that a meaningful decision version is in NP. We then propose an approach to mitigate the intractability of the problem by reducing it to CNF-SAT, which allows us to use a SAT solver for instances that arise in practice. We discuss an open-source implementation of our approach and an empirical assessment based on it.
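The flavour of a UAQ-to-CNF encoding can be conveyed with a toy version: one boolean variable per role, one clause per requested permission (some granting role must be active), and one binary clause per Separation of Duty pair. This is an illustrative simplification, not the dissertation's exact reduction, and the brute-force check merely stands in for a real SAT solver:

```python
from itertools import product

def uaq_clauses(roles, perms_of, requested, sod_pairs):
    """Toy CNF encoding of a simplified UAQ instance.
    A literal is a role name (positive) or ('not', role) (negated)."""
    clauses = []
    for perm in requested:                       # coverage: some role grants it
        clauses.append([r for r in roles if perm in perms_of[r]])
    for r1, r2 in sod_pairs:                     # separation of duty
        clauses.append([('not', r1), ('not', r2)])
    return clauses

def satisfiable(roles, clauses):
    """Brute-force satisfiability check over all role activations."""
    for bits in product([False, True], repeat=len(roles)):
        val = dict(zip(roles, bits))
        def lit(l):
            return not val[l[1]] if isinstance(l, tuple) else val[l]
        if all(any(lit(l) for l in c) for c in clauses):
            return True
    return False
```

With two roles that each grant one of the two requested permissions, the instance is satisfiable without constraints but becomes unsatisfiable once a Separation of Duty pair forbids activating both roles together.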
|
15 |
Optimisation quadratique en variables binaires : quelques résultats et techniques. Mbuntcha Wuntcha, Calvin, January 2009
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
|
16 |
Algorithms for the satisfiability problem. Rolf, Daniel, 22 November 2006
Diese Arbeit befasst sich mit Worst-Case-Algorithmen für das Erfüllbarkeitsproblem boolescher Ausdrücke in konjunktiver Normalform. Im Wesentlichen betrachten wir Laufzeitschranken drei verschiedener Algorithmen, zwei für 3-SAT und einen für Unique-k-SAT. Wir entwickeln einen randomisierten Algorithmus, der eine Lösung eines erfüllbaren 3-KNF-Ausdrucks G mit n Variablen mit einer erwarteten Laufzeit von O(1.32793^n) findet. Der Algorithmus basiert auf der Analyse sogenannter Strings, welche Sequenzen von Klauseln der Länge drei sind. Dabei dürfen einerseits nicht aufeinanderfolgende Klauseln keine Variablen und andererseits aufeinanderfolgende Klauseln ein oder zwei Variablen gemeinsam haben. Gibt es wenige Strings, so treffen wir wahrscheinlich bereits während der String-Suche auf eine Lösung von G. 1999 entwarf Schöning einen Algorithmus mit einer Schranke von O(1.3334^n) für 3-SAT. Viele Strings erlauben es, die Laufzeit dieses Algorithmusses zu verbessern. Weiterhin werden wir den PPSZ-Algorithmus für Unique-k-SAT derandomisieren. Der 1998 von Paturi, Pudlak, Saks und Zane vorgestellte PPSZ-Algorithmus hat die besondere Eigenschaft, dass die Lösung eines eindeutig erfüllbaren 3-KNF-Ausdrucks in höchstens O(1.3071^n) erwarteter Laufzeit gefunden wird. Die derandomisierte Variante des Algorithmusses für Unique-k-SAT hat im Wesentlichen die gleiche Laufzeitschranke. Wir erreichen damit die momentan beste deterministische Worst-Case-Schranke für Unique-k-SAT. Zur Derandomisierung wenden wir die "Methode der kleinen Zufallsräume" an. Schließlich verbessern wir die Schranke für den Algorithmus von Iwama und Tamaki. 2003 kombinierten Iwama und Tamaki den PPSZ-Algorithmus mit Schönigs Algorithmus und konnten eine Schranke von O(1.3238^n) bewiesen. Um seinen Beitrag zum kombinierten Algorithmus zu steigern, justieren wir die Schranke des PPSZ-Algorithmusses. Damit erhalten wir die momentan beste randomisierte Worst-Case-Schranke für das 3-SAT-Problem von O(1.32216^n). 
/ This work deals with worst-case algorithms for the satisfiability problem regarding boolean formulas in conjunctive normal form. The main part of this work consists of the analysis of the running time of three different algorithms, two for 3-SAT and one for Unique-k-SAT. We establish a randomized algorithm that finds a satisfying assignment for a satisfiable 3-CNF formula G on n variables in O(1.32793^n) expected running time. The algorithm is based on the analysis of so-called strings, which are sequences of clauses of size three, whereby non-succeeding clauses do not share a variable, and succeeding clauses share one or two variables. If there are not many strings, it is likely that we already encounter a solution of G while searching for strings. In 1999, Schöning proved a bound of O(1.3334^n) for 3-SAT. If there are many strings, we use them to improve the running time of Schöning's algorithm. Furthermore, we derandomize the PPSZ algorithm for Unique-k-SAT. The PPSZ algorithm presented by Paturi, Pudlak, Saks, and Zane in 1998 has the feature that the solution of a uniquely satisfiable 3-CNF formula can be found in expected running time at most O(1.3071^n). In general, we obtain a derandomized version of the algorithm for Unique-k-SAT that has essentially the same bound as the randomized version. This settles the currently best known deterministic worst-case bound for the Unique-k-SAT problem. We apply the 'Method of Small Sample Spaces' in order to derandomize the algorithm. Finally, we improve the bound for the algorithm of Iwama and Tamaki to get the currently best known randomized worst-case bound for the 3-SAT problem of O(1.32216^n). In 2003 Iwama and Tamaki combined Schöning's and the PPSZ algorithm to yield an O(1.3238^n) bound. We tweak the bound for the PPSZ algorithm to get a slightly better contribution to the combined algorithm.
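Schöning's random-walk algorithm, which the thesis builds on, can be sketched as follows (a simplified version with an arbitrary restart budget rather than the analysed number of restarts):

```python
import random

def schoening(clauses, n_vars, tries=200):
    """Schöning's random-walk heuristic for 3-SAT (sketch).
    Clauses are lists of non-zero ints; a negative int means the
    variable appears negated. Returns a satisfying assignment or None."""
    for _ in range(tries):
        # random restart: fresh uniform assignment
        assign = [random.choice([False, True]) for _ in range(n_vars)]
        for _ in range(3 * n_vars):
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit) - 1] for lit in c)]
            if not unsat:
                return assign
            # flip one variable of a randomly chosen unsatisfied clause
            lit = random.choice(random.choice(unsat))
            assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None
```

Each restart walks for 3n steps; the O(1.3334^n) bound follows from the probability that one walk starting near a satisfying assignment reaches it.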
|
17 |
Contribution à la modélisation et à la régulation du trafic aux intersections : intégration des communications Véhicule-Infrastructure / Contribution to the modelling and control of traffic at intersections: integration of Vehicle-Infrastructure communications. Yan, Fei, 14 March 2012
Dans ce mémoire de thèse, nous avons étudié le problème de régulation du trafic en considérant les nouvelles technologies dans le cadre des Systèmes de Transport Intelligent (STI). Une nouvelle stratégie de contrôle est introduite afin d’exploiter le potentiel des infrastructures de la circulation à un niveau maximum. Plus précisément, basée sur la technologie VII « Intégration Véhicule-Infrastructure », l'infrastructure routière aux carrefours (considérée aussi comme contrôleur) peut communiquer avec les véhicules autonomes qui arrivent à un carrefour de manière continue. Les données importantes sur les véhicules telles que la vitesse, la position et la destination sont alors reçues par des capteurs avancés et envoyées au contrôleur en temps réel. Par conséquent, il est possible d'élaborer une stratégie de contrôle du trafic en considérant chaque véhicule comme une entité indépendante. En d'autres termes, le droit de passage est attribué à chaque véhicule en fonction de son état et en fonction de l'état global du trafic au carrefour. Seuls les véhicules qui ont reçu le droit de passage peuvent traverser le carrefour. Le contrôle du trafic au niveau d’un carrefour vise donc à déterminer les séquences de passage des véhicules, c’est-à-dire les séquences de distribution des droits de passage. Cependant, la plus grande difficulté pour appliquer cette nouvelle stratégie est la contradiction entre l'optimisation des séquences de passages des véhicules et la complexité temporelle. Pour résoudre cette contradiction, nous avons d’abord formulé mathématiquement la problématique de régulation et nous avons ensuite étudié sa complexité. Nous avons prouvé dans un premier temps que le problème de régulation du trafic formulé à l’intersection isolée est NP-hard sous certaines conditions (nombre arbitraire de groupes de flux compatibles GFC,…) et ceci en se basant sur la réduction au problème de 3-Partition.
Dans un deuxième temps, nous avons appliqué les méthodes de résolutions exactes sur un carrefour isolé pour proposer des algorithmes exacts (Branch and Bound et Programmation dynamique) permettant de trouver une séquence de passage optimale. Plusieurs propriétés du problème ont été introduites et prouvées et ceci afin qu’elles soient exploitées par ces algorithmes. Ces propriétés ont pour objectif de réduire considérablement l’espace de recherche et par conséquent le temps d’exécution de ces algorithmes exacts. Par ailleurs, nous n’avons pas limité nos recherches sur des carrefours isolées mais nous avons appliqué l’approche de contrôle proposée sur un réseau de carrefours tout en considérant un seul contrôleur. Cependant, un algorithme exact appliqué sur plusieurs carrefours ne peut pas être assez rapide surtout lorsqu’on a besoin de communiquer presque instantanément des informations aux véhicules (en temps réel). Nous avons proposé donc des méthodes de résolutions approchées afin de trouver en un temps raisonnable une séquence de passage satisfaisante pour chaque carrefour. Ces algorithmes (Algorithmes génétiques) ont en effet, besoin de moins de temps de calcul tout en assurant une bonne qualité de solution. Enfin, nous illustrons la mise en œuvre des différentes approches proposées à travers des résultats de simulation afin d’évaluer leurs performances. / In this thesis, we studied the problem of traffic control by considering the new technologies within the framework of Intelligent Transport Systems (ITS). A new control strategy is introduced to exploit the potential of the traffic infrastructure to a maximum level. Specifically, based on the VII ("Vehicle-Infrastructure Integration") technology, the road infrastructure at intersections (also considered as a controller) can communicate continuously with the autonomous vehicles that arrive at an intersection.
Important data about the vehicles, such as speed, position and destination, are then received by advanced sensors and sent to the controller in real time. Therefore, it is possible to develop a traffic control strategy that treats each vehicle as an independent entity. In other words, the right of way is assigned to each vehicle based on its own state and on the overall state of traffic at the intersection. Only vehicles that have received the right of way may cross the intersection. Traffic control at an intersection therefore amounts to determining the sequences of passage of the vehicles, that is to say, the sequences in which rights of way are distributed. However, the greatest difficulty in implementing this new strategy is the contradiction between the optimization of the vehicle passage sequences and the time complexity. To resolve this contradiction, we first formulated the control problem mathematically and then studied its complexity. We first proved that the traffic control problem formulated for an isolated intersection is NP-hard under certain conditions (an arbitrary number of compatible flow groups (GFC), ...), via a reduction from the 3-Partition problem. In a second step, we applied exact solution methods to an isolated intersection, proposing exact algorithms (Branch and Bound, Dynamic Programming) for finding an optimal passage sequence. Several properties of the problem were introduced and proved so that they could be exploited by these algorithms. These properties significantly reduce the search space and consequently the execution time of these exact algorithms. Moreover, we did not limit our research to isolated intersections: we applied the proposed control approach to a network of intersections while considering a single controller. However, an exact algorithm applied to several intersections cannot be fast enough, especially when information must be communicated to the vehicles almost instantaneously (in real time). We therefore proposed approximate solution methods in order to find, in a reasonable time, a satisfactory passage sequence for each intersection. These algorithms (Genetic Algorithms) require less computation time while maintaining a good solution quality. Finally, we illustrate the implementation of the different proposed approaches through simulation results in order to evaluate their performance.
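The exhaustive baseline that an exact method must beat can be sketched for a toy single-intersection instance: enumerate all passage sequences and score each by total delay. The delay model below (unit crossing time, one vehicle at a time, no compatible flow groups) is our simplification for illustration, not the thesis's formulation:

```python
from itertools import permutations

def total_delay(sequence, arrival, service=1.0):
    """Sum of vehicle delays when vehicles cross one at a time
    in the given order; each crossing occupies the intersection
    for `service` time units."""
    t, delay = 0.0, 0.0
    for v in sequence:
        t = max(t, arrival[v])      # a vehicle cannot cross before arriving
        delay += t - arrival[v]
        t += service
    return delay

def best_sequence(arrival):
    """Exhaustive search over all passage sequences (exponential;
    this is the search space that branch and bound would prune)."""
    return min(permutations(arrival), key=lambda s: total_delay(s, arrival))
```

For three vehicles arriving at times 0, 0.5 and 2, the first-come-first-served order is optimal here, with a total delay of 0.5; with conflicting flows and green-wave effects, the optimum can differ from arrival order.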
|
18 |
Optimisation quadratique en variables binaires : quelques résultats et techniques. Mbuntcha Wuntcha, Calvin, January 2009
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
|
19 |
The Binary String-to-String Correction Problem. Spreen, Thomas D., 30 August 2013
String-to-String Correction is the process of transforming some mutable string M into an exact copy of some other string (the target string T), using a shortest sequence of well-defined edit operations. The formal STRING-TO-STRING CORRECTION problem asks for the optimal solution using just two operations: symbol deletion, and swap of adjacent symbols. String correction problems using only swaps and deletions are computationally interesting; in his paper On the Complexity of the Extended String-to-String Correction Problem (1975), Robert Wagner proved that the String-to-String Correction problem under swap and deletion operations only is NP-complete for unbounded alphabets.
In this thesis, we present the first careful examination of the binary-alphabet case, which we call Binary String-to-String Correction (BSSC). We present several special cases of BSSC for which an optimal solution can be found in polynomial time; in particular, the case where T and M have an equal number of occurrences of a given symbol has a polynomial-time solution. As well, we demonstrate and prove several properties of BSSC, some of which do not necessarily hold in the case of String-to-String Correction. For instance: that the order of operations is irrelevant; that symbols in the mutable string, if swapped, will only ever swap in one direction; that the length of the Longest Common Subsequence (LCS) of the two strings is monotone nondecreasing during the execution of an optimal solution; and that there exists no correlation between the effect of a swap or delete operation on LCS and the optimality of that operation. About a dozen other results that are applicable to Binary String-to-String Correction will also be presented.
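A brute-force solver for small BSSC instances makes the operation set concrete: breadth-first search over the strings reachable from M by adjacent swaps and deletions. This sketch is for illustration only and is exponential, unlike the polynomial-time special cases studied in the thesis:

```python
from collections import deque

def bssc(m, t):
    """Minimum number of adjacent-swap and delete operations turning
    m into t, found by breadth-first search over reachable strings."""
    seen, frontier = {m}, deque([(m, 0)])
    while frontier:
        s, d = frontier.popleft()
        if s == t:
            return d
        # adjacent swaps (only useful when the two symbols differ)
        succs = [s[:i] + s[i + 1] + s[i] + s[i + 2:]
                 for i in range(len(s) - 1) if s[i] != s[i + 1]]
        # deletions of a single symbol
        succs += [s[:i] + s[i + 1:] for i in range(len(s))]
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # t is not reachable from m
```

For example, "100" reaches "001" in two swaps, and "110" reaches "10" with a single deletion.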
|
20 |
Efficient search of an underwater area based on probability. Pukitis Furhoff, Hampus, January 2019
Today, more and more types of autonomous robots and vehicles are being developed. Most of these rely on the global positioning system and/or communication with other robots and vehicles to determine their global position. However, these are not viable options for the autonomous underwater vehicles (AUVs) of today, since radio waves do not travel well in water. Instead, various techniques for determining the AUV's position are used, each of which comes with a margin of error. This thesis examines the problem of efficiently performing a local search within this margin of error, with the objective of finding a docking station or a buoy. To solve this problem, research was conducted into search theory and how it has previously been applied in this context. What was found was that classical Bayesian search theory has rarely been used in this context, since it would require too much processing power to be a viable option on the embedded systems found in AUVs. Instead, various heuristics were used to obtain solutions that were still viable for the situations in which they were applied, even though they may not have been optimal. Based on this, the search strategies Spiral, Greedy, Look-ahead and Quadtree were developed and evaluated in a simulator. Their mean times to detection (MTTD) were compared, as well as the average time it took for the strategies to process a search. Look-ahead was the best of the four strategies with respect to MTTD, and it is therefore suggested that it be implemented and evaluated in a real AUV. / Idag utvecklas allt fler olika typer av autonoma robotar och fordon. De flesta av dessa är beroende av det globala positioneringssystemet och/eller kommunikation med andra robotar och fordon för att bestämma deras globala position. Detta är dock inte realistiska alternativ för autonoma undervattensfordon (AUV) idag eftersom radiovågor inte färdas bra i vatten.
I stället används olika tekniker för att bestämma AUVens position, tekniker som ofta har en felmarginal. Denna rapport undersöker problemet med att effektivt utföra en lokal sökning inom denna felmarginal med målet att hitta en dockningsstation eller en boj.För att lösa detta problem gjordes en litteraturstudie om ämnet sökteori och hur det tidigare har tillämpats i detta sammanhang. Det som hittades var att den klassiska bayesiska sökteorin inte hade använts mycket ofta i detta sammanhang eftersom det skulle kräva för mycket processorkraft för att det skulle vara ett rimligt alternativ för de inbyggda systemen på en AUV. Istället användes olika heuristiska metoder för att få lösningar som fortfarande var dugliga för de situationer där de användes, även om de kanske inte var optimala.Baserat på detta utvecklades sökstrategierna Spiral, Greedy, Look-ahead och Quad-tree och utvärderades i en simulator. Deras genomsnittliga tid för att upptäcka målet (MTTD) jämfördes liksom den genomsnittliga tiden det tog för strategierna att bearbeta en sökning. Look-ahead var den bästa av de fyra olika strategierna med avseende på MTTD och baserat på detta föreslås det att den ska implementeras och utvärderas i en verklig AUV.
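The MTTD criterion used to compare the strategies can be sketched for a simplified setting in which every cell visit takes the same time (an assumption of this toy model, not of the thesis's simulator). Under that assumption, visiting cells in decreasing probability order, as the Greedy strategy does, minimizes MTTD:

```python
def mean_time_to_detection(order, prob, step_time=1.0):
    """Expected time until the target cell is visited, when cell c
    holds the target with probability prob[c] and the k-th visit
    finishes at time (k + 1) * step_time."""
    return sum(prob[c] * (k + 1) * step_time for k, c in enumerate(order))

def greedy_order(prob):
    """Visit cells in decreasing probability; optimal for MTTD when
    every move costs the same (travel cost is ignored here)."""
    return sorted(prob, key=prob.get, reverse=True)
```

With cell probabilities {0.5, 0.3, 0.2}, the greedy order yields an MTTD of 0.5*1 + 0.3*2 + 0.2*3 = 1.7 time units; once travel cost between cells matters, look-ahead strategies can beat this myopic ordering.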
|