  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Characterizing Hardness in Parameterized Complexity

Islam, Tarique January 2007 (has links)
Parameterized complexity theory relaxes the classical notion of tractability and allows some classically hard problems to be solved in a reasonably efficient way. However, many problems of interest remain intractable in the context of parameterized complexity. A completeness theory to categorize such problems has been developed, based on circuit problems and Model Checking problems. Although a basic machine characterization was proposed, it was not explored any further. We develop a computational view of parameterized complexity theory based on resource-bounded programs that run on alternating random access machines. We develop both natural and normalized machine characterizations for the W[t] and L[t] classes. Based on the new characterizations, we derive the basic completeness results in parameterized complexity theory from a computational perspective. Unlike the previous cases, our proofs follow the classical approach for showing basic NP-completeness results (Cook's Theorem, in particular). We give new proofs of the Normalization Theorem by showing that (i) the computation of a resource-bounded program on an alternating RAM can be represented by instances of corresponding basic parametric problems, and (ii) the basic parametric problems can be decided by programs respecting the corresponding resource bounds. Many of the fundamental results follow as a consequence of our new proof of the Normalization Theorem. Based on a natural characterization of the W[t] classes, we develop new structural results establishing relationships among the classes in the W-hierarchy, and between the W[t] and L[t] classes. Nontrivial upper bounds beyond the second level of the W-hierarchy are quite uncommon. We make use of the ability to implement natural algorithms to show new upper bounds for several parametric problems. We show that Subset Sum, Maximal Irredundant Set, and Reachability Distance in Vector Addition Systems (Petri Nets) are in W[3], W[4], and W[5], respectively. In some cases, the new bounds yield new completeness results. We derive new lower bounds based on the normalized programs for the W[t] and L[t] classes. We show that Longest Common Subsequence, with parameter the number of strings, is hard for L[t], t >= 1, and for W[SAT]. We also show that Precedence Constrained Multiprocessor Scheduling, with parameter the number of processors, is hard for L[t], t >= 1.
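For readers unfamiliar with the W-hierarchy, a hedged illustration of the flavour of the "basic parametric problems" it is built on (not an algorithm from the thesis): Weighted CNF Satisfiability asks whether a CNF formula has a satisfying assignment that sets exactly k variables to true, and is the canonical complete problem for W[2]. The naive check below simply enumerates the C(n, k) weight-k assignments for a small formula.

from itertools import combinations

def weighted_cnf_sat(num_vars, clauses, k):
    # clauses: list of clauses; each clause is a list of non-zero ints, where
    # literal v means "variable v is true" and -v means "variable v is false".
    # Variables are numbered 1..num_vars. Returns a weight-k satisfying
    # assignment (as the set of true variables), or None if none exists.
    for true_vars in combinations(range(1, num_vars + 1), k):
        assignment = set(true_vars)
        if all(any((lit > 0 and lit in assignment) or
                   (lit < 0 and -lit not in assignment) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (x2 or x3), asking for weight exactly 2
print(weighted_cnf_sat(3, [[1, 2], [-1, 3], [2, 3]], 2))  # {1, 3}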
2

Kernelization and Enumeration: New Approaches to Solving Hard Problems

Meng, Jie 2010 May 1900 (has links)
NP-hardness is a well-known theory for identifying the hardness of computational problems. It is believed that NP-hard problems are unlikely to admit polynomial-time algorithms. However, since many NP-hard problems are of practical significance, different approaches have been proposed to solve them: approximation algorithms, randomized algorithms, and heuristic algorithms. None of these approaches meets all practical needs. Recently, parameterized computation and complexity has attracted a lot of attention and become a fruitful branch of the study of efficient algorithms. By taking advantage of the moderate value of parameters in many practical instances, we can design efficient algorithms for NP-hard problems in practice. In this dissertation, we discuss a new approach to designing efficient parameterized algorithms: kernelization. The motivation is that instances of small size are easier to solve. Roughly speaking, kernelization is a preprocessing of the input instances that is able to significantly reduce their sizes. We present a 2k kernel for the cluster editing problem, which improves the previous best kernel of size 4k. We also present a linear kernel of size 7k + 2d for the d-cluster editing problem, which is the first linear kernel for the problem. The kernelization algorithm is simple and easy to implement. We propose a quadratic kernel for the pseudo-achromatic number problem, which implies that the problem is tractable in terms of parameterized complexity. We also study the general version, the vertex grouping problem, and prove that it is intractable in terms of parameterized complexity. In practice, many problems seek a set of good solutions instead of a single good solution. Motivated by this, we present a framework for studying enumerability in terms of parameterized complexity. We study three popular techniques for the design of parameterized algorithms and show that, combined with effective enumeration techniques, they can be adapted to design efficient enumeration algorithms.
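As a hedged illustration of what a kernelization is (using the classic Buss reduction for Vertex Cover rather than any of the cluster-editing kernels from this dissertation): any vertex of degree larger than the remaining budget must belong to a size-k vertex cover, and once such vertices are removed, a yes-instance can have at most k^2 edges, so what remains is an equivalent instance whose size is bounded by a function of k alone.

def vertex_cover_kernel(edges, k):
    # edges: collection of frozenset({u, v}) pairs. Returns None if the instance
    # is already known to have no vertex cover of size <= k; otherwise returns
    # (reduced_edges, remaining_budget, forced_vertices), where forced_vertices
    # must be part of every vertex cover of size <= k.
    edges = set(edges)
    forced = set()
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        budget = k - len(forced)
        high = next((v for v, d in degree.items() if d > budget), None)
        if high is None:
            break
        forced.add(high)                              # must be in the cover
        edges = {e for e in edges if high not in e}
        if len(forced) > k:
            return None
    budget = k - len(forced)
    if len(edges) > budget * budget:                  # budget low-degree vertices
        return None                                   # cannot cover this many edges
    return edges, budget, forced

graph = {frozenset(e) for e in [(1, 2), (1, 3), (1, 4), (2, 3)]}
print(vertex_cover_kernel(graph, 2))  # ({frozenset({2, 3})}, 1, {1})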
3

An IP Generator for Multifunctional Discrete Transforms using Parameterized Modules

Lee, Chung-Han 16 August 2004 (has links)
Fast algorithms for the N-point shifted discrete Fourier transform (SDFT) are proposed based on efficient matrix factorization. The resulting matrix decomposition is realized by a cascade of several basic computation blocks, with each block implemented by a parameterized IP module. By combining these modules with different parameters, it is easy to implement a wide variety of digital transforms, such as the DCT/IDCT used in image/video coding and the modified DCT (MDCT) used in audio coding. The transform processors realized using the parameterized IP modules have the advantages of locality, modularity, regularity, low cost, and high throughput. Furthermore, the computation accuracy can be easily controlled by selecting different numbers of IP modules with proper parameters in the processors.
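As a hedged reference point (not the factorization developed in the thesis), one common definition of the N-point shifted DFT offsets the frequency index by a fractional amount s; the direct O(N^2) evaluation below reduces to the ordinary DFT when s = 0, which is the baseline any cascaded fast realization must reproduce.

import numpy as np

def sdft_direct(x, s=0.0):
    # Direct O(N^2) shifted DFT: X[k] = sum_n x[n] * exp(-2j*pi*(k + s)*n / N).
    # With s = 0 this is the ordinary DFT; the thesis's exact formulation and its
    # fast cascaded factorization are not reproduced here.
    x = np.asarray(x, dtype=complex)
    n = np.arange(x.size)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * (k + s) * n / x.size) @ x

x = np.random.rand(8)
print(np.allclose(sdft_direct(x), np.fft.fft(x)))  # True: s = 0 matches numpy's DFT
print(sdft_direct(x, s=0.5)[:2])                   # two samples of the half-bin-shifted spectrum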
4

Parameterized Enumeration of Neighbour Strings and Kemeny Aggregations

Simjour, Narges January 2013 (has links)
In this thesis, we consider approaches to enumeration problems in the parameterized complexity setting. We obtain competitive parameterized algorithms to enumerate all, as well as several of, the solutions for two related problems, Neighbour String and Kemeny Rank Aggregation. In both problems, the goal is to find a solution that is as close as possible to a set of inputs (strings and total orders, respectively) according to some distance measure. We also introduce a notion of enumerative kernels, for which there is a bijection between solutions to the original instance and solutions to the kernel, and provide such a kernel for Kemeny Rank Aggregation, improving a previous kernel for the problem. We demonstrate how several of the algorithms and notions discussed in this thesis extend to a group of parameterized problems, improving published results for some other problems.
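As a hedged illustration of the objective in Kemeny Rank Aggregation (the thesis's parameterized enumeration algorithms are far more refined and are not sketched here): the Kemeny score of a candidate ranking is its total number of pairwise disagreements with the input rankings, i.e. the sum of Kendall tau distances, and for a tiny instance all permutations can be scored and the optimal aggregations enumerated directly.

from itertools import combinations, permutations

def kendall_tau(r1, r2):
    # Number of candidate pairs that the two rankings order differently.
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] < pos1[b]) != (pos2[a] < pos2[b]))

def brute_force_kemeny(votes):
    # Score every permutation of the candidates and return the optimal Kemeny
    # score together with all optimal aggregations (a toy form of enumeration).
    scored = [(sum(kendall_tau(p, v) for v in votes), p)
              for p in permutations(votes[0])]
    best = min(score for score, _ in scored)
    return best, [p for score, p in scored if score == best]

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(brute_force_kemeny(votes))  # (2, [('a', 'b', 'c')])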
5

Parameterized Hardware/Software modules for Embedded ICE

Chen, Po-chou 12 July 2005 (has links)
The in-circuit emulator (ICE) is commonly adopted as a microprocessor debugging technique and offers many advantages, such as a low demand for hardware and repeatable use of the pins on the JTAG port. The development of system-on-chip technology has matured significantly in recent years. The microprocessors in system-on-chip designs have been applied in a variety of ways, and different microprocessors are being used in embedded systems. The traditional modus operandi of debug control, in which an ad hoc hardware/software package is required for each microprocessor, is not economical as far as programming and design are concerned. Thus it is advisable to design a more flexible debug control hardware/software package that can fit different embedded microprocessors with in-circuit emulators. This thesis reviews several types of embedded in-circuit emulator structures and proposes a parameterized, modularized hardware/software package for controlling in-circuit emulators. An initial analysis of microprocessor systems and embedded debug circuits helps us elicit reusable parameters, so that we can achieve the desired debug control by simply adjusting parameters when working with different microprocessor architectures and embedded debug circuits. An ensuing examination of the reusability and functionality of the designed debug control hardware/software enables us to group all of its functions into different functional modules, so that we can simply replace the relevant functional modules for different microprocessor architectures and embedded debug circuits. The parameterized design allows us to use a single debug control software program on different microprocessor systems with only slight changes to parameter settings. The modularized model has the merit of minimizing the effort of adapting debug control through module replacement when the software must be moved to a new environment (such as a different operating system or a different communication interface).
6

Analysis of Parameterized Networks

Nazari, Siamak January 2008 (has links)
In particular, the thesis will focus on parameterized networks of discrete-event systems. These are collections of interacting, isomorphic subsystems, where the number of subsystems is, for practical purposes, arbitrary; thus, the system parameter of interest is, in this case, the size of the network as characterized by the number of subsystems. Parameterized networks are reasonable models of real systems where the number of subsystems is large, unknown, or time-varying: examples include communication, computer and transportation networks. Intuition and engineering practice suggest that, in checking properties of such networks, it should be sufficient to consider a "testbed" network of limited size. However, there is presently little rigorous support for such an approach. In general, the problem of deciding whether a temporal property holds for a parameterized network of finite-state systems is undecidable, and the only decidable subproblems that have so far been identified place unreasonable restrictions on the means by which subsystems may interact. The key to ensuring decidability, and therefore the existence of effective solutions to the problem, is to identify restrictions that limit the computational power of the network. This can be done not only by limiting communication but also by restricting the structure of individual subsystems. In this thesis, we take both approaches, as well as their combination, on two different network topologies: ring networks and fully connected networks.
7

Design and Implementation of High Performance Algorithms for the (n,k)-Universal Set Problem

Luo, Ping 14 January 2010 (has links)
The k-path problem is to find a simple path of length k. This problem is NP-complete and has applications in bioinformatics, for detecting signaling pathways in protein interaction networks and for biological subnetwork matching. Existing implementations solve the problem for k up to 13. The fastest implementation has running time O^*(4.32^k), which is slower than the best known algorithm, with running time O^*(4^k). To implement the best known algorithm for the k-path problem, we need to construct (n,k)-universal sets. In this thesis, we study practical algorithms for constructing (n,k)-universal sets. We propose six algorithm variants to handle the increasing computational time and memory space needed for k = 3, 4, ..., 8. We propose two major empirical techniques that cut the time and space tremendously, yet generate good results. For k = 7, the size of the universal set found by our algorithm is 1576, and it is 4611 for k = 8. We implement the proposed algorithms with the OpenMP parallel interface and construct universal sets for k = 3, 4, ..., 8. Our experiments show that our algorithms for the (n,k)-universal set problem exhibit very good parallelism and hence shed light on an MPI implementation. Ours is the first implementation effort for the (n,k)-universal set problem. We share this effort by proposing an extensible universal set construction and retrieval system, which integrates the universal set construction algorithms and the universal sets constructed. The sets are stored in a centralized database, and an interface is provided to access the database easily. (n,k)-universal sets have been applied to many other NP-complete problems, such as the set splitting problems and the matching and packing problems. The small (n,k)-universal sets we construct will significantly reduce the time needed to solve those problems.
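For readers unfamiliar with the object being constructed, a hedged brute-force sketch of the definition (not one of the construction algorithms from this thesis): a family of length-n binary strings is an (n,k)-universal set if every choice of k coordinate positions sees all 2^k bit patterns among the strings. The checker below verifies this directly, which is feasible only for small n and k.

from itertools import combinations, product

def is_universal(strings, n, k):
    # strings: iterable of length-n 0/1 tuples. True iff, for every set of k
    # positions, the restrictions of the strings to those positions cover all
    # 2^k possible bit patterns.
    strings = [tuple(s) for s in strings]
    for positions in combinations(range(n), k):
        seen = {tuple(s[p] for p in positions) for s in strings}
        if len(seen) < 2 ** k:
            return False
    return True

all_strings = list(product((0, 1), repeat=4))       # all 2^4 strings of length 4
print(is_universal(all_strings, n=4, k=2))          # True: trivially (4,2)-universal
print(is_universal(all_strings[:3], n=4, k=2))      # False: 3 strings cannot cover 4 patterns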
