71

NP search problems / NP vyhledávací problémy

Jirotka, Tomáš January 2011 (has links)
Title: NP search problems Author: Tomáš Jirotka Department: Department of Algebra Supervisor: Prof. RNDr. Jan Krajíček, DrSc. Abstract: The thesis summarizes known results in the field of NP search problems. We discuss the complexity of integer factoring in detail, and we propose new results which place the problem in known classes and aim to separate it from PLS in some sense. Furthermore, we define several new search problems. Keywords: Computational complexity, TFNP, integer factorization.
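The flavor of problem the thesis studies can be made concrete with a small, hedged sketch (not taken from the thesis): integer factoring phrased as a search problem whose solutions, when they exist, are easy to verify. The trial-division solver below is purely illustrative and obviously not an efficient algorithm.

```python
# Integer factoring as a search problem: given a composite n, find a nontrivial
# factor. Solutions are easy to verify, which is the flavor of the TFNP-style
# problems discussed above. The brute-force search is illustrative only.
def verify(n, d):
    """Certificate check: d is a nontrivial factor of n."""
    return 1 < d < n and n % d == 0

def search(n):
    """Exhaustive search for a witness; the interesting question is whether this
    can be done efficiently, not this particular loop."""
    for d in range(2, int(n ** 0.5) + 1):
        if verify(n, d):
            return d
    return None  # no nontrivial factor: n is prime (or n < 4)

print(search(91), verify(91, 7))  # 7 True
```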
72

Computational approaches in compressed sensing

Woolway, Matthew 01 September 2014 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014. / This thesis aims to provide a summary of computational approaches to solving the Compressed Sensing problem. The theoretical problem of solving systems of linear equations has long been investigated in academic literature. Compressed Sensing is a relatively new application of such a problem, specifically one that changes the way in which we obtain and process signals. Under the assumption of sparse signals, Compressed Sensing is able to recover signals sampled at a rate much lower than the Shannon/Nyquist sampling rate. The primary goal of this thesis is to describe the major algorithms currently used in the Compressed Sensing problem. This is done as a means to provide the reader with sufficient up-to-date knowledge of current approaches as well as their means of implementation, on central processing units (CPUs) and graphics processing units (GPUs), when considering computational concerns such as computational time, storage requirements and parallelisability.
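As a concrete illustration of the kind of recovery algorithm surveyed, here is a minimal sketch of Orthogonal Matching Pursuit, a standard greedy method for the Compressed Sensing problem. The dimensions, random matrix, and toy signal below are assumptions for the example, not data from the dissertation.

```python
# A minimal Orthogonal Matching Pursuit (OMP) sketch: greedily recover a sparse x
# from underdetermined measurements y = A @ x. Illustrative only.
import numpy as np

def omp(A, y, sparsity):
    """Recover a `sparsity`-sparse signal x from measurements y = A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the chosen support by least squares.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coeffs
        residual = y - A @ x_hat
    return x_hat

# Toy usage: a 3-sparse signal of length 128 measured with 40 random projections,
# i.e. far fewer measurements than signal coefficients.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 128)) / np.sqrt(40)
x = np.zeros(128)
x[[5, 40, 90]] = [1.0, -2.0, 0.5]
y = A @ x
x_rec = omp(A, y, 3)
print(np.linalg.norm(x_rec - x))  # reconstruction error, typically near zero
```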
73

Parallelism with limited nondeterminism

Finkelstein, Jeffrey 05 March 2017 (has links)
Computational complexity theory studies which computational problems can be solved with limited access to resources. The past fifty years have seen a focus on the relationship between intractable problems and efficient algorithms. However, the relationship between inherently sequential problems and highly parallel algorithms has not been as well studied. Are there efficient but inherently sequential problems that admit some relaxed form of highly parallel algorithm? In this dissertation, we develop the theory of structural complexity around this relationship for three common types of computational problems. Specifically, we show tradeoffs between time, nondeterminism, and parallelizability. By clearly defining the notions and complexity classes that capture our intuition for parallelizable and sequential problems, we create a comprehensive framework for rigorously proving parallelizability and non-parallelizability of computational problems. This framework provides the means to prove whether otherwise tractable problems can be effectively parallelized, a need highlighted by the current growth of multiprocessor systems. The views adopted by this dissertation—alternate approaches to solving sequential problems using approximation, limited nondeterminism, and parameterization—can be applied practically throughout computer science.
74

Computational Structure of GPSG Models: Revised Generalized Phrase Structure Grammar

Ristad, Eric Sven 01 September 1989 (has links)
The primary goal of this report is to demonstrate how considerations from computational complexity theory can inform grammatical theorizing. To this end, generalized phrase structure grammar (GPSG) linguistic theory is revised so that its power more closely matches the limited ability of an ideal speaker-hearer: GPSG Recognition is EXP-POLY time hard, while Revised GPSG Recognition is NP-complete. A second goal is to provide a theoretical framework within which to better understand the wide range of existing GPSG models, embodied in formal definitions as well as in implemented computer programs. A grammar for English and an informal explanation of the GPSG/RGPSG syntactic features are included in appendices.
75

Complexity of Human Language Comprehension

Ristad, Eric Sven 01 December 1988 (has links)
The goal of this article is to reveal the computational structure of modern principle-and-parameter (Chomskian) linguistic theories: what computational problems do these informal theories pose, and what is the underlying structure of those computations? To do this, I analyze the computational complexity of human language comprehension: what linguistic representation is assigned to a given sound? This problem is factored into smaller, interrelated (but independently statable) problems. For example, in order to understand a given sound, the listener must assign a phonetic form to the sound; determine the morphemes that compose the words in the sound; and calculate the linguistic antecedent of every pronoun in the utterance. I prove that these and other subproblems are all NP-hard, and that language comprehension is itself PSPACE-hard.
76

The Asymmetric Traveling Salesman Problem

Mattsson, Per January 2010 (has links)
This thesis is a survey of the approximability of the asymmetric traveling salesman problem with triangle inequality (ATSP). In the ATSP we are given a set of cities and a function that gives the cost of traveling between any pair of cities. The cost function must satisfy the triangle inequality, i.e. the cost of traveling from city A to city B cannot be larger than the cost of traveling from A to some other city C and then to B. However, we allow the cost function to be asymmetric, i.e. the cost of traveling from city A to city B need not equal the cost of traveling from B to A. The problem is then to find the cheapest tour that visits each city exactly once. This problem is NP-hard, and thus we are mainly interested in approximation algorithms. We study the repeated cycle cover heuristic by Frieze et al. We also study the Held-Karp heuristic, including the recent result by Asadpour et al. that gives a new upper bound on the integrality gap. Finally we present the result of Papadimitriou and Vempala which shows that it is NP-hard to approximate the ATSP with a ratio better than 117/116.
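To make the problem statement concrete, the following hedged sketch solves small ATSP instances exactly with the Held-Karp bitmask dynamic program; the thesis itself concerns approximation algorithms and the Held-Karp LP relaxation, and the cost matrix below is invented for the example.

```python
# Exact ATSP via the Held-Karp dynamic program: O(2^n * n^2) time, so only
# suitable for tiny instances. Costs may be asymmetric. Illustrative only.
from itertools import combinations

def atsp_held_karp(cost):
    """cost[i][j] = cost of traveling from city i to city j (not necessarily symmetric)."""
    n = len(cost)
    # dp[(S, j)]: cheapest way to start at city 0, visit exactly the cities in
    # bitmask S (which always contains 0 and j), and end at city j.
    dp = {(1 | (1 << j), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = 1
            for c in subset:
                S |= 1 << c
            for j in subset:
                prev = S ^ (1 << j)
                dp[(S, j)] = min(dp[(prev, k)] + cost[k][j] for k in subset if k != j)
    full = (1 << n) - 1
    # Close the tour by returning to the starting city 0.
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))

# Toy usage: four cities with asymmetric costs.
cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(atsp_held_karp(cost))  # cost of the cheapest tour
```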
77

Computational Voting Theory: Game-Theoretic and Combinatorial Aspects

Xia, Lirong January 2011 (has links)
For at least two thousand years, voting has been used as one of the most effective ways to aggregate people's ordinal preferences. In the last 50 years, the rapid development of Computer Science has revolutionized every aspect of the world, including voting. This motivates us to study (1) conceptually, how computational thinking changes the traditional voting theory, and (2) methodologically, how to better use voting for preference/information aggregation with the help of Computer Science. My Ph.D. work seeks to investigate and foster the interplay between Computer Science and Voting Theory. In this thesis, I will discuss two specific research directions pursued in my Ph.D. work, one for each question asked above. The first focuses on investigating how computational thinking affects the game-theoretic aspects of voting. More precisely, I will discuss the rationale and possibility of using computational complexity to protect voting from a type of strategic behavior of the voters, called manipulation. The second studies a voting setting called Combinatorial Voting, where the set of alternatives is exponentially large and has a combinatorial structure. I will focus on the design and analysis of novel voting rules for combinatorial voting that balance computational efficiency and the expressivity of the voting language, in light of some recent developments in Artificial Intelligence. / Dissertation
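To make "aggregating ordinal preferences" concrete, here is a minimal sketch of one classical voting rule, the Borda count; it is not a rule proposed in the dissertation, and the ballots are invented for the example.

```python
# Borda count: each ballot ranks all alternatives; a ranking over m alternatives
# gives m-1 points to its top choice, m-2 to the next, ..., 0 to the last.
from collections import defaultdict

def borda_winner(ballots):
    """Each ballot is a ranking (best first) over the same set of alternatives."""
    scores = defaultdict(int)
    m = len(ballots[0])
    for ranking in ballots:
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position
    return max(scores, key=scores.get)

# Toy usage with three voters and three alternatives.
ballots = [["a", "b", "c"], ["b", "c", "a"], ["b", "a", "c"]]
print(borda_winner(ballots))  # "b"
```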
78

Computing with Polynomials over Composites

Gopalan, Parikshit 07 July 2006 (has links)
In the last twenty years, algebraic techniques have been applied with great success to several areas in theoretical computer science. However, for many problems involving modular counting, there is a huge gap in our understanding depending on whether the modulus is prime or composite. A prime example is the problem of showing lower bounds for circuits with Mod gates in circuit complexity. Proof techniques that work well for primes break down over composites. Moreover, in some cases, the problem for composites turns out to be very different from the prime case. Making progress on these problems seems to require a better understanding of polynomials over composites. In this thesis, we address some such "prime vs. composite" problems from algorithms, complexity and combinatorics, and the surprising connections between them. We consider the complexity-theoretic problem of computing Boolean functions using polynomials modulo composites. We show that symmetric polynomials can be viewed as simultaneous communication protocols. This equivalence allows us to use techniques from communication complexity and number theory to prove degree bounds. We use these to give the first tight degree bounds for a number of Boolean functions. We consider the combinatorial problem of explicit construction of Ramsey graphs. We present a simple construction of such graphs using polynomials modulo composites. This approach gives a unifying view of many known constructions, and explains why they all achieve the same bound. We show that certain approaches to this problem cannot give better bounds. Finally, we consider the algorithmic problem of interpolation for polynomials modulo composites. We present the first query-efficient algorithms for interpolation and learning under a distribution. These results rely on some new structural results about such polynomials.
79

Parameterized algorithms and computational lower bounds: a structural approach

Xia, Ge 30 October 2006 (has links)
Many problems of practical significance are known to be NP-hard, and hence, are unlikely to be solved by polynomial-time algorithms. There are several ways to cope with the NP-hardness of a certain problem. The most popular approaches include heuristic algorithms, approximation algorithms, and randomized algorithms. Recently, parameterized computation and complexity have been receiving a lot of attention. By taking advantage of small or moderate parameter values, parameterized algorithms provide new avenues for practically solving problems that are theoretically intractable. In this dissertation, we design efficient parameterized algorithms for several well-known NP-hard problems and prove strong lower bounds for some others. In doing so, we place emphasis on the development of new techniques that take advantage of the structural properties of the problems. We present a simple parameterized algorithm for Vertex Cover that uses polynomial space and runs in time O(1.2738^k + kn). It improves both the previous O(1.286^k + kn)-time polynomial-space algorithm by Chen, Kanj, and Jia, and the very recent O(1.2745^k k^4 + kn)-time exponential-space algorithm by Chandran and Grandoni. This algorithm stands out for both its performance and its simplicity. Essential to the design of this algorithm are several new techniques that use structural information of the underlying graph to bound the search space. For Vertex Cover on graphs with degree bounded by three, we present a still better algorithm that runs in time O(1.194^k + kn), based on an “almost-global” analysis of the search tree. We also show that an important structural property of the underlying graphs – the graph genus – largely dictates the computational complexity of some important graph problems including Vertex Cover, Independent Set and Dominating Set. We present a set of new techniques that allows us to prove almost tight computational lower bounds for some NP-hard problems, such as Clique, Dominating Set, Hitting Set, Set Cover, and Independent Set. The techniques are further extended to derive computational lower bounds on polynomial time approximation schemes for certain NP-hard problems. Our results illustrate a new approach to proving strong computational lower bounds for some NP-hard problems under reasonable conditions.
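The branching idea behind such parameterized algorithms can be illustrated with a hedged sketch of the classic O(2^k) bounded search tree for Vertex Cover; the thesis's O(1.2738^k + kn) algorithm refines this with far more careful case analysis, and the toy graph below is an assumption for the example.

```python
# Classic bounded-search-tree algorithm for parameterized Vertex Cover:
# pick any uncovered edge (u, v); some endpoint must be in the cover, so branch
# on both choices with budget k-1. Runs in O(2^k * (n + m)) time. Illustrative only.
def has_vertex_cover(edges, k):
    """True iff the graph given as a list of edges has a vertex cover of size <= k."""
    uncovered = next(iter(edges), None)
    if uncovered is None:
        return True          # every edge is already covered
    if k == 0:
        return False         # an edge remains but the budget is exhausted
    u, v = uncovered
    rest_u = [(a, b) for (a, b) in edges if a != u and b != u]
    rest_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

# Toy usage: a 4-cycle needs two vertices to cover all its edges.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_vertex_cover(cycle, 1), has_vertex_cover(cycle, 2))  # False True
```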
80

Realizable paths and the NL vs L problem

Prasad, Kintali Shiva 29 August 2011 (has links)
A celebrated theorem of Savitch [Savitch'70] states that NSPACE(S) is contained in DSPACE(S²). In particular, Savitch gave a deterministic algorithm to solve ST-Connectivity (an NL-complete problem) using O(log² n) space, implying NL (non-deterministic logspace) is contained in DSPACE(log² n). While Savitch's theorem itself has not been improved in the last four decades, several graph connectivity problems are shown to lie between L and NL, providing new insights into the space-bounded complexity classes. All the connectivity problems considered in the literature so far are essentially special cases of ST-Connectivity. In this dissertation, we initiate the study of auxiliary PDAs as graph connectivity problems and define sixteen different "graph realizability problems" and study their relationships. The complexity of these connectivity problems lies between L (logspace) and P (polynomial time). ST-Realizability, the most general graph realizability problem, is P-complete. 1DSTREAL(poly), the most specific graph realizability problem, is L-complete. As special cases of our graph realizability problems we define two natural problems, Balanced ST-Connectivity and Positive Balanced ST-Connectivity, that lie between L and NL. We study the space complexity of SGSLOGCFL, a graph realizability problem lying between L and LOGCFL. We define generalizations of graph squaring and transitive closure, present efficient parallel algorithms for SGSLOGCFL and use the techniques of Trifonov to show that SGSLOGCFL is contained in DSPACE(log n · log log n). This implies that Balanced ST-Connectivity is contained in DSPACE(log n · log log n). We conclude with several interesting new research directions.
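The recursive idea behind Savitch's theorem, mentioned above, fits in a few lines: reachability within 2^i steps reduces to two reachability questions within 2^(i-1) steps through a guessed midpoint, so the recursion depth is O(log n) and each frame needs O(log n) bits. The sketch below is a plain (time-inefficient) rendering of that recursion, with an invented toy graph.

```python
# Savitch-style recursive reachability: path(u, v, i) asks whether v is reachable
# from u in at most 2**i steps. Depth O(log n), O(log n) bits per frame, hence
# O(log^2 n) space; the running time, however, is quasi-polynomial. Illustrative only.
import math

def path(adj, u, v, i):
    if u == v:
        return True
    if i == 0:
        return v in adj[u]
    # Guess a midpoint w and solve two half-length subproblems.
    return any(path(adj, u, w, i - 1) and path(adj, w, v, i - 1) for w in adj)

def st_connected(adj, s, t):
    n = len(adj)
    return path(adj, s, t, math.ceil(math.log2(n)) if n > 1 else 0)

# Toy usage: a directed path 0 -> 1 -> 2 -> 3 with no edges back.
adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(st_connected(adj, 0, 3), st_connected(adj, 3, 0))  # True False
```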
