1

Lower bounds in communication complexity and learning theory via analytic methods

Sherstov, Alexander Alexandrovich, 23 October 2009
A central goal of theoretical computer science is to characterize the limits of efficient computation in a variety of models. We pursue this research objective in the contexts of communication complexity and computational learning theory. In the former case, one seeks to understand which distributed computations require a significant amount of communication among the parties involved. In the latter case, one aims to rigorously explain why computers cannot master some prediction tasks or learn from past experience. While communication and learning may seem to have little in common, they turn out to be closely related, and much insight into both can be gained by studying them jointly. Such is the approach pursued in this thesis. We answer several fundamental questions in communication complexity and learning theory and, in so doing, discover new relations between the two topics. A consistent theme in our work is the use of analytic methods, such as approximation theory, Fourier analysis, matrix analysis, and duality, to solve the problems at hand.

We contribute a novel technique, the pattern matrix method, for proving lower bounds on communication. Using our method, we solve an open problem due to Krause and Pudlák (1997) on the comparative power of two well-studied circuit classes: majority circuits and constant-depth AND/OR/NOT circuits. Next, we prove that the pattern matrix method applies not only to classical communication but also to the more powerful quantum model. In particular, we contribute lower bounds for a new class of quantum communication problems, broadly subsuming the celebrated work of Razborov (2002), who used different techniques. In addition, our method has enabled considerable progress by a number of researchers in the area of multiparty communication.

Next, we study unbounded-error communication, a natural model with applications to matrix analysis, circuit complexity, and learning. We obtain essentially optimal lower bounds for all symmetric functions, giving the first strong results for unbounded-error communication in years. We then resolve a longstanding open problem due to Babai, Frankl, and Simon (1986) on the comparative power of unbounded-error communication and alternation, showing that [mathematical equation]. The latter result also yields an unconditional, exponential lower bound for learning DNF formulas by a large class of algorithms, which explains why this central problem in computational learning theory has remained open after more than twenty years of research.

We establish the computational intractability of learning intersections of halfspaces, a major unresolved challenge in computational learning theory. Specifically, we obtain the first exponential, near-optimal lower bounds for the learning complexity of this problem in Kearns' statistical query model, Valiant's PAC model (under standard cryptographic assumptions), and various analytic models. We also prove that the intersection of even two halfspaces on {0,1}^n cannot be sign-represented by a polynomial of degree less than Θ(√n), an exponential improvement on previous lower bounds that solves an open problem due to Klivans (2002).

Finally, we fully determine the relations and gaps among three key complexity measures of a communication problem: product discrepancy, sign-rank, and discrepancy. As an application, we solve an open problem due to Kushilevitz and Nisan (1997) on distributional complexity under product versus nonproduct distributions, and we separate the communication classes PP^cc and UPP^cc due to Babai, Frankl, and Simon (1986). We give interpretations of our results in purely learning-theoretic terms.
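The Θ(√n) degree bound above has a finite analogue that is easy to experiment with. The following sketch is not from the thesis; it is a minimal illustration, assuming small n, of how the threshold degree of a Boolean function (the least degree of a polynomial that sign-represents it) can be computed by a linear-programming feasibility test. The function two_halfspaces and all helper names are illustrative choices.

```python
# Minimal sketch: threshold degree of a Boolean function via LP feasibility.
# Illustrative only; brute force over all 2^n inputs, so viable for tiny n.
from itertools import combinations, product

import numpy as np
from scipy.optimize import linprog


def monomials(n, d):
    """All index sets S with |S| <= d; over {-1,+1}^n the monomial is prod_{i in S} x_i."""
    return [S for k in range(d + 1) for S in combinations(range(n), k)]


def sign_representable(f, n, d):
    """LP feasibility: does some degree-<=d polynomial p satisfy f(x)*p(x) >= 1
    for every x in {-1,+1}^n?  (Scaling makes ">0 everywhere" equivalent to ">=1".)"""
    mons = monomials(n, d)
    A_ub, b_ub = [], []
    for x in product((-1, 1), repeat=n):
        fx = f(x)
        # Constraint f(x) * sum_S c_S prod_{i in S} x_i >= 1, rewritten as A_ub @ c <= -1.
        A_ub.append([-fx * np.prod([x[i] for i in S]) for S in mons])
        b_ub.append(-1.0)
    res = linprog(c=np.zeros(len(mons)), A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(None, None)] * len(mons), method="highs")
    return res.success


def threshold_degree(f, n):
    """Least d admitting a sign-representing polynomial (d = n always suffices)."""
    return next(d for d in range(n + 1) if sign_representable(f, n, d))


def two_halfspaces(x):  # illustrative intersection of two halfspaces on 4 bits
    h1 = x[0] + x[1] + x[2] + x[3] >= 0
    h2 = x[0] + x[1] - x[2] - x[3] >= 0
    return 1 if (h1 and h2) else -1


print(threshold_degree(two_halfspaces, 4))
```

Strong LP duality is also the source of the lower-bound technique: infeasibility of this program for every degree below d certifies that the threshold degree is at least d.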
2

Computational applications of invariance principles

Meka, Raghu Vardhan Reddy, 14 August 2015
This thesis focuses on applications of classical tools from probability theory and convex analysis, such as limit theorems, to problems in theoretical computer science, specifically to pseudorandomness and learning theory. At first glance, limit theorems, pseudorandomness, and learning theory appear to be disparate subjects. However, as has now become apparent, these subjects are strongly connected through a third, more abstract question: what do random objects look like? This connection is best illustrated by the study of the spectrum of Boolean functions, which has directly or indirectly played an important role in a plethora of results in complexity theory. The current thesis aims to take this program further by drawing on a variety of fundamental tools, both classical and new, from probability theory and analytic geometry. Our research contributions broadly fall into three categories.

Probability Theory: The central limit theorem is one of the most important results in all of probability and a richly studied topic. Motivated by questions in pseudorandomness and learning theory, we obtain two new limit theorems, or invariance principles. The proofs of these new results in probability, of interest in their own right, have a computer-science flavor and fall into the niche category of techniques from theoretical computer science with applications in pure mathematics.

Pseudorandomness: Derandomizing natural complexity classes is a fundamental problem in complexity theory, with several applications outside complexity theory. Our work addresses such derandomization questions for natural and basic geometric concept classes such as halfspaces, polynomial threshold functions (PTFs), and polytopes. We develop a reasonably generic framework for obtaining pseudorandom generators (PRGs) from invariance principles and apply the framework to old and new invariance principles to obtain the best known PRGs for these complexity classes.

Learning Theory: Learning theory aims to understand which functions can be learned efficiently from examples. Through the seminal work of Linial, Mansour, and Nisan (1994), strengthened by several follow-up works, we now know of strong connections between the learnability of a class of functions and how sensitive to noise the functions are, as quantified by average sensitivity and noise sensitivity. Besides their applications in learning, bounds on average and noise sensitivity have applications in hardness of approximation, voting theory, quantum computing, and more. Here we address the question of bounding the sensitivity of polynomial threshold functions and intersections of halfspaces, and we obtain the best known results for these concept classes.
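As a concrete illustration of the noise-sensitivity quantity referred to above, here is a minimal Monte Carlo sketch (not from the thesis; the weights, trial count, and names are illustrative) that estimates NS_δ(f) = Pr[f(x) ≠ f(y)] for a majority halfspace, where y is x with each coordinate flipped independently with probability δ. For halfspaces, Peres's theorem bounds this quantity by O(√δ).

```python
# Minimal sketch: Monte Carlo estimate of the noise sensitivity of a halfspace.
import numpy as np

rng = np.random.default_rng(0)


def halfspace(x, w, theta=0.0):
    """Sign of w . x - theta; with all-ones w on an odd number of bits, never zero."""
    return np.sign(x @ w - theta)


def noise_sensitivity(w, delta, trials=100_000):
    """Estimate NS_delta(f) = Pr[f(x) != f(y)], x uniform on {-1,+1}^n,
    y obtained from x by flipping each coordinate independently with prob. delta."""
    n = len(w)
    x = rng.choice((-1, 1), size=(trials, n))
    flips = rng.random((trials, n)) < delta
    y = np.where(flips, -x, x)
    return np.mean(halfspace(x, w) != halfspace(y, w))


w = np.ones(101)  # majority on 101 bits
for delta in (0.01, 0.05, 0.1):
    print(delta, noise_sensitivity(w, delta))  # grows roughly like sqrt(delta)
```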
3

Addressing Fundamental Limitations in Differentially Private Machine Learning

Nandi, Anupama, January 2021
No description available.
4

Intractability Results for some Computational Problems

Ponnuswami, Ashok Kumar, 08 July 2008
In this thesis, we show results for some well-studied problems from learning theory and combinatorial optimization.

Learning Parities under the Uniform Distribution: We study the learnability of parities in the agnostic learning framework of Haussler and of Kearns et al. We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem. Together with the parity learning algorithm of Blum et al., this gives the first nontrivial algorithm for agnostic learning of parities. We use similar techniques to reduce learning of two other fundamental concept classes under the uniform distribution to learning of noisy parities: learning of DNF expressions reduces to learning noisy parities on just a logarithmic number of variables, and learning of k-juntas reduces to learning noisy parities on k variables.

Agnostic Learning of Halfspaces: We give an essentially optimal hardness result for agnostic learning of halfspaces over the rationals. We show that for any constant ε > 0, finding a halfspace that agrees with an unknown function on a 1/2 + ε fraction of examples is NP-hard even when there exists a halfspace that agrees with the unknown function on a 1 − ε fraction of examples. This significantly improves on a number of previous hardness results for this problem. We extend the result to ε = 2^{−Ω(√(log n))} assuming NP is not contained in DTIME(2^{(log n)^{O(1)}}).

Majorities of Halfspaces: We show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem. This also implies a hardness result for learning halfspaces with a high rate of adversarial noise, even if the learning algorithm can output any efficiently computable hypothesis.

Max-Clique, Chromatic Number, and Min-3Lin-Deletion: We prove an improved hardness-of-approximation result for two problems: finding the size of the largest clique in a graph (the Max-Clique problem) and finding the chromatic number of a graph. We show that for any constant γ > 0, there is no polynomial-time algorithm that approximates these problems within a factor of n/2^{(log n)^{3/4+γ}} on n-vertex graphs, assuming NP is not contained in BPTIME(2^{(log n)^{O(1)}}). This improves on the hardness factor of n/2^{(log n)^{1−γ'}}, for some small (unspecified) constant γ' > 0, shown by Khot. Our main idea is to show an improved hardness result for the Min-3Lin-Deletion problem. An instance of Min-3Lin-Deletion is a system of linear equations modulo 2, where each equation is over three variables. The objective is to find the minimum number of equations that need to be deleted so that the remaining system has a satisfying assignment. We show a hardness factor of 2^{√(log n)} for this problem, improving upon the hardness factor of (log n)^β, for some small (unspecified) constant β > 0, shown by Håstad. The hardness results for Max-Clique and chromatic number are then obtained via the reduction from Min-3Lin-Deletion given by Khot.

Monotone Multilinear Boolean Circuits for Bipartite Perfect Matching: A monotone Boolean circuit is said to be multilinear if, for any AND gate in the circuit, the minimal representations of the gate's two input functions share no variables. We show that monotone multilinear Boolean circuits computing bipartite perfect matching require exponential size. In fact, we prove a stronger result by characterizing the structure of the smallest monotone multilinear Boolean circuits for the problem.
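To make the Min-3Lin-Deletion definition above concrete, here is a minimal brute-force sketch (the instance and names are illustrative, not from the thesis) that finds the fewest equations to delete by exhausting all 2^n assignments. This is viable only for tiny n; the hardness results quoted above concern polynomial-time approximation.

```python
# Minimal sketch: exact Min-3Lin-Deletion by exhaustive search over assignments.
from itertools import product


def min_3lin_deletion(n, equations):
    """equations: list of ((i, j, k), b) encoding x_i + x_j + x_k = b (mod 2).
    Returns the minimum number of equations to delete so the rest are satisfiable,
    i.e. the minimum number of violated equations over all 2^n assignments."""
    best = len(equations)
    for assignment in product((0, 1), repeat=n):
        violated = sum(
            1 for (i, j, k), b in equations
            if (assignment[i] ^ assignment[j] ^ assignment[k]) != b
        )
        best = min(best, violated)
    return best


eqs = [((0, 1, 2), 0), ((1, 2, 3), 1), ((0, 2, 3), 1), ((0, 1, 3), 0)]
print(min_3lin_deletion(4, eqs))  # fewest deletions for this toy system
```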
