1 |
Computational aspects of radiation hybrid mapping
Ivansson, Lars, January 2000
No description available.
|
2 |
Lower bounds and correctness results for locally decodable codes
Mills, Andrew Jesse, 27 January 2012
We study fundamental properties of Locally Decodable Codes (LDCs). LDCs are motivated by the intuition that traditional codes do not have a good tradeoff between resistance to arbitrary error and probe complexity. For example, if you apply a traditional code to a database, the resulting codeword can be resistant to error even if a constant fraction of it was corrupted; however, to accomplish this, the decoding procedure would typically have to analyze the entire codeword. For large data sizes, this is considered computationally expensive. This may be necessary even if you are only trying to recover a single bit of the database! This motivates the concept of LDCs, which encode data in such a way that up to a constant fraction of the result can be corrupted, while the decoding procedures only need to read a sublinear, ideally constant, number of codeword bits to retrieve any bit of the input with high probability.

Our most exciting contribution is an exponential lower bound on the length of three query LDCs (binary or linear) with high correctness. This is the first strong length lower bound for any kind of LDC allowing more than two queries. For LDCs allowing three or more queries, the previous best lower bound, given by Woodruff, is below Ω(n^2). Currently, the best upper bound is sub-exponential, but still very large. If polynomial length constructions exist, LDCs might be useful in practice. If polynomial length constructions do not exist, LDCs are much less likely to find adoption -- the resources required to implement them for large database sizes would be prohibitive. We prove that in order to achieve just slightly higher correctness than the current best constructions, three query LDCs (binary or linear) require exponential size.

We also prove several impossibility results for LDCs. It has been observed that for an LDC that withstands up to a delta fraction of error, the probability of correctness cannot be arbitrarily close to 1. However, we are the first to estimate the largest correctness probability obtainable for a given delta. We prove close to tight bounds for arbitrary numbers of queries.
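To illustrate the local-decoding idea itself (this is the textbook two-query Hadamard code, not one of the three-query constructions studied in the thesis), the following Python sketch encodes a short bit-vector and recovers a single input bit with two queries even after part of the codeword is corrupted; all function names are illustrative.

```python
import random

def hadamard_encode(x):
    """Encode the bit-vector x as all parities <x, a> mod 2 over a in {0,1}^n.

    The codeword has length 2^n; position a holds sum_j x_j * a_j mod 2."""
    n = len(x)
    return [sum(x[j] * ((a >> j) & 1) for j in range(n)) % 2 for a in range(2 ** n)]

def local_decode(codeword, i, n):
    """Recover x_i with two queries: a random position a and the position a XOR e_i.

    If at most a delta fraction of the codeword is corrupted, both queries hit
    uncorrupted positions with probability at least 1 - 2*delta."""
    a = random.randrange(2 ** n)
    b = a ^ (1 << i)                      # flip coordinate i of the query point
    return codeword[a] ^ codeword[b]      # <x, a> XOR <x, a + e_i> = x_i

# Tiny demonstration: corrupt one of the 16 codeword bits and decode x_2.
x = [1, 0, 1, 1]
c = hadamard_encode(x)
c[random.randrange(len(c))] ^= 1          # noise at a random position
print(local_decode(c, 2, len(x)))         # equals x[2] = 1 with high probability
```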
|
3 |
Parameterized algorithms and computational lower bounds: a structural approach
Xia, Ge, 30 October 2006
Many problems of practical significance are known to be NP-hard, and hence, are unlikely to be solved by polynomial-time algorithms. There are several ways to cope with the NP-hardness of a problem. The most popular approaches include heuristic algorithms, approximation algorithms, and randomized algorithms. Recently, parameterized computation and complexity have been receiving a lot of attention. By taking advantage of small or moderate parameter values, parameterized algorithms provide new avenues for practically solving problems that are theoretically intractable.

In this dissertation, we design efficient parameterized algorithms for several well-known NP-hard problems and prove strong lower bounds for some others. In doing so, we place emphasis on the development of new techniques that take advantage of the structural properties of the problems.
We present a simple parameterized algorithm for Vertex Cover that uses polynomial space and runs in time O(1.2738^k + kn). It improves both the previous O(1.286^k + kn)-time polynomial-space algorithm by Chen, Kanj, and Jia, and the very recent O(1.2745^k k^4 + kn)-time exponential-space algorithm by Chandran and Grandoni. This algorithm stands out for both its performance and its simplicity. Essential to the design of this algorithm are several new techniques that use structural information of the underlying graph to bound the search space.
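For background, the basic bounded-search-tree idea underlying such algorithms can be sketched in a few lines of Python: any edge (u, v) must have u or v in the cover, so branch on both choices and decrement the budget k. This yields only the classical O(2^k * m) behavior, far weaker than the refined branching above; the sketch below is purely illustrative.

```python
def vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size <= k.

    Classic 2^k-branching sketch: any remaining edge (u, v) forces u or v
    into the cover, so try both and decrement the budget."""
    if not edges:
        return True                      # no edges left: the chosen cover works
    if k == 0:
        return False                     # edges remain but the budget is spent
    u, v = edges[0]
    # Branch 1: take u; all edges touching u are now covered.
    if vertex_cover([e for e in edges if u not in e], k - 1):
        return True
    # Branch 2: take v; all edges touching v are now covered.
    return vertex_cover([e for e in edges if v not in e], k - 1)

# A 4-cycle has a vertex cover of size 2 (e.g. {0, 2}) but none of size 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover(cycle, 2), vertex_cover(cycle, 1))   # True False
```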
For Vertex Cover on graphs with degree bounded by three, we present a still better algorithm that runs in time O(1.194^k + kn), based on an "almost-global" analysis of the search tree.
We also show that an important structural property of the underlying graphs, the graph genus, largely dictates the computational complexity of some important graph problems, including Vertex Cover, Independent Set, and Dominating Set.
We present a set of new techniques that allows us to prove almost tight computational
lower bounds for some NP-hard problems, such as Clique, Dominating Set,
Hitting Set, Set Cover, and Independent Set. The techniques are further extended
to derive computational lower bounds on polynomial time approximation schemes for
certain NP-hard problems. Our results illustrate a new approach to proving strong
computational lower bounds for some NP-hard problems under reasonable conditions.
|
4 |
Information, complexity and structure in convex optimization
Guzman Paredes, Cristobal, 08 June 2015
This thesis is focused on the limits of performance of large-scale convex optimization algorithms. The classical theory of oracle complexity, first proposed by Nemirovski and Yudin in 1983, successfully established the worst-case behavior of methods based on local oracles (a generalization of the first-order oracle for smooth functions) for nonsmooth convex minimization, both in the large-scale and low-scale regimes, as well as the complexity of approximately solving linear systems of equations (equivalent to convex quadratic minimization) over Euclidean balls, under a matrix-vector multiplication oracle.
Our work extends the applicability of lower bounds in two directions:
Worst-Case Complexity of Large-Scale Smooth Convex Optimization: We generalize lower bounds on the complexity of first-order methods for convex optimization, considering classes of convex functions with Hölder continuous gradients. Our technique relies on the existence of a smoothing kernel, which defines a smooth approximation of any convex function via infimal convolution. As a consequence, we derive lower bounds for ℓ_p/ℓ_q-setups, where 1 ≤ p, q ≤ ∞, and extend them to the matrix analogue: smooth convex minimization (with respect to the Schatten q-norm) over matrices with bounded Schatten p-norm.
The major consequences of this result are the near-optimality of the Conditional Gradient method over box-type domains (p = q = ∞), and the near-optimality of Nesterov's accelerated method over the cross-polytope (p = q = 1).
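For readers unfamiliar with the smoothing step, a standard instance of such a kernel is the Moreau envelope, the infimal convolution of f with a quadratic; it is stated below with the Euclidean norm purely as background, whereas the thesis works with general ℓ_p and Schatten-norm setups.

```latex
% Infimal convolution of a convex f with the quadratic kernel \tfrac{1}{2\mu}\|\cdot\|_2^2
% (the Moreau envelope), stated with the Euclidean norm for concreteness:
f_\mu(x) \;=\; \inf_{y}\Bigl\{ f(y) + \tfrac{1}{2\mu}\|x-y\|_2^2 \Bigr\}, \qquad \mu > 0.
% f_\mu is convex and differentiable with a (1/\mu)-Lipschitz gradient, and
% f_\mu(x) \le f(x) \le f_\mu(x) + \tfrac{\mu L^2}{2} whenever f is L-Lipschitz.
```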
Distributional Complexity of Nonsmooth Convex Optimization: In this work, we prove average-case lower bounds on the complexity of nonsmooth convex optimization. We introduce an information-theoretic method, based on the reconstruction principle, to analyze the complexity of oracle-based algorithms solving a random instance.
Our technique shows that all known lower bounds for nonsmooth convex optimization can be derived by an emulation procedure from a common String-Guessing Problem, which is combinatorial in nature. The derived average-case lower bounds extend, via Fano's inequality, to hold with high probability and for algorithms with bounded error probability.
Finally, from the proposed technique we establish the equivalence (up to constant factors) of distributional, randomized, and worst-case complexity for black-box convex optimization. In particular, there is no gain from randomization in this setup.
|
5 |
Lower bounds to eigenvalues by the method of arbitrary choice without truncation
Marmorino, Matthew G., 30 April 1999
After a detailed discussion of the variation theorem for upper bound calculation of eigenvalues, many standard procedures for determining lower bounds to eigenvalues are presented with chemical applications in mind. A new lower bound method, arbitrary choice without truncation, is presented and tested on the helium atom. This method is attractive because it does not require knowledge of the eigenvalues or eigenvectors of the base problem. In application, however, it is shown that the method is disappointing for two reasons: 1) the method does not guarantee improved bounds as calculational effort is increased; and 2) the method requires some a priori information which, in general, may not be available. A possible direction for future work is pointed out at the end.
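As background on what such bounds look like, the classical pairing of the variational upper bound with Temple's lower bound is sketched below; it is given only as a representative textbook example and is not the arbitrary-choice method of the thesis.

```latex
% Rayleigh quotient upper bound and Temple's lower bound for the ground-state
% energy E_0, with a normalized trial function \psi and a lower bound
% \epsilon_1 \le E_1 to the first excited energy satisfying
% \langle H \rangle < \epsilon_1:
E_0 \;\le\; \langle H \rangle = \langle \psi | H | \psi \rangle,
\qquad
E_0 \;\ge\; \langle H \rangle
  - \frac{\langle H^2 \rangle - \langle H \rangle^2}{\epsilon_1 - \langle H \rangle}.
```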
An extension of a lower bound method by Calogero and Marchioro has been developed and is presented in appendix G along with comments on the effective field method in appendix H for Virginia Tech access only. / Ph. D. / To avoid copyright infringements, access to these three appendices (G, H, and I) has been permanently limited to the Virginia Tech campus. In the case that Virginia Tech places these appendices freely on the internet, Virginia Tech is solely responsible for copyright violations.
|
6 |
Constant Lower Bounds on the Cryptographic Security of Quantum Two-Party Computations
Osborn, Sarah Anne, 24 May 2022
In this thesis, we generate a lower bound on the security of quantum protocols for secure function evaluation. Central to our proof is the concept of gentle measurements of quantum states, which do not greatly disturb a quantum state if a certain outcome is obtained with high probability. We show how a cheating party can leverage gentle measurements to learn more information than should be allowable. To quantify our lower bound, we reduce a specific cryptographic task known as die-rolling to secure function evaluation and use the concept of gentle measurements to relate their security notions. Our lower bound is then obtained using a known security bound for die-rolling known as Kitaev's bound.
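The gentle-measurement property referred to above has the following standard form (quoted here as background; the precise variant and constants used in the thesis may differ).

```latex
% Gentle measurement lemma (standard form): if a POVM element
% 0 \le \Lambda \le I accepts the state \rho with probability
% \operatorname{Tr}(\Lambda\rho) \ge 1-\varepsilon, then the (unnormalized)
% post-measurement state is close to \rho in trace distance:
\bigl\| \rho - \sqrt{\Lambda}\,\rho\,\sqrt{\Lambda} \bigr\|_1 \;\le\; 2\sqrt{\varepsilon}.
```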
Due to the generality of secure function evaluation, we are able to apply this lower bound to obtain lower bounds on the security of quantum protocols for many quantum tasks. In particular, we provide lower bounds for oblivious transfer, XOR oblivious transfer, the equality function, the inner product function, Yao's millionaires' problem, and the secret phrase problem. Note that many of these lower bounds are the first of their kind, which is a testament to the utility of our lower bound. As a consequence, these bounds prove that unconditional security for quantum protocols is impossible for these applications, and since these are constant lower bounds, this rules out any form of boosting toward perfect security.
Our work lends itself to future research on designing optimal protocols for the above listed tasks, and potentially others, by providing constant lower bounds to approximate or improve. / Master of Science / Quantifying the cryptographic security of quantum applications is the focus of much research in the quantum cryptography discipline. Quantum protocols might have better security than their classical counterparts, and this advantage might make the adoption of quantum cryptographic protocols a viable option. In this thesis, we introduce a method for generating constant lower bounds on the security of a variety of quantum applications. This is accomplished through finding a lower bound on the security of a protocol that is general, and by virtue of its generality, can be scoped to quantum applications such that the lower bound can be applied, and constant lower bounds generated for these applications. The significance of the work in this thesis is that many of the constant lower bounds presented are the first of their kind for these quantum applications, thus proving the impossibility of them having unconditional security. This also proves that one cannot asymptotically boost towards perfect security in these quantum tasks by any means. These constant lower bounds also provide a foundation for future work in the study of these quantum applications, specifically in the search for upper and lower bounds on their cryptographic security, as well as in the search for protocols that approximate these bounds.
|
7 |
Lower bounds for integer programming problems
Li, Yaxian, 17 September 2013
Solving real world problems with mixed integer programming (MIP) involves efforts in modeling and efficient algorithms. To solve a minimization MIP problem, a lower bound is needed in a branch-and-bound algorithm to evaluate the quality of a feasible solution and to improve the efficiency of the algorithm. This thesis develops a new MIP model and studies algorithms for obtaining lower bounds for MIP.
The first part of the thesis is dedicated to a new production planning model with pricing decisions. To increase profit, a company can use pricing to influence its demand to increase revenue, decrease cost, or both. We present a model that uses pricing discounts to increase production and delivery flexibility, which helps to decrease costs. Although the revenue can be hurt by introducing pricing discounts, the total profit can be increased by properly choosing the discounts and production and delivery decisions. We further explore the idea with variations of the model and present the advantages of using flexibility to increase profit.
The second part of the thesis focuses on solving integer programming (IP) problems by improving lower bounds. Specifically, we consider obtaining lower bounds for the multi-dimensional knapsack problem (MKP). Because MKP lacks special structures, it allows us to consider general methods for obtaining lower bounds for IP, which includes various relaxation algorithms. A problem relaxation is achieved by either enlarging the feasible region, or decreasing the value of the objective function on the feasible region. In addition, dual algorithms can also be used to obtain lower bounds, which work directly on solving the dual problems.
We first present some characteristics of the value function of MKP and extend some properties from the knapsack problem to MKP. The properties of MKP allow some large scale problems to be reduced to smaller ones. In addition, the quality of corner relaxation bounds of MKP is considered. We explore conditions under which the corner relaxation is
tight for MKP, such that relaxing some of the constraints does not affect the quality of the lower bounds. To evaluate the overall tightness of the corner relaxation, we also show the worst-case gap of the corner relaxation for MKP.
To identify parameters that contribute the most to the hardness of MKP and further evaluate the quality of lower bounds obtained from various algorithms, we analyze the characteristics that impact the hardness of MKP with a series of computational tests and establish a testbed of instances for computational experiments in the thesis.
Next, we examine the lower bounds obtained from various relaxation algorithms computationally. We study methods of choosing constraints for relaxations that produce high-quality lower bounds. We use information obtained from linear relaxations to choose constraints to relax. However, for many hard instances, choosing the right constraints can be challenging, due to the inaccuracy of the LP information. We thus develop a dual heuristic algorithm that explores various constraints to be used in relaxations in the Branch-and-Bound algorithm. The algorithm uses lower bounds obtained from surrogate relaxations to improve the LP bounds, where the relaxed constraints may vary for different nodes. We also examine adaptively controlling the parameters of the algorithm to improve the performance.
Finally, the thesis presents two problem-specific algorithms to obtain lower bounds for MKP: a subadditive lifting method is developed to construct subadditive dual solutions, which always provide valid lower bounds. In addition, since MKP can be reformulated as a shortest path problem, we present a shortest path algorithm that uses distances estimated by solving relaxation problems. The recursive structure of the graph is used to accelerate the algorithm. Computational results of the shortest path algorithm are given on the testbed instances.
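To make the surrogate-relaxation idea concrete, the sketch below (a generic textbook illustration in Python, not the dual heuristic of the thesis; all names are hypothetical) aggregates the rows of a small maximization MKP with nonnegative multipliers and solves the resulting single knapsack exactly by dynamic programming. The resulting value can only over-estimate the true optimum, i.e., it is the dual-side bound that plays the role of the lower bound in the minimization form discussed above.

```python
def knapsack_dp(values, weights, capacity):
    """Exact 0-1 knapsack by dynamic programming over capacities."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # reverse scan keeps each item 0-1
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

def surrogate_bound(values, A, b, u):
    """Surrogate relaxation of  max c.x  s.t.  A x <= b,  x in {0,1}^n.

    The m rows of A are aggregated with multipliers u >= 0 into a single
    knapsack constraint; its exact optimum is a valid dual bound on the MKP."""
    n, m = len(values), len(A)
    agg_w = [sum(u[i] * A[i][j] for i in range(m)) for j in range(n)]
    agg_cap = sum(u[i] * b[i] for i in range(m))
    # u, A and b are integral here, so the aggregated data is already integer.
    return knapsack_dp(values, agg_w, agg_cap)

# Tiny 2-constraint instance: the true optimum is 15 (items 0 and 2),
# while the surrogate bound with u = [1, 1] evaluates to 17 >= 15.
values = [10, 7, 5, 3]
A = [[4, 3, 2, 1],        # usage of resource 1
     [2, 3, 4, 1]]        # usage of resource 2
b = [6, 6]
print(surrogate_bound(values, A, b, u=[1, 1]))
```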
|
8 |
Těžké tautologie / Hard Tautologies
Pich, Ján, January 2011
We investigate the unprovability of NP ⊈ P/poly in various fragments of arithmetic. The unprovability is usually obtained by showing hardness of propositional formulas encoding superpolynomial circuit lower bounds. First, we discuss a few relevant techniques and known theorems, namely natural proofs, feasible interpolation, the KPT theorem, iterability, gadget generators, etc. Then we prove some original results. We show the unprovability of superpolynomial circuit lower bounds for systems admitting certain forms of feasible interpolation (modulo a hardness assumption) and for systems roughly described as tree-like Frege systems working with formulas using only a small fraction of the variables of the statement that is supposed to be proved. These results are obtained by proving the hardness of the Nisan-Wigderson generators in the corresponding proof systems.
|
9 |
Métodos heurísticos para resolução de problemas de empacotamento unidimensional. / Heuristic methods for solving one-dimensional bin packing problems.
Turi, Leandro Maciel, 03 April 2018
Cutting and packing problems are very common in industry and logistics. Given a set of N items with different weights and a set of M bins with capacity C, the one-dimensional bin packing problem consists of determining the smallest number of bins needed to allocate all items while respecting the capacity constraint of the bins, that is, the sum of the weights of the items allocated to a bin must be less than or equal to its capacity. In this study we solve the problem on benchmark instances from the literature by means of sixty different heuristics, which are compared against four lower bounds proposed in the literature in order to evaluate the quality of the heuristic solutions. The four lower bounds and ten different constructive heuristics were programmed in C++ in the same computational environment, allowing their comparison both in terms of solution quality and in terms of processing time. A simple heuristic that exchanges items between bins, called Difference-of-Squares, was proposed to improve the initial solutions of the problem. The simulated annealing metaheuristic was triggered to improve the initial solution whenever the lower bound was not reached. The parameters of the simulated annealing were determined from the instance data, differently from the approach used in the literature. The combinations of the ten initial solutions, the Difference-of-Squares heuristic, and simulated annealing generated a set of sixty different heuristics. The results showed that the proposed algorithm is efficient, solving the problem with processing times adequate for decision making.
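As a minimal illustration of the two ingredients combined in the study (a constructive heuristic and a lower bound used to certify quality), the Python sketch below pairs the classical first-fit-decreasing rule with the simplest continuous lower bound, ceil(sum of weights / C); it is a generic textbook example, not one of the specific sixty heuristics or four bounds evaluated in the thesis, and the names are illustrative.

```python
import math

def first_fit_decreasing(weights, capacity):
    """Classical FFD heuristic: sort items by decreasing weight and place each
    one in the first open bin with room, opening a new bin when none fits."""
    bins = []                                  # remaining capacity of each open bin
    for w in sorted(weights, reverse=True):
        for i, free in enumerate(bins):
            if w <= free:
                bins[i] = free - w
                break
        else:
            bins.append(capacity - w)          # open a new bin for this item
    return len(bins)

def continuous_lower_bound(weights, capacity):
    """No packing can use fewer than ceil(total weight / capacity) bins."""
    return math.ceil(sum(weights) / capacity)

weights = [60, 55, 50, 50, 45, 40]
C = 100
ub = first_fit_decreasing(weights, C)          # heuristic number of bins
lb = continuous_lower_bound(weights, C)        # certified lower bound
print(lb, ub)   # both equal 3 here, so the heuristic solution is provably optimal
```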
|