21 |
Completeness and incompleteness / Schaefer, Marcus Georg January 1999 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of Computer Science, June 1999. / Includes bibliographical references. Also available on the Internet.
|
22 |
Techniques for analyzing the computational power of constant-depth circuits and space-bounded computation / Trifonov, Vladimir Traianov, January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
|
23 |
Systematic parameterized complexity analysis in computational phonology / Wareham, Harold 20 November 2017 (has links)
Many computational problems are NP-hard and hence probably do not have fast, i.e., polynomial-time, algorithms. Such problems may yet have non-polynomial-time algorithms whose complexities are functions of particular aspects of the problem, i.e., the algorithm's running time is upper bounded by f(k)·|x|ᶜ, where f is an arbitrary function, |x| is the size of the input x to the algorithm, k is an aspect of the problem, and c is a constant independent of |x| and k. Given such algorithms, it may still be possible to obtain optimal solutions for large instances of NP-hard problems for which the appropriate aspects are of small size or value. Questions about the existence of such algorithms are most naturally addressed within the theory of parameterized computational complexity developed by Downey and Fellows.
This thesis considers the merits of a systematic parameterized complexity analysis in which results are derived relative to all subsets of a specified set of aspects of a given NP-hard problem. This set of results defines an “intractability map” showing for which sets of aspects there do and do not exist algorithms whose non-polynomial time complexities are purely functions of those aspects. Such maps are useful not only for delimiting the set of possible algorithms for an NP-hard problem but also for highlighting those aspects that are responsible for this NP-hardness.
These points will be illustrated by systematic parameterized complexity analyses of problems associated with five theories of phonological processing in natural languages—namely, Simplified Segmental Grammars, finite-state transducer based rule systems, the KIMMO system, Declarative Phonology, and Optimality Theory. The aspects studied in these analyses broadly characterize the representations and mechanisms used by these theories. These analyses suggest that the computational complexity of phonological processing depends not on such details as whether a theory uses rules or constraints or has one, two, or many levels of representation but rather on the structure of the representation-relations encoded in individual mechanisms and the internal structure of the representations. / Graduate
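As a rough illustration of the f(k)·|x|ᶜ running-time form discussed above, here is a minimal sketch, written for this listing rather than taken from the thesis, of a classic fixed-parameter tractable algorithm: the bounded search tree for Vertex Cover. The parameter k is the size of the cover sought, the 2ᵏ branching factor plays the role of f(k), and the work per node is polynomial in the input size.

```python
# A minimal sketch (not from the thesis) of a fixed-parameter tractable
# algorithm of the f(k) * |x|^c form: the classic bounded search tree for
# Vertex Cover, running in roughly O(2^k * |E|) time.

def has_vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size <= k."""
    # Pick any edge not yet covered; if none remain, the current cover suffices.
    uncovered = next(iter(edges), None)
    if uncovered is None:
        return True
    if k == 0:
        return False  # Edges remain but the budget is exhausted.
    u, v = uncovered
    # Any cover must contain u or v. Each branch decreases k, so the search
    # tree has at most 2^k leaves -- this is the f(k) factor.
    edges_without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
    edges_without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
    return (has_vertex_cover(edges_without_u, k - 1)
            or has_vertex_cover(edges_without_v, k - 1))

if __name__ == "__main__":
    cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]  # A 4-cycle.
    print(has_vertex_cover(cycle, 2))  # True: {1, 3} covers every edge.
    print(has_vertex_cover(cycle, 1))  # False: no single vertex covers all four edges.
```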
|
24 |
Membership testing in transformation monoids / Beaudry, Martin January 1987 (has links)
No description available.
|
25 |
The conceptual design of robotic architectures using complexity criteria / Khan, Waseem A. January 2007 (has links)
No description available.
|
26 |
Implicit Computational Complexity and Compilers / Complexité Implicite et compilateurs / Rubiano, Thomas 01 December 2017 (has links)
Complexity theory helps us predict and control the resources, usually time and space, consumed by programs. Static analysis based on specific syntactic criteria allows us to categorize families of programs. A common approach is to observe the behavior of the data a program manipulates. For instance, the detection of non-size-increasing programs is based on a simple principle: counting memory allocations and deallocations, particularly in loops. In this way, we can detect programs which compute within a constant amount of space. This method can easily be expressed as a property of control flow graphs. Because analyses of data behavior rely on purely syntactic criteria, they can be performed at compile time. Because they are static, these analyses are not always computable, or not easily computable, and approximations are needed. The “Size-Change Principle” of C. S. Lee, N. D. Jones and A. M. Ben-Amram introduced a method for predicting termination by observing the evolution of resources, and a great deal of research has grown out of this theory. Until now, these implicit complexity techniques have essentially been applied to more or less toy languages. This thesis carries implicit computational complexity methods over to “real life” programs by working on the intermediate representation languages of widely used compilers. It provides the community with a tool able to handle a large number of examples, gives an accurate idea of the actual expressivity of these analyses, and shows that the implicit computational complexity and compiler communities can fuel each other fruitfully. As we show in this thesis, the methods developed are quite general and open the way to several new applications.
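The “count memory allocations and deallocations, particularly in loops” principle mentioned above lends itself to a very small sketch. The toy analysis below is an illustration written for this listing, not code from the thesis or from any real compiler, and its instruction encoding is invented; it simply flags loops whose net allocation count per iteration is positive, the most direct way a program can fail to compute within a constant amount of space.

```python
# A toy sketch (not from the thesis or any real compiler) of the idea of
# counting allocations and deallocations inside loops: a loop whose body
# allocates more than it frees can grow memory on every iteration, so the
# program cannot be certified as non-size-increasing.

def net_allocation(loop_body):
    """Net number of allocations per iteration of a loop body.

    `loop_body` is a list of abstract instructions such as
    ('alloc',), ('free',) or ('other',) -- an invented encoding.
    """
    allocs = sum(1 for instr in loop_body if instr[0] == 'alloc')
    frees = sum(1 for instr in loop_body if instr[0] == 'free')
    return allocs - frees

def is_non_size_increasing(loops):
    """Conservatively accept a program only if no loop has a positive
    net allocation count per iteration."""
    return all(net_allocation(body) <= 0 for body in loops)

if __name__ == "__main__":
    ok_loop = [('alloc',), ('other',), ('free',)]   # net 0 per iteration
    bad_loop = [('alloc',), ('alloc',), ('free',)]  # net +1 per iteration
    print(is_non_size_increasing([ok_loop]))   # True
    print(is_non_size_increasing([bad_loop]))  # False
```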
|
27 |
Exploiting the Computational Power of Ternary Content Addressable Memory / Tirdad, Kamran January 2011 (has links)
Ternary Content Addressable Memory (TCAM) is a special type of memory that can execute a certain set of operations in parallel on all of its words. Because of its power consumption and relatively small storage capacity, it has so far been used only in specialized environments. Over the past few years its cost has fallen and its storage capacity has increased significantly, and these exponential trends are continuing; hence it can be used in more general environments for larger problems. In this research we study how to exploit its computational power in order to speed up fundamental problems, and needless to say we have barely scratched the surface. The main problems addressed in our research are Boolean matrix multiplication, approximate subset queries using Bloom filters, fixed-universe priority queues, and network flow classification. For Boolean matrix multiplication, our simple algorithm has a running time of O(dN²/w), where N is the size of the square matrices, w is the number of bits in each TCAM word, and d is the maximum number of ones in a row of one of the matrices. For the fixed-universe priority queue problem we propose two data structures: one with constant time complexity and O((1/ε)·n·U^ε) space, and the other with linear space and O((lg lg U)/(lg lg lg U)) amortized time complexity, which beats the best possible data structure in the RAM model, namely Y-fast trees. Considering each TCAM word as a Bloom filter, we modify the Bloom filter's hash functions and propose a data structure which uses the information capacity of each TCAM word more efficiently by exploiting the co-occurrence probability of possible members. Finally, in the last chapter, we propose a novel technique for network flow classification using TCAM.
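For a flavor of word-level parallelism on Boolean matrices, the sketch below shows the standard word-parallel approach in ordinary RAM, written for this listing; it is not the TCAM algorithm of the thesis and does not achieve the O(dN²/w) bound quoted above, but it illustrates how packing a whole matrix row into one machine word lets a single bitwise operation process many entries at once.

```python
# A hedged sketch of word-parallel Boolean matrix multiplication in ordinary
# RAM (not the TCAM algorithm of the thesis). Rows are packed into integers so
# that one bitwise OR combines a whole row of B at a time.

def bool_mat_mult(a_rows, b_rows, n):
    """Multiply two n x n Boolean matrices whose rows are Python integers
    (bit k of a_rows[i] is A[i][k]). Returns the rows of C = A . B."""
    c_rows = []
    for i in range(n):
        acc = 0
        row = a_rows[i]
        k = 0
        # For every 1-bit A[i][k], OR in the entire k-th row of B in one step.
        while row:
            if row & 1:
                acc |= b_rows[k]
            row >>= 1
            k += 1
        c_rows.append(acc)
    return c_rows

if __name__ == "__main__":
    # A is the 2 x 2 identity, B = [[0, 1], [1, 1]], so C should equal B.
    a = [0b01, 0b10]
    b = [0b10, 0b11]
    print([bin(r) for r in bool_mat_mult(a, b, 2)])  # ['0b10', '0b11']
```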
|
28 |
On the complexity of finding optimal edge rankings / 余鳳玲, Yue, Fung-ling. January 1996 (has links)
published_or_final_version / abstract / toc / Computer Science / Master / Master of Philosophy
|
29 |
Towards a proportional sampling strategy according to path complexity: a simulation study / Yip, Wang, 葉弘 January 2000 (has links)
published_or_final_version / Computer Science and Information Systems / Master / Master of Philosophy
|
30 |
Recursively constructed graph families: membership and linear algorithms / Borie, Richard Bryan January 1988 (has links)
No description available.
|