61

Complexidade computacional e o problema P vs NP / Computational complexity and the P vs NP problem

Oliveira, Igor Carboni 08 February 2010
Advisor: Arnaldo Vieira Moura / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract (translated from the Portuguese): Computational complexity theory seeks to establish limits on the efficiency of algorithms by investigating the inherent difficulty of computational problems. The P vs NP problem is a central question in computational complexity. Informally, it asks whether, for an important class of computational problems, exhaustive search for solutions is essentially the best possible algorithmic alternative. This dissertation offers both a classical introduction to the subject and an exposition of several more advanced theorems, recent results, and open problems. In particular, the diagonalization method is discussed in depth. The main results obtained by diagonalization are the time and space hierarchy theorems (Hartmanis and Stearns [54, 104]). We present a generalization of these results, obtaining the classical theorems proved by Hartmanis and Stearns as corollaries. This is the first time that a unified proof of these results has appeared in the literature. / Abstract: Computational complexity theory is the field of theoretical computer science that aims to establish limits on the efficiency of algorithms. The main open question in computational complexity is the P vs NP problem. Intuitively, it asks whether, for several important computational problems, any algorithm performs essentially better than trivial exhaustive search. We present here an introduction to the subject, followed by more recent and advanced results. In particular, the diagonalization method is discussed in detail. Although it is a classical technique in computational complexity, it is so far the only method that has been able to separate strong complexity classes. Some of the most important results in computational complexity theory have been proven by diagonalization. In particular, Hartmanis and Stearns [54, 104] proved that, given more resources, one can solve more computational problems; these results are known as hierarchy theorems. We present a generalization of the deterministic hierarchy theorems, recovering the classical results proved by Hartmanis and Stearns as corollaries. This is the first time that such a unified treatment has been presented in the literature. / Master's degree / Theory of Computation / Master in Computer Science
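For orientation, the deterministic hierarchy theorems of Hartmanis and Stearns that the dissertation generalizes are classically stated along the following lines (a standard textbook formulation, not the dissertation's generalized version):

```latex
% Deterministic time hierarchy theorem: for time-constructible f and g,
\[
  f(n)\log f(n) = o(g(n)) \;\Longrightarrow\;
  \mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}(g(n))
\]
% Deterministic space hierarchy theorem: for space-constructible g, the
% log factor is not needed,
\[
  f(n) = o(g(n)) \;\Longrightarrow\;
  \mathsf{DSPACE}(f(n)) \subsetneq \mathsf{DSPACE}(g(n))
\]
```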
62

The Role of the Goal in Problem Solving Hard Computational Problems: Do People Really Optimize?

Carruthers, Sarah 03 September 2015
Understanding how humans cope with complexity is perhaps one of the most important targets of scientific research. Humans not only excel at solving complex tasks in their day-to-day lives but also take on objectively difficult problems recreationally. Research to date, which has focused on the famous hard optimization problem the Euclidean Traveling Salesperson Problem (E-TSP), has indicated that humans are able to find near-optimal solutions in linear time to hard optimization problems despite the objective difficulty of the task. The research presented in this work contributes to this line by comparing human performance on the search and optimization versions of two other visually presented computationally hard problems: Vertex Cover and Independent Set. These two problems were selected in part to explore how human performance might differ on non-Euclidean problems. Performance on the optimization version of the problems used in this study is in keeping with previously reported results; however, performance on the search version is even better, suggesting that previous problem-solving research might have underestimated the power of the human problem-solving system. A key result of this work is that differences in performance between the optimization and search versions of these hard problems can be attributed to differences in how problem solvers encode the goal of the task. Consequently, subjects in these conditions, tasked with identical instances of two very nearly identical versions of a problem, are in fact solving very different problems. This work presents a framework to improve how human performance results on hard optimization problems are interpreted. It also demonstrates how the search version of hard computational problems can be used to investigate how people cope with complexity, free of the confounding aspect of an ill-defined goal. / Graduate
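One reason these two problems pair naturally (a standard fact, not a claim specific to the thesis) is that Vertex Cover and Independent Set are complementary:

```latex
% For any graph G = (V, E) and any subset S of V:
\[
  S \text{ is a vertex cover of } G
  \iff
  V \setminus S \text{ is an independent set of } G
\]
% Hence minimum vertex covers and maximum independent sets determine
% each other: |S_{vc}| = |V| - |S_{is}|.
```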
63

Predicting program complexity from Warnier-Orr diagrams

White, Barbara January 1982
Typescript (photocopy).
64

Algorithms and complexity for annotated sequence analysis

Evans, Patricia Anne 14 December 2017
Molecular biologists use algorithms that compare and otherwise analyze sequences that represent genetic and protein molecules. Most of these algorithms, however, operate on the basic sequence and do not incorporate the additional information that is often known about the molecule and its pieces. This research describes schemes to combinatorially annotate this information onto sequences so that it can be analyzed in tandem with the sequence; the overall result would thus reflect both types of information about the sequence. These annotation schemes include adding colours and arcs to the sequence. Colouring a sequence would produce a same-length sequence of colours or other symbols that highlight or label parts of the sequence. Arcs can be used to link sequence symbols (or coloured substrings) to indicate molecular bonds or other relationships. Adding these annotations to sequence analysis problems such as sequence alignment or finding the longest common subsequence can make the problem more complex, often depending on the complexity of the annotation scheme. This research examines the different annotation schemes and the corresponding problems of verifying annotations, creating annotations, and finding the longest common subsequence of pairs of sequences with annotations. This work involves both the conventional complexity framework and parameterized complexity, and includes algorithms and hardness results for both frameworks. Automata and transducers are created for some annotation verification and creation problems. Different restrictions on layered substring and arc annotation are considered to determine what properties an annotation scheme must have to make its incorporation feasible. Extensions to the algorithms that use weighting schemes are explored. / Graduate
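As a rough illustration of the annotation schemes described above (a hypothetical encoding, not the formalism used in the thesis), a coloured, arc-annotated sequence and a check for nested (non-crossing) arcs might look like this:

```python
# Illustrative sketch: an annotated sequence as a base string, a
# same-length colour string, and a set of arcs between positions.

def is_valid_annotation(seq: str, colours: str, arcs: set[tuple[int, int]]) -> bool:
    """Basic well-formedness: the colouring has the same length as the
    sequence, and every arc links two distinct in-bounds positions."""
    if len(colours) != len(seq):
        return False
    return all(0 <= i < j < len(seq) for (i, j) in arcs)

def arcs_are_nested(arcs: set[tuple[int, int]]) -> bool:
    """True if no two arcs cross: for arcs (i, j) and (k, l) with i < k,
    crossing means i < k < j < l."""
    a = sorted(arcs)
    for x in range(len(a)):
        for y in range(x + 1, len(a)):
            (i, j), (k, l) = a[x], a[y]
            if i < k < j < l:  # the two arcs cross
                return False
    return True

# Example: an RNA-like sequence, a binary colouring marking paired
# regions, and nested bond arcs. Adding (2, 5) would cross (1, 4).
seq = "GCAUGC"
colours = "110011"
arcs = {(0, 5), (1, 4)}

assert is_valid_annotation(seq, colours, arcs)
assert arcs_are_nested(arcs)
```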
65

Graph-based Global Illumination

Ricks, Brian C. 28 January 2010 (PDF)
The slow render times of global illumination algorithms make them impractical in most commercial and academic settings. We propose a novel framework for calculating the computational complexity of global illumination algorithms and show that no recent improvements have reduced this complexity. We further show that many algorithms use a tree as their rendering paradigm. We propose a new rendering algorithm, pipe casting, which calculates light paths using a graph instead of a tree. Pipe casting significantly reduces both the computational complexity and the actual render time. Using an L2 pixel-wise error comparison, our algorithm can on average render a variety of scenes at the same error as traditional algorithms but in about 50% of the time.
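The L2 pixel-wise error mentioned above is presumably the standard root-mean-square difference between the rendered and reference images; in that reading:

```latex
% Assumed formulation: L2 pixel-wise error between a rendered image I
% and a reference image R with N pixels.
\[
  E_{L_2}(I, R) = \sqrt{\frac{1}{N} \sum_{p=1}^{N} (I_p - R_p)^2}
\]
```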
66

Disentanglement Puzzles and Computation

Miller, Jacob K. January 2017
No description available.
67

Adding Threshold Concepts to the Description Logic EL

Fernández Gil, Oliver 14 June 2016 (PDF)
We introduce a family of logics extending the lightweight Description Logic EL that allow us to define concepts in an approximate way. The main idea is to use a graded membership function m which, for each individual and concept, yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t for ~ in {<, <=, >, >=} then collect all the individuals that belong to C with degree ~ t. We further study this framework in two particular directions. First, we define a specific graded membership function deg and investigate the complexity of reasoning in the resulting Description Logic tEL(deg) w.r.t. both the empty terminology and acyclic TBoxes. Second, we show how to turn concept similarity measures into membership degree functions. It turns out that under certain conditions such functions are well-defined, and therefore induce a wide range of threshold logics. Last, we present preliminary results on the computational complexity landscape of reasoning in such a big family of threshold logics.
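Concretely, reading the abstract's notation, the extension of a threshold concept under an interpretation would be the following set:

```latex
% Semantics of a threshold concept C~t: given a graded membership
% function m with values in [0,1] and an interpretation I with domain
% Delta^I,
\[
  (C_{\sim t})^{\mathcal{I}} =
  \{\, x \in \Delta^{\mathcal{I}} \mid m(x, C) \sim t \,\},
  \qquad \sim \in \{<, \le, >, \ge\},\ t \in [0,1]
\]
```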
68

ICC and Probabilistic Classes

Parisen Toldin, Paolo 08 April 2013 (PDF)
The thesis applies ICC (Implicit Computational Complexity) techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time): a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in probabilistic polynomial time. The thesis is divided in two parts. The first part solves the problem by creating a prototype functional language (a probabilistic variation of lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in probabilistic polynomial time or not. This thesis can be seen as one of the first steps for Implicit Computational Complexity over probabilistic classes. There are still hard open problems to investigate, and many theoretical aspects are strongly connected with these topics; I expect that in the future there will be wide attention to ICC and probabilistic classes.
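For reference, the class PP characterized here is standardly defined as follows:

```latex
% Standard definition of PP: a language L is in PP iff there is a
% probabilistic polynomial-time Turing machine M such that
\[
  x \in L \iff \Pr[\, M \text{ accepts } x \,] > \tfrac{1}{2}
\]
```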
69

Coloration et convexité dans les graphes / Coloring and convexity in graphs

Araujo, Julio 13 September 2012 (PDF)
In this thesis, we study several graph theory problems concerning graph coloring and graph convexity. Most of the results presented here relate to the computational complexity of these problems for certain classes of graphs. In the first and main part of this thesis, we deal with graph coloring, which is one of the most studied areas of graph theory. We first consider three coloring problems, called greedy coloring, weighted coloring, and weighted improper coloring. We then deal with a decision problem, called good edge-labelling, whose definition was motivated by the wavelength assignment problem in optical networks. The second part of this thesis is devoted to a graph optimization parameter called the (geodesic) hull number. The definition of this parameter is motivated by an extension to graphs of the notions of convex sets and convex hulls in Euclidean space. Finally, in the appendix, we present other work carried out during this thesis, one on Eulerian and Hamiltonian directed hypergraphs and the other concerning distributed storage systems.
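For orientation, the (geodesic) hull number is standardly defined as follows (the usual definition motivating the parameter; the thesis's exact conventions may differ):

```latex
% Geodesic convexity: S \subseteq V(G) is convex if every vertex on any
% shortest path between two vertices of S lies in S; hull(S) is the
% smallest convex set containing S. The hull number of G is then
\[
  hn(G) = \min \{\, |S| : S \subseteq V(G),\ \mathrm{hull}(S) = V(G) \,\}
\]
```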
70

Counting and sampling problems on Eulerian graphs

Creed, Patrick John January 2010
In this thesis we consider two sets of combinatorial structures defined on an Eulerian graph: the Eulerian orientations and the Euler tours. We are interested in the computational problems of counting (computing the number of elements in the set) and sampling (generating a random element of the set). Specifically, we are interested in the question of when there exists an efficient algorithm for counting or sampling the elements of either set. The Eulerian orientations of a number of classes of planar lattices are of practical significance as they correspond to configurations of certain models studied in statistical physics. In 1992 Mihail and Winkler showed that counting Eulerian orientations of a general Eulerian graph is #P-complete and demonstrated that the problem of sampling an Eulerian orientation can be reduced to the tractable problem of sampling a perfect matching of a bipartite graph. We present a proof that this problem remains #P-complete when the input is restricted to planar graphs, and analyse a natural algorithm for generating random Eulerian orientations of one of the aforementioned planar lattices. Moreover, we make some progress towards classifying the range of planar graphs on which this algorithm is rapidly mixing by exhibiting an infinite class of planar graphs for which the algorithm will always take an exponential amount of time to converge. The problem of counting the Euler tours of undirected graphs has proven to be less amenable to analysis than that of Eulerian orientations. Although it has been known for many years that the number of Euler tours of any directed graph can be computed in polynomial time, until recently very little was known about the complexity of counting Euler tours of an undirected graph. Brightwell and Winkler showed that this problem is #P-complete in 2005 and, apart from a few very simple examples, e.g., series-parallel graphs, there are no known tractable cases, nor are there any good reasons to believe the problem to be intractable. Moreover, despite several unsuccessful attempts, there has been no progress made on the question of approximability. Indeed, this problem was considered to be one of the more difficult open problems in approximate counting since long before the complexity of exact counting was resolved. By considering a randomised input model, we are able to show that a very simple algorithm can sample or approximately count the Euler tours of almost every d-in/d-out directed graph in expected polynomial time. Then, we present some partial results towards showing that this algorithm can be used to sample or approximately count the Euler tours of almost every 2d-regular graph in expected polynomial time. We also provide some empirical evidence to support the unproven conjecture required to obtain this result. As a side result of this work, we obtain an asymptotic characterisation of the distribution of the number of Eulerian orientations of a random 2d-regular graph.
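The fact that the Euler tours of a directed graph can be counted in polynomial time follows from the classical BEST theorem:

```latex
% BEST theorem: for a connected Eulerian directed graph G, the number of
% Euler tours (Eulerian circuits) is
\[
  \mathrm{ec}(G) = t_w(G) \cdot \prod_{v \in V} \big( d^{+}(v) - 1 \big)!
\]
% where d^{+}(v) is the out-degree of v and t_w(G) is the number of
% arborescences rooted at a fixed vertex w (the same for every choice of
% w in an Eulerian digraph), computable in polynomial time via the
% Matrix-Tree theorem.
```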
