  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

On The Representation Of Finite Fields

Akleylek, Sedat 01 December 2010 (has links) (PDF)
The representation of field elements has a great impact on the performance of finite field arithmetic. In this thesis, we give a modified version of the redundant representation that works for finite fields of arbitrary characteristic, and use it to design arithmetic circuits with small complexity. Using our modified redundant representation, we improve many of the known complexity values. We then propose new representations, based on Charlier and Hermite polynomials, as an alternative way to represent finite fields of characteristic two. We show that multiplication in these representations can be achieved with subquadratic space complexity. The Charlier and Hermite representations enable us to find binomial, trinomial or quadranomial irreducible polynomials, which allows faster modular reduction over binary fields when no such low-weight irreducible polynomial is available in other representations. These representations are particularly interesting for the NIST- and SEC-recommended binary fields GF(2^{283}) and GF(2^{571}), since no optimal normal basis (ONB) exists for the corresponding extensions. It is also shown that in some cases the proposed representations have better space complexity even when an ONB exists for the corresponding extension.
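As background for why low-weight irreducible polynomials matter for reduction, here is a standard polynomial-basis multiplier over GF(2^k) — a generic textbook sketch, not the thesis's Charlier/Hermite construction; the field GF(2^7) with the irreducible trinomial x^7 + x + 1 is an assumed example:

```python
def gf2_mul(a: int, b: int, k: int, poly: int) -> int:
    """Multiply a*b in GF(2^k), elements encoded as bit vectors of
    polynomial coefficients; `poly` encodes the irreducible polynomial."""
    # carry-less ("XOR") multiplication of the two polynomials
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    # reduce modulo the irreducible polynomial; each high bit costs
    # one shifted XOR, so fewer terms in `poly` means cheaper reduction
    for i in range(2 * k - 2, k - 1, -1):
        if r & (1 << i):
            r ^= poly << (i - k)
    return r

# Example in GF(2^7) with x^7 + x + 1 (0x83): x * x^6 = x^7 = x + 1
print(gf2_mul(0b10, 0b1000000, 7, 0x83))  # 0b11
```

Each set bit of the trinomial beyond the leading term contributes one XOR per reduction step, which is the sense in which binomial/trinomial/quadranomial moduli give faster modular reduction.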
2

The size and depth of Boolean circuits

Jang, Jing-Tang Keith 27 September 2013 (has links)
We study the relationship between size and depth for Boolean circuits. Over four decades, very few results were obtained for either special or general Boolean circuits. Spira showed in 1971 that any Boolean formula of size s can be simulated in depth O(log s). Spira's result means that an arbitrary Boolean expression can be replaced by an equivalent "balanced" expression that can be evaluated very efficiently in parallel. For general Boolean circuits, the strongest known result is that Boolean circuits of size s can be simulated in depth O(s / log s). We obtain significant improvements over the general bounds for the size versus depth problem for special classes of Boolean circuits. We show that every layered Boolean circuit of size s can be simulated by a layered Boolean circuit of depth O(sqrt{s log s}). For planar circuits and synchronous circuits of size s, we obtain simulations of depth O(sqrt{s}). Improving any of the above results by polylog factors would immediately improve the bounds for general circuits. We generalize Spira's theorem and show that any Boolean circuit of size s with segregators of size f(s) can be simulated in depth O(f(s) log s). This improves and generalizes a simulation of polynomial-size Boolean circuits of constant treewidth k in depth O(k² log n) by Jansen and Sarma. Since the existence of small balanced separators in a directed acyclic graph implies that the graph also has small segregators, our results also apply to circuits with small separators. Our results imply that the class of languages computed by non-uniform families of polynomial-size circuits that have constant-size segregators equals non-uniform NC¹. As an application of our simulation of circuits in small depth, we show that the Boolean Circuit Value problem for circuits with constant-size segregators (or separators) is in deterministic SPACE(log² n).
Our results also imply that the Planar Circuit Value problem, which is known to be P-complete, is in SPACE(sqrt{n} log n). We also show that the Layered Circuit Value and Synchronous Circuit Value problems, which are both P-complete, are in SPACE(sqrt{n}). Our study of circuits with small separators and segregators led us to space-efficient algorithms for computing balanced graph separators. We extend this approach to obtain space-efficient approximation algorithms for the search and optimization versions of the SUBSET SUM problem, one of the most studied NP-complete problems. Finally, we study the relationship between simultaneous time and space bounds on Turing machines and Boolean circuit depth. We observe a new connection between planar circuit size and the simultaneous time-space products of input-oblivious Turing machines, and use it to prove quadratic lower bounds on the product of time and space for several explicit functions on input-oblivious Turing machines.
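The balancing phenomenon behind Spira's theorem, cited in this abstract, can be illustrated with a toy formula measurement (the tuple encoding of formulas is an assumption for illustration, not taken from the thesis): the same n-variable conjunction has depth n-1 as a chain but only ceil(log2 n) as a balanced tree.

```python
def depth(f):
    """Depth of a formula: variables are strings, gates are ('and', l, r)."""
    if isinstance(f, str):
        return 0
    return 1 + max(depth(f[1]), depth(f[2]))

def chain(vs):
    """Left-deep AND chain over the variables: depth len(vs) - 1."""
    f = vs[0]
    for v in vs[1:]:
        f = ('and', f, v)
    return f

def balanced(vs):
    """Balanced AND tree over the variables: depth ceil(log2(len(vs)))."""
    if len(vs) == 1:
        return vs[0]
    mid = len(vs) // 2
    return ('and', balanced(vs[:mid]), balanced(vs[mid:]))

vs = [f"x{i}" for i in range(8)]
print(depth(chain(vs)), depth(balanced(vs)))  # 7 3
```

Spira's theorem says the balanced form always exists for formulas; the thesis's contribution is pushing such depth reduction from formulas to richer circuit classes.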
3

Computational Complexity of Tree Evaluation Problems and Branching Program Satisfiability Problems / 木構造関数値評価問題と分岐プログラム充足性問題に対する計算複雑さ

Nagao, Atsuki 23 March 2015 (has links)
Kyoto University / Doctor of Informatics, Graduate School of Informatics, Department of Communications and Computer Engineering. Examining committee: Prof. Kazuo Iwama (chair), Prof. Naofumi Takagi, Prof. Atsushi Igarashi.
4

Space in Proof Complexity

Vinyals, Marc January 2017 (has links)
Propositional proof complexity is the study of the resources that are needed to prove formulas in propositional logic. In this thesis we are concerned with the size and space of proofs, and in particular with the latter. Different approaches to reasoning are captured by corresponding proof systems. The simplest and most well-studied proof system is resolution, and we try to bring our understanding of other proof systems closer to that of resolution. In resolution we can prove a space lower bound just by showing that any proof must have a large clause. We prove a similar relation between resolution width and polynomial calculus space that lets us derive space lower bounds, and we use it to separate degree and space. For cutting planes we show length-space trade-offs. That is, there are formulas that have a proof in small space and a proof in small length, but no proof can optimize both measures at the same time. We introduce a new measure of space, cumulative space, that accounts for the space used throughout a proof rather than only its maximum. This is exploratory work, but we can also prove new results for the usual space measure. We define a new proof system that aims to capture the power of current SAT solvers, and we show a landscape of length-space trade-offs comparable to those in resolution. To prove these results we build and use tools from other areas of computational complexity. One area is pebble games, very simple computational models that are useful for modelling space. In addition to results with applications to proof complexity, we show that pebble game cost is PSPACE-hard to approximate. Another area is communication complexity, the study of the amount of communication that is needed to solve a problem when its description is shared by multiple parties. We prove a simulation theorem that relates the query complexity of a function to the communication complexity of a composed function.
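For readers unfamiliar with the resolution proof system mentioned above: it has a single inference rule. The following sketch (with an assumed signed-integer encoding of literals, not from the thesis) shows two steps refuting a small contradictory clause set:

```python
def resolve(c1, c2, var):
    """Resolution rule: from clauses c1 containing var and c2 containing
    its negation (encoded as -var), derive (c1 ∪ c2) minus both literals."""
    assert var in c1 and -var in c2
    return (c1 | c2) - {var, -var}

# Refuting {x ∨ y, ¬x ∨ y, ¬y}; x is 1, y is 2, negation is sign flip.
c1 = resolve(frozenset({1, 2}), frozenset({-1, 2}), 1)  # yields {y}
empty = resolve(c1, frozenset({-2}), 2)                 # the empty clause
print(c1, empty)
```

A resolution refutation ends when the empty clause is derived; clause *space* is the number of clauses that must be held in memory simultaneously along the way, which is the resource the thesis studies.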
5

Simplifying an Infinitely Complex State-Space: Real-Time Strategy Optimization in Supreme Commander: Forged Alliance

Cornell, Michael 01 January 2015 (has links)
Supreme Commander: Forged Alliance is considered to be among the most complex real-time strategy games in existence today. I test whether players can gain meaningful strategic value through statistical analysis. In particular, I focus on how players can redefine their path from their initial state to their goal state in one-versus-one games, how players can optimize their chance of winning through faction selection, and how players ought to evaluate balance in team games. Through regression analysis, I conclude that there is potential for statistics to inform player behavior for each of these strategies.
6

Design da sala de aula: arranjos espaciais e suas potencialidades / Classroom design: spatial arrangements and their potential

TROVO, Priscila Azzolini 22 August 2017 (has links)
The school environment constitutes a system of values and discourses in which its users are stimulated to learn. In this context, this dissertation investigates the relationship between classroom design and the possible interactions and experiences of its users, through an analysis of spatial arrangements and their potential. The research involves a bibliographical review, the articulation of theoretical concepts, a contextualization of space and place, and an understanding of the classroom as a complex adaptive system. Three spatial arrangements of classrooms, identified by Scott-Webber (2009) as recurrent in educational institutions and as configuring rigid and fluid environments, are analyzed through the theories of personal space (SOMMER, 1973) and proxemic zones (HALL, 2006). From this, it is possible to trace a profile of activity and intention for each arrangement, so as to promote the desired interactions. The rigidity of designed space and its possibilities of adaptation to users' needs are discussed. In conclusion, there is a need to design spaces that are more flexible and adaptable to contemporary demands.
7

On An Architecture For A Parallel Finite Field Multiplier With Low Complexity Based On Composite Fields

Kindap, Nihal 01 September 2004 (has links) (PDF)
In this thesis, a bit-parallel architecture for a finite field multiplier with low complexity in composite fields GF((2^n)^m) with k = n·m (k ≤ 32) is investigated. The architecture has lower complexity when the Karatsuba-Ofman algorithm is applied for certain k. Using particular primitive polynomials for the composite fields improves the complexities further. We demonstrate this in detail for the values m = 2, 4, 8. This thesis is based on the paper "A New Architecture for a Parallel Finite Field Multiplier with Low Complexity Based on Composite Fields" by Christof Paar; its purpose is to understand and present a detailed description of the results of Paar's paper.
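The Karatsuba-Ofman step the abstract refers to trades one multiplication for extra additions, which are mere XORs in characteristic two. A minimal carry-less sketch (assumed bit-vector encoding of polynomials over GF(2), not the thesis's hardware architecture):

```python
def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less (polynomial) multiplication over GF(2)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a: int, b: int, n: int) -> int:
    """Karatsuba over GF(2)[x]: split 2n-bit operands into n-bit halves
    and use 3 half-size multiplications instead of 4."""
    mask = (1 << n) - 1
    a0, a1 = a & mask, a >> n
    b0, b1 = b & mask, b >> n
    lo = clmul(a0, b0)
    hi = clmul(a1, b1)
    # in characteristic two, subtraction is XOR, so the middle term is cheap
    mid = clmul(a0 ^ a1, b0 ^ b1) ^ lo ^ hi
    return lo ^ (mid << n) ^ (hi << 2 * n)
```

Applied recursively this gives the subquadratic gate counts that make the composite-field construction attractive; the split at n mirrors viewing GF(2^k) as GF((2^n)^m).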
8

Étude de la complexité des implémentations d'objets concurrents, sans attente, abandonnables et/ou solo-rapides / On the complexity of wait-free, abortable and/or solo-fast concurrent object implementations

Capdevielle, Claire 03 November 2016 (has links)
In a multiprocessor computer, processes must synchronize their accesses to shared memory. This is usually done with locks, but locks raise issues such as deadlock and poor fault tolerance. We are interested in implementing abstractions (consensus and universal constructions) that ease wait-free concurrent programming without locks, based on atomic read/write registers. Atomic reads and writes alone do not suffice to implement wait-free consensus; primitives offering stronger synchronization power are needed, but such primitives are more expensive in computing time. This thesis therefore studies algorithms that restrict the use of these primitives to executions in which processes actually contend; such algorithms are called solo-fast. Another direction is to allow an object, under contention, to abort the computation in progress and return a special "abort" response; such objects are called abortable. On the one hand, we give implementations of wait-free, abortable and/or solo-fast concurrent objects: we propose a universal construction that makes the implemented object abortable and solo-fast, and we design solo-fast consensus algorithms and abortable consensus algorithms. On the other hand, we study the space complexity of these implementations, proving space lower bounds for abortable objects and for consensus.
9

Efficient Betweenness Centrality Computations on Hybrid CPU-GPU Systems

Mishra, Ashirbad January 2016 (has links) (PDF)
Analysis of networks is of wide interest because networks can be interpreted for several purposes, and different features require different metrics to measure them. Measuring the relative importance of each vertex in a network is one of the most fundamental building blocks of network analysis. Betweenness Centrality (BC) is one such metric, and it plays a key role in many real-world applications. BC is an important graph analytics kernel for large-scale graphs; however, it is also one of the most computationally intensive kernels, and measuring centrality in billion-scale graphs is quite challenging. While there are several existing efforts towards parallelizing BC algorithms on multi-core CPUs and many-core GPUs, in this work we propose a novel fine-grained CPU-GPU hybrid algorithm that partitions a graph into two partitions, one each for the CPU and the GPU. Our method performs BC computations on both the CPU and GPU simultaneously, resulting in a very small number of CPU-GPU synchronizations and hence less time spent on communication. The BC algorithm consists of two phases, a forward phase and a backward phase. In the forward phase, we initially find the paths needed by either partition, after which each partition is processed on its processor asynchronously. We first compute border matrices for each partition, storing the relative distances between each pair of border vertices in a partition; these matrices are reused in the forward-phase calculations for all sources. In this way, our hybrid BC algorithm leverages the multi-source property inherent in the BC problem. We present a proof of correctness and bounds on the number of iterations for each source. We also perform a novel hybrid and asynchronous backward phase, in which each partition communicates with the other only when a path crosses the partition boundary, so it performs minimal CPU-GPU synchronization.
We use a variety of implementations, including node-based and edge-based parallelism with data-driven and topology-based techniques. We also show that our method works with a variable partitioning technique, which splits the graph into unequal parts according to the processing power of each processor; with this technique, our implementations achieve almost equal utilization on both processors. For large-scale graphs the border matrix itself becomes large, so we present several techniques to accommodate it, exploiting properties inherent in the shortest-path problem. We discuss the drawbacks of performing shortest-path computations at large scale and provide solutions. Evaluations on a large number of graphs with different characteristics show that our hybrid approach, without variable partitioning and border matrix reduction, gives a 67% improvement in performance and 64-98.5% fewer CPU-GPU communications than the state-of-the-art hybrid algorithm based on the popular Bulk Synchronous Parallel (BSP) approach implemented in TOTEM, which demonstrates our algorithm's reduced need for synchronization. Adding variable partitioning, border matrix reduction and backward-phase optimizations provides up to 10x speedup. Compared with CPU-only and GPU-only codes based on our forward- and backward-phase kernels, our optimized implementation achieves around 2-8x speedup over the CPU-only code and can accommodate large graphs that the GPU-only code cannot. Our method is also competitive with state-of-the-art multi-core CPU implementations and performs 40-52% better than GPU implementations on large graphs.
We conclude by discussing the drawbacks of CPU-only and GPU-only implementations and the challenges graph algorithms face in large-scale computing, suggesting that hybrid or distributed approaches are better ways to overcome these hurdles.
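The forward/backward structure mentioned in this abstract comes from Brandes' sequential BC algorithm, which a single-machine sketch makes concrete (the standard algorithm for unweighted graphs, shown here independently of the hybrid partitioning scheme):

```python
from collections import deque, defaultdict

def betweenness(adj):
    """Brandes' algorithm, unweighted and unnormalized: adj maps each
    vertex to its neighbor list."""
    bc = defaultdict(float)
    for s in adj:
        # forward phase: BFS from s, counting shortest paths (sigma)
        sigma = defaultdict(int); sigma[s] = 1
        dist = {s: 0}
        preds = defaultdict(list)
        order = []
        q = deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # backward phase: accumulate dependencies in reverse BFS order
        delta = defaultdict(float)
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return dict(bc)
```

The per-source independence visible in the outer loop is the "multi-source property" the hybrid algorithm exploits, and the forward/backward split is exactly where its CPU-GPU synchronization points arise.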
10

Srovnání algoritmů při řešení problému obchodního cestujícího / A Comparison of Algorithms for Solving the Travelling Salesman Problem

Kopřiva, Jan January 2009 (has links)
This master's thesis deals with an innovation of the logistics module of an ERP information system. The innovation is based on implementing heuristic algorithms that solve the Travelling Salesman Problem (TSP). The software MATLAB is used for the analysis and testing of these algorithms. The goal of the thesis is a comparison of the selected algorithms by criteria relevant to economic use: solution accuracy, computation speed, and memory demands.
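As an example of the accuracy-versus-time trade-off such a comparison measures, here is a nearest-neighbour construction followed by 2-opt improvement — a generic TSP heuristic sketch, not the thesis's specific algorithm set:

```python
import math

def tour_length(pts, tour):
    """Total length of a closed tour over point coordinates."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(pts):
    """Greedy construction: always visit the closest unvisited city."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(pts, tour):
    """Local improvement: reverse segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(pts, cand) < tour_length(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour
```

Nearest-neighbour is fast but can be far from optimal; 2-opt trades extra computation time for shorter tours, which is precisely the accuracy/speed/memory comparison the thesis carries out in MATLAB.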
