61

Overlap-Free Words and Generalizations

Rampersad, Narad January 2007 (has links)
The study of combinatorics on words dates back at least to the beginning of the 20th century and the work of Axel Thue. Thue was the first to give an example of an infinite word over a three-letter alphabet that contains no squares (identical adjacent blocks) xx. This result was eventually used to solve some longstanding open problems in algebra and has remarkable connections to other areas of mathematics and computer science as well. This thesis will consider several different generalizations of Thue's work. In particular, we shall study the properties of infinite words avoiding various types of repetitions. In Chapter 1 we introduce the theory of combinatorics on words. We present the basic definitions and give a historical survey of the area. In Chapter 2 we consider the work of Thue in more detail. We present various well-known properties of the Thue-Morse word and give some generalizations. We examine Fife's characterization of the infinite overlap-free words and give a simpler proof of this result. We also present some applications to transcendental number theory, generalizing a classical result of Mahler. In Chapter 3 we generalize a result of Séébold by showing that the only infinite 7/3-power-free binary words that can be obtained by iterating a morphism are the Thue-Morse word and its complement. In Chapter 4 we continue our study of overlap-free and 7/3-power-free words. We discuss the squares that can appear as subwords of these words. We also show that it is possible to construct infinite 7/3-power-free binary words containing infinitely many overlaps. In Chapter 5 we consider certain questions of language theory. In particular, we examine the context-freeness of the set of words containing overlaps. We show that over a three-letter alphabet, this set is not context-free, and over a two-letter alphabet, we show that this set cannot be unambiguously context-free.
In Chapter 6 we construct infinite words over a four-letter alphabet that avoid squares in any arithmetic progression of odd difference. Our constructions are based on properties of the paperfolding words. We use these infinite words to construct non-repetitive tilings of the integer lattice. In Chapter 7 we consider approximate squares rather than squares. We give constructions of infinite words that avoid such approximate squares. In Chapter 8 we conclude the work and present some open problems.
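The repetitions this abstract studies are easy to experiment with. The sketch below (illustrative code, not from the thesis) builds a prefix of the Thue-Morse word and checks it for overlaps, i.e. factors of the form axaxa with a a single letter and x a possibly empty word:

```python
def thue_morse(n):
    # Prefix of length n of the Thue-Morse word: t[i] is the parity
    # of the number of 1s in the binary expansion of i.
    return [bin(i).count("1") % 2 for i in range(n)]

def has_overlap(w):
    # An overlap is a factor of the form axaxa (a a letter, x possibly
    # empty): a factor of length 2p + 1 whose symbols repeat with period p.
    n = len(w)
    for p in range(1, n // 2 + 1):
        for i in range(n - 2 * p):
            if all(w[i + j] == w[i + j + p] for j in range(p + 1)):
                return True
    return False

print(has_overlap([0, 1, 0, 1, 0]))   # True: 01010 is axaxa with a = 0, x = 1
print(has_overlap(thue_morse(200)))   # False: Thue-Morse prefixes are overlap-free
```

The brute-force check is quadratic in the word length, which is fine for small prefixes; Thue's theorem guarantees the second call returns False for every prefix length.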
62

The combinatorics of the Jack parameter and the genus series for topological maps

La Croix, Michael Andrew January 2009 (has links)
Informally, a rooted map is a topologically pointed embedding of a graph in a surface. This thesis examines two problems in the enumerative theory of rooted maps. The b-Conjecture, due to Goulden and Jackson, predicts that structural similarities between the generating series for rooted orientable maps with respect to vertex-degree sequence, face-degree sequence, and number of edges, and the corresponding generating series for rooted locally orientable maps, can be explained by a unified enumerative theory. Both series are specializations of M(x,y,z;b), a series defined algebraically in terms of Jack symmetric functions, and the unified theory should be based on the existence of an appropriate integer-valued invariant of rooted maps with respect to which M(x,y,z;b) is the generating series for locally orientable maps. The conjectured invariant should take the value zero when evaluated on orientable maps, and should take positive values when evaluated on non-orientable maps, but since it must also depend on rooting, it cannot be directly related to genus. A new family of candidate invariants, η, is described recursively in terms of root-edge deletion. Both the generating series for rooted maps with respect to η and an appropriate specialization of M satisfy the same differential equation, which has a unique solution. This shows that η gives the appropriate enumerative theory when vertex degrees are ignored, which is precisely the setting required by Goulden, Harer, and Jackson for an application to algebraic geometry. A functional equation satisfied by M and the existence of a bijection between rooted maps on the torus and a restricted set of rooted maps on the Klein bottle show that η has additional structural properties that are required of the conjectured invariant.
The q-Conjecture, due to Jackson and Visentin, posits a natural combinatorial explanation for a functional relationship between a generating series for rooted orientable maps and the corresponding generating series for 4-regular rooted orientable maps. The explanation should take the form of a bijection, ϕ, between appropriately decorated rooted orientable maps and 4-regular rooted orientable maps, and its restriction to undecorated maps is expected to be related to the medial construction. Previous attempts to identify ϕ have suffered from the fact that the existing derivations of the functional relationship involve inherently non-combinatorial steps, but the techniques used to analyze η suggest the possibility of a new derivation of the relationship that may be more amenable to combinatorial analysis. An examination of automorphisms that must be induced by ϕ gives evidence for a refinement of the functional relationship, and this leads to a more combinatorially refined conjecture. The refined conjecture is then reformulated algebraically so that its predictions can be tested numerically.
63

Quantum Information Processing with Adversarial Devices

McKague, Matthew 20 May 2010 (has links)
We consider several applications in black-box quantum computation in which untrusted physical quantum devices are connected together to produce an experiment. By examining the outcome statistics of such an experiment, and comparing them against the desired experiment, we may hope to certify that the physical experiment is implementing the desired experiment. This is useful in order to verify that a calculation has been performed correctly, that measurement outcomes are secure, or that the devices are producing the desired state. First, we introduce constructions for a family of simulations, which duplicate the outcome statistics of an experiment but are not exactly the same as the desired experiment. This places limitations on how strict we may be with the requirements we place on the physical devices. We identify many simulations, and consider their implications for quantum foundations as well as security-related applications. The most general application of black-box quantum computing is self-testing circuits, in which a generic physical circuit may be tested against a given circuit. Earlier results were restricted to circuits described on a real Hilbert space. We give new proofs for earlier results and begin work extending them to circuits on a complex Hilbert space with a test that verifies complex measurements. For security applications of black-box quantum computing, we consider device-independent quantum key distribution (DIQKD). We may consider DIQKD as an extension of QKD (quantum key distribution) in which the model of the physical measurement devices is replaced with an adversarial model. This introduces many technical problems, such as unbounded dimension, but promises increased security since the many complexities hidden by traditional models are implicitly considered. We extend earlier work by proving security with fewer assumptions. Finally, we consider the case of black-box state characterization. Here the emphasis is placed on providing robust results with operationally meaningful measures. The goal is to certify that a black-box device is producing high-quality maximally entangled pairs of qubits using only untrusted measurements and a single statistic, the CHSH value, defined using correlations of outcomes from the two parts of the system. We present several measures of quality and prove bounds for them.
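The CHSH statistic mentioned above can be computed directly for an ideal (trusted) maximally entangled pair. The following sketch is illustrative only, not the thesis's device-independent analysis; the measurement angles are the standard optimal choice. It evaluates the four correlations in numpy and recovers the Tsirelson value 2√2, above the classical bound of 2:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

def obs(theta):
    # +/-1-valued observable measured at angle theta in the X-Z plane.
    return np.cos(theta) * Z + np.sin(theta) * X

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def E(a, b):
    # Correlation <phi+| A(a) tensor B(b) |phi+>; equals cos(a - b) here.
    return phi_plus @ np.kron(obs(a), obs(b)) @ phi_plus

# Standard optimal CHSH settings.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(round(S, 6))  # 2.828427, i.e. 2*sqrt(2)
```

In the device-independent setting no such inside view of the state or observables is available, which is exactly why certifying the device from the value of S alone is nontrivial.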
64

On the orientation of hypergraphs

Ruiz-Vargas, Andres J. 12 1900 (has links)
This is an expository thesis. In this thesis we study out-orientations of hypergraphs, where every hyperarc has one tail vertex. We study hypergraphs that admit out-orientations covering supermodular-type connectivity requirements. For this, we follow a paper of Frank. We also study the Steiner rooted orientation problem. Given a hypergraph and a subset of vertices S ⊆ V, the goal is to give necessary and sufficient conditions for the existence of an orientation such that the connectivity between a root vertex and each vertex of S is at least k, for a positive integer k. We follow a paper by Király and Lau, where they prove that every 2k-hyperedge-connected hypergraph has such an orientation.
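For intuition, the Steiner rooted orientation problem with k = 1 can be solved by brute force on tiny instances. In this sketch (a toy illustration; the example hypergraph, function names, and exhaustive search are our own, not from the thesis or from Király and Lau's proof), each hyperedge is oriented by choosing one tail vertex, and a hyperarc may be traversed once its tail has been reached:

```python
from itertools import product

def reachable(arcs, root):
    # Vertices reachable from root: a hyperarc (tail, heads) can be
    # traversed, reaching all its heads, once its tail has been reached.
    reached = {root}
    changed = True
    while changed:
        changed = False
        for tail, heads in arcs:
            if tail in reached and not heads <= reached:
                reached |= heads
                changed = True
    return reached

def steiner_rooted_orientation(hyperedges, root, S):
    # Brute force: pick a tail for each hyperedge and test whether every
    # terminal in S becomes reachable from the root (connectivity k = 1).
    for tails in product(*[sorted(e) for e in hyperedges]):
        arcs = [(t, e - {t}) for t, e in zip(tails, hyperedges)]
        if S <= reachable(arcs, root):
            return tails
    return None

edges = [{0, 1, 2}, {1, 3}, {2, 3}]
result = steiner_rooted_orientation(edges, root=0, S={1, 2, 3})
print(result)  # (0, 1, 2): tails making every terminal reachable from 0
```

The exhaustive search is exponential in the number of hyperedges; the point of the orientation theorems surveyed in the thesis is precisely to replace such search with checkable connectivity conditions.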
65

Analyzing Quantum Cryptographic Protocols Using Optimization Techniques

Sikora, Jamie 20 April 2012 (has links)
This thesis concerns the analysis of the unconditional security of quantum cryptographic protocols using convex optimization techniques. It is divided into the study of coin-flipping and oblivious transfer. We first examine a family of coin-flipping protocols. Almost all of the handful of explicitly described coin-flipping protocols are based on bit-commitment. To explore the possibility of finding explicit optimal or near-optimal protocols, we focus on a class which generalizes such protocols. We call these BCCF-protocols, for bit-commitment based coin-flipping. We use the semidefinite programming (SDP) formulation of cheating strategies along the lines of Kitaev to analyze the structure of the protocols. In the first part of the thesis, we show how these semidefinite programs can be used to simplify the analysis of the protocol. In particular, we show that a particular set of cheating strategies contains an optimal strategy. This reduces the problem to optimizing a linear combination of fidelity functions over a polytope, which has several benefits. First, it allows one to model cheating probabilities using a simpler class of optimization problems known as second-order cone programs (SOCPs). Second, it helps with the construction of point games due to Kitaev as described in Mochon's work. Point games were developed to give a new perspective for studying quantum protocols. In some sense, the notion of point games is dual to the notion of protocols. There has been increased research activity in optimization concerning generalizing theory and algorithms for linear programming to much wider classes of optimization problems such as semidefinite programming. For example, semidefinite programming provides a tool for potentially improving results based on linear programming or investigating old problems that have eluded analysis by linear programming. In this sense, the history of semidefinite programming is very similar to the history of quantum computation.
Quantum computing gives a generalized model of computation to tackle new and old problems, improving on and generalizing older classical techniques. Indeed, there are striking differences between linear programming and semidefinite programming, as there are between classical and quantum computation. In this thesis, we strengthen this analogy by studying a family of classical coin-flipping protocols based on classical bit-commitment. Cheating strategies for these "classical BCCF-protocols" can be formulated as linear programs (LPs) which are closely related to the semidefinite programs for the quantum version. In fact, we can construct point games for the classical protocols as well using the analysis for the quantum case. Using point games, we prove that every classical BCCF-protocol allows exactly one of the parties to entirely determine the outcome. Also, we rederive Kitaev's lower bound to show that only "classical" protocols can saturate Kitaev's analysis. Moreover, if the product of Alice and Bob's optimal cheating probabilities is 1/2, then at least one party can cheat with probability 1. The second part concerns the design of an algorithm to search for BCCF-protocols with small bias. Most coin-flipping protocols with more than three rounds have eluded direct analysis. To better understand the properties of optimal BCCF-protocols with four or more rounds, we turn to computational experiments. We design a computational optimization approach to search for the best protocol based on the semidefinite programming formulations of cheating strategies. We create a protocol filter using cheating strategies, some of which build upon known strategies and others based on convex optimization and linear algebra. The protocol filter efficiently eliminates candidate protocols with too high a bias. Using this protocol filter and symmetry arguments, we perform searches in a matter of days that would have otherwise taken millions of years.
Our experiments checked 10^16 four- and six-round BCCF-protocols and suggest that the optimal bias is 1/4. The third part examines the relationship between oblivious transfer, bit-commitment, and coin-flipping. We consider oblivious transfer which succeeds with probability 1 when the two parties are honest and construct a simple protocol with security provably better than any classical protocol. We also derive a lower bound by constructing a bit-commitment protocol from an oblivious transfer protocol. Known lower bounds for bit-commitment then lead to a constant lower bound on the bias of oblivious transfer. Finally, we show that it is possible to use Kitaev's semidefinite programming formulation of cheating strategies to obtain optimal lower bounds on a "forcing" variant of oblivious transfer related to coin-flipping.
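The classical phenomenon quoted above, that exactly one party can entirely determine the outcome, is easy to see in a degenerate toy protocol (our own illustration, not the BCCF formalism of the thesis): if the commitment phase is perfectly hiding, the committer can defer her choice until she has seen the other party's bit and force any outcome she likes.

```python
import random

# Toy coin flip: Alice "commits" to a bit with a perfectly hiding commitment
# (modelled here as sending nothing), Bob replies with his bit b, Alice
# reveals her bit a, and the outcome is a XOR b.
def flip(alice_strategy):
    b = random.randrange(2)   # Bob plays honestly
    a = alice_strategy(b)     # the commitment hides nothing from Bob but
    return a ^ b              # also binds Alice to nothing, so she may adapt

honest = lambda b: random.randrange(2)
cheat_for_zero = lambda b: b  # choose a = b so that a ^ b == 0 always

trials = 10_000
honest_ones = sum(flip(honest) for _ in range(trials))
cheat_ones = sum(flip(cheat_for_zero) for _ in range(trials))
print(honest_ones / trials)   # close to 0.5
print(cheat_ones)             # 0: a cheating Alice forces the outcome
```

Making the commitment perfectly binding instead hands the same total control to Bob, who learns a before choosing b; escaping this all-or-nothing trade-off classically is impossible, which is what the point-game analysis in the thesis makes precise.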
66

On the Efficiency and Security of Cryptographic Pairings

Knapp, Edward 04 December 2012 (has links)
Pairing-based cryptography has been employed to obtain several advantageous cryptographic protocols. In particular, there exist several identity-based variants of common cryptographic schemes. The computation of a single pairing is a comparatively expensive operation, since it often requires many operations in the underlying elliptic curve. In this thesis, we explore the efficient computation of pairings. Computation of the Tate pairing is done in two steps. First, a Miller function is computed, followed by the final exponentiation. We discuss the state-of-the-art optimizations for Miller function computation under various conditions. We are able to shave off a fixed number of operations in the final exponentiation. We consider methods to effectively parallelize the computation of pairings in a multi-core setting and discover that the Weil pairing may provide some advantage under certain conditions. This work is extended to the 192-bit security level and some unlikely candidate curves for such a setting are discovered. Electronic Toll Pricing (ETP) aims to improve road tolling by collecting toll fares electronically and without the need to slow down vehicles. In most ETP schemes, drivers are charged periodically based on the locations, times, distances or durations travelled. Many ETP schemes are currently deployed and although these systems are efficient, they require a great deal of knowledge regarding driving habits in order to operate correctly. We present an ETP scheme where pairing-based BLS signatures play an important role. Finally, we discuss the security of pairings in the presence of an efficient algorithm to invert the pairing. We generalize previous results to the setting of asymmetric pairings as well as give a simplified proof in the symmetric setting.
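The two-step structure of the Tate pairing computation can be illustrated on the common Barreto-Naehrig (BN) family of pairing-friendly curves (a standard example chosen here for concreteness; the thesis's specific optimizations are not reproduced). For BN curves the embedding degree is 12, and the final exponentiation by (p¹² − 1)/r splits into a cheap "easy" part, (p⁶ − 1)(p² + 1), computable with Frobenius maps and conjugations, and a "hard" part, (p⁴ − p² + 1)/r, because r divides the cyclotomic factor p⁴ − p² + 1:

```python
# Numeric sanity check of standard BN-curve facts (not code from the thesis).
def bn_params(t):
    # Barreto-Naehrig polynomial parameterization of the field size p and
    # the pairing group order r in terms of the curve parameter t.
    p = 36 * t**4 + 36 * t**3 + 24 * t**2 + 6 * t + 1
    r = 36 * t**4 + 36 * t**3 + 18 * t**2 + 6 * t + 1
    return p, r

for t in (1, 2, 7, 2**62 + 2**55 + 1):
    p, r = bn_params(t)
    assert (p**4 - p**2 + 1) % r == 0        # r divides Phi_12(p)
    easy = (p**6 - 1) * (p**2 + 1)           # "easy" part of the exponent
    hard = (p**4 - p**2 + 1) // r            # "hard" part of the exponent
    assert easy * hard * r == p**12 - 1      # the full exponent factors
print("final-exponentiation split verified")
```

The divisibility holds as a polynomial identity in t, so it is independent of whether a particular t yields primes; the "fixed number of operations" savings mentioned in the abstract concern how the hard part is then evaluated.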
67

Graph searching and a generalized parking function

Kostic, Dimitrije Nenad 15 May 2009 (has links)
Parking functions have been a focus of mathematical research since the mid-1970s. Various generalizations have been introduced since the mid-1990s and deep relationships between these and other areas of mathematics have been discovered. Here, we introduce a new generalization, the G-multiparking function, where G is a simple graph on a totally ordered vertex set {1, 2, . . . , n}. We give an algorithm that converts a G-multiparking function into a rooted spanning forest of G by using a graph searching technique to build a sequence F1, F2, . . . , Fn, where each Fi is a subforest of G and Fn is a spanning forest of G. We also give another algorithm that converts a rooted spanning forest of G to a G-multiparking function and prove that the resulting functions (between the sets of G-multiparking functions and rooted spanning forests of G) are bijections and inverses of each other. Each of these two algorithms relies on a choice function, which is a function from the set of pairs (F,W) (where F is a subforest of G and W is a set of some of the leaves of F) into W. There are many possible choice functions for a given graph, each giving formality to the concept of choosing how a graph searching algorithm should proceed at each step (i.e. if the algorithm has reached Fi, then Fi+1 is some forest on the vertex set V(Fi) together with the vertex selected by the choice function on some pair (Fi,W)). We also define F-redundant edges, which are edges whose removal from G does not affect the execution of either algorithm mentioned above. This concept leads to a categorization of the edges of G, which in turn provides a new formula for the Tutte polynomial of G and other enumerative results.
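The classical case that this abstract generalizes is simple to state and check: a sequence (a₁, . . . , aₙ) is a parking function iff its increasing rearrangement b satisfies bᵢ ≤ i, and such sequences are counted by (n+1)ⁿ⁻¹, which is also the number of labelled trees on n+1 vertices — the kind of parking/forest correspondence the thesis extends to G-multiparking functions. A brute-force check (illustrative code, not from the thesis):

```python
from itertools import product

def is_parking_function(a):
    # Classical criterion: sort the entries increasingly; the i-th smallest
    # entry (1-indexed) must be at most i.
    return all(v <= i + 1 for i, v in enumerate(sorted(a)))

# Parking functions of length n are counted by (n+1)^(n-1), matching
# Cayley's count of labelled trees on n+1 vertices.
for n in range(1, 5):
    count = sum(is_parking_function(a)
                for a in product(range(1, n + 1), repeat=n))
    assert count == (n + 1) ** (n - 1)
    print(n, count)   # 1 1 / 2 3 / 3 16 / 4 125
```

The G-multiparking bijection in the thesis replaces this numerical coincidence with an explicit algorithmic correspondence driven by the choice function.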
68

Topics in additive combinatorics

Fiz Pontiveros, Gonzalo January 2013 (has links)
No description available.
69

Counting coloured boxes

Young, Benjamin 05 1900 (has links)
This thesis consists of the manuscripts of two research papers. In the first paper, we verify a recent conjecture of Kenyon and Szendrői by computing the generating function for pyramid partitions. Pyramid partitions are closely related to Aztec diamonds; their generating function turns out to be the partition function for the Donaldson-Thomas theory of a non-commutative resolution of the conifold singularity {x₁x₂ − x₃x₄ = 0} ⊂ C⁴. The proof does not require algebraic geometry; it uses a modified version of the domino (or dimer) shuffling algorithm of Elkies, Kuperberg, Larsen and Propp. In the second paper, we derive two multivariate generating functions for three-dimensional Young diagrams (also called plane partitions). The variables correspond to a colouring of the boxes according to a finite abelian subgroup G of SO(3). These generating functions turn out to be orbifold Donaldson-Thomas partition functions for the orbifold [C³/G]. We need only the vertex operator methods of Okounkov-Reshetikhin-Vafa for the easy case G = Zₙ; to handle the considerably more difficult case G = Z₂ × Z₂, we will also use a refinement of the author's recent q-enumeration of pyramid partitions. In the appendix, written by Jim Bryan, we relate the diagram generating functions to the Donaldson-Thomas partition functions of the orbifold [C³/G]. We find a relationship between the Donaldson-Thomas partition functions of the orbifold and its G-Hilbert scheme resolution. We formulate a crepant resolution conjecture for the Donaldson-Thomas theory of local orbifolds satisfying the Hard Lefschetz condition.
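The uncoloured case underlying the second paper is MacMahon's classical generating function for plane partitions, ∑ pp(n) qⁿ = ∏ₖ (1 − qᵏ)⁻ᵏ. The sketch below is illustrative, not code from the thesis; the recurrence via σ₂ (sum of squared divisors) follows from taking the logarithm of the product.

```python
def plane_partitions(N):
    # Coefficients pp(0..N) of prod_{k>=1} (1 - q^k)^(-k).
    # Taking log: log Z = sum_n (sigma_2(n)/n) q^n, which gives the
    # recurrence n * pp(n) = sum_{k=1}^{n} sigma_2(k) * pp(n - k).
    sigma2 = [sum(d * d for d in range(1, k + 1) if k % d == 0)
              for k in range(N + 1)]
    pp = [1] + [0] * N
    for n in range(1, N + 1):
        pp[n] = sum(sigma2[k] * pp[n - k] for k in range(1, n + 1)) // n
    return pp

print(plane_partitions(10))
# [1, 1, 3, 6, 13, 24, 48, 86, 160, 282, 500]
```

The coloured generating functions in the thesis refine these counts by tracking a G-weight on each box; setting all colour variables equal recovers MacMahon's series.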
