About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Infinite Structures in Timed Systems

Krcal, Pavel. January 2009.
Real-time systems distinguish themselves by stating timing constraints explicitly in the system specification. Ensuring such constraints requires specific methods and tools in system design. We focus on one of the methods applied in the validation phase, namely formal verification, which automatically establishes correctness of the system model with mathematical rigor. In order to apply mechanical procedures to determine whether the system satisfies the requirements, we first have to model the validated part of the system in a mathematical form. This thesis deals with one such formalism - timed automata - and investigates different types of infinite state structures arising in the verification procedures related to this formalism. There are two different views which open the door for the introduction of such structures.

First, we turn outwards and extend timed automata with additional infinite data structures - unbounded queues. These queues serve different purposes. In one case, the queues contain computation tasks and, together with a timed automaton, model a real-time system with tasks. The problem of interest in this setting is schedulability analysis. We investigate the decidability boundary in the presence of various features such as preemption, variable computation times of tasks, and communication between the timed automaton and the task queue. In the other case, we use queues for asynchronous communication between timed automata running synchronously in parallel. These queues store messages issued by one automaton and waiting to be read by another automaton. Such situations occur, among other settings, in real-time control systems where several concurrently running tasks communicate via buffers. We study the decidability border for reachability analysis depending on the communication topologies of these systems.

Second, we turn inwards and study a peculiar feature of timed automata which allows them to enforce behaviors where time distances between events monotonically grow while remaining bounded by some integer. This feature can be characterized by unbounded counters recording the number of such enforced increases. When we switch from the dense time semantics used for modeling to an implementation with a fixed clock rate (sampled semantics), only behaviors which correspond to a bounded usage of these counters are preserved. We describe the operation of these counters as a new type of counter automaton and prove that one can effectively check whether the counters are used in a bounded way. As a result, it is possible to check for a given timed automaton whether there is an implementation with a fixed sampling rate which preserves all qualitative behaviors.
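To make the dense-versus-sampled distinction concrete, here is a minimal sketch (our illustration, not code from the thesis): a run whose inter-event delays must strictly grow while staying below a bound B. In dense time such runs can be made arbitrarily long, but under a fixed sampling rate 1/k every delay is a multiple of 1/k, so at most B*k - 1 events fit. All names and parameters below are hypothetical.

    # Strictly increasing inter-event delays bounded by B: dense time
    # admits unboundedly long runs, a sampled clock does not.
    from fractions import Fraction

    def max_run_length_sampled(B: int, k: int) -> int:
        """Longest run of strictly increasing delays < B when every
        delay must be a positive multiple of 1/k."""
        step = Fraction(1, k)
        delay, count = step, 0
        while delay < B:
            count += 1
            delay += step          # smallest possible strict increase
        return count

    def dense_time_run(B: int, n: int):
        """In dense time, n events always fit: halve the remaining
        gap to B each step, so delays strictly grow yet stay < B."""
        d = Fraction(B, 2)
        delays = []
        for _ in range(n):
            delays.append(d)
            d += (B - d) / 2
        return delays

    print(max_run_length_sampled(B=2, k=10))   # 19 = B*k - 1, then stuck
    print(len(dense_time_run(B=2, n=1000)))    # 1000 events, no limit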
2

Alternative variants of zero-knowledge proofs

Pass, Rafael. January 2004.
No description available.
3

A Parameterized Framework for Quantum Computation

Mayfield, James L., IV. 16 October 2012.
No description available.
4

Privacy in Complex Sample Based Surveys

Shawn A. Merrill. 20 December 2021.
In the last few decades, there has been a dramatic uptick in issues related to protecting user privacy in released data, both in statistical databases and in anonymized records. Privacy-preserving data publishing is a field established to handle these releases while avoiding the problems that plagued many earlier attempts. This issue is of particular importance for governmental data, where both the release and the privacy requirements are frequently governed by legislation (e.g., HIPAA, FERPA, the Clery Act). The problem is compounded by the complex survey methods employed to counter problems in data collection. The preeminent definition of privacy is differential privacy, which protects users by limiting the impact that any individual can have on the result of any query.

The thesis proposes models for differentially private versions of current survey methodologies and discusses the evaluation of those models. We focus on missing data and weighting, two techniques commonly employed in complex surveys to counter problems with sampling and response rates. First, we propose a model for answering queries on datasets with missing data while maintaining differential privacy. Our model uses k-Nearest Neighbor imputation to replicate donor values while protecting the privacy of the donor. It provides significantly better bias reduction in realistic experiments on existing data, as well as less noise than a naive solution. Our second model performs Iterative Proportional Fitting (IPF), a technique commonly used to ensure that survey records are weighted consistently with known values, in a differentially private manner. We also argue for the general philosophical need to incorporate privacy when creating new survey methodologies, rather than assuming that privacy can simply be added at a later step.
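As background for the privacy definition above, here is a minimal sketch of the Laplace mechanism for a counting query, the standard building block for epsilon-differential privacy. It is illustrative only - not the thesis's kNN-imputation or IPF mechanism - and the records and predicate are hypothetical.

    # Laplace mechanism for a counting query: a count has sensitivity 1
    # (one person changes it by at most 1), so Laplace(1/epsilon) noise
    # gives epsilon-differential privacy.
    import random

    def laplace_noise(scale: float) -> float:
        """Laplace(0, scale) as the difference of two exponentials."""
        return scale * (random.expovariate(1.0) - random.expovariate(1.0))

    def dp_count(records, predicate, epsilon: float) -> float:
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Hypothetical usage: how many respondents reported income > 50k?
    records = [{"income": random.randint(10_000, 120_000)} for _ in range(1000)]
    print(dp_count(records, lambda r: r["income"] > 50_000, epsilon=0.5))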
5

Stochastic Block Model Dynamics

Nithish Kumar Kumar. 29 April 2021.
The past few years have seen an increasing focus on fairness and the long-term impact of algorithmic decision making in machine learning, artificial intelligence, and other disciplines. In this thesis, we model hiring processes in enterprises and organizations using dynamic mechanism design. Using a stochastic block model to simulate the workings of a hiring process, we study fairness and long-term evolution in the system.

We first present multiple results on a deterministic variant of our model, including convergence and an accurate approximate solution describing the state of the deterministic variant after any time period has elapsed. Using the differential equation method, it can be shown that this deterministic variant is in turn an accurate approximation of the evolution of our stochastic block model with high probability.

Finally, we derive upper and lower bounds on the expected state at each time step, and further show that in the long-term limit these upper and lower bounds themselves converge to the state evolution of the deterministic system. These results characterize the long-term behavior of our model and thereby allow reasoning about how fairness in organizations could be achieved. We conclude that without sufficient, systematic incentives, under-represented groups will fade from organizations over time.
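A toy simulation can illustrate the concluding claim. The dynamic below is hypothetical - the thesis's stochastic block model is richer - but it shows the same qualitative effect: with employee turnover and even a small in-group hiring bias toward the majority, the minority share decays geometrically, while with no bias it is preserved. All parameter names are made up for illustration.

    # Toy hiring dynamic (hypothetical): each period a fraction
    # `turnover` of employees leave and are replaced; a new hire comes
    # from group A with probability equal to A's current share, tilted
    # down by `bias` (e.g., majority in-group referrals).
    def simulate_share(initial_share=0.3, turnover=0.1, bias=0.15, steps=200):
        share = initial_share              # fraction of group A in the org
        for _ in range(steps):
            hire_prob = share * (1 - bias)
            share = (1 - turnover) * share + turnover * hire_prob
        return share

    print(simulate_share())                # biased: share decays toward 0
    print(simulate_share(bias=0.0))        # unbiased: share is preserved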
6

Simple Games on Networks

Kimmel, Jason. January 2011.
No description available.
7

Fine-grained Lower Bounds for Problems on Strings and Graphs

Hoppenworth, Gary Thomas. 01 January 2021.
The motivation of this thesis is to present new lower bounds for important computational problems on strings and graphs, conditioned on plausible conjectures in theoretical computer science. These lower bounds, called conditional lower bounds, are a topic of immense interest in the field of fine-grained complexity, which aims to develop a better understanding of the hardness of problems that can be solved in polynomial time. In this thesis, we give new conditional lower bounds for four interesting computational problems: the median and center string edit distance problems, the pattern matching on labeled graphs problem, and the subtree isomorphism problem. These problems are of interest in the applied topics of computational biology and information retrieval, as well as in theoretical computer science more broadly.
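For context, the quadratic-time dynamic program for plain edit distance is the baseline whose conditional optimality (e.g., under the Strong Exponential Time Hypothesis) motivates this style of lower bound. The sketch below is standard background, not one of the thesis's constructions for the median/center variants.

    # Classic O(n*m) edit-distance DP with a rolling row.  Fine-grained
    # complexity asks whether this quadratic bound can be beaten;
    # conditional lower bounds suggest not.
    def edit_distance(a: str, b: str) -> int:
        n, m = len(a), len(b)
        prev = list(range(m + 1))                    # distances for a[:0]
        for i in range(1, n + 1):
            cur = [i] + [0] * m
            for j in range(1, m + 1):
                cur[j] = min(prev[j] + 1,            # delete a[i-1]
                             cur[j - 1] + 1,         # insert b[j-1]
                             prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitute
            prev = cur
        return prev[m]

    print(edit_distance("kitten", "sitting"))        # 3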
8

Approximation algorithms for multidimensional bin packing

Khan, Arindam. 07 January 2016.
The bin packing problem has been a cornerstone of approximation algorithms and has been extensively studied since the early seventies. In the classical bin packing problem, we are given a list of real numbers in the range (0, 1]; the goal is to place them in a minimum number of bins so that no bin holds numbers summing to more than 1. In this thesis we study approximation algorithms for three generalizations of bin packing: geometric bin packing, vector bin packing, and weighted bipartite edge coloring.

In two-dimensional (2-D) geometric bin packing, we are given a collection of rectangular items to be packed into a minimum number of unit-size square bins. Geometric packing has vast applications in cutting stock, vehicle loading, pallet packing, memory allocation, and several other logistics- and robotics-related problems. We consider the widely studied orthogonal packing case, where the items must be placed in the bin with their sides parallel to the sides of the bin. Two variants are usually studied: (i) the items cannot be rotated, and (ii) they can be rotated by 90 degrees. We give a polynomial time algorithm with an asymptotic approximation ratio of $\ln(1.5) + 1 \approx 1.405$ for both the version with rotations and the version without. We also show the limitations of rounding-based algorithms, which are ubiquitous in bin packing: any algorithm that rounds at least one side of each large item to some number in a constant-size collection of values, chosen independently of the problem instance, cannot achieve an asymptotic approximation ratio better than 3/2.

In d-dimensional vector bin packing (VBP), each item is a d-dimensional vector that needs to be packed into unit vector bins. The problem is of great significance in resource-constrained scheduling and also appears in recent virtual machine placement problems in cloud computing. Even in two dimensions, it has novel applications in layout design, logistics, loading, and scheduling problems. We obtain a polynomial time algorithm with an asymptotic approximation ratio of $\ln(1.5) + 1 \approx 1.405$ for 2-D VBP, and a polynomial time algorithm with an almost tight (absolute) approximation ratio of $1 + \ln(1.5)$ for 2-D VBP. For $d$ dimensions, we give a polynomial time algorithm with an asymptotic approximation ratio of $\ln(d/2) + 1.5 \approx \ln d + 0.81$. We also consider vector bin packing under resource augmentation: we give a polynomial time algorithm that packs vectors into $(1+\epsilon)Opt$ bins when augmentation is allowed in $(d-1)$ dimensions, where $Opt$ is the minimum number of bins needed to pack the vectors into unit bins.

In the weighted bipartite edge coloring problem, we are given an edge-weighted bipartite graph $G=(V,E)$ with weights $w: E \rightarrow [0,1]$. The task is to find a proper weighted coloring of the edges with as few colors as possible; a coloring is proper if, for every vertex and every color, the weights of the edges of that color incident to the vertex sum to at most one. This problem is motivated by the rearrangeability of 3-stage Clos networks, which is very useful in various applications in interconnection networks and routing. We give a polynomial time approximation algorithm that returns a proper weighted coloring with at most $\lceil 2.2223m \rceil$ colors, where $m$ is the minimum number of unit-sized bins needed to pack the weights of the edges incident at any vertex. We also show that if all edge weights are $> 1/4$, then $\lceil 2.2m \rceil$ colors suffice.
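As a point of reference for the classical 1-D problem defined above, here is a sketch of First-Fit Decreasing, a standard baseline with asymptotic approximation ratio 11/9. It is not one of the thesis's algorithms for the geometric or vector variants.

    # First-Fit Decreasing for 1-D bin packing: sort items in
    # decreasing order, place each into the first bin it fits,
    # opening a new bin only when none fits.
    def first_fit_decreasing(items):
        bins = []                            # remaining capacity per bin
        packing = []                         # items placed in each bin
        for x in sorted(items, reverse=True):
            for i, cap in enumerate(bins):
                if x <= cap + 1e-12:         # first bin that still fits
                    bins[i] -= x
                    packing[i].append(x)
                    break
            else:                            # no bin fits: open a new one
                bins.append(1.0 - x)
                packing.append([x])
        return packing

    print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5]))
    # [[0.7, 0.2], [0.5, 0.5], [0.5, 0.4], [0.2]] -- 4 bins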
9

A model-independent theory of computational complexity : from patience to precision and beyond

Blakey, Edward William. January 2010.
The field of computational complexity theory--which chiefly aims to quantify the difficulty encountered when performing calculations--is, in the case of conventional computers, correctly practised and well understood (some important and fundamental open questions notwithstanding); however, such understanding is, we argue, lacking when unconventional paradigms are considered. As an illustration, we present here an analogue computer that performs the task of natural-number factorization using only polynomial time and space; the system's true, exponential complexity, which arises from requirements concerning precision, is overlooked by a traditional, 'time-and-space' approach to complexity theory. Hence, we formulate the thesis that unconventional computers warrant unconventional complexity analysis; the crucial omission from traditional analysis, we suggest, is consideration of relevant resources, these being not only time and space, but also precision, energy, etc.

In the presence of this multitude of resources, however, the task of comparing computers' efficiency (formerly a case merely of comparing time complexity) becomes difficult. We resolve this by introducing a notion of overall complexity, though this transpires to be incompatible with an unrestricted formulation of resource; accordingly, we define normality of resource, and stipulate that considered resources be normal, so as to rectify certain undesirable complexity behaviour. Our concept of overall complexity induces corresponding complexity classes, and we prove theorems concerning, for example, the inclusions therebetween.

Our notions of resource, overall complexity, normality, etc. form a model-independent framework of computational complexity theory, which allows: insightful complexity analysis of unconventional computers; comparison of large, model-heterogeneous sets of computers, and correspondingly improved bounds upon the complexity of problems; assessment of novel, unconventional systems against existing, Turing-machine benchmarks; increased confidence in the difficulty of problems; etc. We apply notions of the framework to existing disputes in the literature, and consider in the context of the framework various fundamental questions concerning the nature of computation.
10

Spectral Properties and Generation of Realistic Networks

Nicole E. Eikmeier. 13 August 2019.
Picture the life of a modern person in the Western world: they wake up in the morning and check their social networking sites; they drive to work on roads that connect cities to each other; they make phone calls and send emails and messages to colleagues, friends, and family around the world; they use electricity flowing through power lines; they browse the Internet, searching for information. All of these typical daily activities rely on the structure of networks. A network, in this case, is a set of nodes (people, web pages, etc.) connected by edges (physical connections, collaborations, etc.). The term graph is sometimes used for a more abstract structure, but here we use the terms graph and network interchangeably. The field of network analysis concerns studying and understanding networks in order to solve problems in the world around us. Graph models are used in conjunction with the study of real-world networks: they serve to study how well an algorithm may do on a real-world network and to test properties that may in turn produce faster algorithms.

The first piece of this dissertation is an experimental study exploring features of real data, specifically power-law distributions in degrees and spectra. In addition to comparing features of real data against existing results in the literature, this study led to the hypothesis that power-law structure in the spectra of real-world networks is more reliable than power-law structure in the degrees.

The theoretical contributions of this dissertation focus primarily on generating realistic networks through existing and novel graph models. The two graph models presented are called HyperKron and the Triangle Generalized Preferential Attachment (TGPA) model. Both models incorporate higher-order structure, leading to more sophisticated properties not examined in traditional models. We use the second of our models to further validate the hypothesis on power laws in the spectra: due to the structure of our model, we show that the power law in the spectrum is more resilient to sub-sampling, which gives some explanation for why power laws appear more frequently in the spectra of real-world data.
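A small sketch (our illustration, not the HyperKron or TGPA code) of the two places a power law can appear: grow a simple preferential-attachment graph, then compare the top of its degree sequence with the top of its adjacency spectrum. The generator and all parameters below are hypothetical simplifications.

    # Grow a preferential-attachment graph, then inspect degrees vs.
    # the adjacency spectrum; for power-law-ish data, log(value) vs.
    # log(rank) is roughly linear at the top of either list.
    import random
    import numpy as np

    def preferential_attachment(n=500, m=3, seed=0):
        random.seed(seed)
        edges, targets = [], list(range(m))    # endpoints, weighted by degree
        for v in range(m, n):
            chosen = set()
            while len(chosen) < m:
                chosen.add(random.choice(targets))
            for u in chosen:
                edges.append((v, u))
                targets += [v, u]              # each endpoint gains a degree
        return n, edges

    n, edges = preferential_attachment()
    A = np.zeros((n, n))
    for v, u in edges:
        A[v, u] = A[u, v] = 1.0

    degrees = sorted(A.sum(axis=0), reverse=True)
    eigvals = sorted(np.abs(np.linalg.eigvalsh(A)), reverse=True)
    print("top degrees:      ", degrees[:5])
    print("top |eigenvalues|:", [round(x, 2) for x in eigvals[:5]])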
