41

Constructions, Lower Bounds, and New Directions in Cryptography and Computational Complexity

Papakonstantinou, Periklis 01 September 2010
In the first part of the thesis we show black-box separations in public- and private-key cryptography. Our main result answers in the negative the question of whether Identity Based Encryption (IBE) can be based on Trapdoor Permutations. Furthermore, we make progress towards the black-box separation of IBE from the Decisional Diffie-Hellman assumption. We also show the necessity of adaptivity when querying one-way permutations to construct pseudorandom generators à la Goldreich-Levin, an issue related to streaming models for cryptography.

In the second part we introduce streaming techniques for understanding randomness in efficient computation, proving lower bounds for efficiently computable problems, and computing cryptographic primitives. We observe [Coo71] that logarithmic space-bounded Turing Machines equipped with an unbounded stack, henceforth called Stack Machines, together with an external random tape of polynomial length, characterize RP, BPP, and so on. By parametrizing on the number of passes over the random tape we provide a technical perspective bringing together Streaming, Derandomization, and older work on Stack Machines. Our technical developments relate this new model to previous work in derandomization. For example, we show that to derandomize parts of BPP it is in some sense sufficient to derandomize BPNC (a class believed to be much smaller than P \subseteq BPP). We also obtain a number of results for variants of the main model, regarding e.g. the fooling power of Nisan's pseudorandom generator (PRG) [N92] for the derandomization of BPNC^1, and the relation of parametrized access to NP-witnesses to width-parametrizations of SAT. A substantial contribution concerns a streaming approach to lower bounds for problems in the NC-hierarchy (and above). We apply Communication Complexity to show a streaming lower bound for a model with an unbounded (free-to-access) pushdown storage. In particular, we obtain an $n^{\Omega(1)}$ lower bound simultaneously in the space and in the number of passes over the input, for a variant of inner product. This is the first lower bound for machines that correspond to poly-size circuits, can compute Parity and Barrington's language, and can decide problems in P-NC assuming EXP \neq PSPACE. Finally, we initiate the study of log-space streaming computation of cryptographic primitives. We observe that the work on Cryptography in NC^0 [AIK08] yields a non-black-box construction of a one-way function computable in an O(log n)-space bounded streaming model. We also show that relying on this work is in some sense necessary.
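As a point of reference for the Goldreich-Levin style construction mentioned above, here is a minimal sketch of the classic one-bit-stretch generator G(x, r) = f(x) || r || <x, r> mod 2. It is purely illustrative and not taken from the thesis; the placeholder permutation is a hypothetical stand-in and is certainly not one-way.

```python
def toy_permutation(x_bits):
    # Hypothetical stand-in: a cyclic rotation is a permutation of {0,1}^n,
    # but it is NOT one-way; a real instantiation needs a candidate OWP.
    return x_bits[1:] + x_bits[:1]

def gl_prg(x_bits, r_bits):
    """Map 2n seed bits (x, r) to 2n+1 output bits: f(x) || r || <x, r> mod 2."""
    assert len(x_bits) == len(r_bits)
    hardcore_bit = sum(a & b for a, b in zip(x_bits, r_bits)) % 2   # inner product mod 2
    return toy_permutation(x_bits) + r_bits + [hardcore_bit]

if __name__ == "__main__":
    print(gl_prg([1, 0, 1, 1], [0, 1, 1, 0]))   # 9 bits out from 8 bits in
```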
42

Decomposition and Symmetry in Constraint Optimization Problems

Kitching, Matthew 14 November 2011
This thesis presents several techniques that advance search-based algorithms for solving Constraint Optimization Problems (COPs). These techniques exploit structural features common in such problems. In particular, the thesis presents a number of innovative algorithms, and associated data structures, designed to exploit decomposition and symmetry in COPs. First, a new technique called component templating is introduced. Component templates are data structures for efficiently representing the disjoint sub-problems that are encountered during search. Information about each disjoint sub-problem can then be reused during search, increasing efficiency. A new algorithm called OR-decomposition is introduced. This algorithm obtains many of the computational benefits of decomposition without the need to resort to separate recursions. That is, the algorithm explores a standard OR tree rather than an AND-OR tree. In this way, the search algorithm gains greater freedom in its variable ordering compared to previous decomposition algorithms. Although decomposition algorithms such as OR-decomposition are effective techniques for solving COPs with low tree-width, existing decomposition algorithms offer little advantage over branch and bound search on problems with high tree-width. A new method for exploiting decomposition on problems with high tree-width is presented. This technique involves detecting and exploiting decompositions on a selected subset of the problem’s objectives. Such decompositions can then be used to more efficiently compute additional bounds that can be used by branch and bound search. The second half of the thesis explores the use of symmetries in COPs. Using component templates, it is possible to exploit dynamic symmetries that appear during search when some of the variables of a problem have been assigned a value. Symmetries have not previously been combined with decomposition in COPs. An algorithm called Set Branching is presented, which exploits almost-symmetries in the values of a variable by clustering similar values together, then branching on sets of values rather than on each single value. The decomposition and symmetry algorithms presented in this thesis increase the efficiency of constraint optimization solvers. The thesis also presents experimental results that test these algorithms on a variety of real world problems, and demonstrate performance improvements over current state-of-the-art techniques.
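As a rough illustration of the set-branching idea described above, the following sketch clusters a variable's values by a similarity key and branches on clusters rather than on individual values, trying one representative per cluster. This is a toy sketch under the assumption that clustered values are interchangeable; it is not the thesis's algorithm and omits bounding, constraint propagation, and the later refinement of clusters.

```python
from itertools import groupby

def clusters(domain, key):
    """Group a variable's domain values into clusters of 'almost-symmetric' values."""
    return [list(g) for _, g in groupby(sorted(domain, key=key), key=key)]

def search(domains, cost, key, assignment=()):
    """Exhaustive search that branches on value clusters instead of single values.
    One representative per cluster is tried, reflecting the assumption that the
    clustered values are interchangeable."""
    if not domains:
        return cost(assignment), assignment
    first, rest = domains[0], domains[1:]
    best = (float("inf"), None)
    for cluster in clusters(first, key):
        representative = cluster[0]
        best = min(best, search(rest, cost, key, assignment + (representative,)))
    return best

if __name__ == "__main__":
    # Hypothetical example: values agreeing modulo 10 are treated as similar.
    domains = [[1, 11, 4], [2, 12, 5]]
    print(search(domains, cost=lambda a: sum(a), key=lambda v: v % 10))
```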
43

Using Modeler Intent in Software Engineering

Salay, Richard 17 February 2011
Models are used widely within software engineering and have been studied from many perspectives. A perspective that has received little attention is the role of modeler intent in modeling. Knowing the intent of the modeler supports both model comprehension, by providing the correct context for interpreting the model, and model quality, by clearly defining what information the model must contain. Furthermore, formal expression of this intent allows both to be supported automatically. Despite the value that knowledge of modeler intent can provide, the current state of modeling practice offers no adequate means for expressing this information. The focus of this thesis is to address this gap by providing mechanisms for expressing modeler intent both explicitly and formally. We approach this problem by recognizing the existence of a role level in modeling, where the role each model plays defines what information it should contain and how this information is related to the information in other models. The specification of these roles is what we refer to as the expression of modeler intent. We then present a framework that incorporates four aspects of modeler intent at the role level: the existential intent for a model, which arises in response to stakeholders' need for a set of information; the content criteria that express what information the model is intended to contain; model relationships that express how models are intended to constrain one another; and the decomposition criteria that express the intent behind how a model is decomposed into a collection of models. A key contribution of this thesis is the specification of the macromodeling language, a new modeling language designed for the role level that supports the expression of all four aspects of modeler intent. We evaluate these techniques by applying them to two real-world modeling examples.
44

Considering Mobile Devices, Context Awareness, and Mobile Users

Su, Jing Chih 17 February 2011
Recent years have seen rapid growth and adoption of powerful mobile devices such as smartphones, equipped with sophisticated input and display systems and multiple communication technologies. This trend has coincided with the rapid deployment and adoption of high-speed Internet services and web-based applications. While this rapid development of mobile technology has provided great opportunities, it also presents significant new challenges compared to traditional desktop computing. Specifically, unlike the traditional desktop computing experience, where users are stationary and physically isolated, users in mobile and social settings can be faced with real-time demands for their attention. This thesis examines the relationship between mobile devices, context awareness, and mobile users. We propose the use of physical proximity context to adapt and improve system behavior, and to enable mobile users to more effectively access and share content in non-desktop settings. This work identifies three distinct challenges in mobile software and addresses them using physical proximity context awareness. First, we improve mobile nodes' network utilization by using proximity awareness to automatically manage local radio resources. Next, we improve mobile web-backed applications and services by enabling social proximity awareness. Finally, we enable greater mobility and physical awareness for visually impaired users on mobile devices by providing an interface for exploring spatial geometric layouts.
45

Cryptography: Leakage Resilience, Black Box Separations, and Credential-free Key Exchange

Vahlis, Evgene 17 February 2011
We study several basic problems in cryptography.

Leakage-resilient cryptography: Cryptographic schemes are often broken through side-channel attacks on the devices that run them. Such attacks typically involve an adversary that is within short distance of the device and is able to measure various physical characteristics of it, such as power consumption, timing, heat, and sound emanation. We show how to immunize any cryptographic functionality against arbitrary side-channel attacks using the recently achieved fully homomorphic encryption and a single piece of secure hardware that samples from a public distribution. Our secure hardware never touches any secret information (such as a private key) and is testable in the sense that its inputs are not influenced by user or adversarial inputs.

Credential-free key exchange and sessions: One of the most basic tasks in cryptography is to allow two parties connected by a completely insecure channel to communicate securely. Typically, the first step towards achieving this is an exchange of a session key. Such an exchange normally requires an infrastructure where, for example, public keys of users are stored and can be securely retrieved. Often, however, such an infrastructure does not exist or is too costly to maintain. In such a setting an adversary can always act as a man-in-the-middle and intercept all communications. We argue, however, that a meaningful level of security can still be achieved. We present a definition of secure key exchange in a setting without any infrastructure, and describe a protocol that achieves that type of security. The idea is that an adversary should either know nothing about the session key produced by the protocol, or be forced to participate in two independent instances of the protocol.

Black-box separations: A complementary aspect of cryptographic research is the study of the limits of cryptographic assumptions. Basing constructions on weaker assumptions gives us more confidence in their security. We therefore wish to find, for each standard cryptographic assumption, which tasks cannot be solved based solely on that assumption. In this thesis we study the limits of a very basic public-key primitive: trapdoor permutations (TDPs). We show that TDPs cannot be used to construct Identity Based Encryption or a stronger type of TDPs called correlation-secure TDPs. Correlation-secure TDPs have been used to build chosen-ciphertext secure public-key encryption schemes, a primitive with a wide range of theoretical and practical applications.
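To make the infrastructure-free setting concrete, here is a minimal sketch of plain, unauthenticated Diffie-Hellman key exchange. This is not the protocol from the thesis; it only illustrates that two parties with no shared credentials can derive a common session key over an insecure channel, and that without authentication a man-in-the-middle can simply run two independent instances of the exchange. The group parameters are toy values chosen for illustration.

```python
import secrets

# Toy group parameters for illustration only: far too small and not a vetted
# group for real use; deployments rely on standardized parameters.
P = 2**127 - 1   # a Mersenne prime
G = 3

def keypair():
    private = secrets.randbelow(P - 2) + 1   # random secret exponent
    public = pow(G, private, P)              # g^x mod p, sent in the clear
    return private, public

# Alice and Bob each generate a key pair and exchange only the public halves.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private exponent with the other's public value.
alice_key = pow(bob_pub, alice_priv, P)
bob_key = pow(alice_pub, bob_priv, P)
assert alice_key == bob_key   # both ends derive the same session key
```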
46

Feature-based Control of Physics-based Character Animation

de Lasa, Martin 31 August 2011
Creating controllers for physics-based characters is a long-standing open problem in animation and robotics. Such controllers would have numerous applications while potentially yielding insight into human motion. Creating controllers remains difficult: current approaches are either constrained to track motion capture data, are not robust, or provide limited control over style. This thesis presents an approach to the control of physics-based characters based on high-level features of human movement, such as center of mass, angular momentum, and end-effector motion. Objective terms are used to control each feature and are combined via optimization. We show how locomotion can be expressed in terms of a small number of features that control balance and end-effectors. This approach is used to build controllers for biped balancing, jumping, walking, and jogging. These controllers provide numerous benefits: human-like qualities such as arm swing, heel-off, and hip-shoulder counter-rotation emerge automatically during walking; controllers are robust to changes in body parameters; control parameters correspond to intuitive properties; and controllers may be mapped onto entirely new bipeds with different topology and mass distribution, without modification. Transitions between multiple types of gaits, including walking, jumping, and jogging, emerge automatically. Controllers can traverse challenging terrain while following high-level user commands at interactive rates. This approach uses no motion capture data or off-line optimization. Although we focus on the challenging case of bipedal locomotion, many other types of controllers stand to benefit from our approach.
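As a rough illustration of combining per-feature objective terms via optimization, here is a minimal sketch, assuming each feature (e.g. center of mass, angular momentum, end-effector position) contributes a quadratic term ||A_i x - b_i||^2 in a control vector x, solved as a single weighted least-squares problem. The matrices are random placeholders rather than character dynamics, and the simple weighted sum is an assumption of this sketch, not necessarily how the thesis's controllers combine their objectives.

```python
import numpy as np

def combine_objectives(terms, weights):
    """Solve min_x sum_i w_i * ||A_i x - b_i||^2 by stacking into one lstsq."""
    A = np.vstack([np.sqrt(w) * A_i for (A_i, b_i), w in zip(terms, weights)])
    b = np.concatenate([np.sqrt(w) * b_i for (A_i, b_i), w in zip(terms, weights)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

rng = np.random.default_rng(0)
com_term = (rng.standard_normal((3, 6)), rng.standard_normal(3))        # centre-of-mass objective
momentum_term = (rng.standard_normal((3, 6)), rng.standard_normal(3))   # angular-momentum objective
effector_term = (rng.standard_normal((6, 6)), rng.standard_normal(6))   # end-effector objective

# Weights are illustrative: e.g. prioritize end-effector tracking over momentum.
x = combine_objectives([com_term, momentum_term, effector_term], weights=[1.0, 0.5, 2.0])
print(x)
```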
47

Neural Representation, Learning and Manipulation of Uncertainty

Natarajan, Rama 21 April 2010
Uncertainty is inherent in neural processing due to noise in sensation and the sensory transmission processes, the ill-posed nature of many perceptual tasks, and the temporal dynamics of the natural environment, to name a few causes. A wealth of evidence from physiological and behavioral experiments shows that these various forms of uncertainty have major effects on perceptual learning and inference. In order to use sensory inputs efficiently to make decisions and guide behavior, neural systems must represent and manipulate information about uncertainty in their computations. In this thesis, we first consider how spiking neural populations might encode and decode information about continuous dynamic stimulus variables, including the uncertainty about them. We explore the efficacy of a complex encoder paired with a simple decoder, which allows computationally straightforward representation and manipulation of dynamically changing uncertainty. The encoder we present takes the form of a biologically plausible recurrent spiking neural network whose output population recodes its inputs to produce spikes that are independently decodable. We show that this network can be learned in a supervised manner by a simple, local learning rule. We also demonstrate that the coding scheme can be applied recursively to carry out meaningful uncertainty-sensitive computations such as dynamic cue combination. Next, we explore the computational principles that underlie non-linear response characteristics, such as perceptual bias and uncertainty, observed in audiovisual spatial illusions that involve multisensory interactions with conflicting cues. We examine in detail the explanatory power of one particular causal model in characterizing the impact of conflicting inputs on perception and behavior. We also attempt to understand, from a computational perspective, whether and how different task instructions might modulate the interaction of information from the individual (visual and auditory) senses. Our analyses reveal some new properties of the sensory likelihoods and the stimulus prior, which were previously thought to be well described by Gaussian functions. We conclude that task-specific expectations can influence perception in ways that relate to the choice of inference strategy.
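For context on the uncertainty-sensitive computations mentioned above, here is a minimal sketch of the standard Gaussian cue-combination rule, in which the combined estimate is a reliability-weighted average of the individual cues. This is textbook material included for illustration, not the encoding/decoding scheme developed in the thesis.

```python
def combine_cues(mu_a, var_a, mu_b, var_b):
    """Fuse two Gaussian cues (mean, variance); reliability = 1/variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return mu, var

# Example: a precise visual cue dominates a noisy auditory cue.
print(combine_cues(mu_a=0.0, var_a=1.0, mu_b=5.0, var_b=9.0))  # (0.5, 0.9)
```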
49

Bone Graphs: Medial Abstraction for Shape Parsing and Object Recognition

Macrini, Diego 31 August 2010
The recognition of 3-D objects from their silhouettes demands a shape representation that is invariant to minor changes in viewpoint and articulation. This invariance can be achieved by parsing a silhouette into parts and relationships that are stable across similar object views. Medial descriptions, such as skeletons and shock graphs, attempt to decompose a shape into parts, but suffer from instabilities that lead to similar shapes being represented by dissimilar part sets. We propose a novel shape-parsing approach based on identifying and regularizing the ligature structure of a given medial axis. The result of this process is a bone graph, a new medial shape abstraction that captures a more intuitive notion of an object's parts than a skeleton or a shock graph, and offers improved stability and within-class deformation invariance over the shock graph. The bone graph, unlike the shock graph, has attributed edges that specify how and where two medial parts meet. We propose a novel shape-matching framework that exploits this relational information by formulating the problem as an inexact directed acyclic graph matching problem and extending a leading bipartite graph-based matching framework introduced for matching shock graphs. In addition to accommodating the relational information, our new framework better enforces hierarchical and sibling constraints between nodes, resulting in a more general and more powerful matching framework. We evaluate our matching framework against a competing shock graph matching framework, and show that for the task of view-based object categorization, our framework applied to bone graphs outperforms the competing framework. Moreover, our framework applied to shock graphs also outperforms the competing shock graph matching algorithm, demonstrating the generality and improved performance of our matching algorithm.
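To give a feel for the bipartite graph-based matching step that such frameworks build on, here is a minimal sketch that finds a maximum-similarity one-to-one node correspondence between two graphs, given a node-similarity matrix. The similarity values are placeholders, and plain assignment ignores the hierarchical and sibling constraints that the thesis's framework enforces.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Placeholder node-similarity matrix between graph A (rows) and graph B (columns).
similarity = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.3, 0.5, 0.7],
])

# linear_sum_assignment minimizes cost, so negate similarity to maximize it.
rows, cols = linear_sum_assignment(-similarity)
print(list(zip(rows, cols)), similarity[rows, cols].sum())
```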
50

TCP Adaptation Framework in Data Centers

Ghobadi, Monia 09 January 2014
Congestion control has been studied extensively for many years. Today, the Transmission Control Protocol (TCP) is used in a wide range of networks (LAN, WAN, data center, campus network, enterprise network, etc.) as the de facto congestion control mechanism. Despite its common usage, TCP operates in these networks with little knowledge of the underlying network or traffic characteristics. As a result, it is doomed to continuously increase or decrease its congestion window size in order to handle changes in the network or traffic conditions. Thus, TCP frequently overshoots or undershoots the ideal rate, making it a "Jack of all trades, master of none" congestion control protocol. In light of the emerging popularity of centrally controlled Software-Defined Networks (SDNs), we ask whether we can take advantage of the information available at the central controller to improve TCP. Specifically, in this thesis, we examine the design and implementation of OpenTCP, a dynamic and programmable TCP adaptation framework for SDN-enabled data centers. OpenTCP gathers global information about the status of the network and traffic conditions through the SDN controller, and uses this information to adapt TCP. OpenTCP periodically sends updates to end-hosts, which in turn update their behaviour using a simple kernel module. In this thesis, we first present two real-world TCP adaptation experiments in depth: (1) using TCP pacing in inter-data center communications with shallow buffers, and (2) using Trickle to rate-limit TCP video streaming. We explain the design, implementation, limitations, and benefits of each TCP adaptation to highlight the potential power of having a TCP adaptation framework in today's networks. We then discuss the architectural design of OpenTCP, as well as its implementation and deployment at SciNet, Canada's largest supercomputer center. Furthermore, we study use cases of OpenTCP using the ns-2 network simulator. We conclude that OpenTCP-based congestion control simplifies the process of adapting TCP to network conditions, leads to improvements in TCP performance, and is practical in real-world settings.
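As a rough illustration of the adaptation loop described above, the following sketch shows a central controller that periodically gathers network statistics and pushes TCP tuning hints to end-hosts. The function names, statistics, and policy are hypothetical placeholders, not OpenTCP's actual interface or rules.

```python
import time

def gather_network_stats():
    # Placeholder: in an SDN deployment this would query the controller's view
    # of link utilization, queue occupancy, and flow counts.
    return {"avg_link_utilization": 0.85, "active_flows": 1200}

def choose_tcp_hints(stats):
    # Toy policy: under heavy utilization, ask hosts to pace and shrink the
    # initial congestion window; otherwise relax both.
    if stats["avg_link_utilization"] > 0.8:
        return {"pacing": True, "init_cwnd_segments": 4}
    return {"pacing": False, "init_cwnd_segments": 10}

def push_to_end_hosts(hints):
    # Placeholder for distributing hints to the kernel modules on end-hosts.
    print("pushing hints:", hints)

def control_loop(period_seconds=10, rounds=3):
    for _ in range(rounds):
        push_to_end_hosts(choose_tcp_hints(gather_network_stats()))
        time.sleep(period_seconds)

if __name__ == "__main__":
    control_loop(period_seconds=0, rounds=1)
```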
