1 |
Unconditionally Secure Cryptography: Signature Schemes, User-Private Information Retrieval, and the Generalized Russian Cards Problem / Swanson, Colleen M / January 2013
We focus on three different types of multi-party cryptographic protocols. The first is in the area of unconditionally secure signature schemes, the goal of which is to let users electronically sign documents without relying on the computational assumptions underlying traditional digital signatures. The second is on cooperative protocols in which users help each other maintain privacy while querying a database, called user-private information retrieval protocols. The third is concerned with the generalized Russian cards problem, in which two card players wish to communicate their hands to each other via public announcements without a third player learning the card deal. The latter two problems have close ties to combinatorial designs and fit squarely within combinatorial cryptography. All of these problems share a common thread: they are grounded in the information-theoretically secure, or unconditionally secure, setting.
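To make the Russian cards setting concrete, here is a minimal brute-force sketch (not from the thesis) for the classical seven-card, (3,3,1) instance. Alice's hand is an assumption chosen for illustration; the announcement consisting of the seven lines of the Fano plane is a well-known choice that is both informative for Bob and safe against Cath.

```python
from itertools import combinations

# Seven cards; Alice and Bob hold three each, Cath holds one.
DECK = frozenset(range(7))
ALICE = frozenset({0, 1, 2})  # assumed hand, for illustration only

# Announcement: the lines of the Fano plane (one of which is Alice's hand).
FANO = [frozenset(s) for s in
        [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]]

def informative(announcement, alice):
    # Bob must deduce Alice's hand: for every hand Bob could hold,
    # exactly one announced hand is disjoint from it.
    for bob in combinations(DECK - alice, 3):
        candidates = [h for h in announcement if not (h & set(bob))]
        if len(candidates) != 1:
            return False
    return True

def safe(announcement, alice):
    # Cath must stay ignorant: whatever single card she holds, every
    # other card is held by Alice in some deal consistent with the
    # announcement and by Bob in another.
    for cath in DECK - alice:
        consistent = [h for h in announcement if cath not in h]
        for card in DECK - {cath}:
            if not any(card in h for h in consistent):
                return False
            if not any(card not in h for h in consistent):
                return False
    return True
```

Announcing Alice's hand alone would be informative but obviously unsafe; the Fano announcement passes both checks.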
|
3 |
Constrained Coding and Signal Processing for Holography / Garani, Shayan Srinivasa / 05 July 2006
The increasing demand for high-density storage devices has led to innovative data recording paradigms like optical holographic memories, which record and read data in a two-dimensional, page-oriented manner. In order to overcome the effects of intersymbol interference and noise in holographic channels, sophisticated constrained modulation codes and error correction codes are needed in these systems. This dissertation deals with the information-theoretic and signal processing aspects of holographic storage. On the information-theoretic front, the capacity of two-dimensional runlength-limited channels is analyzed, and the construction of two-dimensional runlength-limited codes achieving the capacity lower bounds is discussed. This is a theoretical study of one of the open problems in symbolic dynamics and mathematical physics. The analysis of achievable storage density in holographic channels is useful for building practical systems. In this work, fundamental limits on the achievable volumetric storage density in holographic channels dominated by optical scattering are analyzed for two different recording mechanisms, namely angle-multiplexed holography and localized recording. Pixel misregistration is an important signal processing problem in holographic systems. In this dissertation, algorithms for compensating two-dimensional translational and rotational misalignments are discussed and analyzed for Nyquist-size apertures with low fill factors. These techniques are applicable to general optical imaging systems.
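As an illustration of the one-dimensional baseline behind the two-dimensional problem: the capacity of a (d,k) runlength-limited constraint is log2 of the largest eigenvalue of the constraint graph's adjacency matrix. A small sketch, assuming finite k (this is textbook material, not code from the dissertation):

```python
import math

def rll_capacity_1d(d, k, iters=2000):
    # Capacity of the one-dimensional (d,k) runlength constraint:
    # log2 of the Perron eigenvalue of the constraint graph, found by
    # power iteration. State s = number of 0s since the last 1.
    n = k + 1
    def step(v):
        w = [0.0] * n
        for s, x in enumerate(v):
            if s + 1 < n:       # may append another 0
                w[s + 1] += x
            if s >= d:          # may append a 1, resetting the run
                w[0] += x
        return w
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        v = step(v)
        lam = max(v)            # converges to the Perron eigenvalue
        v = [x / lam for x in v]
    return math.log2(lam)
```

For example, the classic (1,3) constraint has capacity about 0.5515 bits per symbol, and (2,7) about 0.5174; the two-dimensional analogues studied in the dissertation have no such simple closed form.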
|
4 |
Information-theoretic security under computational, bandwidth, and randomization constraints / Chou, Remi / 21 September 2015
The objective of the proposed research is to develop and analyze coding schemes for information-theoretic security that could bridge a gap between theory and practice. We focus on two fundamental models for information-theoretic security: secret-key generation for a source model and secure communication over the wire-tap channel. Many results for these models only establish the existence of codes, and few attempts have been made to design practical schemes. The schemes we would like to propose should account for practical constraints. Specifically, we formulate the following constraints to avoid oversimplifying the problems. We should assume: (1) computationally bounded legitimate users, rather than relying solely on proofs showing the existence of codes with complexity exponential in the block length; (2) a rate-limited public communication channel for the secret-key generation model, to account for bandwidth constraints; (3) a non-uniform and rate-limited source of randomness at the encoder for the wire-tap channel model, since a perfectly uniform and rate-unlimited source of randomness might be an expensive resource. Our work focuses on developing schemes for secret-key generation and the wire-tap channel that satisfy subsets of the aforementioned constraints.
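For intuition on the source model, a minimal numerical sketch (a textbook baseline with none of the thesis's constraints): when X is a uniform bit, Y is X observed through a binary symmetric channel with crossover probability p, public discussion is unlimited, and the eavesdropper has no side information, the secret-key capacity reduces to I(X;Y) = 1 - h2(p).

```python
import math

def h2(p):
    # Binary entropy in bits.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def key_capacity(p):
    # Secret-key capacity of the unconstrained source model above:
    # C_K = I(X;Y) = 1 - h2(p).  The thesis's rate-limited public
    # channel and non-uniform randomness would lower this baseline.
    return 1.0 - h2(p)
```

A noiseless observation (p = 0) yields one key bit per source symbol, while an uninformative one (p = 0.5) yields none.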
|
5 |
Error Control for Network Coding / Silva, Danilo / 03 March 2010
Network coding has emerged as a new paradigm for communication in networks, allowing packets to be algebraically combined at internal nodes, rather than simply routed or replicated. The very nature of packet-mixing, however, makes the system highly sensitive to error propagation. Classical error correction approaches are therefore insufficient to solve the problem, which calls for novel techniques and insights.
The main portion of this work is devoted to the problem of error control assuming an adversarial or worst-case error model. We start by proposing a general coding theory for adversarial channels, whose aim is to characterize the correction capability of a code. We then specialize this theory to the cases of coherent and noncoherent network coding. For coherent network coding, we show that the correction capability is given by the rank metric, while for noncoherent network coding, it is given by a new metric, called the injection metric. For both cases, optimal or near-optimal coding schemes are proposed based on rank-metric codes. In addition, we show how existing decoding algorithms for rank-metric codes can be conveniently adapted to work over a network coding channel. We also present several speed improvements that make these algorithms the fastest known to date.
The second part of this work investigates a probabilistic error model. Upper and lower bounds on capacity are obtained for any channel parameters, and asymptotic expressions are provided in the limit of long packet length and/or large field size. A simple coding scheme is presented that achieves capacity in both limiting cases. The scheme has fairly low decoding complexity and a probability of failure that decreases exponentially both in the packet length and in the field size in bits. Extensions of the scheme are provided for several variations of the channel.
A final contribution of this work is to apply rank-metric codes to a closely related problem: securing a network coding system against an eavesdropper. We show that the maximum possible rate can be achieved with a coset coding scheme based on rank-metric codes. Unlike previous schemes, our scheme has the distinctive property of being universal: it can be applied on top of any communication network without requiring knowledge of or any modifications on the underlying network code. In addition, the scheme can be easily combined with a rank-metric-based error control scheme to provide both security and reliability.
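The rank metric central to the results above is easy to compute for binary matrices: the distance between two matrices is the rank of their difference, which over GF(2) is their entrywise xor. A small sketch with rows encoded as integer bitmasks (illustrative, not the thesis's code):

```python
def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is an int bitmask.
    pivots = {}  # leading-bit position -> stored pivot row
    for row in rows:
        r = row
        while r:
            lead = r.bit_length() - 1
            if lead in pivots:
                r ^= pivots[lead]   # eliminate the leading bit
            else:
                pivots[lead] = r    # new independent row
                break
    return len(pivots)

def rank_distance(X, Y):
    # Rank metric: d_R(X, Y) = rank(X - Y); over GF(2), minus is xor.
    return gf2_rank([x ^ y for x, y in zip(X, Y)])
```

A single corrupted packet injected into the network changes the transmitted matrix by a rank-1 term, which is why this metric, rather than Hamming distance, governs the correction capability.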
|
7 |
Quantifying patterns and select correlates of the spatially and temporally explicit distribution of a fish predator (Blue Catfish, Ictalurus furcatus) throughout a large reservoir ecosystem / Peterson, Zachary James / January 1900
Master of Science / Division of Biology / Martha E. Mather / Understanding how and why fish distribution is related to specific habitat characteristics underlies many ecological patterns and is crucial for effective research and management. Blue Catfish, Ictalurus furcatus, are an important concern for many fisheries agencies; however, lack of information about their distribution and habitat use remains a hindrance to proper management. Here, over all time periods and across months, I quantified Blue Catfish distribution and environmental correlates of distribution in Milford Reservoir, the largest reservoir in Kansas. I tested relationships between acoustically tagged Blue Catfish and three groups of variables postulated in the literature to influence Blue Catfish distribution: (i) localized microhabitat variables, (ii) larger-scale mesohabitat variables, and (iii) biotic variables. Blue Catfish were consistently aggregated in two locations of the reservoir across five months during summer and fall, 2013. Using multiple linear regression and an information-theoretic model selection approach, consistent correlates of distribution included localized microhabitat variables (i.e., dissolved oxygen, slope), larger-scale mesohabitat variables (i.e., distance to channel, river kilometer from the dam), and a biotic variable (i.e., Secchi depth). This research identified which 5 of the 12 variables identified in the literature were most influential in determining Blue Catfish distribution. As a guide for future hypothesis generation and research, I propose that Blue Catfish distribution was driven by three ecologically relevant tiers of influence. First, Blue Catfish avoided extremely low dissolved oxygen concentrations that cause physiological stress. Second, Blue Catfish aggregated near the channel, an area of bathymetric heterogeneity that may offer a foraging advantage.
Third, Blue Catfish aggregated near low Secchi depths, shown here to be associated with increased productivity and prey abundance. Building on my results, future research into the distribution and habitat use of Blue Catfish should incorporate aggregated distributions of fish into research designs, focus on how both small and large scale relationships interact to produce patterns of distribution, and explore further the mechanisms, consequences, and interactions among the three tiers of influence identified here.
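As a toy illustration of the regression-plus-AIC workflow described above (the numbers below are hypothetical stand-ins, not the study's measurements): a candidate model earns its place when adding a predictor lowers AIC relative to the intercept-only model.

```python
import math

def ols_rss(xs, ys):
    # Residual sum of squares for simple OLS: y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def aic(rss, n, k):
    # Gaussian AIC up to an additive constant; k counts the
    # regression coefficients plus the error-variance parameter.
    return n * math.log(rss / n) + 2 * k

# Hypothetical values for one habitat predictor (e.g. dissolved
# oxygen) against relative catfish density at six sites.
do_level = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
density  = [2.1, 2.9, 4.2, 5.1, 5.8, 7.2]

n = len(do_level)
rss_null = sum((y - sum(density) / n) ** 2 for y in density)
aic_null = aic(rss_null, n, 2)                     # intercept + variance
aic_do   = aic(ols_rss(do_level, density), n, 3)   # + slope
```

Here the predictor model wins (lower AIC); in the study this comparison was run across 12 candidate variables, of which 5 were retained.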
|
8 |
Bayesian and Information-Theoretic Learning of High Dimensional Data / Chen, Minhua / January 2012
The concept of sparseness is harnessed to learn a low dimensional representation of high dimensional data. This sparseness assumption is exploited in multiple ways. In the Bayesian Elastic Net, a small number of correlated features are identified for the response variable. In the sparse Factor Analysis for biomarker trajectories, high dimensional gene expression data are reduced to a small number of latent factors, each with a prototypical dynamic trajectory. In the Bayesian Graphical LASSO, the inverse covariance matrix of the data distribution is assumed to be sparse, inducing a sparsely connected Gaussian graph. In the nonparametric Mixture of Factor Analyzers, the covariance matrices in the Gaussian Mixture Model are forced to be low-rank, which is closely related to the concept of block sparsity. Finally, in the information-theoretic projection design, a linear projection matrix is explicitly sought for information-preserving dimensionality reduction. All the methods mentioned above prove effective in learning both simulated and real high dimensional datasets.
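For intuition on the elastic net's sparsity mechanism, here is a sketch of the classical (non-Bayesian) estimate in the special case of an orthonormal design, where it has a closed form: an l1 soft-threshold that zeroes small coefficients, followed by an l2 rescaling. This is a textbook simplification, not the Bayesian formulation developed in the dissertation.

```python
def soft_threshold(z, lam):
    # l1 proximal operator: shrink toward zero, clip small values to 0.
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def elastic_net_orthonormal(beta_ols, lam1, lam2):
    # Closed-form elastic-net estimate under an orthonormal design:
    # soft-threshold each OLS coefficient (sparsity from the l1 term),
    # then scale by 1/(1 + lam2) (shrinkage from the l2 term).
    return [soft_threshold(b, lam1) / (1.0 + lam2) for b in beta_ols]
```

With lam1 = lam2 = 1, OLS coefficients (3.0, 0.5, -2.0) become (1.0, 0.0, -0.5): the small coefficient is pruned and the rest are shrunk.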
|
9 |
Physical-layer security / Bloch, Matthieu / 05 May 2008
As wireless networks continue to flourish worldwide and play an increasingly prominent role, it has become crucial to provide effective solutions to the inherent security issues associated with a wireless transmission medium. Unlike traditional solutions, which usually handle security at the application layer, the primary concern of this thesis is to analyze and develop solutions based on coding techniques at the physical layer.
First, an information-theoretically secure communication protocol for quasi-static fading channels was developed and its performance with respect to theoretical limits was analyzed. A key element of the protocol is a reconciliation scheme for secret-key agreement based on low-density parity-check codes, which is specifically designed to operate on non-binary random variables and offers high reconciliation efficiency.
Second, the fundamental trade-offs between cooperation and security were analyzed by investigating the transmission of confidential messages to cooperative relays. This information-theoretic study highlighted the importance of jamming as a means to increase secrecy and confirmed the importance of carefully chosen relaying strategies.
Third, other applications of physical-layer security were investigated. Specifically, the use of secret-key agreement techniques for alternative cryptographic purposes was analyzed, and a framework for the design of practical information-theoretic commitment protocols over noisy channels was proposed.
Finally, the benefit of using physical-layer coding techniques beyond the physical layer was illustrated by studying security issues in client-server networks. A coding scheme exploiting packet losses at the network layer was proposed to ensure reliable communication between clients and servers and security against colluding attackers.
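As a numerical illustration of the limits such coding schemes target, consider Wyner's secrecy capacity for a degraded wiretap channel with binary symmetric main and eavesdropper channels (a textbook baseline, not a result of this thesis): secure communication is possible exactly when the legitimate channel is less noisy than the eavesdropper's.

```python
import math

def h2(p):
    # Binary entropy in bits.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_secrecy_capacity(p_main, p_eve):
    # Degraded binary symmetric wiretap channel:
    # C_s = h2(p_eve) - h2(p_main), and no positive secrecy rate is
    # achievable when the eavesdropper's channel is at least as good.
    return max(0.0, h2(p_eve) - h2(p_main))
```

A noiseless main channel against a fully noisy eavesdropper gives one secret bit per channel use; equal channels give zero.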
|
10 |
Articulation Rate and Surprisal in Swedish Child-Directed Speech / Sjons, Johan / January 2022
Child-directed speech (CDS) differs from adult-directed speech (ADS) in several respects whose possible facilitating effects for language acquisition are still being studied. One such difference concerns articulation rate (the number of linguistic units produced per unit of time, excluding pauses), which has been shown to be generally lower than in ADS. However, while it is well established that ADS exhibits an inverse relation between articulation rate and information-theoretic surprisal (the amount of information encoded in a linguistic unit), this measure has been conspicuously absent from the study of articulation rate in CDS. Another issue is whether the lower articulation rate in CDS is stable across utterances or an effect of local variation, such as final lengthening. The aim of this work is to arrive at a more comprehensive model of articulation rate in CDS by including surprisal and final lengthening. In particular, one-word utterances were studied, also in relation to word-length effects (the phenomenon that longer words generally have a higher articulation rate). To this end, a methodology for large-scale automatic phoneme alignment was developed and applied to two longitudinal corpora of Swedish CDS. It was investigated (i) how articulation rate in CDS varied with respect to child age, (ii) whether there was a negative relation between articulation rate and surprisal in CDS, and (iii) to what extent articulation rate was lower in CDS than in ADS. The results showed (i) a weak positive effect of child age on articulation rate, (ii) a negative relation between articulation rate and surprisal, and (iii) a lower articulation rate in CDS, although the difference could almost exclusively be attributed to one-word utterances and final lengthening.
In other words, adults seem to adapt how fast they speak to their children's age; speaking faster to children is correlated with a reduced amount of information; and the difference in articulation rate between CDS and ADS is most prominent in isolated words and final lengthening. More generally, the results suggest that CDS is well suited for word segmentation, since the lower articulation rate in one-word utterances provides an additional cue.
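For readers unfamiliar with surprisal, a minimal sketch of the unigram case (a deliberate simplification; the thesis presumably relies on richer language-model estimates): a word's surprisal is the negative log probability of seeing it, so frequent words carry less information.

```python
import math
from collections import Counter

def unigram_surprisal(tokens, word):
    # Surprisal in bits under a unigram model estimated by relative
    # frequency: -log2 p(word).  Rare words -> high surprisal.
    counts = Counter(tokens)
    p = counts[word] / len(tokens)
    return -math.log2(p)
```

In the toy corpus "the cat sat on the mat", "the" occurs twice in six tokens, giving log2(3) bits of surprisal, while the once-seen "cat" is more surprising; the thesis relates such per-unit information estimates to how quickly those units are articulated.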
|