31

Physical-layer security: practical aspects of channel coding and cryptography

Harrison, Willie K. 21 June 2012 (has links)
In this work, a multilayer security solution for digital communication systems is provided by considering the joint effects of physical-layer security channel codes with application-layer cryptography. We address two problems: first, the cryptanalysis of error-prone ciphertext; second, the design of a practical physical-layer security coding scheme. To our knowledge, the noisy-ciphertext attack model is a novel concept; cryptanalysis traditionally assumes that the attacker observes the ciphertext without error. However, with the ever-increasing body of viable research in physical-layer security, it becomes essential to perform the analysis when the ciphertext is unreliable. We do so for the simple substitution cipher using an information-theoretic framework, and for stream ciphers by characterizing the success or failure of fast-correlation attacks when the ciphertext contains errors. We then present a practical coding scheme that can be used in conjunction with cryptography to ensure positive error rates in an eavesdropper's observed ciphertext, while guaranteeing error-free communications for legitimate receivers. Our codes, called stopping set codes, provide a blanket of security that covers nearly all possible system configurations and channel parameters; they require a public authenticated feedback channel. The solutions to these two problems indicate the inherent strengthening of security that can be obtained by confusing an attacker about the ciphertext, and then give a practical method for providing that confusion. The aggregate result is a multilayer security solution for transmitting secret data that showcases security enhancements over standalone cryptography.
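As an illustrative sketch only (not the thesis's analysis), the effect of ciphertext errors on a fast-correlation attack can be pictured as two binary symmetric channels in cascade: if the keystream agrees with the target LFSR sequence with probability p, and the attacker's ciphertext observation is corrupted with crossover probability eps, the usable correlation shrinks toward 1/2. The names and parameters below are hypothetical.

```python
# Illustrative sketch (not the thesis's method): cascading a BSC(eps) onto the
# keystream/LFSR correlation exploited by a fast-correlation attack.
import random

def cascade(p, eps, n=200_000, seed=1):
    """Empirically estimate P(observed bit == LFSR bit) when the keystream
    agrees with the LFSR sequence w.p. p and the observation passes a BSC(eps)."""
    rng = random.Random(seed)
    agree = 0
    for _ in range(n):
        lfsr_bit = rng.getrandbits(1)
        keystream_bit = lfsr_bit if rng.random() < p else lfsr_bit ^ 1
        observed_bit = keystream_bit if rng.random() >= eps else keystream_bit ^ 1
        agree += (observed_bit == lfsr_bit)
    return agree / n

p, eps = 0.75, 0.10
print("empirical :", round(cascade(p, eps), 4))
print("predicted :", p * (1 - eps) + (1 - p) * eps)  # 0.70, i.e. closer to 0.5
```

The closer the effective correlation sits to 1/2, the more observed keystream the attack needs, which is the intuition behind deliberately forcing errors into the eavesdropper's ciphertext.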
32

Performance Of Pseudo-random And Quasi-cyclic Low Density Parity Check Codes

Kazanci, Onur Husnu 01 December 2007 (has links) (PDF)
Low Density Parity Check (LDPC) codes are parity check codes of long block length whose parity check matrices have relatively few non-zero entries. To improve performance at relatively short block lengths, LDPC codes are constructed by either pseudo-random or quasi-cyclic methods instead of purely random construction. In this thesis, pseudo-random construction methods and the effects of closed loops and graph connectivity on the performance of pseudo-random LDPC codes are investigated. Moreover, quasi-cyclic LDPC codes, which have encoding and storage advantages over pseudo-random LDPC codes, are reviewed together with their construction methods and performance. Finally, a performance comparison between pseudo-random and quasi-cyclic LDPC codes is given for both regular and irregular cases.
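A minimal sketch of the storage advantage mentioned above: a quasi-cyclic parity-check matrix can be described by a small matrix of cyclic shift values, each expanded into a shifted identity block. The base matrix and lifting factor below are hypothetical examples, not codes from the thesis.

```python
# Sketch: expand a shift ("exponent") matrix into a quasi-cyclic parity-check matrix.
# Each entry selects a cyclically shifted Z x Z identity block; -1 marks a zero block.
import numpy as np

def circulant_permutation(Z, shift):
    """Z x Z identity matrix cyclically shifted right by `shift` columns."""
    return np.roll(np.eye(Z, dtype=np.uint8), shift, axis=1)

def qc_ldpc_H(exponents, Z):
    """Expand a shift matrix into a binary parity-check matrix."""
    blocks = [[np.zeros((Z, Z), dtype=np.uint8) if s < 0 else circulant_permutation(Z, s)
               for s in row] for row in exponents]
    return np.block(blocks)

# Hypothetical 2 x 4 base matrix with lifting factor Z = 5 -> a 10 x 20 matrix.
exponents = [[0, 1, 2, -1],
             [3, -1, 0, 4]]
H = qc_ldpc_H(exponents, Z=5)
print(H.shape)        # (10, 20)
print(H.sum(axis=0))  # column weights
```

Only the shift values need to be stored, which is why quasi-cyclic constructions are attractive for hardware encoders.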
33

Reliable Communications under Limited Knowledge of the Channel

Yazdani, Raman Unknown Date
No description available.
34

A Modified Sum-Product Algorithm over Graphs with Short Cycles

Raveendran, Nithin January 2015 (has links) (PDF)
We investigate the limitations of the sum-product algorithm for binary low density parity check (LDPC) codes having isolated short cycles. The independence assumption on the messages passed, taken as reasonable for all graph configurations, fails most severely in graphical structures with short cycles. This work is a step towards understanding the effect of short cycles on the error floors of the sum-product algorithm. We propose a modified sum-product algorithm that accounts for the statistical dependency of the messages passed around a cycle of length 4. We also formulate the modified algorithm in the log domain, which eliminates the numerical instability and precision issues associated with the probability domain. Simulation results show a signal to noise ratio (SNR) improvement for the modified sum-product algorithm compared to the original algorithm, suggesting that modelling the dependency among messages improves the decisions and successfully mitigates the effects of length-4 cycles in the Tanner graph. The improvement is significant in the high-SNR region, pointing to a possible cause of the error floor effects on such graphs. Using density evolution techniques, we analysed the modified decoding algorithm; the threshold computed for the modified algorithm is higher than that of the standard sum-product algorithm, validating the observed simulation results. We also prove that the conditional entropy of a codeword given the estimate obtained using the modified algorithm is lower than that obtained using the original sum-product algorithm.
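For reference, the standard log-domain sum-product message updates that the thesis modifies are sketched below; the dependency-aware correction for length-4 cycles is the thesis's contribution and is not reproduced here.

```python
# Sketch of the *standard* log-domain sum-product node updates (tanh rule),
# i.e. the baseline algorithm the modified decoder builds on.
import numpy as np

def check_node_update(incoming_llrs):
    """Check-to-variable message: 2 * atanh( prod_i tanh(L_i / 2) )
    over the messages arriving on the other edges of the check node."""
    t = np.tanh(np.clip(incoming_llrs, -30, 30) / 2.0)
    return 2.0 * np.arctanh(np.clip(np.prod(t), -0.999999, 0.999999))

def variable_node_update(channel_llr, incoming_llrs):
    """Variable-to-check message: channel LLR plus the other incoming check messages."""
    return channel_llr + np.sum(incoming_llrs)

# Toy example: a degree-3 check node receiving two moderately reliable messages.
print(check_node_update(np.array([2.0, -1.5])))
print(variable_node_update(0.8, np.array([2.0, -1.5])))
```

Working with log-likelihood ratios in this way avoids the repeated products of small probabilities that cause the numerical instability mentioned in the abstract.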
35

Fountain codes and their typical application in wireless standards like EDGE

Grobler, Trienko Lups 26 January 2009 (has links)
One of the most important technologies used in modern communication systems is channel coding. Channel coding dates back to a paper published by Shannon in 1948 [1] entitled "A Mathematical Theory of Communication". The basic idea behind channel coding is to send redundant information (parity) together with a message to make the transmission more error resistant. Different types of codes can be used to generate the required parity, including block, convolutional and concatenated codes. A special subclass of these codes is sparse graph codes, whose structure can be depicted via a graphical representation: a factor graph with sparse connections between its elements. Codes belonging to this subclass include Low-Density Parity-Check (LDPC), Repeat Accumulate (RA), Turbo and fountain codes. These codes can be decoded using the belief propagation algorithm, an iterative algorithm in which probabilistic information is passed between the nodes of the graph. This dissertation focuses on noisy decoding of fountain codes using belief propagation. Fountain codes were originally developed for erasure channels, but since any factor graph can be decoded using belief propagation, noisy decoding of fountain codes can easily be accomplished. Three fountain codes, namely Tornado, Luby Transform (LT) and Raptor codes, were investigated in this dissertation, with the following results: (1) the Tornado graph structure is unsuitable for noisy decoding, since the code structure protects the first layer of parity instead of the original message bits (a Tornado graph consists of more than one layer); (2) the successful decoding of systematic LT codes was verified; (3) a systematic Raptor code was introduced and successfully decoded, and the simulation results show that the Raptor graph structure can improve on its constituent codes (a Raptor code consists of more than one code). Lastly, an LT code was used to replace the convolutional incremental redundancy scheme used by the 2G mobile standard Enhanced Data Rates for GSM Evolution (EDGE). The results show that a fountain incremental redundancy scheme outperforms a convolutional approach if the frame lengths are long enough. For the EDGE platform, the results also showed that the fountain incremental redundancy scheme outperforms the convolutional approach after the second transmission is received. Although EDGE is an older technology, it remains a good platform for testing different incremental redundancy schemes, since it was one of the first platforms to use incremental redundancy. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / MEng / unrestricted
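As a minimal sketch of the encoding side of an LT code (not the dissertation's noisy belief-propagation decoder), each output symbol is the XOR of a randomly chosen set of source symbols; the toy degree distribution below is a placeholder, not the robust soliton distribution normally used in practice.

```python
# LT-encoder sketch: pick a degree d, pick d distinct source symbols, XOR them.
# Degree distribution and sizes are illustrative placeholders.
import random

def lt_encode_symbol(source, rng, degrees=(1, 2, 3, 4), weights=(0.1, 0.5, 0.3, 0.1)):
    """Return (set of source indices used, XOR of those source symbols)."""
    d = rng.choices(degrees, weights=weights, k=1)[0]
    idx = rng.sample(range(len(source)), k=min(d, len(source)))
    value = 0
    for i in idx:
        value ^= source[i]
    return frozenset(idx), value

rng = random.Random(7)
source = [rng.getrandbits(8) for _ in range(8)]          # 8 one-byte source symbols
encoded = [lt_encode_symbol(source, rng) for _ in range(12)]
for neighbours, value in encoded[:4]:
    print(sorted(neighbours), value)
```

Because the encoder can keep producing fresh symbols on demand, the same structure lends itself naturally to the incremental redundancy use case described for EDGE above.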
36

Experimental Studies On A New Class Of Combinatorial LDPC Codes

Dang, Rajdeep Singh 05 1900 (has links)
We implement a package for the construction of a new class of Low Density Parity Check (LDPC) codes based on a new random high-girth graph construction technique, and study the performance of the codes so constructed on both the Additive White Gaussian Noise (AWGN) channel and the Binary Erasure Channel (BEC). Our codes are "near regular", meaning that the left degree of any node in the constructed Tanner graph varies by at most 1 from the average left degree, and likewise for the right degree. The simulations for rate-half codes indicate that the codes perform better than both the regular Progressive Edge Growth (PEG) codes, which are constructed using a similar random technique, and the MacKay random codes. For high rates, the ARG (Almost Regular high Girth) codes perform better than the PEG codes at low to medium SNRs, but the PEG codes seem to do better at high SNRs. We have tried to track both near-codewords and small-weight codewords for these codes to examine the performance at high rates. For the binary erasure channel, the performance of the ARG codes is better than that of the PEG codes. We have also proposed a modification of the sum-product decoding algorithm in which a quantity called the "node credibility" is used to appropriately process messages to check nodes. This technique substantially reduces the error rates at signal-to-noise ratios of 2.5 dB and beyond for the codes experimented on, while the average number of iterations to achieve this improved performance is practically the same as that of the traditional sum-product algorithm.
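A small sanity check related to the high-girth property, offered only as an illustrative sketch (not the thesis's construction package): the Tanner graph of a parity-check matrix H contains a length-4 cycle exactly when two rows of H have 1s in two or more common columns.

```python
# Detect length-4 cycles in the Tanner graph of H via pairwise row overlaps.
import numpy as np

def has_four_cycle(H):
    """Return True if any pair of check rows shares 1s in >= 2 columns."""
    overlap = H @ H.T          # overlap[i, j] = number of shared columns
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

H_bad = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 1]], dtype=int)    # rows share columns 0 and 1
H_ok  = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0]], dtype=int)
print(has_four_cycle(H_bad))   # True  -> girth 4
print(has_four_cycle(H_ok))    # False -> girth >= 6
```

Constructions aiming for girth 6 or higher must avoid exactly these row-pair overlaps.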
37

Coding for wireless ad-hoc and sensor networks: unequal error protection and efficient data broadcasting

Rahnavard, Nazanin 27 August 2007 (has links)
This thesis investigates both theoretical and practical aspects of the design and analysis of modern error-control coding schemes, namely low-density parity-check (LDPC) codes and rateless codes for unequal error protection (UEP), and studies the application of modern error-control codes to efficient data dissemination in wireless ad-hoc and sensor networks. Two methodologies for the design and analysis of UEP-LDPC codes are proposed. For the proposed ensembles, density evolution formulas over the binary erasure channel are derived and used to optimize the degree distribution of the codes. Furthermore, for the first time, rateless codes that can provide UEP are developed. In addition to providing UEP, the proposed codes can be used in applications for which unequal recovery time is desirable, i.e., when more important parts of the data are required to be recovered faster than less important parts. The asymptotic behavior of the UEP-rateless codes under iterative decoding is investigated, and the performance of the proposed codes is examined under maximum-likelihood decoding when the codes have short to moderate lengths. Results show that UEP-rateless codes are able to provide very low error rates for more important bits with only a subtle loss in the performance of less important bits. Moreover, it is shown that, given a target bit error rate, different parts of the information symbols can be decoded after receiving different numbers of encoded symbols. This implies that information can be recovered in a progressive manner, which is of interest in many practical applications such as media-on-demand systems. This work also explores fundamental research problems related to applying error-control coding, such as rateless coding, to reliable and energy-efficient broadcasting in multihop wireless ad-hoc and sensor networks. The proposed research touches on the broad fields of wireless networking, coding theory, graph theory, and percolation theory. Based on the level of information that each node has about the network topology, several reliable and energy-efficient schemes are proposed, all of which are distributed and have low implementation complexity. The first protocol does not require any information about the network topology; another, more energy-efficient protocol assumes each node has local information about the network topology. In addition, this work proposes a distributed scheme for finding low-cost broadcast trees in wireless networks that takes into account parameters such as distances between nodes and link losses. This protocol is then extended to find low-cost multicast trees. The proposed schemes are extensively simulated and compared.
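For context, the standard (non-UEP) density evolution recursion for an LDPC ensemble on the binary erasure channel is x_{l+1} = eps * lambda(1 - rho(1 - x_l)); the UEP variants derived in the thesis are not reproduced here. The sketch below finds the decoding threshold of a (3,6)-regular ensemble by bisection, as an example of how such formulas are used to evaluate a degree distribution.

```python
# Density evolution on the BEC for a generic LDPC ensemble, plus a bisection
# search for the largest channel erasure probability that still decodes.
def de_converges(eps, lam, rho, iters=2000, tol=1e-10):
    """True if the erasure probability x_l driven by BEC(eps) decays to ~0."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
        if x < tol:
            return True
    return False

def threshold(lam, rho, lo=0.0, hi=1.0, steps=40):
    """Bisection for the decoding threshold of the ensemble (lam, rho)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if de_converges(mid, lam, rho) else (lo, mid)
    return lo

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5; threshold ~ 0.429.
print(round(threshold(lambda x: x**2, lambda x: x**5), 4))
```

In the UEP setting, separate erasure probabilities are tracked per importance class, but the fixed-point style of analysis is the same.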
