151 |
Applications of Random Graphs to Design and Analysis of LDPC Codes and Sensor Networks. Pishro-Nik, Hossein.
This thesis investigates a graph- and information-theoretic approach to the design and analysis of low-density parity-check (LDPC) codes and wireless networks. In this work, both LDPC codes and wireless networks are modeled as random graphs. The thesis proposes solutions to important theoretical and practical open problems in LDPC coding and, for the first time, introduces a framework for the analysis of finite wireless networks.
LDPC codes are considered one of the best classes of error-correcting codes, and several problems in this area are studied. First, an improved decoding algorithm for LDPC codes is introduced; compared to standard iterative decoding, it can yield bit error rates several orders of magnitude lower at almost the same complexity. Second, this work presents a variety of bounds on the achievable performance of different LDPC coding scenarios. Third, it studies rate-compatible LDPC codes, establishes fundamental properties of these codes, and gives guidelines for their optimal design. Finally, it studies non-uniform and unequal error protection using LDPC codes and explores their applications to data storage systems and communication networks, presenting a new error-control scheme for volume holographic memory (VHM) systems that increases storage capacity by more than fifty percent compared to previous schemes.
This work also investigates the application of random graphs to the design and analysis of wireless ad hoc and sensor networks. It introduces a framework for the analysis of finite wireless networks; such a framework was previously lacking in the literature. Using this framework, network properties such as capacity, connectivity, coverage, and routing and security algorithms are studied. Finally, connectivity properties of large-scale sensor networks are investigated, showing how sensor unreliability, link failures, and non-uniform node distribution affect connectivity.
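The effect of unreliable sensors on connectivity can be conveyed with a small Monte Carlo sketch (my own illustration, not code from the thesis; all parameter values are made up): sensors are dropped uniformly on the unit square, each fails independently with some probability, and two survivors are linked when they lie within radio range of each other.

```python
import math
import random

def connectivity_probability(n, radius, p_fail, trials=200, seed=0):
    """Estimate the probability that the surviving sensors form a
    connected random geometric graph on the unit square."""
    rng = random.Random(seed)
    connected = 0
    for _ in range(trials):
        # Place n sensors uniformly; each one fails independently.
        nodes = [(rng.random(), rng.random()) for _ in range(n)]
        alive = [pt for pt in nodes if rng.random() >= p_fail]
        if len(alive) < 2:
            continue  # count degenerate trials as disconnected
        # Link two surviving sensors when they are within radio range.
        adj = {i: [] for i in range(len(alive))}
        for i in range(len(alive)):
            for j in range(i + 1, len(alive)):
                if math.dist(alive[i], alive[j]) <= radius:
                    adj[i].append(j)
                    adj[j].append(i)
        # Graph search from node 0 to test connectivity.
        seen, stack = {0}, [0]
        while stack:
            for nb in adj[stack.pop()]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        connected += (len(seen) == len(alive))
    return connected / trials
```

Sweeping `p_fail` or `radius` in such a simulation reproduces, in miniature, the kind of connectivity degradation the thesis analyzes analytically.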
|
152 |
Computational Problems In Codes On Graphs. Krishnan, K Murali.
Two standard graph representations for linear codes are the Tanner graph and the tail-biting trellis. Such graph representations allow the decoding problem for a code to be phrased as a computational problem on the corresponding graph and yield graph-theoretic criteria for good codes. When a Tanner graph for a code is used for communication across a binary erasure channel (BEC) and decoding is performed using the standard iterative decoding algorithm, the maximum number of correctable erasures is determined by the stopping distance of the Tanner graph. Hence the computational problem of determining the stopping distance of a Tanner graph is of interest.
In this thesis it is shown that computing the stopping distance of a Tanner graph is NP-hard. It is also shown that there can be no (1 + ε)-approximation algorithm for the problem for any ε > 0 unless P = NP, and that an approximation ratio of 2^((log n)^(1-ε)) for any ε > 0 is impossible unless NP ⊆ DTIME(n^poly(log n)).
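For intuition, the definition of stopping distance can be checked exhaustively on a toy code. The brute-force search below is a sketch of my own, not the thesis's method, and is feasible only for tiny codes precisely because the general problem is NP-hard.

```python
from itertools import combinations

def stopping_distance(H):
    """Size of the smallest nonempty stopping set of the Tanner graph of H.

    A stopping set S is a set of variable nodes (columns) such that no
    check node (row) has exactly one neighbor in S.  Exhaustive search
    over subsets, so only practical for very small codes."""
    m, n = len(H), len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            # S is a stopping set iff every row meets S in 0 or >= 2 places.
            if all(sum(H[r][c] for c in S) != 1 for r in range(m)):
                return size
    return None

# Parity-check matrix of the (7,4) Hamming code.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

For this matrix the smallest stopping set has size 3, matching the code's minimum distance, since every codeword support is itself a stopping set.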
One way to construct Tanner graphs of large stopping distance is to ensure that the graph has large girth, since stopping distance is known to grow exponentially with the girth of the Tanner graph. A new elementary combinatorial construction algorithm for an almost-regular LDPC code family with provable Ω(log n) girth and O(n^2) construction complexity is presented. The girth bound is within a factor of two of the best known upper bound on girth.
The problem of linear-time exact maximum likelihood decoding on tail-biting trellises has remained open for several years. An O(n)-complexity approximate maximum likelihood decoding algorithm for tail-biting trellises is presented and analyzed. Experiments indicate that the algorithm performs close to the ideal maximum likelihood decoder.
|
153 |
On The Analysis of Spatially-Coupled GLDPC Codes and The Weighted Min-Sum Algorithm. Jian, Yung-Yih. 16 December 2013.
This dissertation studies methods to achieve reliable communication over unreliable channels. Iterative decoding algorithms for low-density parity-check (LDPC) codes and generalized LDPC (GLDPC) codes are analyzed.
A new class of error-correcting codes is proposed to enhance the reliability of communication in high-speed systems, such as optical communication systems. The class of spatially-coupled GLDPC codes is studied, and a new iterative hard-decision decoding (HDD) algorithm for GLDPC codes is introduced. The main result is that the minimal redundancy allowed by Shannon's channel coding theorem can be achieved by using the new iterative HDD algorithm with spatially-coupled GLDPC codes. A variety of LDPC ensembles have been observed to approach capacity with iterative decoding; however, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. To the best of our knowledge, this is the first system that can approach channel capacity using iterative HDD.
The optimality of a codeword returned by the weighted min-sum (WMS) algorithm, an iterative decoding algorithm widely used in practice, is studied as well. Both attenuated max-product (AttMP) decoding and WMS decoding for LDPC codes are analyzed. Applying the max-product (and belief-propagation) algorithms to loopy graphs is now quite popular for best-assignment problems, largely due to their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence and/or the optimality of converged solutions. This work presents an analysis of both AttMP decoding and WMS decoding for LDPC codes which guarantees convergence to a fixed point when the weight factor, β, is sufficiently small. It also shows that, if the fixed point satisfies certain consistency conditions, then it must be both a linear-programming (LP) and a maximum-likelihood (ML) decoding solution.
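A minimal sketch of attenuated/weighted min-sum decoding (my own illustrative implementation, not the dissertation's analysis) shows where the weight factor β enters the check-node update:

```python
def attenuated_min_sum(H, llr, beta=0.8, iters=20):
    """Attenuated min-sum (weighted min-sum) decoding sketch.

    H is a binary parity-check matrix and llr the channel log-likelihood
    ratios (positive favours bit 0).  Check-to-variable messages use the
    sign-product / attenuated-minimum rule weighted by beta."""
    m, n = len(H), len(H[0])
    checks = [[j for j in range(n) if H[i][j]] for i in range(m)]
    vchks = [[i for i in range(m) if H[i][j]] for j in range(n)]
    v2c = {(i, j): llr[j] for i in range(m) for j in checks[i]}
    for _ in range(iters):
        # Check-node update: attenuated minimum of the other magnitudes,
        # with sign given by the product of the other signs.
        c2v = {}
        for i in range(m):
            for j in checks[i]:
                others = [v2c[(i, k)] for k in checks[i] if k != j]
                sign = 1
                for x in others:
                    sign = -sign if x < 0 else sign
                c2v[(i, j)] = beta * sign * min(abs(x) for x in others)
        # Variable-node update: channel LLR plus the other check messages.
        for i in range(m):
            for j in checks[i]:
                v2c[(i, j)] = llr[j] + sum(c2v[(k, j)] for k in vchks[j]
                                           if k != i)
    total = [llr[j] + sum(c2v[(i, j)] for i in vchks[j]) for j in range(n)]
    return [0 if t >= 0 else 1 for t in total]
```

Setting beta = 1 recovers plain min-sum; the convergence guarantees discussed above concern the regime where β is small.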
|
154 |
Iterative joint detection and decoding of LDPC-coded V-BLAST systems. Tsai, Meng-Ying (Brady). 10 July 2008.
Soft iterative detection and decoding techniques have been shown to achieve near-capacity performance in multiple-antenna systems. Obtaining the optimal soft information by marginalization over the entire observation space is intractable, and the current literature offers no guidance on the best way to obtain suboptimal soft information. In this thesis, several existing soft-input soft-output (SISO) detectors, including minimum mean-square error successive interference cancellation (MMSE-SIC), list sphere decoding (LSD), and Fincke-Pohst maximum-a-posteriori (FPMAP), are examined. Prior research has demonstrated that LSD and FPMAP outperform soft-equalization methods (i.e., MMSE-SIC); however, it is unclear which of the two schemes is superior in terms of the performance-complexity trade-off. A comparison is conducted to resolve the matter. In addition, an improved scheme is proposed that modifies LSD and FPMAP, simultaneously improving error performance and reducing computational complexity. Although list-type detectors such as LSD and FPMAP provide outstanding error performance, issues such as the optimal initial sphere radius, the optimal radius-update strategy, and their highly variable computational complexity remain unresolved. A new detection scheme with fixed detection complexity is proposed to address these issues, making it suitable for practical implementation. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008-07-08
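As a toy illustration of the linear MMSE front end underlying MMSE-SIC (a hand-rolled 2x2 sketch of my own, without the interference-cancellation stage or soft-output computation that the thesis considers), the filter W = (H^H H + σ²I)^(-1) H^H can be written out directly:

```python
def mmse_filter(H, sigma2):
    """MMSE detection matrix W = (H^H H + sigma2*I)^(-1) H^H for a
    2x2 complex channel H, written out by hand to stay dependency-free."""
    # Hermitian transpose of H.
    Hh = [[H[0][0].conjugate(), H[1][0].conjugate()],
          [H[0][1].conjugate(), H[1][1].conjugate()]]
    # A = H^H H + sigma2 * I.
    A = [[sum(Hh[i][k] * H[k][j] for k in range(2))
          + (sigma2 if i == j else 0) for j in range(2)] for i in range(2)]
    # Closed-form 2x2 inverse.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    # W = A^(-1) H^H.
    return [[sum(Ainv[i][k] * Hh[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def detect(W, y):
    """Apply the MMSE filter to the received vector y."""
    return [sum(W[i][k] * y[k] for k in range(2)) for i in range(2)]
```

As the noise variance goes to zero this filter approaches the zero-forcing inverse of an invertible channel, which is the sanity check used below.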
|
155 |
Nonparametric statistical inference for functional brain information mapping. Stelzer, Johannes. 26 May 2014.
An ever-increasing number of functional magnetic resonance imaging (fMRI) studies use information-based multi-voxel pattern analysis (MVPA) techniques to decode mental states, achieving significantly greater sensitivity than univariate analysis frameworks. The two most prominent MVPA methods for information mapping are searchlight decoding and classifier weight mapping. These new MVPA brain-mapping methods, however, have also posed new challenges for analysis and statistical inference at the group level. In this thesis, I discuss why the usual procedure of performing t-tests on MVPA-derived information maps across subjects to produce a group statistic is inappropriate. I propose a fully nonparametric solution to this problem, which achieves higher sensitivity than the commonly used t-based procedure. The proposed method is based on resampling and preserves the spatial dependencies in the MVPA-derived information maps, making it possible to incorporate cluster-size control for the multiple testing problem. Using a volumetric searchlight decoding procedure and classifier weight maps, I demonstrate the validity and sensitivity of the new approach on both simulated and real fMRI data sets. Compared to the standard t-test procedure implemented in SPM8, the new approach showed higher sensitivity and spatial specificity.
The second goal of this thesis is to compare the two widely used information mapping approaches: the searchlight technique and classifier weight mapping. Both methods take into account spatially distributed patterns of activation to predict stimulus conditions; however, the searchlight method operates solely on the local scale and has been found to be prone to spatial inaccuracies. For instance, the spatial extent of informative areas is generally exaggerated, and their spatial configuration is distorted. In this thesis, I compare searchlight decoding with linear classifier weight mapping, both within the nonparametric statistical framework proposed above, using a simulation and ultra-high-field 7T experimental data. The searchlight method led to spatial inaccuracies that are especially noticeable in high-resolution fMRI data. In contrast, the weight mapping method was more spatially precise, revealing both informative anatomical structures and the direction by which voxels contribute to the classification. By maximizing the spatial accuracy of ultra-high-field fMRI results, such global multivariate methods provide a substantial improvement for characterizing structure-function relationships.
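The flavor of nonparametric group inference can be conveyed by an exact sign-permutation test on subject-level statistics (a deliberately simplified sketch of my own, without the spatial cluster-size control that the thesis adds):

```python
from itertools import product

def sign_permutation_pvalue(values):
    """Exact one-sided sign-permutation test for a small group of
    subject-level statistics (e.g. decoding accuracy minus chance).

    Flipping each subject's sign generates the null distribution of the
    group mean without any parametric assumptions; the p-value is the
    fraction of sign flips whose mean reaches the observed mean."""
    n = len(values)
    observed = sum(values) / n
    count = 0
    for signs in product((1, -1), repeat=n):
        null_mean = sum(s * v for s, v in zip(signs, values)) / n
        if null_mean >= observed:
            count += 1
    return count / 2 ** n

# Eight subjects, each decoding 40 percentage points above chance.
p = sign_permutation_pvalue([0.4] * 8)
```

With eight identical positive values only the all-positive flip reaches the observed mean, so the exact p-value is 1/256; in practice, with more subjects, one samples random sign flips instead of enumerating all of them.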
|
156 |
Energibolag genom den unga miljöopportunistens lins : En receptionsstudie i studenters tolkningar av energibolags miljörelaterade kommunikation [Energy companies through the lens of the young environmental opportunist: a reception study of students' interpretations of energy companies' environment-related communication]. Möller, Evelina; Matts, Daniella. January 2014.
No description available.
|
157 |
Coding Theorems via Jar Decoding. Meng, Jin. January 2013.
In the development of digital communication and information theory, every channel decoding rule has sparked a revolution at the time it was invented. In information theory, early channel coding theorems were established mainly via maximum likelihood decoding, while the arrival of typical-sequence decoding signaled the era of multi-user information theory, in which achievability proofs became simple and intuitive. Practical channel code design, on the other hand, was initially based on minimum distance decoding. The invention of belief propagation decoding with soft input and soft output, which led to the birth of turbo codes and low-density parity-check (LDPC) codes (indispensable coding techniques in current communication systems), changed the research area so dramatically that the term "modern coding theory" came to refer to research based on this decoding rule. In this thesis, we propose a new decoding rule, dubbed jar decoding, which is expected to bring new insights to both code performance analysis and code design.
Given any channel with input alphabet X and output alphabet Y, the jar decoding rule can be expressed simply as follows: upon receiving the channel output y^n ∈ Y^n, the decoder first forms a set (called a jar) of sequences x^n ∈ X^n considered to be close to y^n, and then picks any codeword (if any) inside this jar as the decoding output. How the decoder forms the jar is defined independently of the actual channel code, and even of the channel statistics in certain cases. Under jar decoding, various coding theorems are proved in this thesis. First, focusing on the word error probability, jar decoding is shown to be near optimal through achievabilities proved via jar decoding and converses proved via a related proof technique, dubbed the outer mirror image of the jar. Combining these achievability and converse theorems then yields a Taylor-type expansion of the optimal channel coding rate at finite block length, and jar decoding is demonstrated to be optimal up to the second order of this expansion. The flexibility of jar decoding is then illustrated by proving LDPC coding theorems via jar decoding, where the bit error probability is concerned. Finally, we consider a coding scenario called interactive encoding and decoding, and show that jar decoding can also be used to prove coding theorems and guide code design in two-way communication.
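On a binary symmetric channel, a natural jar is the Hamming ball around the received word. The toy sketch below (an illustration of the definition, not the thesis's construction) decodes the (7,4) Hamming code this way:

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def jar_decode(y, codebook, radius):
    """Jar decoding sketch for the binary symmetric channel: the jar is
    the Hamming ball of the given radius around the received word y,
    and the decoder outputs any codeword found inside it."""
    jar = [x for x in codebook if hamming(x, y) <= radius]  # form the jar
    return jar[0] if jar else None

# All 16 codewords of the (7,4) Hamming code, i.e. vectors x with H x = 0.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
codebook = [x for x in product((0, 1), repeat=7)
            if all(sum(h * b for h, b in zip(row, x)) % 2 == 0 for row in H)]
```

Because the code's minimum distance is 3, a radius-1 jar around any word with a single bit error contains exactly the transmitted codeword, so "pick any codeword in the jar" succeeds.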
|
158 |
Aplicação de transformação conforme em codificação e decodificação de imagens / Conformal mapping applied to image encoding and decoding. Silva, Alan Henrique Ferreira. 31 March 2016.
This work proposes a method to encode and decode images using conformal mappings. A conformal mapping transforms one domain into another without altering the physical characteristics shared between them. Real images are transformed between these domains using encoding keys, also called transforming functions. The advantage of this methodology is the ability to carry the message as an encoded image in printed media for later decoding.
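A minimal sketch of the idea (illustrative only, not the dissertation's actual transforming functions): treat a pixel coordinate as a complex number and use an invertible conformal map, such as w = z² restricted to the right half plane, as the key.

```python
import cmath

def encode_point(z):
    """Map a pixel coordinate (a complex number with Re(z) > 0) through
    the conformal map w = z**2.  On the right half plane the map is
    injective, so it can serve as an invertible 'key'."""
    return z * z

def decode_point(w):
    """Invert the key: the principal square root returned by cmath.sqrt
    lands back in the right half plane."""
    return cmath.sqrt(w)
```

Encoding a whole image would apply the map to every pixel coordinate and resample; the round trip on a single point already shows why the domain restriction matters, since without it z and -z would collide under the same key.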
|
159 |
MIMO block-fading channels with mismatched CSI. Asyhari, A. Taufiq; Guillén i Fàbregas, A. 23 August 2014.
We study transmission over multiple-input multiple-output (MIMO) block-fading channels with imperfect channel state information (CSI) at both the transmitter and receiver. Specifically, based on mismatched decoding theory for a fixed channel realization, we investigate the largest achievable rates with independent and identically distributed inputs and a nearest-neighbor decoder. We then study the corresponding information outage probability in the high signal-to-noise ratio (SNR) regime and analyze the interplay between the estimation error variances at the transmitter and at the receiver to determine the optimal outage exponent, defined as the high-SNR slope of the outage probability plotted on a log-log scale against the SNR. We demonstrate that, despite operating with imperfect CSI, power adaptation can offer substantial gains in terms of outage exponent. / A. T. Asyhari was supported in part by the Yousef Jameel Scholarship, University of Cambridge, Cambridge, U.K., and the National Science Council of Taiwan under grant NSC 102-2218-E-009-001. A. Guillén i Fàbregas was supported in part by the European Research Council under ERC grant agreement 259663 and the Spanish Ministry of Economy and Competitiveness under grant TEC2012-38800-C03-03.
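The outage probability being analyzed can be illustrated with a single-antenna Rayleigh toy model (a sketch of my own for intuition, far simpler than the paper's MIMO mismatched-CSI setting), where the Monte Carlo estimate can be checked against the closed form:

```python
import math
import random

def outage_probability(snr, rate, trials=100000, seed=1):
    """Monte Carlo estimate of the SISO Rayleigh outage probability
    P(log2(1 + snr*|h|^2) < rate), with the channel gain |h|^2 drawn
    as a unit-mean exponential random variable."""
    rng = random.Random(seed)
    outages = sum(math.log2(1 + snr * rng.expovariate(1.0)) < rate
                  for _ in range(trials))
    return outages / trials

# Closed form for comparison: P_out = 1 - exp(-(2^rate - 1) / snr).
analytic = 1 - math.exp(-(2 ** 2 - 1) / 10)
```

Plotting such estimates against SNR on a log-log scale and reading off the slope gives exactly the outage-exponent quantity the paper studies (here the slope is 1, the single-antenna diversity order).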
|
160 |
A systems perspective on structure-function relationships in the human brain. Boeken, Ole Jonas. 18 July 2024.
In cognitive neuroscience, there is great interest in unraveling structure-function relationships in the human brain. Advances in functional neuroimaging and graph-theoretical methods have identified key brain nodes relevant to neuronal information integration and to functional deficits in degenerative diseases such as Alzheimer's. Database-driven meta-analytical methods have also been used to evaluate knowledge from thousands of neuroimaging studies and to decode the functional profile of brain regions. Systems-level decoding aims to identify brain systems that provide insight into the functional characteristics of a seed region and its connected cortical regions. In Study 1, thalamic subregions were used as seed regions. Study 2 applied systems-level decoding to three distinct subregions of the intraparietal sulcus (hIP). In Study 3, we attempted to substantiate activation differences in the rich-club structure between Alzheimer's patients, patients with mild cognitive impairment, and healthy subjects. Two major thalamus-centered systems associated with autobiographical memory and nociception were identified. Additionally, nine large systems associated with processes such as working memory, numerical cognition, and recognition memory were uncovered for the hIP. Finally, evidence showed that peripheral regions in patients are more activated than central rich-club regions compared to healthy controls. Systems-level decoding provided significant new insights into the embedding of the thalamus and the intraparietal sulcus in cortical functional systems. However, the results were limited regarding the functional characterization of thalamic nuclei and the activity differences in rich-club and peripheral regions between patients and healthy controls; limitations of the meta-analytical approach may explain these findings. Overall, systems-level decoding represents a promising approach for formulating hypotheses about structure-function relationships within the functional network architecture of the human brain.
|