151

Computational Problems In Codes On Graphs

Krishnan, K Murali 07 1900 (has links)
Two standard graph representations for linear codes are the Tanner graph and the tail-biting trellis. Such graph representations allow the decoding problem for a code to be phrased as a computational problem on the corresponding graph and yield graph-theoretic criteria for good codes. When a Tanner graph for a code is used for communication across a binary erasure channel (BEC) and decoding is performed using the standard iterative decoding algorithm, the maximum number of correctable erasures is determined by the stopping distance of the Tanner graph. Hence the computational problem of determining the stopping distance of a Tanner graph is of interest. In this thesis it is shown that computing the stopping distance of a Tanner graph is NP-hard. It is also shown that there can be no (1 + ε)-approximation algorithm for the problem for any ε > 0 unless P = NP, and that an approximation ratio of 2^((log n)^(1−ε)) for any ε > 0 is impossible unless NP ⊆ DTIME(n^poly(log n)). One way to construct Tanner graphs of large stopping distance is to ensure that the graph has large girth. It is known that stopping distance increases exponentially with the girth of the Tanner graph. A new elementary combinatorial construction algorithm for an almost-regular LDPC code family with provable Ω(log n) girth and O(n²) construction complexity is presented. The bound on the girth is within a factor of two of the best known upper bound on girth. The problem of linear-time exact maximum likelihood decoding of tail-biting trellises has remained open for several years. An O(n) complexity approximate maximum likelihood decoding algorithm for tail-biting trellises is presented and analyzed. Experiments indicate that the algorithm performs close to the ideal maximum likelihood decoder.
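For readers unfamiliar with the iterative BEC decoder referred to above, it is the standard peeling decoder: any check node with exactly one erased neighbour determines that bit, and decoding stalls precisely when the remaining erased positions contain a stopping set. A minimal Python sketch under illustrative assumptions (the parity-check structure H and the erasure pattern below are invented examples, not taken from the thesis):

```python
# Peeling decoder for a binary erasure channel (BEC) on a Tanner graph.
# H is a list of checks, each a list of variable-node indices.
# y is the received word with None marking an erasure.
def peel(H, y):
    y = list(y)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [v for v in check if y[v] is None]
            if len(erased) == 1:
                # The single erased bit must make the check's XOR zero.
                v = erased[0]
                y[v] = sum(y[u] for u in check if u != v) % 2
                progress = True
    return y  # any remaining None values sit on a stopping set

# Illustrative example: Hamming-style checks, two erasures corrected.
H = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
received = [None, 1, 0, None, 1, 0, 1]
print(peel(H, received))  # -> [0, 1, 0, 1, 1, 0, 1]
```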
152

On The Analysis of Spatially-Coupled GLDPC Codes and The Weighted Min-Sum Algorithm

Jian, Yung-Yih 16 December 2013 (has links)
This dissertation studies methods to achieve reliable communication over unreliable channels. Iterative decoding algorithms for low-density parity-check (LDPC) codes and generalized LDPC (GLDPC) codes are analyzed, and a new class of error-correcting codes is proposed to enhance the reliability of communication in high-speed systems, such as optical communication systems. The class of spatially-coupled GLDPC codes is studied, and a new iterative hard-decision decoding (HDD) algorithm for GLDPC codes is introduced. The main result is that the minimal redundancy allowed by Shannon's Channel Coding Theorem can be achieved by using the new iterative HDD algorithm with spatially-coupled GLDPC codes. A variety of LDPC ensembles have now been observed to approach capacity with iterative decoding; however, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. To the best of our knowledge, this is the first system that can approach channel capacity using iterative HDD. The optimality of a codeword returned by the weighted min-sum (WMS) algorithm, an iterative decoding algorithm widely used in practice, is studied as well. Attenuated max-product (AttMP) decoding and WMS decoding for LDPC codes are analyzed. Applying the max-product (and belief-propagation) algorithms to loopy graphs is now quite popular for best-assignment problems, largely due to their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence and/or the optimality of converged solutions. This work presents an analysis of both AttMP decoding and WMS decoding for LDPC codes which guarantees convergence to a fixed point when a weight factor, β, is sufficiently small. It also shows that, if the fixed point satisfies certain consistency conditions, then it must be both a linear-programming (LP) and maximum-likelihood (ML) decoding solution.
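As a rough illustration of the weighted/attenuated min-sum idea analyzed above, here is a minimal Python sketch of one check-node update. Placing the weight β on the outgoing check message follows the common attenuated min-sum convention and is an assumption here; the dissertation's exact WMS parameterization may differ:

```python
import math

def check_update(incoming, beta):
    """Attenuated min-sum update at one check node.

    incoming: list of variable-to-check LLR messages.
    beta: attenuation weight in (0, 1]; a small beta aids convergence.
    Returns one outgoing check-to-variable LLR per edge.
    """
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = math.prod(1 if m >= 0 else -1 for m in others)
        mag = min(abs(m) for m in others)
        out.append(beta * sign * mag)
    return out

print(check_update([1.5, -0.7, 2.2], beta=0.8))  # -> [-0.56, 1.2, -0.56]
```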
153

Iterative joint detection and decoding of LDPC-Coded V-BLAST systems

Tsai, Meng-Ying (Brady) 10 July 2008 (has links)
Soft iterative detection and decoding techniques have been shown to achieve near-capacity performance in multiple-antenna systems. Obtaining the optimal soft information by marginalization over the entire observation space is intractable, and the current literature offers no guidance on the best way to obtain suboptimal soft information. In this thesis, several existing soft-input soft-output (SISO) detectors, including minimum mean-square error successive interference cancellation (MMSE-SIC), list sphere decoding (LSD), and Fincke-Pohst maximum-a-posteriori (FPMAP), are examined. Prior research has demonstrated that LSD and FPMAP outperform soft-equalization methods (i.e., MMSE-SIC); however, it is unclear which of the two schemes is superior in terms of the performance-complexity trade-off. A comparison is conducted to resolve the matter. In addition, an improved scheme is proposed that modifies LSD and FPMAP, simultaneously improving error performance and reducing computational complexity. Although list-type detectors such as LSD and FPMAP provide outstanding error performance, issues such as the optimal initial sphere radius, the optimal radius-update strategy, and their highly variable computational complexity remain unresolved. A new detection scheme with fixed detection complexity is proposed to address these issues, making it suitable for practical implementation. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2008-07-08 19:29:17.66
154

Nonparametric statistical inference for functional brain information mapping

Stelzer, Johannes 26 May 2014 (has links) (PDF)
An ever-increasing number of functional magnetic resonance imaging (fMRI) studies now use information-based multi-voxel pattern analysis (MVPA) techniques to decode mental states, achieving significantly greater sensitivity than univariate analysis frameworks. The two most prominent MVPA methods for information mapping are searchlight decoding and classifier weight mapping. These new MVPA brain-mapping methods, however, have also posed new challenges for analysis and statistical inference at the group level. In this thesis, I discuss why the usual procedure of performing t-tests on MVPA-derived information maps across subjects to produce a group statistic is inappropriate, and I propose a fully nonparametric solution that achieves higher sensitivity than the commonly used t-based procedure. The proposed method is based on resampling and preserves the spatial dependencies in the MVPA-derived information maps, which makes it possible to incorporate cluster-size control for the multiple-testing problem. Using a volumetric searchlight decoding procedure and classifier weight maps, I demonstrate the validity and sensitivity of the new approach on both simulated and real fMRI data sets; compared to the standard t-test procedure implemented in SPM8, the new approach showed higher sensitivity and spatial specificity. The second goal of this thesis is a comparison of the two widely used information-mapping approaches: the searchlight technique and classifier weight mapping. Both methods take the spatially distributed patterns of activation into account in order to predict stimulus conditions, but the searchlight method operates solely on the local scale and has been found to be prone to spatial inaccuracies: the spatial extent of informative areas is generally exaggerated, and their spatial configuration is distorted. I compare searchlight decoding with linear classifier weight mapping, both within the nonparametric statistical framework proposed above, using a simulation and ultra-high-field 7T experimental data. The searchlight method led to spatial inaccuracies that are especially noticeable in high-resolution fMRI data, whereas the weight-mapping method was more spatially precise, revealing both informative anatomical structures and the direction in which voxels contribute to the classification. By maximizing the spatial accuracy of ultra-high-field fMRI results, such global multivariate methods provide a substantial improvement for characterizing structure-function relationships.
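The resampling scheme described above can be sketched in a few lines: build a null distribution of maximal supra-threshold cluster sizes from permutation-derived maps and keep only the observed clusters exceeding the null cutoff. The Python sketch below uses a simulated 1-D map and invented thresholds purely for illustration; in practice each null map comes from re-running the decoding analysis with permuted condition labels:

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_sizes(mapdata, thresh):
    """Sizes of contiguous supra-threshold runs in a 1-D map."""
    sizes, run = [], 0
    for v in mapdata:
        if v > thresh:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sizes

# Observed information map plus null maps; here both are simulated.
observed = rng.normal(0.5, 0.05, 1000)
observed[100:130] += 0.2                      # an informative region
null_maps = rng.normal(0.5, 0.05, (500, 1000))

thresh = 0.6                                  # voxel-level threshold
null_max = [max(cluster_sizes(m, thresh) or [0]) for m in null_maps]
cutoff = np.quantile(null_max, 0.95)          # cluster-size cutoff, p < 0.05
significant = [s for s in cluster_sizes(observed, thresh) if s > cutoff]
print(cutoff, significant)
```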
155

Energibolag genom den unga miljöopportunistens lins : En receptionsstudie i studenters tolkningar av energibolags miljörelaterade kommunikation / Energy companies through the lens of the young environmental opportunist: A reception study of students' interpretations of energy companies' environment-related communication

Möller, Evelina, Matts, Daniella January 2014 (has links)
No description available.
156

Coding Theorems via Jar Decoding

Meng, Jin January 2013 (has links)
In the development of digital communication and information theory, each new channel decoding rule has sparked a revolution at the time of its invention. In information theory, early channel coding theorems were established mainly via maximum likelihood decoding, while the arrival of typical-sequence decoding signaled the era of multi-user information theory, in which achievability proofs became simple and intuitive. Practical channel code design, on the other hand, was based on minimum distance decoding in its early stages. The invention of belief propagation decoding with soft input and soft output, leading to the birth of turbo codes and low-density parity-check (LDPC) codes which are indispensable coding techniques in current communication systems, changed the whole research area so dramatically that people started to use the term "modern coding theory" to refer to research based on this decoding rule. In this thesis, we propose a new decoding rule, dubbed jar decoding, which is expected to bring new insights to both code performance analysis and code design. Given any channel with input alphabet X and output alphabet Y, the jar decoding rule can be expressed simply as follows: upon receiving the channel output y^n ∈ Y^n, the decoder first forms a set (called a jar) of sequences x^n ∈ X^n considered to be close to y^n and then picks any codeword (if one exists) inside this jar as the decoding output. How the decoder forms the jar is defined independently of the actual channel code, and in certain cases even of the channel statistics. Under jar decoding, various coding theorems are proved in this thesis. First, focusing on the word error probability, jar decoding is shown to be near-optimal by combining achievability results proved via jar decoding with converses proved via a closely related proof technique, dubbed the outer mirror image of the jar. Combining these achievability and converse theorems yields a Taylor-type expansion of the optimal channel coding rate at finite block length, and jar decoding is shown to be optimal up to the second order of this expansion. The flexibility of jar decoding is then illustrated by proving LDPC coding theorems via jar decoding, where the bit error probability is the quantity of interest. Finally, we consider a coding scenario called interactive encoding and decoding, and show that jar decoding can also be used to prove coding theorems and guide code design in this two-way communication setting.
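The jar decoding rule quoted above is simple to state in code. A minimal Python sketch for a binary channel, where the jar is taken to be a Hamming ball around the received word; the ball-shaped jar and its radius are illustrative assumptions, since the thesis defines jars more generally:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def jar_decode(codebook, y, radius):
    """Jar decoding: form the set of sequences 'close' to y (here a
    Hamming ball of the given radius) and return any codeword in it."""
    in_jar = lambda x: hamming(x, y) <= radius
    for c in codebook:
        if in_jar(c):
            return c
    return None  # empty jar: decoding failure

# Illustrative 6-bit codebook; the received word has two bit flips.
codebook = [(0,0,0,0,0,0), (1,1,1,1,1,1), (0,0,0,1,1,1), (1,1,1,0,0,0)]
print(jar_decode(codebook, (0,1,0,1,1,0), radius=2))  # -> (0,0,0,1,1,1)
```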
157

Aplicação de transformação conforme em codificação e decodificação de imagens / Conformal mapping applied to images encoding and decoding

Silva, Alan Henrique Ferreira 31 March 2016 (has links)
This work proposes a method to encode and decode images using conformal mapping. A conformal map transforms one domain into another without altering the physical characteristics shared between them. Real images are transformed between these domains using encoding keys, also called transforming functions. The advantage of this methodology is the ability to carry a message as an encoded image in printed media for later decoding. / Este trabalho propõe método que utiliza transformações conformes para codificar e decodificar imagens. As transformações conformes modificam os domínios em estudos sem modificar as características físicas entre eles. As imagens reais são transformadas entre estes domínios utilizando chaves, que são funções transformadoras. O diferencial desta metodologia é a capacidade de transportar a mensagem contida na imagem em meio impresso codificado e depois, decodificá-la.
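As an illustration of the encoding/decoding idea, the Python sketch below warps the pixel grid of an image through an analytic (conformal) map and back through its inverse. The specific key w = z² and the nearest-neighbour resampling are assumptions for illustration, not the transforming functions used in the dissertation:

```python
import numpy as np

def conformal_warp(img, f):
    """Resample a grayscale image through a conformal map f: z -> w.

    Each output pixel is pulled from the input at location f(z), with
    pixel coordinates normalised to the unit square as complex numbers.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    z = (xs / w) + 1j * (ys / h)           # pixel grid -> complex plane
    zw = f(z)                              # apply the encoding key
    xi = np.clip((zw.real * w).astype(int), 0, w - 1)
    yi = np.clip((zw.imag * h).astype(int), 0, h - 1)
    return img[yi, xi]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
encoded = conformal_warp(img, lambda z: z ** 2)        # encode with key z^2
decoded = conformal_warp(encoded, lambda z: z ** 0.5)  # decode with inverse key
```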
158

MIMO block-fading channels with mismatched CSI

Asyhari, A.Taufiq, Guillen i Fabregas, A. 23 August 2014 (has links)
We study transmission over multiple-input multiple-output (MIMO) block-fading channels with imperfect channel state information (CSI) at both the transmitter and receiver. Specifically, based on mismatched decoding theory for a fixed channel realization, we investigate the largest achievable rates with independent and identically distributed inputs and a nearest-neighbor decoder. We then study the corresponding information outage probability in the high signal-to-noise ratio (SNR) regime and analyze the interplay between the estimation error variances at the transmitter and at the receiver to determine the optimal outage exponent, defined as the high-SNR slope of the outage probability plotted on a log-log scale against the SNR. We demonstrate that, despite operating with imperfect CSI, power adaptation can offer substantial gains in terms of the outage exponent. / A. T. Asyhari was supported in part by the Yousef Jameel Scholarship, University of Cambridge, Cambridge, U.K., and the National Science Council of Taiwan under grant NSC 102-2218-E-009-001. A. Guillén i Fàbregas was supported in part by the European Research Council under ERC grant agreement 259663 and the Spanish Ministry of Economy and Competitiveness under grant TEC2012-38800-C03-03.
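For reference, the outage exponent named above has the standard high-SNR definition; the symbols below are assumed notation, not necessarily the paper's own:

```latex
% Outage exponent: high-SNR slope of the outage probability
% on a log-log scale (standard definition).
d_{\mathrm{out}} \;=\; -\lim_{\mathrm{SNR}\to\infty}
    \frac{\log P_{\mathrm{out}}(\mathrm{SNR})}{\log \mathrm{SNR}},
\qquad\text{so that}\quad
P_{\mathrm{out}}(\mathrm{SNR}) \doteq \mathrm{SNR}^{-d_{\mathrm{out}}}.
```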
159

Kan en intensivträning av avkodningsförmåga i åk 3 leda till förbättrad läsförståelse och läsintresse? : En studie av Rydaholmsmetoden / Can intensive training of decoding ability in grade 3 lead to improved reading comprehension and reading interest? A study of the Rydaholm method

Albertsson, Anneli January 2016 (has links)
The aim of the study was to examine whether an intervention with the Rydaholm method leads to better decoding skills, improved reading comprehension, and increased interest in reading. The participants were third-grade primary school children. A five-week training programme with the method was carried out, and the children's results in reading speed, decoding, and reading comprehension were compared to their pretest results. Interest in reading was measured with a questionnaire before and after the intervention. The results showed that the children had improved their decoding but not their reading comprehension, and all the children reported a higher level of reading interest after the intervention. The results are discussed in relation to research that favors training in spelling and decoding as the primary route to improving both decoding and reading comprehension, and to methods that combine decoding training with comprehension training. The study indicates that improved decoding skills do not automatically lead to better reading comprehension through a short-term memory advantage; training in comprehension strategies is also needed. The finding that improved decoding led to increased interest in reading supports research claiming that decoding skills are fundamental to children's own view of reading.
160

Exact sampling and optimisation in statistical machine translation

Aziz, Wilker Ferreira January 2014 (has links)
In Statistical Machine Translation (SMT), inference needs to be performed over a high-complexity discrete distribution defined by the intersection between a translation hypergraph and a target language model. This distribution is too complex to be represented exactly, and one typically resorts to approximation techniques either to perform optimisation (the task of searching for the optimum translation) or sampling (the task of finding a subset of translations that is statistically representative of the goal distribution). Beam search is an example of an approximate optimisation technique, where maximisation is performed over a heuristically pruned representation of the goal distribution. For inference tasks other than optimisation, rather than finding a single optimum, one is really interested in obtaining a set of probabilistic samples from the distribution. This is the case in training, where one wishes to obtain unbiased estimates of expectations in order to fit the parameters of a model. Samples are also necessary in consensus decoding, where one chooses from a sample of likely translations the one that minimises a loss function. Due to the additional computational challenges posed by sampling, n-best lists, a by-product of optimisation, are typically used as a biased approximation to true probabilistic samples. A more direct procedure is to attempt to draw samples directly from the underlying distribution rather than rely on n-best list approximations. Markov Chain Monte Carlo (MCMC) methods, such as Gibbs sampling, offer a way to overcome the tractability issues in sampling; however, their convergence properties are hard to assess. That is, it is difficult to know when, if ever, an MCMC sampler is producing samples that are compatible with the goal distribution. Rejection sampling, a Monte Carlo (MC) method, is more fundamental and natural and offers strong guarantees, such as unbiased samples, but is typically hard to design for distributions of the kind addressed in SMT, rendering the method intractable. A recent technique that stresses a unified view between the two types of inference tasks discussed here, optimisation and sampling, is the OS* approach. OS* can be seen as a cross between Adaptive Rejection Sampling (an MC method) and A* optimisation. In this view the intractable goal distribution is upper-bounded by a simpler (thus tractable) proxy distribution, which is then incrementally refined to be closer to the goal until the maximum is found, or until the sampling performance exceeds a certain level. This thesis introduces an approach to exact optimisation and exact sampling in SMT by addressing the tractability issues associated with the intersection between the translation hypergraph and the language model. The two forms of inference are handled in a unified framework based on the OS* approach. In short, an intractable goal distribution, over which one wishes to perform inference, is upper-bounded by tractable proposal distributions. A proposal represents a relaxed version of the complete space of weighted translation derivations, where relaxation happens with respect to the incorporation of the language model. These proposals give an optimistic view of the true model and allow for easier and faster search using standard dynamic programming techniques. In the OS* approach, such proposals are used to perform a form of adaptive rejection sampling.
In rejection sampling, samples are drawn from a proposal distribution and accepted or rejected as a function of the mismatch between the proposal and the goal. The technique is adaptive in that rejected samples are used to motivate a refinement of the upper-bound proposal that brings it closer to the goal, improving the rate of acceptance. Optimisation can be connected to an extreme form of sampling, thus the framework introduced here suits both exact optimisation and exact sampling. Exact optimisation means that the global maximum is found with a certificate of optimality; exact sampling means that unbiased samples are drawn independently from the goal distribution. We show that with this approach exact inference is feasible using only a fraction of the time and space that would be required by a full intersection, without recourse to pruning techniques that only provide approximate solutions. We also show that the vast majority of the entries (n-grams) in a language model can be summarised by shorter and optimistic entries, which means that the computational complexity of our approach is less sensitive to the order of the language model distribution than a full intersection would be. Particularly in the case of sampling, we show that it is possible to draw exact samples compatible with distributions which incorporate a high-order language model component from proxy distributions that are much simpler. In this thesis, exact inference is performed in the context of both hierarchical and phrase-based models of translation, the latter characterising a problem that is NP-complete in nature.
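The accept/reject step at the heart of this framework is plain rejection sampling from an upper-bounding proposal. A minimal Python sketch with a toy goal density and a flat envelope, both illustrative assumptions; in the adaptive OS* variant, each rejected draw would additionally be used to refine the envelope toward the goal, improving the acceptance rate over time:

```python
import math
import random

def rejection_sample(goal, proposal_draw, proposal_pdf):
    """Draw one exact sample from 'goal' using a proposal that
    upper-bounds it: proposal_pdf(x) >= goal(x) for all x."""
    while True:
        x = proposal_draw()
        # Accept with probability goal(x) / proposal_pdf(x) <= 1.
        if random.random() < goal(x) / proposal_pdf(x):
            return x

# Toy example: unnormalised goal under a flat upper bound on [0, 1].
goal = lambda x: math.exp(-8 * (x - 0.3) ** 2)  # peaks at 0.3, max 1
draw = lambda: random.random()                   # uniform proposal on [0, 1]
bound = lambda x: 1.0                            # flat envelope, >= goal
print(rejection_sample(goal, draw, bound))
```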
