51

Finding Implicit Citations in Scientific Publications: Improvements to Citation Context Detection Methods

Murray, Jonathan January 2015 (has links)
This thesis deals with the task of identifying implicit citations between scientific publications. Apart from being useful knowledge on their own, the citations may be used as input to other problems such as determining an author's sentiment towards a reference, or summarizing a paper based on what others have written about it. We extend two recently proposed methods, a Machine Learning classifier and an iterative Belief Propagation algorithm. Both are implemented and evaluated on a common pre-annotated dataset. Several changes to the algorithms are then presented, incorporating new sentence features and different semantic text similarity measures, as well as combining the methods into a single classifier. Our main finding is that the introduction of new sentence features yields significantly improved F-scores for both approaches. / Swedish abstract (translated): This thesis addresses the problem of finding implicit citations between scientific publications. Besides being of interest in their own right, these citations can be used in other problems, such as assessing an author's attitude towards a reference or summarizing a paper based on how others have cited it. We build on two recent methods, a machine-learning-based classifier and an iterative algorithm based on a graphical model. Both are implemented and evaluated on a common pre-annotated dataset. A number of changes to the algorithms are presented, in the form of new sentence features, different semantic text-similarity measures, and a way of combining the two methods. The main result of the work is that the new sentence features lead to markedly improved F-scores for both methods.
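
The sentence-feature idea is easy to illustrate in code. Below is a minimal sketch, under stated assumptions, of classifying sentences as implicit citations from hand-crafted features: the cue words, the distance feature, and the toy training data are all illustrative inventions, not the thesis's actual feature set.

```python
# Hypothetical sentence features for implicit-citation classification.
from sklearn.linear_model import LogisticRegression

CUE_WORDS = {"they", "their", "this", "it"}  # assumed anaphoric cue words

def sentence_features(sentence, dist_to_explicit_citation):
    tokens = sentence.lower().split()
    return [
        len(tokens),                          # sentence length
        sum(t in CUE_WORDS for t in tokens),  # anaphoric cue count
        dist_to_explicit_citation,            # distance to nearest explicit citation
    ]

# Toy training data: (sentence, distance, label); label 1 = implicit citation.
data = [
    ("they extend the model with new features", 1, 1),
    ("the weather was nice that day", 7, 0),
    ("their algorithm converges quickly", 2, 1),
    ("we describe our experimental setup", 5, 0),
]
X = [sentence_features(s, d) for s, d, _ in data]
y = [label for _, _, label in data]
clf = LogisticRegression().fit(X, y)

print(clf.predict([sentence_features("their approach improves recall", 1)]))
```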
52

Low-density Parity-Check Decoding Algorithms

Pirou, Florent January 2004 (has links)
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master's thesis report, following a background on error-correcting coding, we describe low-density parity-check codes and their decoding algorithms, as well as the requirements and architectures of LDPC decoder implementations.
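
For readers unfamiliar with LDPC decoding, the following is a minimal sketch of one classical algorithm, hard-decision bit flipping; the parity-check matrix and codeword are toy examples and are not drawn from the thesis.

```python
# Hard-decision bit-flipping decoding on a toy parity-check matrix.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():            # all parity checks satisfied
            return r
        fails = syndrome @ H              # failed-check count per bit
        r[fails == fails.max()] ^= 1      # flip the worst offender(s)
    return r

codeword = np.array([0, 0, 1, 0, 1, 1])  # satisfies H @ c % 2 == 0
received = codeword.copy()
received[1] ^= 1                          # inject a single bit error
print(bit_flip_decode(received, H))       # recovers the codeword
```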
53

Efficient Message Passing Decoding Using Vector-based Messages

Grimnell, Mikael, Tjäder, Mats January 2005 (has links)
The family of Low Density Parity Check (LDPC) codes is a strong candidate for Forward Error Correction (FEC) in future communication systems due to its strong error correction capability. Most LDPC decoders use the Message Passing algorithm, an iterative algorithm that passes messages between variable nodes and check nodes. Only recently has computational power become sufficient to make Message Passing on LDPC codes feasible. Although locally simple, LDPC codes are usually large, which increases the required computational power. Earlier work on LDPC codes concentrated on the binary Galois Field, GF(2), but it has been shown that codes over higher-order fields have better error correction capability. However, the complexity of the most efficient LDPC decoder, the Belief Propagation decoder, grows quadratically when moving to higher-order Galois Fields. Transmission with M-PSK signalling is a common technique for increasing spectral efficiency; the information is transmitted as the phase angle of the signal.

The focus of this Master's Thesis is on simplifying Message Passing decoding with inputs from M-PSK signals transmitted over an AWGN channel. Symbols from higher-order Galois Fields were mapped to M-PSK signals, since M-PSK is very bandwidth efficient and the information can be found in the angle of the signal. Several simplifications of Belief Propagation have been developed and tested. The most promising is the Table Vector Decoder, a Message Passing decoder that uses a table lookup technique for the check node operations and vector summation for the variable node operations. The table lookup approximates the check node operation of a Belief Propagation decoder, while vector summation serves as an equivalent of the variable node operation. Monte Carlo simulations have shown that the Table Vector Decoder can achieve performance close to that of Belief Propagation. Its capability depends on the number of reconstruction points and their placement. The main advantage of the Table Vector Decoder is that its complexity is unaffected by the Galois Field used; instead, there is a memory requirement that depends on the desired number of reconstruction points.
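
The table-lookup idea can be illustrated on the simpler binary case. The sketch below approximates the exact check node operation of Belief Propagation with a precomputed table over a grid of reconstruction points; the grid size and range are assumptions, and the thesis's actual decoder operates on vector messages over higher-order Galois Fields.

```python
# Table-lookup approximation of the binary check node ("boxplus") operation.
import numpy as np

GRID = np.linspace(1e-3, 20.0, 512)        # assumed reconstruction points
PHI_TABLE = -np.log(np.tanh(GRID / 2.0))   # phi(x) = -log(tanh(x/2))

def phi(x):
    # table lookup in place of evaluating the transcendental function
    idx = np.clip(np.searchsorted(GRID, x), 0, len(GRID) - 1)
    return PHI_TABLE[idx]

def check_node(llrs):
    """Extrinsic LLR from a check node, given all incoming LLRs."""
    sign = np.prod(np.sign(llrs))
    return sign * phi(np.sum(phi(np.abs(llrs))))  # phi is its own inverse

print(check_node(np.array([2.5, -1.2, 3.0])))
```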
56

Multi-view Video Coding Via Dense Depth Field

Ozkalayci, Burak Oguz 01 September 2006 (has links)
Emerging 3-D applications and 3-D display technologies raise transmission problems for next-generation multimedia data. Multi-view Video Coding (MVC) is one of the challenging topics in this area and is on the road to standardization via ISO MPEG. In this thesis, a 3-D geometry-based MVC approach is proposed and analyzed in terms of its compression performance. For this purpose, the overall study is partitioned into three consecutive parts. The first is dense depth estimation of a view from a fully calibrated multi-view set. The calibration information and smoothness assumptions are utilized to determine dense correspondences via a Markov Random Field (MRF) model, which is solved by the Belief Propagation (BP) method. In the second part, the estimated dense depth maps are utilized to generate (predict) arbitrary views of the scene from other camera positions, a task known as novel view generation. A 3-D warping algorithm, followed by an occlusion-compatible hole-filling process, is implemented for this purpose. To suppress occlusion artifacts, an intermediate novel view generation method is developed that fuses two novel views generated from different source views. Finally, in the last part, the dense depth estimation and intermediate novel view generation tools are utilized in the proposed H.264-based MVC scheme to remove the spatial redundancies between different views. The performance of the proposed approach is compared against simulcast coding and a recent MVC proposal that is expected to become the MPEG standard recommendation in the near future. The results show that geometric approaches in MVC can still be useful, especially in certain 3-D applications, alongside conventional temporal motion compensation techniques, even though the rate-distortion performance of geometry-free approaches remains clearly superior.
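
The novel view generation step rests on a standard projective relation: back-project a pixel through its estimated depth, then re-project the 3-D point into the target camera. A minimal sketch follows; the intrinsics, rotation, and baseline are illustrative assumptions, not values from the thesis.

```python
# Depth-based 3-D warping of a single pixel between two calibrated cameras.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],   # assumed shared intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # assumed rotation between the cameras
t = np.array([0.1, 0.0, 0.0])         # assumed 10 cm horizontal baseline

def warp_pixel(u, v, depth):
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project to a ray
    X = depth * ray                                  # 3-D point in camera 1
    x2 = K @ (R @ X + t)                             # project into camera 2
    return x2[:2] / x2[2]                            # homogeneous -> pixel

print(warp_pixel(320, 240, depth=2.0))  # shifts by the projected baseline
```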
57

Read Channel Modeling, Detection, Capacity Estimation and Two-Dimensional Modulation Codes for TDMR

Khatami, Seyed Mehrdad January 2015 (has links)
Magnetic recording systems have reached a point where the grain size can no longer be reduced due to energy stability constraints. As a new magnetic recording paradigm, two-dimensional magnetic recording (TDMR) relies on sophisticated signal processing and coding algorithms, a much less expensive alternative to radically altering the media or the read/write head as required by other technologies. Due to 1) the significant reduction of grains per bit, and 2) aggressive shingled writing, TDMR faces several formidable challenges. Firstly, severe interference is introduced in both the down-track and cross-track directions by the read/write head dimensions. Secondly, the reduction in the number of grains per bit results in variations of bit boundaries, which in turn lead to data-dependent jitter noise. Moreover, the reduction in the bit-to-grain ratio will cause some bits not to be properly magnetized, or to be overwritten, which introduces write errors into the system. The nature of the write and read processes in TDMR necessitates that information storage be viewed as a two-dimensional (2D) system. The challenges in TDMR signal processing are 1) an accurate read channel model, 2) mitigating the effects of inter-track interference (ITI) and inter-symbol interference (ISI) with an equalizer, 3) developing 2D modulation/error-correcting codes matched to the TDMR channel model, 4) the design of truly 2D detectors, and 5) computing lower bounds on the capacity of the TDMR channel. This work addresses several objectives in regard to these challenges.

1. TDMR Channel Modeling: The 2D Microcell model is introduced as a read channel model for TDMR. This model captures the data-dependent properties of the media noise and is well suited to detector design. In line with what has already been done in TDMR channel modeling, improvements can be made to tune the 2D Microcell model for different bit-to-grain densities. Furthermore, the 2D Microcell model can be modified to take into account the dependency between adjacent microtrack border positions; this assumption leads to a more accurate model in terms of closeness to the Voronoi model.

2. Detector Design: The need for 2D detection is not unique to TDMR systems. However, developing detectors that come close to optimal maximum likelihood (ML) detection in the 2D case remains largely an open problem. As one of the important blocks of the TDMR system, the generalized belief propagation (GBP) detector is developed and introduced as a near-ML detector, and is tuned to improve performance for the TDMR channel model.

3. Channel Capacity Estimation: TDMR envisions densities of up to 10 Tb/in² as a result of drastically reducing the bit-to-grain ratio; to reach this goal, aggressive write (shingled writing) and read processes are used. Kavcic et al. proposed a simple magnetic grain model, called the granular tiling model, which captures the essence of the read/write process in TDMR. Capacity bounds for this model indicate that densities of 0.6 user bits per grain are possible; however, previous attempts have not come close to the channel capacity. We provide a truly two-dimensional detection scheme for the granular tiling model based on GBP. A factor graph interpretation of the detection problem is provided and formulated, and GBP is then employed to compute marginal a posteriori probabilities for the constructed factor graph. Simulation results show large improvements in detection. A lower bound on the mutual information rate (MIR) is also derived for this model based on the GBP detector. Moreover, for the Voronoi channel model, the MIR is estimated for both constrained and unconstrained input.

4. Modulation Codes: Constrained codes, also known as modulation codes, are a key component of digital magnetic recording systems. A constrained code forbids particular input data patterns that lead to dominant error events or higher media noise. The goal of the dissertation with regard to modulation codes is to construct a 2D modulation code for the TDMR channel that improves the overall performance of the TDMR system. Furthermore, we implement an algorithm to estimate the capacity of 2D modulation codes based on the generalized belief propagation (GBP) algorithm. The capacity is also calculated in the presence of white and colored noise, as is the case for the TDMR channel.

5. Joint Detection and Decoding Schemes: In data recording systems, a concatenated approach to the constrained code and the error-correcting code (ECC) is typically used, and the decoding is done independently. We show the improvement obtained by combining the decoding of the constrained code and the ECC using the GBP algorithm. We consider the performance of combined modulation constraints and ECC on a binary-input additive white Gaussian noise channel (BIAWGNC) and also over one-dimensional (1D) and 2D ISI channels, and show that combining detection, demodulation, and decoding yields superior performance compared to concatenated schemes.
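
Throughout these objectives, GBP serves to approximate per-symbol a posteriori marginals on a factor graph. For reference, the sketch below computes such marginals exactly by brute force on a toy three-bit factor graph, which is feasible only at this scale; the likelihood and parity factors are illustrative stand-ins, not the TDMR channel model.

```python
# Exact marginals on a toy factor graph (the quantity GBP approximates).
from itertools import product

def channel(bit, obs):                   # toy per-bit likelihood factor
    return 0.9 if bit == obs else 0.1

def parity(b0, b1, b2):                  # toy even-parity constraint factor
    return 1.0 if (b0 ^ b1 ^ b2) == 0 else 0.0

obs = (1, 1, 0)                          # noisy observation of each bit
weights = {}
for bits in product((0, 1), repeat=3):
    w = parity(*bits)
    for b, o in zip(bits, obs):
        w *= channel(b, o)               # multiply in the channel factors
    weights[bits] = w

z = sum(weights.values())                # normalization constant
p_b0 = sum(w for bits, w in weights.items() if bits[0] == 1) / z
print(p_b0)                              # exact marginal P(b0 = 1 | obs)
```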
58

On The Analysis of Spatially-Coupled GLDPC Codes and The Weighted Min-Sum Algorithm

Jian, Yung-Yih 16 December 2013 (has links)
This dissertation studies methods to achieve reliable communication over unreliable channels. Iterative decoding algorithms for low-density parity-check (LDPC) codes and generalized LDPC (GLDPC) codes are analyzed, and a new class of error-correcting codes is proposed to enhance the reliability of communication in high-speed systems, such as optical communication systems. The class of spatially-coupled GLDPC codes is studied, and a new iterative hard-decision decoding (HDD) algorithm for GLDPC codes is introduced. The main result is that the minimal redundancy allowed by Shannon's Channel Coding Theorem can be achieved by using the new iterative HDD algorithm with spatially-coupled GLDPC codes. A variety of LDPC ensembles have now been observed to approach capacity with iterative decoding; however, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. To the best of our knowledge, this is the first system that can approach the channel capacity using iterative HDD. The optimality of a codeword returned by the weighted min-sum (WMS) algorithm, an iterative decoding algorithm widely used in practice, is studied as well. Both attenuated max-product (AttMP) decoding and WMS decoding for LDPC codes are analyzed. Applying the max-product (and belief-propagation) algorithms to loopy graphs is now quite popular for best-assignment problems, largely due to their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence and/or the optimality of converged solutions. This work presents an analysis of both AttMP decoding and WMS decoding for LDPC codes which guarantees convergence to a fixed point when a weight factor, β, is sufficiently small. It also shows that, if the fixed point satisfies certain consistency conditions, then it must be both a linear-programming (LP) and a maximum-likelihood (ML) decoding solution.
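
The weighted min-sum check node update analyzed in the dissertation is compact enough to sketch: the outgoing message on each edge takes the sign product and the minimum magnitude of the other incoming LLRs, scaled by the weight factor β. The β value and LLRs below are illustrative.

```python
# Weighted (attenuated) min-sum check node update with weight factor beta.
import numpy as np

def wms_check_update(incoming_llrs, beta=0.8):
    """Outgoing message per edge, computed from the *other* incoming LLRs."""
    out = np.empty_like(incoming_llrs)
    for i in range(len(incoming_llrs)):
        others = np.delete(incoming_llrs, i)
        sign = np.prod(np.sign(others))
        out[i] = beta * sign * np.min(np.abs(others))
    return out

print(wms_check_update(np.array([2.0, -0.5, 1.5])))  # -> [-0.4, 1.2, -0.4]
```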
59

Trapping Sets in Fountain Codes over Noisy Channels

Orozco, Vivian 04 November 2009 (has links)
Fountain codes have demonstrated great results for the binary erasure channel and have already been incorporated into several international standards to recover lost packets at the application layer, including multimedia broadcast/multicast sessions and digital video broadcasting over the global internet protocol. The rateless property of Fountain codes holds great promise for noisy channels, which are more sophisticated mathematical models representing errors, rather than only erasures, on communication links. The practical implementation of Fountain codes for these channels, however, is hampered by high decoding cost and delay. In this work we study trapping sets in Fountain codes over noisy channels and their effect on the decoding process. While trapping sets have received much attention for low-density parity-check (LDPC) codes, to our knowledge they have never been fully explored for Fountain codes. Our study takes into account the different code structure and the dynamic nature of Fountain codes. We show that 'error-free' trapping sets exist for Fountain codes: when the decoder is caught in an error-free trapping set it actually has the correct message estimate, but is unable to detect that this is the case. The decoding process thus continues, increasing the decoding cost and delay for naught. The decoding process for rateless codes consists of one or more decoding attempts. We show that trapping sets may reappear as part of other trapping sets on subsequent decoding attempts, or be defeated by the reception of more symbols. Based on our observations we propose early termination methods that use trapping set detection to obtain improvements in realized rate, latency, and decoding cost for Fountain codes. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009.
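
One way to picture the proposed early termination is as a consistency check against everything received so far: if the current hard-decision estimate reproduces every received symbol, further iterations are wasted, which is exactly the situation inside an 'error-free' trapping set. The sketch below is a simplification under that reading; the generator rows and symbols are toy stand-ins for a Fountain code's encoded packets.

```python
# Syndrome-style early-termination check for an iterative rateless decoder.
import numpy as np

def should_terminate(G_rx, received, estimate):
    """True if the estimate explains every received (encoded) symbol."""
    return not (((G_rx @ estimate) % 2) ^ received).any()

G_rx = np.array([[1, 0, 1],   # each row: which source symbols were XORed
                 [0, 1, 1],   # into one received symbol
                 [1, 1, 0]])
received = np.array([0, 1, 1])
print(should_terminate(G_rx, received, np.array([1, 0, 1])))  # True
```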
60

Novel 3D Back Reconstruction using Stereo Digital Cameras

Kumar, Anish Unknown Date
No description available.
