  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Study of the behavior of LDPC decoders in the Error Floor region

Γιαννακοπούλου, Γεωργία 07 May 2015 (has links)
In BER plots, which are used to evaluate a decoding system, the Error Floor region is sometimes observed at low noise levels: the decoder's performance no longer improves as noise is reduced. In software simulation the Error Floor is usually not visible, so the main goals are predicting the decoder's behavior and, more generally, improving its performance in that region. In this thesis, we study the conditions that lead to decoding failures for specific codewords and to the activation of Trapping Sets, structures in a code that appear to be the main cause of the Error Floor.
For the purpose of our study, we use the AWGN channel model and a linear block code with a low-density parity-check (LDPC) matrix, while iterative decoding simulations are executed by splitting the parity-check matrix into layers (Layered Decoding) and using Message Passing algorithms. We propose and analyze three new modified algorithms and study the effects of data quantization. Finally, we determine the effect of noise on the decoding procedure and develop a semi-analytical model for calculating the probability of a Trapping Set activation and the error probability during transmission.
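The iterative message-passing decoding described above can be illustrated with a minimal flooding min-sum sketch. This is a simplified, unlayered stand-in for the layered schedules studied in the thesis; the (7,4) Hamming parity-check matrix and the LLR values below are toy assumptions, not the codes or data used there.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: a tiny stand-in for a
# sparse LDPC matrix (real LDPC codes are much larger and sparser).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_decode(llr, H, max_iter=20):
    """Flooding min-sum message passing; returns hard bit decisions."""
    m, n = H.shape
    msg_vc = np.tile(llr, (m, 1)) * H          # variable-to-check messages
    for _ in range(max_iter):
        # Check-node update: sign product and minimum magnitude,
        # excluding the message from the target variable node.
        msg_cv = np.zeros((m, n))
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            for j in idx:
                others = idx[idx != j]
                sign = np.prod(np.sign(msg_vc[i, others]))
                msg_cv[i, j] = sign * np.min(np.abs(msg_vc[i, others]))
        # Variable-node update and tentative hard decision.
        total = llr + msg_cv.sum(axis=0)
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):            # all parity checks satisfied
            return hard
        for i in range(m):
            for j in np.nonzero(H[i])[0]:
                msg_vc[i, j] = total[j] - msg_cv[i, j]
    return hard

# All-zero codeword sent over AWGN; bit 3 received unreliably (negative LLR).
llr = np.array([2.0, 1.5, 1.8, -0.5, 2.2, 1.9, 2.1])
print(min_sum_decode(llr, H))  # → [0 0 0 0 0 0 0]
```

The decoder recovers the all-zero codeword after the extrinsic information from the three checks involving bit 3 overrides its unreliable channel value.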
172

Implementation of iterative decoding of LDPC codes for wireless MIMO receivers

Φρέσκος, Σταμάτιος 08 March 2010 (has links)
In this thesis we studied coding methods based on large parity-check matrices that have been used and implemented in previous studies. We chose to design a decoder based on the IEEE 802.16e (WiMAX) transmission standard, specifically using a transmitter and receiver with more than one antenna. We present the theory related to this topic, both on the coding side and on the side of wireless MIMO transmission and the WiMAX standard. We analyze each part of the simulated system and present the simulation results.
173

Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes

Planjery, Shiva Kumar January 2013 (has links)
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications, leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP), which operates on a graphical model of a code. With BP decoding, LDPC codes achieve exceptionally good error-rate performance, as they can asymptotically approach Shannon capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error rates. It arises mainly from the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory, with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite-precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing BP in the error floor region at much lower complexity and memory usage, without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder.
Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability, which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that surpass floating-point BP in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs outperform the BP decoder with low precision, the analysis of FAIDs remains a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by optimal maximum-likelihood (ML) decoding. To address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. The proposed adaptive decimation scheme adds only marginal complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs achieve a guaranteed error-correction capability close to the theoretical limit of ML decoding.
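The message-alphabet idea can be loosely illustrated with a quantized decoder whose messages are confined to a small integer set. This is a toy sketch only: the clipped-sum variable-node rule below is a simple stand-in, whereas the actual FAID update functions are hand-designed nonlinear lookup tables chosen to break trapping-set dynamics.

```python
import numpy as np

LEVELS = 3  # messages live in {-3, ..., +3}: a 3-bit alphabet

def cn_update(incoming):
    """Check-node rule: sign product and minimum magnitude (min-sum style)."""
    incoming = np.asarray(incoming)
    if np.any(incoming == 0):
        return 0
    return int(np.prod(np.sign(incoming)) * np.min(np.abs(incoming)))

def vn_update(channel, incoming):
    """Toy variable-node map: clipped integer sum of the channel value and
    incoming check messages. A real FAID replaces this sum with a nonlinear
    lookup table optimized for guaranteed error correction."""
    return int(np.clip(channel + sum(incoming), -LEVELS, LEVELS))

print(vn_update(+1, [3, -2]))  # 2
print(cn_update([-3, 2]))      # -2
```

Because every message is one of a handful of integers, such decoders map directly to low-precision hardware, which is the motivation behind the 3-bit FAIDs above.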
174

Protograph-Based Generalized LDPC Codes: Enumerators, Design, and Applications

Abu-Surra, Shadi Ali January 2009 (has links)
Among the recent advances in the area of low-density parity-check (LDPC) codes, protograph-based LDPC codes have the advantages of a simple design procedure and highly structured encoders and decoders. These advantages can also be exploited in the design of protograph-based generalized LDPC (G-LDPC) codes. In this dissertation we provide analytical tools which aid the design of protograph-based LDPC and G-LDPC codes. Specifically, we propose a method for computing the codeword-weight enumerators for finite-length protograph-based G-LDPC code ensembles, and then we consider the asymptotic case when the block length goes to infinity. These results help the designer identify good ensembles of protograph-based G-LDPC codes in the minimum distance sense (i.e., ensembles whose minimum distance grows linearly with code length). Furthermore, good code ensembles can be characterized by good stopping set, trapping set, or pseudocodeword properties, which assist in the design of G-LDPC codes with low floors. We leverage our method for computing codeword-weight enumerators to compute stopping-set and pseudocodeword enumerators for the finite-length and the asymptotic ensembles of protograph-based G-LDPC codes. Moreover, we introduce a method for computing trapping set enumerators for finite-length (and asymptotic) protograph-based LDPC code ensembles. Trapping set enumerators for G-LDPC codes represent a more complex problem, which we do not consider here. Inspired by our method for computing trapping set enumerators for protograph-based LDPC code ensembles, we develop an algorithm for estimating the trapping set enumerators of a specific LDPC code given its parity-check matrix. We use this algorithm to enumerate trapping sets for several LDPC codes from communication standards. Finally, we study coded-modulation schemes with LDPC codes and pulse position modulation (LDPC-PPM) over the free-space optical channel.
We present three different decoding schemes and compare their performance. In addition, we develop a new density evolution tool for use in the design of LDPC codes with good performance over this channel.
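The ensemble enumerators above are computed with generating-function techniques; for intuition, the codeword-weight enumerator of a single small code can be obtained by brute force. The (7,4) Hamming matrix below is an illustrative stand-in, not a protograph-based G-LDPC code, and brute force is only feasible at toy block lengths.

```python
from itertools import product
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (toy example).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def weight_enumerator(H):
    """Brute-force codeword-weight enumerator: A_w = number of codewords
    of Hamming weight w, found by testing all 2^n binary vectors."""
    n = H.shape[1]
    counts = {}
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        if not np.any(H @ x % 2):          # x is a codeword
            w = int(x.sum())
            counts[w] = counts.get(w, 0) + 1
    return counts

print(sorted(weight_enumerator(H).items()))  # [(0, 1), (3, 7), (4, 7), (7, 1)]
```

The output reproduces the known weight distribution of the (7,4) Hamming code; the ensemble methods in the dissertation deliver the same kind of information for code families whose block lengths make exhaustive search impossible.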
175

Capacity estimation and code design principles for continuous phase modulation (CPM)

Ganesan, Aravind 30 September 2004 (has links)
Continuous Phase Modulation (CPM) is a popular digital modulation scheme for systems with tight spectral efficiency and peak-to-average ratio (PAR) constraints. In this thesis we propose a method of estimating the capacity of a CPM system and also describe techniques for designing codes for this system. We note that the CPM modulator can be decomposed into a trellis code followed by a memoryless modulator. This decomposition enables us to perform iterative demodulation of the signal and improve the performance of the system. Thus we have the option of either performing iterative demodulation, where the channel decoder and the demodulator are invoked in an iterative fashion, or non-iterative demodulation, where the demodulation is performed only once, followed by decoding of the message. We highlight recent results in the estimation of capacity for channels with memory and apply them to a CPM system. We estimate two different types of capacity of the CPM system over an additive white Gaussian noise (AWGN) channel. The first capacity assumes that optimum demodulation and decoding are done, and the second assumes that the demodulation is done only once. Having obtained the capacity of the system, we try to approach this capacity by designing outer codes matched to the CPM system. We use LDPC codes, since they can be designed to perform very close to the capacity limit of the system. The design complexity for LDPC codes can be reduced by assuming that the input to the decoder is Gaussian distributed. We explore three different ways of approximating the CPM demodulator output by a Gaussian distribution and use them to design LDPC codes for a Bit Interleaved Coded Modulation (BICM) system. Finally, we describe the design of Multi Level Codes (MLC) for CPM systems using the capacity matching rule.
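The simulation-based capacity estimation described above can be illustrated, in reduced form, on the memoryless BPSK/AWGN case. The identity I = 1 - E[log2(1 + exp(-L))], with L the channel LLR of the transmitted bit, lends itself to Monte Carlo averaging; the signal model and sample size here are assumptions of this sketch, not the CPM trellis treated in the thesis.

```python
import numpy as np

def bpsk_awgn_mi(snr_db, n=200_000, seed=0):
    """Monte Carlo estimate of the BPSK-input AWGN mutual information,
    I = 1 - E[log2(1 + exp(-L))], where L is the channel LLR of the
    transmitted bit (all +1 sent, valid by channel symmetry)."""
    rng = np.random.default_rng(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)        # noise variance, unit-energy symbols
    y = 1.0 + rng.normal(0.0, np.sqrt(sigma2), n)
    llr = 2.0 * y / sigma2
    # log2(1 + e^{-L}) computed stably via logaddexp to avoid overflow.
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

print(bpsk_awgn_mi(0.0))   # roughly 0.5 bit/channel use at 0 dB Es/N0
print(bpsk_awgn_mi(6.0))   # approaches 1 bit/channel use at high SNR
```

For CPM the same expectation is evaluated over the trellis using forward-backward (BCJR) recursions, which is what the simulation-based method for channels with memory amounts to.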
176

Why do IKEA's products have different prices in different countries?

Chen, Mengling, Huang, Xin January 2012 (has links)
During the past decade, the law of one price and purchasing power parity theories have been empirically tested for their validity. IKEA, as a world-famous furnishing company, sells identical products in different countries at different prices. The main emphasis of this paper is placed on whether and why IKEA's pricing actually departs from the law of one price and purchasing power parity. We focus on three main explanatory factors: the existence of trade costs, the influence of the non-traded cost components of the goods, and other possible pricing behaviors of the firm. To fulfill our objectives, a regression model combined with the theoretical framework and the institutional framework of IKEA has been used in this paper. The main findings are as follows: (I) price variation still exists after removing the influence of transportation costs, trade barriers, and taxes; (II) higher productivity contributes to higher national prices, but higher labor cost has no significant effect on price variation; (III) price discrimination and special market strategies in specific areas do play a role in the price variation.
177

Magnetic field simulation and mapping for the Qweak experiment

Wang, Peiqing 07 June 2007 (has links)
The Qweak experiment at Thomas Jefferson National Accelerator Facility (Jefferson Lab) will measure the proton's weak charge by measuring the parity violating asymmetry in elastic electron-proton scattering at very low momentum transfer, with the aim of determining the proton's weak charge with 4% combined statistical and systematic errors. The experimental apparatus includes a longitudinally polarized electron beam, a liquid hydrogen target, a room temperature toroidal magnetic spectrometer, and a set of precision detectors for the scattered electrons. The toroidal magnetic spectrometer, which will deflect away the inelastic scattered electrons and focus the elastic scattered electrons onto the detectors, plays a crucially important role in the experiment. In this thesis, in order to meet the requirements for the installation and calibration of the toroidal magnetic spectrometer, the numerical simulation of the spectrometer's magnetic field based on a realistic magnet model is discussed, a precise 3D field mapping is introduced, and some simulation results are provided. The zero-crossing analysis technique, which can be used to precisely infer the individual coil locations of the toroidal magnet, is presented and explored in detail.
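The zero-crossing technique mentioned above locates where a measured field component changes sign along a scan line; coil positions can then be inferred from those crossing points. A minimal sketch of the interpolation step, using an assumed toy linear field profile rather than Qweak mapper data, is:

```python
import numpy as np

def zero_crossing(x, b):
    """Find the position where the field component b changes sign along
    the scan coordinate x, by linear interpolation between the two
    measurement points that bracket the sign change."""
    s = np.sign(b)
    (idx,) = np.nonzero(s[:-1] * s[1:] < 0)   # indices bracketing crossings
    i = idx[0]                                 # first crossing only
    return x[i] - b[i] * (x[i + 1] - x[i]) / (b[i + 1] - b[i])

x = np.linspace(-1.0, 1.0, 201)        # toy scan positions
b = 2.0 * (x - 0.255)                  # toy linear field, zero at x = 0.255
print(round(zero_crossing(x, b), 6))   # 0.255
```

For a linear profile the interpolation is exact; with real mapper data a local fit around the crossing would be used instead, but the principle of reducing a coil-position measurement to a sign change is the same.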
178

Breast cancer trends among Kentucky women, 2004-2007

Hagan, Kara Ann 01 January 2011 (has links)
The purpose of this study is to investigate the discrepancies in female breast cancer mortality between the Appalachian and Non-Appalachian regions of Kentucky using data from the Kentucky Cancer Registry. This study identified subtype, reproductive, and regional differences in women with breast cancer in Kentucky. Among women with breast cancer living in Kentucky from 2004 to 2007, one and three live births significantly increased a woman's risk of breast cancer mortality by 91% and 58% respectively, compared to a woman with zero live births. Progesterone receptor-negative tumor status significantly increased a woman's risk of breast cancer mortality by 64% compared to women with progesterone receptor-positive breast cancer. Residence in the Appalachian region significantly increased a woman's risk of breast cancer mortality by 3.14-fold. After adjusting for regional interactions, progesterone receptor-negative tumor status in the Appalachian region increased a woman's risk of breast cancer mortality by 3.13-fold. These findings suggest that parity and estrogen receptor tumor status do not contribute to the breast cancer differences between the Appalachian and Non-Appalachian regions of Kentucky. The association between progesterone receptor status and Appalachian residency suggests that factors associated with the Appalachian region confer the poorest prognosis for women with breast cancer in Kentucky.
179

Parity violating asymmetries in the Gº experiment: Pion photoproduction on the Δ resonance

Coppens, Alexandre Francois Constant 13 September 2010 (has links)
Symmetry tests, and more precisely parity violation experiments using the properties of the weak interaction, give us unique insight into the internal hadronic structure of matter. The Gº experiment at Jefferson Laboratory used parity violating electron scattering to probe the strange quark contribution to the electromagnetic nucleon form factors (GMs and GEs) as well as the axial contribution (GAe). The data taken during the experiment provide further information on the axial transition form factor of the N-Δ transition (GANΔ), as well as the scale of the low energy constant (dΔ) characterizing the parity violating γNΔ coupling. The analysis of backward angle Gº data taken with a liquid deuterium target to deduce the parity violating asymmetry for pion photoproduction on the Δ resonance, and the first experimental constraint on the value of dΔ, are reported in this thesis. The results showed that dΔ = (8.3 ± 25.3) gπ, where the uncertainty is dominated by statistics, and that 75 percent of the theory range would be excluded by this measurement at 1 sigma.
